Upload compression
Moderators: Site Moderators, FAHC Science Team
Upload compression
Is it possible to implement upload compression, especially when you're getting 15 MB result files for the SMP client p30xx?
Re: Upload compression
Results are already compressed before upload.
Code: Select all
[06:39:15] Finished Work Unit:
[06:39:15] - Reading up to 230280 from "work/wudata_00.arc": Read 230280
[06:39:15] - Reading up to 173868 from "work/wudata_00.xtc": Read 173868
[06:39:15] goefile size: 0
[06:39:15] logfile size: 119813
[06:39:15] Leaving Run
[06:39:20] - Writing 594025 bytes of core data to disk...
[06:39:20] Done: 593513 -> 449135 (compressed to 75.6 percent)
[06:39:20] ... Done.
[06:39:21] - Shutting down core
[06:39:21]
[06:39:21] Folding@home Core Shutdown: FINISHED_UNIT
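As a sanity check, the byte counts in that "Done:" line can be verified directly; a minimal sketch using the numbers from the log above:

```python
# Verify the compression ratio reported in the log:
# "Done: 593513 -> 449135 (compressed to 75.6 percent)"
original = 593513
compressed = 449135
ratio = compressed / original * 100
print(f"{original} -> {compressed} (compressed to {ratio:.2f} percent)")
# the client log reports this as "75.6 percent"
```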
Re: Upload compression
Does it still compress when it does this?
Code: Select all
[06:44:27] Finished Work Unit:
[06:44:27] - Reading up to 3721632 from "work/wudata_02.arc": Read 3721632
[06:44:27] - Reading up to 1776020 from "work/wudata_02.xtc": Read 1776020
[06:44:27] goefile size: 0
[06:44:27] logfile size: 25510
[06:44:27] Leaving Run
[06:44:29] - Writing 5527562 bytes of core data to disk...
[06:44:29] ... Done.
[06:44:29] - Failed to delete work/wudata_02.sas
[06:44:29] - Failed to delete work/wudata_02.goe
[06:44:29] Warning: check for stray files
[06:44:29] - Shutting down core
[06:46:29]
[06:46:29] Folding@home Core Shutdown: FINISHED_UNIT
[06:46:32] CoreStatus = 64 (100)
[06:46:32] Unit 2 finished with 80 percent of time to deadline remaining.
[06:46:32] Updated performance fraction: 0.750828
[06:46:32] Sending work to server
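One way to tell is to scan the log for the compression message; a rough sketch (the `compression_lines` helper is hypothetical, just a text search over a log excerpt like the one above):

```python
import re

def compression_lines(log_text: str):
    """Return any 'compressed to' lines from an FAH client log excerpt.
    (Hypothetical helper -- a plain text scan, not an official FAH tool.)"""
    return [line for line in log_text.splitlines()
            if re.search(r"compressed to [\d.]+ percent", line)]

log = """[06:44:29] - Writing 5527562 bytes of core data to disk...
[06:44:29] ... Done."""
print(compression_lines(log))  # empty list: no compression message here
```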
Re: Upload compression
Was -verbosity 9 on in that example?
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: Upload compression
butc8 wrote: Is it possible to implement upload compression, especially when you're getting 15 MB result files for the SMP client p30xx?
Some WUs are greater than 5 MB in size, and single-client Bonus WUs are known to be around 45 MB in size... not for dial-up.
Facts are not truth. Facts are merely facets of the shining diamond of truth.
Re: Upload compression
7im wrote: Was -verbosity 9 on in that example?
Yip.
When will Stanford start compressing results files?
I have two computers running SMP units, and the size of the result files returned is about to cause me to be penalized by my ISP for excess bandwidth. I'm on a satellite connection: they allow me 2.3 GB of upload bandwidth, and I'm using about 3.0 GB.
I experimented with the SMP result files and found that the 22 MB files can be reduced to about 15 MB by compressing with 7zip. So why isn't Stanford compressing the files? With result file sizes seeming to grow, this could become a problem for more people than just me.
Pat
Merged new post with existing topic. PM Sent. 7im
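For what it's worth, a back-of-the-envelope check with the numbers from that post (assuming the 7zip ratio would hold across all uploads):

```python
# Rough sketch: would 7zip-level compression bring the monthly upload
# under the satellite cap? Numbers taken from the post above.
cap_gb = 2.3          # ISP upload allowance
used_gb = 3.0         # current monthly upload
ratio = 15 / 22       # 22 MB result files shrink to ~15 MB with 7zip
projected = used_gb * ratio
print(f"projected upload: {projected:.2f} GB")  # ~2.05 GB, just under the cap
```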
I experimented with the smp result files and found that the 22 mb file sizes can be reduced to about 15 mb by compressing using "7zip". So why isn't Stanford compressing the files? With result file sizes seeming to grow, this could become a problem for more people than just me.
Pat
Merged new post with existing topic. PM Sent. 7im
Re: Upload compression
I suspect they need to find a balance between the computational power needed to compress units (and hence the time to compress) and file sizes. I'm not sure how old the compression method in use is (i.e. is it the same as was used for v4? v3? v2?), but it may be worth looking at alternatives.
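That trade-off is easy to see with a generic compressor; a rough illustration using Python's zlib at different levels (this is not the codec the FAH core actually uses, just a demonstration of the level-vs-time balance):

```python
import time
import zlib

# Compress the same payload at several zlib levels and compare sizes.
# Higher levels spend more CPU time for (usually) smaller output.
payload = b"coordinate data tends to be only mildly compressible " * 5000

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out)} bytes in {elapsed * 1000:.1f} ms")
```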