
Thread: Cherryville - SSD 520

  1. #251
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I suspect that using a "fixed" spare-area vs RAISE was based on findings and it could be a wise decision.

    There is no compression going on "in-memory" and so both the pagefile and the hibernation file should be easily compressible.

    In the end it's all highly dependent on what data the user processes. I for one am using the SF drives for VMs (data and code are highly compressible), so I'm pretty sure that WA is well below 1.0, most likely between 0.5 and 0.7.
    In my case, 90+% of the contents compress to somewhere between 46% and 8% (based on the compression ratios used in my benchmark), so it is definitely performing a lot better than incompressible data would suggest.
    I'll make sure to check flash writes vs host writes at the first opportunity.

    As of Office 2007 one can't really compress the files by 85%, as they are already compressed, unless one uses the old format. (Some are still using the old format, though not many, I suspect.)
    I haven't checked ODF, but I would think it uses compression as well.

    I suspect the SF advantage is more likely to be found in "business" scenarios than with the typical home user working with a lot of incompressible data like video/music/images. OTOH, most users are using HDDs for "storage" rather than SSDs, so there might not be that much incompressible data on SSDs.

    I've been thinking of creating a test based on populating a database with data and then performing some tasks on it; that would be the most likely scenario where SF should show its strength vs the others. (Could be real-life tasks like importing/exporting/searching/updating/indexing/... data.)
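    A rough sketch of what I have in mind (Python; the table layout and sample data are made up, just to get a feel for how compressible typical database content is):

    # Populate a small SQLite database with "business-like" rows, then see how
    # well the resulting file compresses. Table/column names are hypothetical.
    import os, random, sqlite3, zlib

    DB = "sf_compress_test.db"
    if os.path.exists(DB):
        os.remove(DB)

    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, "
                "product TEXT, qty INTEGER, notes TEXT)")
    products = ["widget", "gadget", "sprocket", "gizmo"]
    rows = [(None, f"customer{random.randint(1, 500):04d}",
             random.choice(products), random.randint(1, 100),
             "standard delivery, net 30 days") for _ in range(200_000)]
    con.executemany("INSERT INTO orders VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()
    con.close()

    raw = open(DB, "rb").read()
    packed = zlib.compress(raw, 6)
    print(f"db: {len(raw) / 1e6:.1f} MB, compressed: {len(packed) / 1e6:.1f} MB "
          f"({100 * len(packed) / len(raw):.0f}% of original)")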

  2. #252
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    There is no compression going on "in-memory" and so both the pagefile and the hibernation file should be easily compressible.
    Have you actually checked a hibernation file? I agree with you about a pagefile, but a hibernation file is a different story. The hibernation file is written out in one chunk, basically a memory dump, just before the computer hibernates. It would be rather stupid of MS not to do at least simple RLE compression on the hibernation file. I don't know whether there is any compression done on it, but I certainly would not assume that it is uncompressed without some evidence.
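    Just to illustrate how cheap such a pass would be, here's a toy run-length encoder in Python (purely illustrative -- it has nothing to do with how MS actually writes hiberfil.sys):

    # Toy byte-level RLE: emit (count, value) pairs. A memory image full of
    # zeroed pages collapses dramatically under even this naive scheme.
    def rle_encode(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes((run, data[i]))
            i += run
        return bytes(out)

    sample = b"\x00" * 4096 + b"real data" + b"\x00" * 4096
    print(len(sample), "->", len(rle_encode(sample)))   # 8201 -> 86 bytes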

  3. #253
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I have now!

    It was on XP though, using 7-Zip 9.20 32-bit.

    No apps were running except for the default background services and AV.

    Compression ratios by 7-Zip preset:

    Hiberfil (typical size = available memory)

    Fastest -> 38%
    Fast -> 36%
    Normal -> 33%

    Pagefile (typical size = available memory * 1.5)

    Fastest -> 28%
    Fast -> 16%
    Normal -> 13%

    It can't be done while the computer is running, as the files are locked, so one has to connect the drive as a secondary drive.

    It would be interesting to know how a 64-bit installation would do; I expect it would be close, but highly dependent on which apps are active.
    (will try to do a test later today)

    edit:

    Using the built-in compression (W7) the ratios are:

    Hiberfil = 44%
    Pagefile = 35%
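    For anyone who wants to repeat the test, here's a rough Python equivalent using zlib instead of 7-Zip (the presets don't map exactly, so the numbers will differ somewhat):

    # Compress a copy of hiberfil.sys / pagefile.sys at a few zlib levels and
    # print the resulting size as a percentage of the original. The path is
    # just an example -- the live files are locked, so point it at a copy
    # taken from the drive mounted as a secondary disk.
    import sys, zlib

    path = sys.argv[1] if len(sys.argv) > 1 else r"D:\hiberfil.sys"

    def ratio(path, level):
        comp = zlib.compressobj(level)
        in_bytes = out_bytes = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                in_bytes += len(chunk)
                out_bytes += len(comp.compress(chunk))
        out_bytes += len(comp.flush())
        return 100.0 * out_bytes / in_bytes

    for level in (1, 6, 9):   # roughly "fastest" / "normal" / "best"
        print(f"level {level}: {ratio(path, level):.0f}% of original size")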
    Last edited by Anvil; 02-15-2012 at 05:31 AM.

  4. #254
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    According to Wiki Windows 7 introduced the ability to compress the hibernation file using powercfg.exe, so I guess there would not be much left for the SF controller to compress.

    http://en.wikipedia.org/wiki/Hiberna...)#cite_note-11

    Edit: from an MS technical paper:

    http://download.microsoft.com/downlo...Footprint.docx

    Windows supports hibernation by copying the contents of memory to disk. The system compresses memory contents before preserving them on the disk, which reduces the required disk space to less than the total amount of physical memory on the system.

    Windows reserves disk space for hibernate in the hibernation file, which is named Hiberfil.sys. For Windows 7, the default size of the hibernation file is equal to 75 percent of the total physical memory on the system. For example, on a computer that has 2 GB of RAM, the default hibernation file size is 1.5 GB.

    Reducing the hibernation file size from 100 percent of total physical memory helps reduce the disk footprint that is associated with hibernate and frees disk space for user programs and data. This reduction is very important on systems that have limited disk capacity.

    However, some rare workloads have a memory footprint that is larger than 75 percent of the total physical memory on the system, even when they are compressed. A system administrator can adjust the size of the hibernation file to as high as 100 percent of total physical memory to account for these conditions.
    Last edited by Ao1; 02-15-2012 at 06:14 AM.

  5. #255
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    powercfg hibernate options for W7

    -HIBERNATE, -H
    Enables/disables the hibernate feature. Hibernate timeout is not supported on all systems.

    Usage:
    POWERCFG -H <ON|OFF>
    POWERCFG -H -Size <PercentSize>
    -Size Specifies the desired hiberfile size in percentage of the total memory. The default size cannot be smaller than 50. This switch will also enable the hiberfile automatically.

  6. #256
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by Anvil View Post
    I suspect that using a "fixed" spare-area vs RAISE was based on findings and it could be a wise decision.
    I have been of the belief that ditching RAISE was a great idea, if only just to make room for OP.

    Is the steady(er) state performance of Cherryville still substantially reduced vs. fresh?

  7. #257
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    According to Wiki Windows 7 introduced the ability to compress the hibernation file using powercfg.exe, so I guess there would not be much left for the SF controller to compress.
    Good link!

    So, it seems Windows XP hiberfil can be compressed significantly (based on Anvil's test), but since Windows 7 compresses the hiberfil, it is unlikely that Sandforce can compress it any further.

    That eliminates hibernation as a possibility for significant benefits from Sandforce compression.

    It would seem the only remaining application that benefits directly from Sandforce compression is the VMs that Anvil mentioned. It would be interesting to see some data on that. If Anvil were to record the current host writes and flash writes on his SSD, then run his VMs normally for a week or so, and then post the new host writes and flash writes, that would be very interesting.
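    Something along these lines would capture the two counters (just a sketch in Python around smartctl; the attribute IDs below -- 241 for host writes, 249 for NAND writes -- are what SandForce-based drives typically report, but the IDs and units vary, so check "smartctl -A" output for the actual drive first):

    # Read host-writes and NAND-writes SMART raw values via smartctl and print
    # a naive write-amplification estimate. Device path and attribute IDs are
    # examples/assumptions -- verify them (and the units!) for the drive in use.
    import subprocess

    DEVICE = "/dev/sda"   # example device node

    def read_attr(attr_id: int) -> int:
        out = subprocess.run(["smartctl", "-A", DEVICE],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == str(attr_id):
                return int(fields[-1])          # RAW_VALUE is the last column
        raise KeyError(f"attribute {attr_id} not found")

    host_writes = read_attr(241)   # often "Host Writes", unit varies by vendor
    nand_writes = read_attr(249)   # often "NAND Writes", unit varies by vendor
    print(f"host={host_writes}  nand={nand_writes}  "
          f"WA~{nand_writes / host_writes:.2f} (only valid if both use the same unit)")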

  8. #258
    Registered User
    Join Date
    Jun 2006
    Posts
    43
    Quote Originally Posted by johnw View Post
    It would seem the only remaining application that benefits directly from Sandforce compression is the VMs that Anvil mentioned. It would be interesting to see some data on that. If Anvil were to record the current host writes and flash writes on his SSD, then run his VMs normally for a week or so, and then post the new host writes and flash writes, that would be very interesting.
    Now get this: I actually use compression on my VM images. This is simply to get more out of the price/performance curve with SSD storage. Anyway, such an approach is quite common in 'enterprise' SSD storage systems (e.g. PureStorage, SolidFire, Nimbus, etc). I wouldn't mind if the SF controller handled the compression for me -- if the net result was that I had greater available 'free space'.

  9. #259
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm repeating the test on a W7 x64 install on a laptop, 8GB memory, not much running.
    (default hibernation settings)

    hiberfil.sys = 5.96GB

    7zip
    Normal -> 190MB
    Fast -> 209MB
    Fastest -> 220MB (3%)

    W7 built in compression -> 227MB

    --

    So it allocates hiberfil.sys according to the settings, which results in a ~6GB file; if memory consumption is low, that leaves lots of compressible data.

    The computer I used earlier today had 512MB of memory, so most of the file was used, if not all.

    I'll repeat the test on the laptop with a few more apps running.
    Last edited by Anvil; 02-15-2012 at 02:55 PM.

  10. #260
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I just checked my winsxs folder and it is currently 10GB. Using 7-Zip I can get that down to 4.5GB, so let's assume another 4.5GB of savings for the controller (not the end user).

    Office and Win 7 can be compressed by 50%, yet that makes for no discernible difference in installation times when compared to SSDs that can't compress data.

    http://www.hardwareheaven.com/review...all-times.html

    No speed gain and no extra space. I've only used a 40GB V2 and a 60GB V3, but large xfers of incompressible data are notably slower than on a 160GB X25-M, which in turn is notably slower than a 60GB 830.

  11. #261
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    I'm repeating the test on a W7 x64 install on a laptop, 8GB memory, not much running.
    (default hibernation settings)

    hiberfil.sys = 5.96GB

    7zip
    Normal -> 190MB
    Fast -> 209MB
    Fastest -> 220MB (3%)

    W7 built in compression -> 227MB
    Hmmm, so Windows only compresses part of the hiberfil, just enough to fit it into 75% of your RAM space? That is really odd. I think MS made a bad design decision there. Simple RLE or LZ77 compression can be done at VERY high speed, 500MB/s should be easy on a modern system, so there is really no reason not to do it.

    Anyway, I guess MS's bad decision means that Sandforce compression may be beneficial for the hibernation file.

    EDIT:

    Hold on a second, are you sure that the entire file was actually written to? I'm not familiar with NTFS, but on Linux ext4 or XFS, you can fallocate() a big file without actually writing anything to it. What if Windows only actually wrote a few hundred megabytes to the file, leaving the rest empty?
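    A quick way to see the difference on Linux (a sketch; sizes scaled down to keep it fast):

    # Three ways a big file can come into existence on ext4/XFS:
    #   demo_sparse     -- length set with truncate(), no blocks allocated
    #   demo_fallocated -- blocks reserved up front, but no data writes issued
    #   demo_written    -- the data really went to the device
    # st_blocks shows the allocation; only the last case generates real writes.
    import os

    SIZE = 256 << 20   # 256 MiB

    with open("demo_sparse", "wb") as f:
        f.truncate(SIZE)
    with open("demo_fallocated", "wb") as f:
        os.posix_fallocate(f.fileno(), 0, SIZE)
    with open("demo_written", "wb") as f:
        f.write(b"\0" * SIZE)

    for name in ("demo_sparse", "demo_fallocated", "demo_written"):
        st = os.stat(name)
        print(f"{name:16s} size={st.st_size:>11}  on-disk={st.st_blocks * 512:>11}")
        os.remove(name)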
    Last edited by johnw; 02-15-2012 at 04:14 PM.

  12. #262
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The file is fully allocated.
    There is of course not much in the file; the size setting only specifies how much space to allocate. I expect most users could easily set it to 50% and still have some space left. (Based on 8GB of memory or more.)

    If the file wasn't pre-allocated it could simply write the compressed stream to a new file and the file would be 100% utilized; that's not what's happening, though.

  13. #263
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    The file is fully allocated.
    There is of course not much in the file; the size setting only specifies how much space to allocate. I expect most users could easily set it to 50% and still have some space left. (Based on 8GB of memory or more.)

    If the file wasn't pre-allocated it could simply write the compressed stream to a new file and the file would be 100% utilized; that's not what's happening, though.
    But that doesn't answer my question. Did Windows explicitly write 6GB of data to the file? If you monitor the host writes to the SSD, do they actually increase by 6GB?

  14. #264
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The whole point in pre-allocating the 6GB file is to ensure that the space is reserved/available.
    I looked up fallocate and unless you specify FALLOC_FL_KEEP_SIZE (in effect 0-fill) the space is not guaranteed to be available, meaning that writes can fail due to lack of disk space.

    If it wasn't pre-allocated in Windows, 7-Zip would have stopped processing as soon as the "used" portion was done; it processed the whole file, as in reading + compressing 6GB.

  15. #265
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    The whole point in pre-allocating the 6GB file is to ensure that the space is reserved/available.
    I looked up fallocate and unless you specify FALLOC_FL_KEEP_SIZE (in effect 0-fill) the space is not guaranteed to be available, meaning that writes can fail due to lack of disk space.

    If it wasn't pre-allocated in Windows, 7-Zip would have stopped processing as soon as the "used" portion was done; it processed the whole file, as in reading + compressing 6GB.
    We don't seem to be communicating.

    The file could be fully allocated, but Windows could still not write 6GB before hibernating. It may only write to the first 200MB of the 6GB file that has been allocated.

    Hence my question: if you monitor the host writes to the SSD before and after hibernation, do they increase by 6GB? Or, if that is too small to measure, if you hibernate, say, 10 times, do the host writes increase by 60GB?

  16. #266
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll check next time I get to that laptop.

    I'd expect it to write only what's needed.

  17. #267
    Registered User
    Join Date
    Jun 2006
    Posts
    43
    I assume all you want to determine is whether or not the hibernation file is a sparse file? TBH I also use Linux primarily, but I'm pretty sure you can select a file in Windows Explorer and view its properties. Then the "Size" vs "Size on disk" fields distinguish between logical size and physical size. Here's another tool I just found that seems to check files for 0-ranges: http://www.opalapps.com/sparse_check...e_checker.html
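    If you'd rather script it than click through Explorer, something like this reports both numbers (a sketch; the Windows branch uses the Win32 GetCompressedFileSizeW call, which returns the size actually occupied on disk for sparse/compressed files, and elsewhere it falls back to st_blocks):

    # Print "Size" vs "Size on disk" for a given file.
    import os, sys

    def size_on_disk(path: str) -> int:
        if os.name == "nt":
            import ctypes
            kernel32 = ctypes.windll.kernel32
            kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong
            high = ctypes.c_ulong(0)
            low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
            return (high.value << 32) + low
        return os.stat(path).st_blocks * 512   # POSIX fallback

    path = sys.argv[1]                         # e.g. a copy of hiberfil.sys
    logical = os.path.getsize(path)
    physical = size_on_disk(path)
    print(f"size: {logical:,}   size on disk: {physical:,}")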

  18. #268
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Here is my guess at how Windows handles the hibernation file (some of it based on the link Ao1 posted).

    When hibernation is first configured (probably when Windows is installed), a file is created with size equal to 75% of RAM, but no data is actually written to the file -- the initial file contents are whatever happened to be on the sectors of the drive when the file's extents were assigned.

    Then, when Windows goes into hibernation, it compresses RAM to whatever is the smallest size that it can and then writes the compressed data out to the beginning of the hibernation file. If Windows manages to compress RAM down to 200MB, then it only writes to the first 200MB of the file, leaving the rest of the file as unwritten (whatever happened to be in those sectors already).

    If my guess is accurate, then you cannot get an idea of how compressible hibernation writes are by simply trying to compress the whole hibernation file. You either have to take just the beginning of the hibernation file (if you can figure out how much of it was actually written to), or else you have to measure actual writes to the hibernation file using an IO monitor or host writes on the SSD.
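    One way to test the "only the beginning is real data" theory without an IO monitor would be to scan the file in chunks and see where the real data stops (a sketch; the chunk size is arbitrary and the file has to be read offline since it is locked on a live system):

    # Walk a copy of hiberfil.sys in 64 MiB chunks and report, per chunk, how
    # much is non-zero and how well it compresses. If only the start was
    # written, the tail should be all zeros or stale, unchanging junk.
    import sys, zlib

    path = sys.argv[1]
    CHUNK = 64 << 20

    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            nonzero = 1 - chunk.count(0) / len(chunk)
            packed = len(zlib.compress(chunk, 1)) / len(chunk)
            print(f"{offset >> 20:>6} MiB: {nonzero:6.1%} non-zero, "
                  f"compresses to {packed:6.1%}")
            offset += len(chunk)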

  19. #269
    Xtreme Member
    Join Date
    May 2009
    Location
    Italy
    Posts
    328
    Intel 520 240GB - Degradation and Steady-State Performance
    http://www.xbitlabs.com/articles/sto...0_4.html#sect0


    Last edited by Gilgamesh; 02-22-2012 at 12:43 AM.

  20. #270
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    It does not look like Intel changed anything with regards to how TRIM operates. (21 seconds after deleting ~30GB of files on a 60GB drive)

    [Attachment: 520.png]

    Out-of-the-box device statistics:

    Device Model: INTEL SSDSC2CW060A3
    Firmware Version: 400i


    Device Statistics (GP Log 0x04)
    Page Offset Size Value Description
    1 ===== = = == General Statistics (rev 2) ==
    1 0x008 4 11 Lifetime Power-On Resets
    1 0x010 4 7 Power-on Hours
    1 0x018 6 172343310 Logical Sectors Written
    1 0x028 6 145784342 Logical Sectors Read
    4 ===== = = == General Errors Statistics (rev 1) ==
    4 0x008 4 0 Number of Reported Uncorrectable Errors
    4 0x010 4 59 Resets Between Cmd Acceptance and Completion
    6 ===== = = == Transport Statistics (rev 1) ==
    6 0x008 4 59 Number of Hardware Resets
    6 0x010 4 84 Number of ASR Events
    6 0x018 4 0 Number of Interface CRC Errors
    7 ===== = = == Solid State Device Statistics (rev 1) ==
    7 0x008 1 255 Percentage Used Endurance Indicator
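
    Assuming 512-byte logical sectors, those two counters work out to roughly:

    # Quick conversion of the GP Log values above (assumes 512-byte sectors).
    sectors_written = 172343310
    sectors_read    = 145784342
    print(f"written: {sectors_written * 512 / 1e9:.1f} GB, "
          f"read: {sectors_read * 512 / 1e9:.1f} GB")
    # -> written: ~88.2 GB, read: ~74.6 GB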
    Last edited by Ao1; 02-23-2012 at 11:48 AM.

  21. #271
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I wonder how long it will take before someone susses out how to update a non-Intel SF drive with Intel f/w.

    EDIT:

    So according to the device statistics I have 12 power-on hours. When new there were 7 hours, so that is about right. According to SMART there is a raw value of 894,806. If that is displayed as a LOW word it comes out as ~1785 days, or just under 5 years. The vendor-specific value states the threshold is not expected to be exceeded.

    I've learnt before that a guess when it comes to storage will almost certainly be (badly) wrong, so I'm not going to guess that it would be reasonable to assume LTT/off based on the power-on hours tweak.

    [Attachment: SF LTT.jpg]
    Last edited by Ao1; 02-26-2012 at 06:23 AM.

  22. #272
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I would be extremely surprised to see LTT on the 520.

  23. #273
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Interestingly, it seems more SFs will be following Cherryville in ditching RAISE, plus whatever else SF does with its share of NAND capacity.

    http://www.adata.com.tw/index.php?ac..._id=355&lan=en

    Adata is releasing some new SF drives, identical to the rest, except for new FW which will allow for no RAISE/OP/etc. I guess it would be like Cherryville without the overprovisioning. You'll get the binary capacity now. A formerly 120GB SF 2281 will now become 128GB, a 480 becomes 512.

    This may vindicate my feelings that RAISE and all that other SF junk could be causing some of the problems. Or it could just be to enhance the price/GB ratio. Intel probably had the right idea in just straight over-provisioning, but why the other manufacturers don't follow suit (and use that space just for OP) is a mystery -- unless this is a naked attempt to make a better argument when it comes to price. As far as I know, over-provisioning never hurt anyone.
    Last edited by Christopher; 02-24-2012 at 01:17 PM.

  24. #274
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    I find it odd that the Intel 320 has its own equivalent of RAISE, as well as power-loss-protection capacitors, while the Intel 520 has neither of those things, despite being a more expensive, higher-end product.

    Intel's SSD strategy seems random these days.

  25. #275
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    I've been operating under the assumption that Intel disabled RAISE in the Intel 520, but I was looking for a reference from Intel that clearly states that RAISE is disabled, and I have not been able to find one.

    Intel specifies UBER=10^-16

    http://www.intel.com/content/www/us/...ification.html

    while Sandforce says the RAISE (combined with ECC) gives UBER=10^-29

    http://www.sandforce.com/index.php?id=174&parentId=3

    which suggests that RAISE is disabled on the 520.
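
    For scale, the gap between those two figures (simple expected-error arithmetic, nothing drive-specific):

    # Expected unrecoverable read errors per 100 TB read, at each quoted UBER
    # (errors per bit read).
    data_read_bits = 100e12 * 8          # 100 TB expressed in bits
    for uber in (1e-16, 1e-29):
        print(f"UBER {uber:.0e}: ~{uber * data_read_bits:.2g} expected errors per 100 TB read")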

    Also, reading the product brief on the 520, it does not mention surplus flash, which the Intel 320 product brief does mention: "The Intel® SSD 320 Series also offers an array of surplus NAND flash memory should the controller encounter a faulty NAND array."

    http://www.intel.com/content/www/us/...20-series.html

    But has an Intel spec sheet (or Intel representative) ever specifically stated that RAISE is disabled on the 520?
