
Thread: Sandforce Life Time Throttling

  1. #526
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    I've noticed on my other sandforce drive (Pyro 60) high write amplification during low usage times.

    My instinct says that the drive is performing static wear leveling while doing garbage collection, freeing up unworn blocks and refreshing data that has picked up read-disturb bit flips.

  2. #527
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    It could be, but I believe that SF drives do not carry out aggressive GC when idle, rather they wait for new writes so they can limit the amount of WA.

    I believe the issue is more likely due to small xfers at a low QD that cannot be write combined. It could also be that the controller cannot compress data in this type of scenario which adds to the WA.

    It will be interesting to see how the rest of the week goes.

    Later I will try a manual trim to see what happens to the writes. (I have 12.5GB free on a 55.8GB drive)

    Edit: A manual TRIM did not change any of the SMART values.

    Edit 1 - as I am using a TRIM-enabled system, perhaps the drive did not need to be cleaned up.
    Last edited by Ao1; 03-06-2012 at 12:58 AM.

  3. #528
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Ao1 View Post
    It could be, but I believe that SF drives do not carry out aggressive GC when idle, rather they wait for new writes so they can limit the amount of WA.

    Edit 1 - as I am using a TRIM-enabled system, perhaps the drive did not need to be cleaned up.
    Same thing really. The operating system will always be firing filesystem updates (journal, registry and other tiny updates) at the drive. I don't think there is a huge cost to these on their own, but the controller may take advantage of these small writes to do some garbage collection, cleaning partially cleared, old or unworn blocks. Given that the drive must write some metadata (LBA block maps) each time it commits user data, it can minimise write amplification by combining the small user writes with other blocks it was going to garbage collect anyway.

    Of course, the above is only speculation.

    A TRIM-enabled system should never have any orphaned blocks to reclaim via a manual TRIM cycle.

  4. #529
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    When I have finished this exercise I will SE the drive and install Windows again. I won’t install any apps, so that I have as much free space as possible. I will then just leave the drive to idle. That way there will be lots of “clean” blocks for OS generated writes, which should help isolate what is going on.

    I’m fairly sure however that the “problem” is a lack of write combining and quite possibly a lack of ability to compress small random writes.
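    For readers following along, "write combining" here means the controller coalescing several small host writes into a single full NAND page program, so a stream of 4K writes doesn't cost a whole page program each. A minimal sketch of the concept, with hypothetical page and write sizes (an illustration only, not SF firmware):

```python
# Minimal sketch of write combining (hypothetical controller, not SF firmware):
# small host writes are buffered until a full NAND page can be programmed, so
# four 4 KiB host writes cost one 16 KiB page program instead of four.

PAGE_SIZE = 16 * 1024   # assumed NAND page size
HOST_WRITE = 4 * 1024   # assumed host write size

class WriteCombiner:
    def __init__(self):
        self.buffered = 0       # bytes waiting in the combine buffer
        self.page_programs = 0  # full NAND pages programmed

    def host_write(self, nbytes):
        self.buffered += nbytes
        while self.buffered >= PAGE_SIZE:   # flush only when a page is full
            self.buffered -= PAGE_SIZE
            self.page_programs += 1

wc = WriteCombiner()
for _ in range(8):           # eight 4 KiB host writes (32 KiB in total)
    wc.host_write(HOST_WRITE)

print(wc.page_programs)      # 2 page programs; uncombined it could be 8
```

    At a low queue depth with long gaps between writes, such a buffer rarely fills before it has to be flushed, which would fit the high idle-time WA being reported here.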

  5. #530
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    "Real" random writes will lead to orphaned blocks, however, there aren't many of those for the normal windows user.

    I have never found any of my SF drives to perform cleaning during idle time.

    I do run Intel's Toolbox cleaning on the X25 series drives from time to time (not often), and comparing HDTune maps before and after shows that it works.
    We should run a test to check whether the Toolbox cleaning actually does anything at all on Intel's SF drives.

  6. #531
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Anvil View Post
    "Real" random writes will lead to orphaned blocks, however, there aren't many of those for the normal windows user.
    Overwritten blocks aren't orphaned. The controller knows the old copy is not needed and will give it to the garbage collector to eventually clean, like any blocks that the OS issues a TRIM command on.

    In any case, all drives will eventually have to do cleanup, because existing data must occasionally be refreshed due to bit decay and read disturb.

    The X25 toolbox probably just forces the drive to run garbage collection before it normally would. Considering most X25 drives won't be babied like this, it is probably safe to assume that medium/long-term performance isn't significantly affected by using the toolbox cleaning.

  7. #532
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    They aren't really orphaned of course but they won't get TRIMmed when/if the original file is deleted.

    i.e. the random-writes part of the Endurance test will lead to such issues; they will eventually be cleaned when the blocks are needed or during a Toolbox-induced cleaning.
    (the Toolbox cleaning just allocates all free space and deletes the files; the space looks to be allocated without actually writing anything to the files, and deleting the files leads to a normal TRIM on the affected LBAs, which cleans them up)

  8. #533
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Anvil View Post
    They aren't really orphaned of course but they won't get TRIMmed when/if the original file is deleted.
    Of course the operating system won't explicitly TRIM sectors that are overwritten. However, the drive will still perform the TRIM internally, because it knows that you have overwritten an LBA that is already allocated. Overwriting a used sector is, for the drive, equivalent to TRIMming that sector and then writing it.

    This is how modern solid state drives are able to maintain reasonable performance in operating systems that don't have TRIM support. However, if there is not enough spare unused space on the drive (e.g. 60GB SandForce drives with only 64GiB of NAND) and you hammer it with writes, the drive has difficulty cleaning blocks, because the small number of blocks ready for garbage collection becomes scattered over the NAND surface and reclaiming them means moving a lot of existing data. This slows the drive down considerably and increases write amplification.

    This is why many enterprise drives (e.g. the Intel 710) have a LOT of spare space (a 200GB drive with 320GiB of NAND), and why the initial SandForce drives came with 25% or so of over-provisioning. If you give the drive enough spare space, you can continuously hammer it with random writes and there will always be enough unused space to erase and use for the writes.

    Edit: Please ignore if you know all the above ... sometimes I like to ramble on arrogantly
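    To make the overwrite-equals-implicit-TRIM point concrete, here is a toy page-mapped FTL sketch (deliberately simplified and hypothetical, not any vendor's actual design):

```python
# Toy page-mapped FTL: overwriting an LBA remaps it to a fresh page and marks
# the old page invalid (ready for GC), exactly as if the LBA had been TRIMmed
# and then written. Simplified illustration, not vendor firmware.

class ToyFTL:
    def __init__(self, physical_pages):
        self.mapping = {}                        # LBA -> physical page
        self.free = list(range(physical_pages))  # spare/erased pages
        self.invalid = set()                     # stale pages awaiting GC

    def write(self, lba):
        old = self.mapping.get(lba)
        if old is not None:
            self.invalid.add(old)              # implicit TRIM of the old copy
        self.mapping[lba] = self.free.pop()    # consumes spare (OP) space

    def trim(self, lba):
        old = self.mapping.pop(lba, None)
        if old is not None:
            self.invalid.add(old)              # explicit TRIM ends up the same

ftl = ToyFTL(physical_pages=8)
ftl.write(0)
ftl.write(0)        # overwrite: the first physical page becomes invalid
print(ftl.invalid)  # {7} -- reclaimable by garbage collection
```

    When the free list runs dry the controller must erase, and if the invalid pages are scattered across mostly-valid blocks it has to relocate the valid data first, which is where the extra NAND writes come from. Generous over-provisioning keeps the free list from running dry under sustained random writes.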

  9. #534
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by canthearu View Post

    The X25 toolbox probably just forces the drive to run garbage collection before it normally would. Considering most X25 drives won't be babied like this, it is probably safe to assume that medium/long-term performance isn't significantly affected by using the toolbox cleaning.
    Intel’s Toolbox generates a series of temp files that fill unused capacity. The temp files are then deleted. I’m intrigued that this did not register NAND or Host writes, so I will look into this further at some stage.

  10. #535
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Ao1 View Post
    Intel’s Toolbox generates a series of temp files that fill unused capacity. The temp files are then deleted. I’m intrigued that this did not register NAND or Host writes, so I will look into this further at some stage.
    If the free space in the NAND were significantly fragmented, this would force the controller to move data around so that it could free up space for the temp files to be written into. Then, when these blocks are TRIMmed by the delete, there will be a lot of full NAND blocks that can be erased immediately and put to use.

    I can however confirm that host/NAND writes aren't logged for my Intel 520 when I run the cleanup tool, so what I just said is wrong to a degree. However:

    a) My drive's free space probably isn't highly fragmented because it is running ASU, so maybe little NAND cleanup was required.
    b) I didn't notice any speed difference with ASU afterwards.

  11. #536
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @canthearu

    No, it won't perform the internal TRIM as if the file was deleted.
    (the block is of course somehow flagged but that's not the same as it's being TRIMmed)

    I'll show you a practical example of what goes on on the Intel's and the Crucials, it's easily seen using a few tools. (will have to be later today/tonight)
    That's not to say that it's how it works on the SF drives though but I'm pretty sure that it can be replicated.

    The key is how the LBAs are handled/reported.

    Quote Originally Posted by Ao1 View Post
    Intel’s Toolbox generates a series of temp files that fill unused capacity. The temp files are then deleted. I’m intrigued that this did not register NAND or Host writes, so I will look into this further at some stage.
    I've been playing a lot with this and there are tricks to perform it without actually writing to the drive; the LBAs are fully allocated though. Interesting, and it works.
    (there will be a few metadata operations though)

  12. #537
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    We should test.

    Take an Intel 520 SSD, secure erase it, mostly fill it up with data, then pound on it with random writes for an hour. No deleting of files or other optimisation. Do some benchmarks and make sure it is suffering.

    Then put it on an XP box and delete the file. Then run the Intel Toolbox and the cleanup wizard. Then do some benchmarks. This is testing the drive with the Intel Toolbox cleaning it up.

    Then take the Intel 520 SSD and do the same thing as before: fill it up and pound on it. Redo the benchmarks to ensure it is in a sufficiently pounded state.

    Then put it on a Win 7 box and delete the file. Then do some benchmarks. This is testing the drive with normal TRIM cleaning it up. If performance is still bad, run the Intel Toolbox and rerun the benchmarks to see if it makes any further difference.

  13. #538
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Anvil View Post
    @canthearu
    No, it won't perform the internal TRIM as if the file was deleted.
    (the block is of course somehow flagged but that's not the same as it's being TRIMmed)
    There is nothing special about TRIM. Files are a structure that lives above the level at which TRIM operates. When a file is deleted, the file table is updated, and the blocks used to store the file and its metadata are cleaned up by issuing a TRIM command for the deleted blocks.

    The difference between a system with explicit TRIM and a system without is that the system with explicit TRIM will have a more orderly recovery and reallocation of NAND blocks, which is easier on the drive. Systems without TRIM will suffer a more chaotic scattering of free NAND. If there is not enough spare space to provide the writing thread with fully clean NAND blocks, this chaotic arrangement of free and used space becomes self-reinforcing, with little productivity for the amount of work the drive has to do.

    Quote Originally Posted by Anvil View Post
    I'll show you a practical example of what goes on on the Intel's and the Crucials, it's easily seen using a few tools. (will have to be later today/tonight)
    That's not to say that it's how it works on the SF drives though but I'm pretty sure that it can be replicated.

    The key is how the LBAs are handled/reported.
    Very interested in seeing this. Even if I am wrong, I'd rather see it than pretend I am right.

    Quote Originally Posted by Anvil View Post
    I've been playing a lot with this and there are tricks to perform it without actually writing to the drive; the LBAs are fully allocated though. Interesting, and it works.
    (there will be a few metadata operations though)
    Yeah, just mulling it over in my head ... all the Intel tool might do (speculation again, because I certainly don't work for Intel) is create the temp file, then look at what LBAs the file would have used if it were written, then TRIM those LBAs, possibly with a message to perform immediate garbage collection ... maybe.

    Sorry if I'm dragging this thread way off topic.

  14. #539
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @canthearu

    You should download hIOmon or have a look at the hIOmon thread that Ao1 initially created; most of this stuff is already covered in that thread, and it would be the correct place to continue this discussion.

  15. #540
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post

    I've been playing a lot with this and there are tricks to perform it without actually writing to the drive; the LBAs are fully allocated though. Interesting, and it works.
    (there will be a few metadata operations though)
    Sounds like you might be on to something. It would certainly explain why the SMART values don't change.


    @ canthearu

    In case you can't find the hIOmon thread (following the XS clear-up), it can be found here.

    http://www.xtremesystems.org/forums/...&daysprune=365

    Also, you can find a phenomenal podcast ("How Your Data Gets to Your SSD - Exploring the Software/Hardware Stack") at the following SNIA SSS Education web page:

    http://www.snia.org/forums/sssi/knowledge/education

  16. #541
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    Sounds like you might be on to something. It would certainly explain why the SMART values don't change.
    I don't think there is any real "trick" involved. It probably does something similar to what I mentioned when we were discussing the hibernation file (that discussion got preempted by the XS downtime).

    Basically, you create a new file of a certain size but do not write to it (I know this is possible with XFS and ext4, and I assume you can do it with NTFS). The filesystem just allocates the extents, but the contents of the file will be whatever was already in the sectors before the extents were allocated. Nevertheless, it is a real file (just with undefined contents) with real LBAs assigned to it, so when you TRIM the file, the correct LBAs to TRIM will be sent to the SSD. There will be minimal writes to storage, only the filesystem metadata that changes when the extents are allocated.
    Last edited by johnw; 03-06-2012 at 11:32 AM.
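    For anyone wanting to experiment with the allocate-without-writing approach johnw describes, on Linux it maps to posix_fallocate. One caveat: ext4 and XFS mark fallocated extents as "unwritten" and read them back as zeros rather than exposing old sector contents, but the LBAs are still genuinely allocated, which is what matters here. A rough sketch, assuming Linux with ext4/XFS and a discard-enabled mount (the path and size are placeholders):

```python
# Sketch of allocate-then-delete cleaning, assuming Linux with ext4 or XFS.
# posix_fallocate reserves the file's extents without writing file data (only
# filesystem metadata changes); deleting the file on a mount with the
# 'discard' option (or running fstrim afterwards) TRIMs the freed LBAs.
import os

path = "/mnt/ssd/cleanup.tmp"        # placeholder path on the SSD
size = 10 * 1024 ** 3                # placeholder size: reserve 10 GiB

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
try:
    os.posix_fallocate(fd, 0, size)  # extents allocated, no data written
finally:
    os.close(fd)

os.remove(path)                      # the freed LBAs can now be TRIMmed
```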

  17. #542
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    johnw

    That is correct. I've been fiddling with both that and sparse files, and the contents will be whatever is in those free blocks on the drive.

    So on a clean drive (as is the case with new installs) the hibernation file will contain nothing at all; it occupies virtually 0 bytes on the SF controllers.

  18. #543
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Day 3

    [Attachment: day 3.png]

  19. #544
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Day 4. Host writes go up, NAND writes go down (proportionately).

    [Attachment: day 4.png]
    Last edited by Ao1; 03-08-2012 at 02:43 AM.

  20. #545
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Day 5

    [Attachment: day 5.png]

  21. #546
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Just after this update I had to install .NET 4 and DWG TrueView. It would appear that the SF controller has no problems compressing installation files, but conversely typical OS/app activity results in high WA.

    [Attachment: day 5 update.png]

  22. #547
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    No surprise, but here is a snapshot of OS idle-time activity: sporadic write activity triggered by periodic lazy-write flushes, consisting of small xfers that predominantly execute as multiple random IO operations.

    Looking at the WA being incurred, the SF controller clearly can’t manage that workload very well. Presumably the workload can’t be compressed, and presumably the controller can’t write combine it.

    [Attachment: write activity.png]
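    For reference, the WA figure being discussed is just the ratio of NAND writes to host writes over the same interval, both of which SandForce drives expose through SMART. A sketch with placeholder numbers (not the actual values from the screenshot):

```python
# Write amplification = NAND writes / host writes over the same window.
# Placeholder values; substitute the deltas read from the drive's SMART
# host-write and NAND-write counters.
host_writes_gib = 1.0   # hypothetical host writes during the idle snapshot
nand_writes_gib = 5.0   # hypothetical NAND writes over the same window

wa = nand_writes_gib / host_writes_gib
print(f"WA = {wa:.1f}")  # above 1.0 the controller wrote more than the host sent
```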

  23. #548
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Hmmm, your NAND writes seem to be running a bit high. Doing almost 2TB of writes per year just in background activity is more than I've seen SandForce drives do in the real world.

    In the 45 hours your computer was mostly idle, it did 0.44444 GiB of NAND writes per hour.

    My SandForce 1 drive in my desktop, which is not really babied at all (it has the pagefile and browser caches on it, plus my normal documents directories; I even occasionally defrag it *shock horror*), has 11191 hours on it and 3456 GiB of NAND writes. That equals 0.3088 GiB per hour.

    So something isn't quite adding up with your drive or the workload you have.
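    For reference, the two rates can be reproduced directly from the figures quoted above (numbers taken from the posts, not remeasured):

```python
# Reproducing the NAND-write rates quoted in this exchange.
pyro_nand_gib = 3456               # lifetime NAND writes on the SF1 desktop drive
pyro_hours = 11191                 # power-on hours
print(pyro_nand_gib / pyro_hours)  # ~0.3088 GiB of NAND writes per hour

idle_rate_gib_per_h = 0.44444      # rate reported for the mostly idle drive
print(idle_rate_gib_per_h * 45)    # ~20 GiB of NAND writes over the 45 idle hours
```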

  24. #549
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I am using the OS for everyday tasks as I normally would, so writes aren’t just limited to OS background tasks. I’ve been working with AutoCAD files, office docs, PDFs, email, webmail, browsing, SketchUp, Skype, games etc. Basically apps that don’t induce huge quantities of write activity. All my static data is on separate drives, so I’m not copying large files.

    It appears that heavier write activity is able to significantly reduce WA, but it would be good to get a broader picture. If anyone else wanted to compare daily write activity, especially without heavy write activity being induced, that would help to verify what I am seeing.
    Last edited by Ao1; 03-09-2012 at 03:15 PM.

  25. #550
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Heavier write activity would possibly reduce write amplification ... but it really wouldn't reduce overall NAND writes per hour, would it?

    I can start looking at this on my Patriot Pyro 60gig ... it is running on an HTPC which is only doing endurance testing and playing the odd video.

    The only write activity it currently really suffers is the odd pagefile access and smartlog, which updates a few files every minute. I can point smartlog to a USB drive if you want me to remove that from the test as well.

