Page 7 of 220
Results 151 to 175 of 5492

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #151
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    11.6TB.....Wear out 99%.

  2. #152
    SLC One_Hertz's Avatar
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,953
    24TB. 88.

    Ok this workload is much too light. All we are doing is sequentially writing to the SSD and slowly wearing out the NAND. The 2TB per percent that I am seeing right now would mean 200TB to reach 1%, or 5000 cycles on each NAND cell, which happens to be exactly the spec.
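    The arithmetic above can be checked directly (a quick sketch; the 40GB capacity is an assumption inferred from the 5000-cycle figure, it is not stated in the post):

```python
# Back-of-envelope check of the endurance estimate above.
tb_per_percent = 2                      # observed: ~2 TB of host writes per 1% wear
total_tb = tb_per_percent * 100         # ~200 TB to run the wear indicator down
capacity_gb = 40                        # assumed drive capacity (Intel 320 40GB)
cycles = total_tb * 1024 / capacity_gb  # average program/erase cycles per cell
print(total_tb, cycles)                 # -> 200 5120.0 (close to the 5000-cycle spec)
```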

    Anvil, how about adding some random writes, meaning, making changes within the generated files?

    My 320 has the TRIM hang you speak of as well, but it only lasts 1.5-2 seconds. After this, the speed drops to about 39MB/s over the period of 30 seconds. During the next 30 seconds it slowly recovers to 42-43.

  3. #153
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by One_Hertz View Post

    My 320 has the TRIM hang you speak of as well, but it only lasts 1.5-2 seconds. After this, the speed drops to about 39MB/s over the period of 30 seconds. During the next 30 seconds it slowly recovers to 42-43.
    That is more or less exactly what I am seeing. (Except hangs are longer).

    If you go by perfmon the hang is near on a 10 second duration. hIOmon however shows ~5 seconds. I'm inclined to believe hIOmon.

  4. #154
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    11.6TB.....Wear out 99%.
    Finally

    Quote Originally Posted by One_Hertz View Post
    24TB. 88.
    Anvil, how about adding some random writes, meaning, making changes within the generated files?
    I can do that, although a lot of these writes are actually "random".
    What I can do is enable part of the benchmark and have a fixed 1-2TB file where random writes take place at some interval.
    Those writes would never be handled by TRIM, as TRIM can only do its cleansing when a file is "deleted".
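    A hypothetical sketch of what that random-write phase could look like: overwrite random aligned blocks inside one fixed, pre-allocated file. Since no file is ever deleted, the filesystem has no reason to issue TRIM for these writes. (The function and its parameters are made up for illustration, not Anvil's actual implementation.)

```python
import os
import random

def random_overwrites(path, file_size, block=4096, count=1000, seed=1):
    """Overwrite `count` random block-aligned offsets inside an existing file.

    The file is opened in place (r+b), so its size never changes and no
    delete (and therefore no TRIM) is ever triggered by this workload.
    """
    rng = random.Random(seed)
    blocks = file_size // block
    with open(path, "r+b") as f:
        for _ in range(count):
            f.seek(rng.randrange(blocks) * block)
            f.write(os.urandom(block))
        f.flush()
        os.fsync(f.fileno())  # make sure the writes actually reach the device
```

    In the real test the fixed file would be 1-2TB, with the interval between random-write bursts presumably tunable.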

    edit:
    The TRIM hang is more like One_Hertz's, except that it builds up speed for a short while and then, within a few minutes, slowly drops to about 32-33MB/s. (It peaks at ~39MB/s.)
    Last edited by Anvil; 05-23-2011 at 06:44 AM.
    -
    Hardware:

  5. #155
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    If you were streaming video or audio files onto the SSD it would be a disaster if TRIM caused a lock up, even for a fraction of a second.
    Anvil can confirm (or deny), but I suspect the program is sending TRIM commands for a large number of files at once. Thousands?

    For streaming video or audio, I don't think most people are likely to have thousands of files to delete at once. Or if they do, they are doing something wrong.

  6. #156
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Yepp, I can confirm that we are talking about thousands of files, but I'm pretty sure it's more related to the size (in GB) of the files than to the number of files. Could be both, of course.

  7. #157
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by johnw View Post
    Anvil can confirm (or deny), but I suspect the program is sending TRIM commands for a large number of files at once. Thousands?

    For streaming video or audio, I don't think most people are likely to have thousands of files to delete at once. Or if they do, they are doing something wrong.
    Someone should try streaming a video to a Vertex drive and then we would soon know. I can stream audio so I'll try that later. I doubt if it matters much what has been written, in terms of how many files were generated. I'd guess it's about how large the TRIM operation is, i.e. the total size of data to be TRIMMED.

    Anvil, I had a look at the OCZ fw release notes and I see it is indeed documented. I was not aware of it beforehand.

    Known open issues:
    •TRIM command response times are longer than desired

    EDIT: Anvil, I'm guessing all the temp files are deleted at the same time, i.e. at the end of a loop?
    Last edited by Ao1; 05-23-2011 at 07:15 AM.

  8. #158
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Closing in on 20TB
    19_82_tb_host_writes.PNG

    6.7 million files

    I'll update the 1st post with an updated graph a little later.

  9. #159
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    Yepp, I can confirm that we are talking about thousands of files, but, I'm pretty sure it's more related to the size (in GB) of the files rather than the number of files...
    I'd take the opposite side of that bet. I think that if you send a TRIM with thousands (or millions) of LBA ranges, it is going to take a long time. But if you send a single TRIM with contiguous LBAs, say 2,048-40,000,000, it will not take so long.

  10. #160
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    I can stream audio and set the file to split at 64MB, 650MB, 700MB or 2,048MB (by using Tracktor).

    I can also stream in any audio format. Wave eats up disk space. (EDIT: but it can be compressed to a tenth of the size by converting it to an mp3.)

    Going to start working on it now, so we will soon see.

  11. #161
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    I'd take the opposite side of that bet. I think that if you send a TRIM with thousands (or millions) of LBAs, it is going to take a long time. But if you send a single TRIM with contiguous LBAs, say, 2048 - 40,000,000, that it will not take so long.
    Both would be directly related to the # of LBAs.
    In my case the 1.5 second delay was caused by deleting one single file (a 1.5GB file).
    Now how would that translate to a single 15GB file? Would it be 15 seconds?
    The amount of data deleted by Ao1 is approximately 15GB.
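    If the hang really does scale linearly with the trimmed size, as speculated above, the expected delay is a simple ratio (an unverified assumption, sketched here only for illustration):

```python
# Linear-scaling guess: 1.5 s to TRIM a 1.5 GB file => ~1 s per GB.
OBSERVED_DELAY_S = 1.5
OBSERVED_SIZE_GB = 1.5

def estimated_trim_delay(size_gb):
    """Extrapolate TRIM hang time, assuming it is proportional to file size."""
    return size_gb * OBSERVED_DELAY_S / OBSERVED_SIZE_GB

estimated_trim_delay(15)  # ~15 s for a single 15 GB file, under this model
```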

    I don't know, hIOmon lists the LBA range for TRIMs but I don't think it would for thousands of files.

  12. #162
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @alfaunits

    Here is a sample testfile.
    Attached Files

  13. #163
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    ...It looks very much like a TRIM operation is causing the "hang". The hang duration coincides with the ~5 seconds recorded against "ResponseTime_Max_Control", which I believe is TRIM related. (Hopefully overthere can confirm.)...
    Ao1 is correct regarding the "ResponseTime_Max_Control" shown within the hIOmon WMI Browser reflecting the maximum response time observed by hIOmon for a "TRIM" command (technically, a "Device Control" I/O operation that specified a "Manage Data Set Attributes (MDSA)" request for a "TRIM" action).

    When the hIOmon "SSD I/O Performance Analysis Add-On" script configuration option is used, the hIOmon software is specifically and automatically configured to monitor control I/O operations for the specified physical volume/device(s); read and write I/O operations can also optionally be monitored by hIOmon.

    Moreover, this monitoring of control I/O operations by hIOmon is limited to "MDSA" requests (consequently, other control I/O operations such as Flush Buffers are explicitly ignored by hIOmon).

    So overall in this case, the various "control I/O operation" metrics captured by hIOmon reflect TRIM-related control I/O operations only. Similarly, the "control I/O operation" metrics shown within the hIOmon Presentation Client displays provided by Ao1 also reflect only TRIM-related control I/O operations.

  14. #164
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    ^ this makes things very easy

    Here are some results from three delete operations. Nothing else is running in the background.

    What I notice: TRIM only occurs when the file is deleted from the Recycle Bin. In all cases there is a delay of a few seconds before the TRIM command is executed.

    This tells me that when running the loop, as the new loop starts it is stalled by the TRIM command being executed a couple of seconds after the delete has occurred.

    • File folder = 612MB 179 files in 4 folders
    • AVI = 697MB
    • File folder = 6.83GB 635 files in 96 folders
    Attached Images
    • File Type: png 1.png (4.7 KB, 880 views)
    • File Type: png 2.png (4.3 KB, 877 views)
    • File Type: png 3.png (4.5 KB, 878 views)

  15. #165
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    EDIT: Anvil I'm guessing all the temp files are deleted at the same time, i.e at the end of a loop?
    Yes, the files are deleted one by one.
    The writing doesn't re-start until all the files are deleted.

  16. #166
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    ...hIOmon lists the LBA range for TRIMs but I don't think it would for thousands of files.
    hIOmon can collect I/O operation metrics that are automatically summarized as well as an I/O operation trace. Both of these options can be used separately (with no dependence upon each other) or concurrently.

    Some of the "summarized" TRIM-related metrics captured by hIOmon are shown within Ao1's prior post #84 using the hIOmon WMI Browser.

    Similarly, Ao1's post #125 shows a snippet from a hIOmon Presentation Client display that includes several control I/O operation metrics which are TRIM-related.

    The hIOmon Presentation Client can be configured to display additional TRIM-related metrics as shown in Anvil's post #148 post within the hIOmon thread. A brief description of these metrics is provided within the subsequent post #149.

    Of course, these are all displays of the "summary" metrics, which are collected typically upon some periodic basis and obviously represent an overall summary of the I/O operation activity observed by hIOmon during that time period (and overall).

    hIOmon can also be configured so that it captures an I/O operation trace of the "TRIM" control I/O operations. In this case, the captured I/O operation information will include the "Data Set Ranges (DSR)" specified by each individual TRIM control I/O operation.

    A DSR identifies the starting offset (i.e., essentially the starting block/sector address for the particular set of blocks/sectors) along with the overall combined length of these blocks. A single TRIM control I/O operation can specify one or more DSRs.

    So this technique can be used to explicitly identify the particular LBAs that have been requested by the TRIM control I/O operation(s) to be "trimmed".
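    The DSR layout described above maps onto the ATA DATA SET MANAGEMENT (TRIM) range entry: a 64-bit value packing a 48-bit starting LBA and a 16-bit sector count. That encoding is also why fragmentation matters for TRIM cost: a contiguous file needs only a handful of entries, while thousands of scattered files need one entry per fragment. A rough sketch (helper names are made up; field widths per the ATA spec):

```python
LBA_BITS = 48
COUNT_BITS = 16
MAX_COUNT = (1 << COUNT_BITS) - 1   # 65,535 sectors per range entry

def pack_dsr(lba, count):
    """Pack one TRIM range entry: 48-bit starting LBA, 16-bit sector count."""
    assert 0 <= lba < (1 << LBA_BITS) and 0 < count <= MAX_COUNT
    return (count << LBA_BITS) | lba

def unpack_dsr(entry):
    """Recover (starting LBA, sector count) from a packed range entry."""
    return entry & ((1 << LBA_BITS) - 1), entry >> LBA_BITS

def entries_needed(total_sectors):
    """Range entries required to trim one fully contiguous run of sectors."""
    return -(-total_sectors // MAX_COUNT)   # ceiling division

# A contiguous 6 GB file in 512-byte sectors:
sectors = 6 * 1024**3 // 512   # 12,582,912 sectors
entries_needed(sectors)        # 193 entries cover the whole file
```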

  17. #167
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    To compare here is a delete using the same 6GB file on an X25-M 160GB drive.

    The TRIM command execution is also delayed by a couple of seconds after the file is deleted.
    Attached Images
    Last edited by Ao1; 05-23-2011 at 09:50 AM.

  18. #168
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    So, is your data saying that TRIMming a 6GB file takes 0.02sec on an X25-M, and 5.9sec on a Vertex 2?

  19. #169
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    johnw

    That sounds about right.

    Intel does nothing except "releasing" the LBAs when "trimming" the data; the SF controller apparently does a lot more.

    I'll try the same on one of the V3's I've got; TRIM seems to behave differently on the SF-2XXX controllers.

    --

    edit:
    @overthere

    So what would deleting, say, 4000 files look like if the LBAs weren't contiguous, let's say there were 500 ranges?
    Last edited by Anvil; 05-23-2011 at 09:28 AM.

  20. #170
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by One_Hertz View Post
    24TB. 88.

    Ok this workload is much too light. All we are doing is sequentially writing to the SSD and slowly wearing out the NAND. 2TB per percent that I am seeing right now would mean 200TB to reach 1, or 5000 cycles on each NAND cell, which happens to exactly be the spec.

    Anvil, how about adding some random writes, meaning, making changes within the generated files?
    But won't the cache on your 320 turn the writes into sequential? Maybe that is why the F3 appeared so fast, as it has 32MB of onboard cache.

  21. #171
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    John, I tried streaming audio. Even if I use a Wave format (uncompressed) I can only get 1MB/s. Tracktor has a buffer, so unless a large file was deleted whilst recording it would be unlikely to have an effect.

    That said it is now clear that the SSD does not schedule TRIM during a low activity period. Whatever is running when it executes will become unresponsive for however long the TRIM command takes to execute. If you work with large files that might become a problem.

  22. #172
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 4
    Wear out 99%
    12TB

  23. #173
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    That said it is now clear that the SSD does not schedule TRIM during a low activity period. Whatever is running when it executes will become unresponsive for however long the TRIM command takes to execute. If you work with large files that might become a problem.
    With the Vertex 2. On the X25-M, your data showed only a 0.02sec time for the TRIM, right? So probably not an issue with an Intel SSD.

  24. #174
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    I wonder if the Vertex2 is taking the wear out value from the 'best' of the NAND? That is, SF wear leveling may be ineffective with this usage scenario and SandForce (or OCZ's firmware tweaks) may be taking the value from the NAND with the least wear.

    Intel seems to be doing it with average wear (as was said, 1% per 2TB is right in line with 5000 write/erase cycles), or they have really effective wear leveling (maybe both?).

  25. #175
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by johnw View Post
    With the Vertex 2. On the X25-M, your data showed only a 0.02sec time for the TRIM, right? So probably not an issue with an Intel SSD.
    Not with the X25-M, but maybe the 320 is different.

    EDIT: with the SF drive I think there is a clue as to what happens when TRIM is executed, in that it's the same for compressed or uncompressed data. I'm going to guess it's mostly due to the processor on the SSD, rather than the actual delete operation.
    Last edited by Ao1; 05-23-2011 at 10:27 AM.

