
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #176
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    Not with the X25-M, but maybe the 320 is different.
    Anything is possible, but I think the X25-M and 320 are very similar in most respects, including TRIM implementation.

  2. #177
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    @overthere

    So how would deleting, say, 4000 files look if the LBAs weren't contiguous, let's say there were 500 ranges?
    That's a good question.

    Unfortunately I do not know what particular algorithm(s) the filesystem would use under such circumstances. I could speculate, but I am loath to do so, at least in general terms.

    There are, however, some empirical metrics captured by hIOmon that might provide some general clues - which, BTW, is intentional by design.

    For example, if you take a look at Ao1's post #69, you will see a value of 1 for the "Control_MDSA_Trim_DSR_Count_IOP_Min" metric. This metric indicates the observed minimum combined total number of DSRs (Data Set Ranges) specified by a single TRIM control I/O operation.

    On the other hand, the "Control_MDSA_Trim_DSR_Count_IOP_Max" metric has a value of 62, which indicates that the maximum total number of DSRs specified by a single TRIM control I/O operation was 62 DSRs. (I seem to recall hearing that the ATA spec notes some option (?) about limiting the count to 64 or so, but I'm not sure about this).

    From an overall perspective, the "Control_MDSA_Trim_DSR_Count" metric indicates that there was a combined total of 274 102 DSRs specified by the TRIM control I/O operations so far (the "IOPcount_Control_MDSA_Trim" metric - not shown - would indicate the actual total number of observed TRIM control I/O operations associated with this grand total of 274 102 DSRs).

    One other quick note in regards to a single TRIM control I/O operation: The "Control_MDSA_Trim_DSR_Length_Total_IOP_Min" metric indicates the minimum total combined lengths (in bytes) of the DSRs specified by a single TRIM control I/O operation. The value shown in Ao1's post is 430 231 552 (which again reflects the minimum total number of bytes specified by a single TRIM control I/O operation).

    The "Control_MDSA_Trim_DSR_Length_Total_IOP_Max" metric (not shown in Ao1's post) indicates the maximum total number of bytes specified by a single TRIM control I/O operation.

  3. #178
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @john
    It should be very similar, but there are a few new features in the 320 series that could make a small difference.

    Snipped from TechReport
    "The Intel 320 Series protects against any data loss that might occur in this situation by employing "surplus data arrays." Also referred to as XOR—the logical operation often used to calculate parity bits for RAID arrays—this redundancy scheme is capable of recovering data from a failed bit, block, or even an entire failed die. Intel describes XOR as a NAND-level RAID 4, making it sound rather similar to the RAISE technology employed by SandForce controllers.

    RAISE is described as more of a RAID 5 than a RAID 4, though. SandForce says RAISE spreads redundancy data across the entire drive, and that the storage capacity lost amounts to the capacity of one flash die.
    Intel isn't specific about the amount of storage consumed by XOR, but it does say the redundancy data is rolled into the 7% of total flash capacity reserved for use by the controller. According to Intel, XOR is governed by a mix of hardware and firmware that doesn't introduce any performance-sapping overhead. The only time it'll slow the drive down is when data is being recovered in the event of a flash failure."

    So, redundancy without overhead?
    Last edited by Anvil; 05-23-2011 at 11:58 AM.

  4. #179
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by overthere View Post
    That's a good question.
    ...
    On the other hand, the "Control_MDSA_Trim_DSR_Count_IOP_Max" metric has a value of 62, which indicates that the maximum total number of DSRs specified by a single TRIM control I/O operation was 62 DSRs. (I seem to recall hearing that the ATA spec notes some option (?) about limiting the count to 64 or so, but I'm not sure about this).
    ...
    I've read something that resembles that limitation; there were/are some issues with the default Windows driver's behaviour that did/does not follow the specification for TRIM commands. Intel's driver does.

    I'll have a look at the links.

    I'll dive back into hIOmon as soon as possible; time to check what's new with the SF-2XXX controllers and how the new m4 handles TRIM.

  5. #180
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    So, redundancy without overhead?
    Sure. Like RAID 1: as long as you can write in parallel, it does not take any longer to write the data multiple times. With RAID 4 you have to compute the parity, but that should be much quicker than the bottleneck of writing to flash, so again, redundancy without slowing down writes.
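    As a toy illustration of that point (mine, not Intel's or SandForce's actual implementation), here is the kind of XOR parity being described: computing the parity block is one cheap XOR pass, and any single missing block can be rebuilt from the survivors. The die contents below are made up.

    ```python
    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks - RAID 4 style parity."""
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    # Three hypothetical "dies" worth of data plus one parity block
    d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
    parity = xor_blocks([d0, d1, d2])

    # If die 1 fails, XOR the surviving dies with the parity to rebuild it
    assert xor_blocks([d0, d2, parity]) == d1
    ```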

  6. #181
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is some data for a single TRIM loop. 11 seconds!
    [Attached screenshot: Untitled.png]

  7. #182
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    And here is a shot after a few more loops. The offsets are ~the same.
    [Attached screenshot: Untitled.png]

  8. #183
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    FYI, the values shown in Ao1's hIOmon WMI Browser screenshots above for the "Control_MDSA_Trim_DSR_Length_Total_IOP_Min" and the "Control_MDSA_Trim_DSR_Length_Total_IOP_Max" metrics are transposed/reversed.

    That is, the "Control_MDSA_Trim_DSR_Length_Total_IOP_Min" value shown is actually the "Control_MDSA_Trim_DSR_Length_Total_IOP_Max" value and the "Control_MDSA_Trim_DSR_Length_Total_IOP_Max" value is actually the value shown for the "Control_MDSA_Trim_DSR_Length_Total_IOP_Min" metric.

    In other words, for example, the "Control_MDSA_Trim_DSR_Length_Total_IOP_Min" value in post #182 should be shown as 18 588 976 and the "Control_MDSA_Trim_DSR_Length_Total_IOP_Max" value is actually 14 539 702 272.

    This is due to a reporting defect in the hIOmon support for the hIOmon WMI Browser. The other means for reporting these metrics (e.g., the hIOmon CLI and the hIOmon Presentation Client) display these values correctly.

    This reporting error has already been corrected in the upcoming new release of the hIOmon software. Sorry for any confusion.

  9. #184
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    maybe the performance would drop as the HDD became fragmented, but with HDD I was getting 90MB/s. The X25-M was 67MB/s and the 40GB drives are coming in at 30 to 45MB/s.
    Whoa... at what file size? Under normal usage? And what is the latency of that access, 8-9 ms?


    12TB.....Wear out 99%.
    Sorry, but this leads me to believe that something is wrong with this testing regimen on this particular drive, or the SMART data is drastically wrong. There is no way it is truly that durable; they would market the living crap out of that. 12x+ the endurance of any other drive?
    Yeah, right. Something is amiss.


    EDIT: with the SF drive I think there is a clue as to what happens when TRIM is executed, in that it takes the same time for compressed or uncompressed data. I'm going to guess it's mostly down to the processor on the SSD rather than the actual delete operation.
    A low-powered processor deleting compressed data - does it have to uncompress, process, and re-compress?
    Last edited by Computurd; 05-23-2011 at 09:17 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  10. #185
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 5
    Approximate SSD life Remaining: 98%
    Number of bytes written to SSD: 14080 GB

    Seems wear is not linear.

  11. #186
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Seems wear is not linear.
    Interesting. 12TB for a change of one percent, then 2TB more for a change of one more percent. I wonder how many TB for the next percentage point? Weirdness.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  12. #187
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The wear-out indicator isn't an exact science; my drive started at 99% with 120GB of host writes and looks to be wearing faster than the 25nm NAND, when one would think it would be the opposite.

    Ao1's drive also has about 4TB of easily compressible data written to it; the MWI (SSD life) started dropping after a FW upgrade was performed (which could be a coincidence).

    It's much too early to conclude anything yet.

  13. #188
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Quote Originally Posted by Anvil View Post
    It's much too early to conclude anything yet.
    Indeed. I just cannot wait to see how much abuse these drives will take!

    Thanks!

  14. #189
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Anvil View Post
    I can do that, although a lot of these writes are actually "random".
    How are they random? Are you somehow telling the filesystem which clusters to place those files into? I didn't know you could do that.

    26TB, 86%.

  15. #190
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    It's not random in the IOmeter kind of way. I'm working on it; you can expect something this week.

    Starting to get re-allocations, up from 2 to 4 in the last 24 hours, and MWI is at 86.

    22_51tb_host_writes.PNG

  16. #191
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    How about reducing the "Min GB free" setting? Would that not increase the span being written to, thereby inducing more wear? It might also do that without slowing down writes in the process.

    EDIT:

    Delta between most-worn and least-worn Flash blocks: 6
    Approximate SSD life Remaining: 97%
    Number of bytes written to SSD: 15,936 GB
    Last edited by Ao1; 05-24-2011 at 10:02 AM.

  17. #192
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I suggest we fill it up with more static data (e.g. incompressible data like mp3s, mpegs or jpgs), leaving 22GB of free space, and then change the min free space setting to 9 or 10GB.

    Whatever changes we make, seq. speed won't change; the only way to push the drive harder is to include more random writes.
    I don't think I will personally introduce more random IO until 35TB has been written; from then on, introducing e.g. 500MB of random 4K writes per loop would probably do it.

    I'm working on it; those things need to be logged and then exported to e.g. Excel.

    Here's the latest status on Media Wearout

    Endurance_XS.png
    Last edited by Anvil; 05-24-2011 at 11:05 AM.

  18. #193
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Maybe we are just getting impatient.

    The main thing for me is that the workload is reasonably representative of what could be expected in real life.

    Increasing the static data will increase the amount of NAND that never gets rewritten and will reduce wear levelling. Writing full-span over the space with no static data at all is also unrealistic.

  19. #194
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by nn_step View Post
    Any questions or comments before I begin the test on May 24, 002011 @ 12noon EST?
    Do we have takeoff?

  20. #195
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm not impatient.
    It's a slow process with my drive, much slower than yours.

    Some more static data wouldn't hurt, though.
    Too many random writes is what's unrealistic; desktop users don't do much random writing "the IOmeter way" at all.
    The small files we are creating are what normal "random" IO looks like; the difference is that we are deleting them over and over again, so the LBAs are all "cleaned" on each loop.

    The random part I'm thinking about including in the loop would never (or very seldom) be deleted, and that would put more strain on the process. Too much would be unrealistic though, so maybe 500MB is a bit too much; I'll have to do some math on it.
    Well, I've already had a brief look at it, and on my drive 500MB per loop would mean about 85-90GB per day of random writes, which is quite a bit.
    If the random write limit were 7.5TB that would mean about 85 days, and I'd expect we've already used quite a bit of that reserve. (A rough check of these numbers is sketched below.)
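    A quick back-of-the-envelope check of the figures above; the loops-per-day count is an assumption on my part, not a measured value:

    ```python
    # Rough check of the random-write estimate above; loops/day is assumed.
    mb_per_loop = 500
    loops_per_day = 175                       # assumed, to land near the quoted 85-90GB/day
    random_gb_per_day = mb_per_loop * loops_per_day / 1000
    random_budget_gb = 7.5 * 1000             # the 7.5TB random-write "reserve" mentioned above
    days_to_exhaust = random_budget_gb / random_gb_per_day

    print(random_gb_per_day)                  # 87.5 GB/day
    print(round(days_to_exhaust))             # ~86 days, close to the ~85 quoted
    ```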

    Make no mistake, this is not going to be done in a few weeks; the site I originally linked to struggled for a year. If we introduce a bit of random writes we might be done within a few months.

    When I reach the 35TB limit (or so) I'll be moving the drive to a different computer (a home server) where it can just sit until it's done; having that extra computer is what makes this possible.

  21. #196
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    During normal use some changes are made within existing files. We need to introduce a bit of this activity IMO...

  22. #197
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Yeah, I'll make it a setting where we can enter any number of MBs we'd like; that would be handy for making adjustments if the random IO part proves to be really effective.

  23. #198
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    According to SSDLife I'm going to be running this for another 9 years.

    To be fair, it has only just started monitoring, so it's probably not yet adjusted to the amount of writes being incurred.
    [Attached screenshot: Untitled.png]

  24. #199
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Intel specs are 20GB per day for 5 years = 36,500GB.
    Assume 3,000GB per day with the endurance app (post #4).
    3,000GB = 150 days at 20GB a day.
    In 1 day the drives are therefore incurring 150 days' worth of writes.
    5 years = 1,825 days / 150 = ~12 days.
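    The same arithmetic, spelled out (spec figures as quoted above; nothing here is measured):

    ```python
    # Reproducing the endurance arithmetic above using the quoted spec figures.
    spec_gb_per_day = 20
    spec_years = 5
    spec_total_gb = spec_gb_per_day * 365 * spec_years    # 36,500 GB rated host writes

    test_gb_per_day = 3000                                # assumed rate of the endurance app
    speedup = test_gb_per_day / spec_gb_per_day           # 150x the spec'd daily writes
    days_to_cover_spec_life = spec_years * 365 / speedup  # ~12.2 days

    print(spec_total_gb, speedup, round(days_to_cover_spec_life, 1))
    ```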

    In theory the chart should look like this.
    [Attached screenshot: Untitled.png]

  25. #200
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    29TB. 85%. I got my first reallocated sector!
