
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #201
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    24.41TB
4 reallocated sectors (and is it sectors, or is it really a page?)
    Media Wearout is at 85

And FFFF (E2-E4) returned sometime last night; I'll have to run smartmontools to clear it up.

    24_41_tb_hw.PNG

  2. #202
    Xtreme Monster
    Join Date
    May 2006
    Location
    United Kingdom
    Posts
    2,182
My drive matched the wear-out level of Anvil's drive. One_Hertz's drive has written 5TB more at no wear-out cost as it stands, and Ao1's drive has surprised me by wearing out rapidly after a point. The SandForce drive seems interesting to look at.

  3. #203
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    SSDLife reassesses the situation.

    Delta between most-worn and least-worn Flash blocks: 7
    Approximate SSD life Remaining: 96%
Number of bytes written to SSD: 18,432 GB

  4. #204
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Metroid View Post
My drive matched the wear-out level of Anvil's drive. One_Hertz's drive has written 5TB more at no wear-out cost as it stands, and Ao1's drive has surprised me by wearing out rapidly after a point. The SandForce drive seems interesting to look at.
    Don't forget that the SF started off writing highly compressible data. 4TB or 5TB worth.

    It will be interesting to see how linear the wear is with uncompressible data.

Is wear levelling with SF 100% dependent on how compressible the data is?

Intel drives don't have that advantage, so they must rely on different techniques. The fact that the 320 is lasting longer (so far) than the X25-V, with NAND that has ~40% fewer PE cycles, is quite remarkable. It is able to write faster and last longer with fewer PE cycles.

Assuming that the only wear levelling technique is compression, the SF drive should start to rapidly deteriorate, but we will soon see.
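To put rough numbers on that compression advantage, here is a minimal sketch of the arithmetic; the `nand_writes_tb` helper and the 0.5 compression ratio are illustrative assumptions, not SandForce figures:

```python
# Back-of-envelope model: a compressing controller writes less to NAND
# than the host sends it. All ratios here are assumed for illustration.

def nand_writes_tb(host_writes_tb, compression_ratio):
    # compression_ratio = stored bytes / host bytes (1.0 = incompressible)
    return host_writes_tb * compression_ratio

print(nand_writes_tb(5.0, 0.5))  # 5TB of compressible host writes -> 2.5TB on flash
print(nand_writes_tb(5.0, 1.0))  # incompressible test data gets no discount -> 5.0TB
```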

I've been thinking about why read speeds get throttled with SF drives. My guess is that the channel that restricts write speeds deals with two-way traffic. You can't slow down writes without slowing down reads at the same time.

    "If" that is the case it is not that sophisticated, as there should be no reason to slow down read speeds.

    Throttling read speeds and the poor implementation of TRIM seem to be the Achilles heel of an otherwise great technology.

    It would be interesting to see what Intel could do with the SF controller.

  5. #205
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Quote Originally Posted by Ao1 View Post
    It would be interesting to see what Intel could do with the SF controller.
From the leaked Intel SSD roadmap slides we can expect to see an Intel SSD with an SF controller pretty soon (Q4 2011)!

    http://thessdreview.com/latest-buzz/...ce-capacities/

  6. #206
    Xtreme Monster
    Join Date
    May 2006
    Location
    United Kingdom
    Posts
    2,182
    Quote Originally Posted by Ao1 View Post
Is wear levelling with SF 100% dependent on how compressible the data is?
    Unfortunately it is looking that way.

Assuming that the only wear levelling technique is compression, the SF drive should start to rapidly deteriorate, but we will soon see.
I want to be wrong on this too, but the deviation will likely not be too far out.

I've been thinking about why read speeds get throttled with SF drives. My guess is that the channel that restricts write speeds deals with two-way traffic. You can't slow down writes without slowing down reads at the same time.

    "If" that is the case it is not that sophisticated, as there should be no reason to slow down read speeds.

    Throttling read speeds and the poor implementation of TRIM seem to be the Achilles heel of an otherwise great technology.
The constraint is quite understandable if the data is compressed; otherwise I'd call it a very poor software implementation. I think that even with compressed data read speeds should not have been affected, but in the end the trade-offs and decision making favoured write speeds, and that is what we see. SandForce may have improved the Vertex 3 in this regard. We can't be sure unless we test it.

    It would be interesting to see what Intel could do with the SF controller.
Not sure if Intel would make the same trade-off.

  7. #207
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
The fact that the 320 is lasting longer (so far) than the X25-V, with NAND that has ~40% fewer PE cycles, is quite remarkable. It is able to write faster and last longer with fewer PE cycles.
All 25nm devices will last longer than their previous-gen counterparts, and the next generation will go even further. When I was speaking with the head of R&D for LSI we were discussing flash in general (it started with a WarpDrive discussion). Right now his stance is that the market doesn't need to be doing any more shrinks; there are so many gains to be made with the current generation of NAND. The place where the increases are by far the greatest is controller technology. Coupled with current-gen NAND, you could be looking at 50 percent more durability from the current generation of controllers. Kind of like the quad-core vs six-core debate: why keep going further if you aren't even using what you have?

It's no surprise that these controllers with 25nm NAND are more durable than the previous generation; that is the whole purpose of this evolution of SSDs. A lot of people think the sky is falling with lower PE ratings, but the situation is in fact heading in the exact opposite direction. Endurance, durability and reliability are going up at a phenomenal rate. Of course that is what has been said all along, but there are always the Chicken Littles LOL.
It's no coincidence that Intel is going with 25nm MLC for its next-gen enterprise series. Not only has performance jumped from the controller usage, but MLC in general is being managed in a much better way, especially with revolutionary technology such as ClearNAND.
I will tell you this though: some industry insiders frown upon the transition from SLC to MLC, regardless of the maturation of the technology.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  8. #208
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Metroid View Post
My drive matched the wear-out level of Anvil's drive. One_Hertz's drive has written 5TB more at no wear-out cost as it stands, and Ao1's drive has surprised me by wearing out rapidly after a point. The SandForce drive seems interesting to look at.
And your drive has been running 24/7 for about a year?
Mine has been running for 350± hours; it doesn't look like that matters, even though this is a test and yours has been running "for real".

    Quote Originally Posted by Ao1 View Post
Is wear levelling with SF 100% dependent on how compressible the data is?
    ...
Assuming that the only wear levelling technique is compression, the SF drive should start to rapidly deteriorate, but we will soon see.
    ...
From what I've read, the SF controller doesn't tolerate high deltas between the least and most worn "flash blocks", meaning that it starts shuffling static data when needed. I don't know about other controllers; there may be some static wear levelling, but we'll probably never know.
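To illustrate that delta-triggered shuffling, here is a minimal sketch; the threshold value, the `pick_relocation` helper and the block model are assumptions for illustration, not SF firmware internals:

```python
# Sketch of delta-triggered static wear levelling: if the spread between
# the most- and least-worn blocks grows too large, move static (cold)
# data onto a worn block so the fresher block can absorb new writes.

DELTA_THRESHOLD = 8  # assumed maximum tolerated erase-count spread

def pick_relocation(erase_counts):
    # erase_counts: dict mapping block id -> erase count
    coldest = min(erase_counts, key=erase_counts.get)
    hottest = max(erase_counts, key=erase_counts.get)
    if erase_counts[hottest] - erase_counts[coldest] > DELTA_THRESHOLD:
        return coldest, hottest  # move static data from coldest to hottest
    return None

print(pick_relocation({0: 110, 1: 100, 2: 103}))  # spread 10 -> (1, 0)
print(pick_relocation({0: 104, 1: 100, 2: 103}))  # spread 4 -> None
```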

Most data in real life is compressible, at least on an OS drive, so one can easily add 30% or more to the final result of this test (as long as it stays at incompressible data).
Testing with incompressible data is of course important, but it leaves a lot of questions to be answered.

As for Intel using the SF controller, at first I thought it was a bit far-fetched, but when the Marvell controller popped up in the 510 I didn't know what to think, so, IMHO, it's not impossible.
I do think that for that to happen Intel would probably want to write their own firmware (like Crucial did for the Marvell controller).
There are some "bugs" or side effects or whatever one wants to call them that would never have been tolerated had it been an Intel SSD.
Still, I really enjoy the SF drives; I've had the same level of maintenance with these drives as with other SSDs, nothing more, nothing less.

There have been quite a few fw updates for the SF drives, but personally I've never been left waiting for a fix, that I know of.
There is that TRIM bug that never got fixed on the SF-1XXX series; that's about it.

    @CT
Without the shrinking we would never get to 1TB 2.5" SSDs, and prices would leave most people out in the cold.
There is of course some truth in the idea that things are happening too fast; 20nm NAND was already on the table (at least in the headlines) before 25nm drives were available.

    edit
updated graph in post #1
    Last edited by Anvil; 05-25-2011 at 09:40 AM.

  9. #209
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    @CT
Without the shrinking we would never get to 1TB 2.5" SSDs, and prices would leave most people out in the cold.
There is of course some truth in the idea that things are happening too fast; 20nm NAND was already on the table (at least in the headlines) before 25nm drives were available.
Yes, of course that is THE major driver. I guess we were speaking to performance/endurance more than anything; there is just so much left on the table. TBH the scaling of the controllers on the SSDs themselves is actually not that great; they can't really handle the entire throughput of the NAND at the end of each channel. Of course this will be lessened somewhat with the newer generations, simply because of fewer channels.

EDIT: This does create an interesting situation with certain models of SSD, say the MaxIOPS... a new-gen controller strapped onto 'old' NAND that has a higher PE rating. Since it is optimised and much more efficient (for use with 25nm), shouldn't the MaxIOPS line have some really awesome endurance?
    Last edited by Computurd; 05-25-2011 at 10:44 AM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  10. #210
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    As for Intel using the SF controller,
I'm sure Intel would use their own fw, as they did with the 510's Marvell controller. The time Intel takes to test would mean it would either be an older SF controller or a controller specially developed for Intel.

I've been impressed with the SF drive. I would have liked to play with it a bit before destroying it. For sure though, a bit of Intel finesse would not hurt to slick things up.

    Anyway back on topic, I have emailed you an excel sheet with a few more stats thrown in. Just a thought, no worries if you don't want to use it.

  11. #211
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    They need to keep shrinking to get costs down. The speeds are more than enough for most users. The endurance is more than enough for most users. The price is way too high for most users.

31TB/85% as of 6 hours ago. I just picked up my 320GB MLC ioDrive, so I will be playing with that soon.

  12. #212
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by One_Hertz View Post
I just picked up my 320GB MLC ioDrive, so I will be playing with that soon.

    I guess the price was right

    @Ao1
    Will have a look in an hour or so, preparing something for you right now

  13. #213
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Anvil View Post

    I guess the price was right
    Only $1300USD!

    Are we changing the amount of static data and free space? I am quickly approaching 35TB.

  14. #214
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll send you the new version where random data is part of the loop. (within 10 minutes)
    I'm just making the last few checks.

    We just need to agree on the runtime length (it's configured in ms)
I think that's all we need to do for now; it adds by default a 1GB file for random writes.
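For anyone curious what that random-write phase might look like, here is a minimal sketch; the file name, sizes and loop shape are guesses at the behaviour described, not the app's actual code:

```python
# Sketch of a random-write phase: a 1GB file takes scattered 4K
# overwrites of incompressible data for a runtime configured in ms.
import os, random, time

RANDOM_FILE = "random_target.bin"  # hypothetical file name
FILE_SIZE = 1 << 30                # the default 1GB file for random writes
RUNTIME_MS = 10_000                # runtime length, configured in ms
IO_SIZE = 4096

with open(RANDOM_FILE, "wb") as f:  # pre-create the target file once
    f.truncate(FILE_SIZE)

deadline = time.monotonic() + RUNTIME_MS / 1000.0
with open(RANDOM_FILE, "r+b") as f:
    while time.monotonic() < deadline:
        f.seek(random.randrange(0, FILE_SIZE // IO_SIZE) * IO_SIZE)
        f.write(os.urandom(IO_SIZE))  # incompressible payload
    f.flush()
    os.fsync(f.fileno())
```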

  15. #215
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    25.72TB host writes

    No changes to MWI

    25_72_tb_hostwrites.PNG

    edit:

    + chart updated...
    Last edited by Anvil; 05-25-2011 at 02:10 PM.

  16. #216
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 8
    Approximate SSD life Remaining: 95%
    Number of bytes written to SSD: 21,376 GB

  17. #217
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    27.06TB Host Writes
    Media Wear Out 84%

    Re-allocated sectors unchanged (4)

  18. #218
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    35TB. 82%. Still 1 reallocated sector. I will switch the software to the new version this evening.

  19. #219
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 9
    Approximate SSD life Remaining: 94%
    Number of bytes written to SSD: 22,272 GB

EDIT: Guys, are you making the switch at 35TB? What settings?
    Last edited by Ao1; 05-26-2011 at 05:16 AM.

  20. #220
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
Just got the time to look at the file randomness. Anvil, can you run a CPU benchmark of whatever generates that random file?
That's the reason I asked for it; it looks like a hash of random bits rather than random bits.

The reason I ask is that if it can't do something like 500MB/s for generation, and it's done in the same thread as the writing, you are essentially doing mostly sequential transfers (the overall "write" speed would be an indicator).
Regardless of how random the file is (compression-wise) or your internal file I/O, it seems the overall I/O is mostly sequential.

Though I am not sure I understand any more what you guys are trying to achieve; with such small random I/O, it is hardly real-world.

  21. #221
    Xtreme Enthusiast
    Join Date
    May 2004
    Location
    AB. Canada
    Posts
    827
Actually, to save on CPU usage so that there is no lag, all random data should be pregenerated.


    "Great spirits have always encountered violent opposition from mediocre minds" - (Einstein)

  22. #222
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by alfaunits View Post
    Regardless of how random the file is (compression-wise) or your internal file I/O, it seems the overall I/O is mostly sequential.
    In what regard?
    Not sure that I follow.

    Quote Originally Posted by alfaunits View Post
Though I am not sure I understand any more what you guys are trying to achieve; with such small random I/O, it is hardly real-world.
There isn't much random IO in real life, but there is some, and that's why we are adding a small portion of random I/O (on top of small-file I/O).


    Quote Originally Posted by MadHacker View Post
Actually, to save on CPU usage so that there is no lag, all random data should be pregenerated.
I have 3 alternating buffers, so it's generally not an issue.
    1.5GB/s was what I measured using a small array on the Areca 1880.
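A minimal sketch of that alternating-buffer scheme, assuming a generator thread feeding a writer through recycled buffers; the buffer size, file name and the `os.urandom` stand-in are illustrative assumptions:

```python
# Generator thread fills buffers while the writer drains them, so data
# generation overlaps the disk write instead of serialising with it.
import os, queue, threading

BUF_SIZE = 4 << 20  # assumed 4MB per buffer
NUM_BUFFERS = 3     # the "3 alternating buffers"

free_q = queue.Queue()
full_q = queue.Queue()
for _ in range(NUM_BUFFERS):
    free_q.put(bytearray(BUF_SIZE))

def generator(n_buffers):
    for _ in range(n_buffers):
        buf = free_q.get()
        buf[:] = os.urandom(BUF_SIZE)  # stand-in for the random generator
        full_q.put(buf)
    full_q.put(None)                   # end-of-stream marker

def writer(path):
    with open(path, "wb") as f:
        while True:
            buf = full_q.get()
            if buf is None:
                break
            f.write(buf)
            free_q.put(buf)            # recycle the buffer

t = threading.Thread(target=generator, args=(64,))
t.start()
writer("random_data.bin")              # writes 64 x 4MB = 256MB
t.join()
```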

  23. #223
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by Anvil View Post
    In what regard?
    Not sure that I follow.
Well, let's say (just for the explanation) that your random generator can produce 100MB/s, and the X25-M can also do 100MB/s sequential.
As a result, since (if I understood you correctly) generation and writing are done in the same thread, without back-buffering, the resulting write speed would be 50MB/s.
If you get that overall speed, then the X25-M is writing sequential data and not random I/O.
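That arithmetic in a couple of lines (a sketch; the 100MB/s figures are the hypothetical above):

```python
# If generating and writing happen back-to-back in one thread, their
# times add, so the combined throughput is the harmonic combination.
def serialised_throughput(gen_mb_s, write_mb_s):
    return 1.0 / (1.0 / gen_mb_s + 1.0 / write_mb_s)

print(serialised_throughput(100, 100))  # -> 50.0 MB/s, as in the example
print(serialised_throughput(500, 100))  # -> ~83.3 MB/s with a faster generator
```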

    I have 3 alternating buffers so It's generally not an issue.
    1.5GB/s was what I measured using a small array on the Areca 1880.
It looks like you generate the random data in a separate thread then, so I misunderstood. I don't see how the entire system (the CPU/memory/PCIe even) would be able to sustain those speeds.

  24. #224
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
What happens past the write instruction issued at the device level? Will the SSD controller not try to fill a complete block whenever possible, to minimise read/relocate/write penalties?

Presumably it would try to avoid 4K writes being scattered across the span of the drive. Isn't that what Intel refers to as write combining?

Would it also try to rotate the locations of writes for wear levelling?

Just asking, as I don't really know

  25. #225
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @alfaunits
Whether the random data generator is producing 10MB/s or 100MB/s has nothing to do with whether the writes are random or not.

The Areca can sustain those speeds until the cache is filled, or for as long as the array can keep up; I was using a small array incapable of coping with that speed, and thus it wasn't sustained (it lasted for a few seconds).
The easy test is just to generate the random data without writing it; that would show the potential of the random "generator".
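A quick way to run that generator-only test (a sketch, assuming `os.urandom` as a stand-in for the generator):

```python
# Time the random generator alone, with no disk writes involved.
import os, time

CHUNK = 4 << 20   # 4MB per call, assumed
ROUNDS = 64       # 256MB total

start = time.perf_counter()
for _ in range(ROUNDS):
    os.urandom(CHUNK)
elapsed = time.perf_counter() - start
print("generator: %.1f MB/s" % (ROUNDS * CHUNK / (1 << 20) / elapsed))
```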

Anyway, this is off topic; a separate thread would be needed and I haven't got the time right now.

    @Ao1
We can only presume, but that's the general idea of write combining.
If the file spanned the whole drive everything would be random, per se; what we have been doing so far is just producing writes (both small and large files) but no random IO within those files.
Random writes within a file overwrite data instead of just writing a new file; there is a huge difference between those two scenarios.
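A minimal sketch of the write-combining idea as presumed here; the block geometry, the `WriteCombiner` class and the mapping-table shape are illustrative assumptions, not any vendor's actual algorithm:

```python
# Scattered 4K host writes are buffered and programmed as one full NAND
# block, with a mapping table remembering where each LBA landed.
IO_SIZE = 4096
PAGES_PER_BLOCK = 64  # assumed pages per NAND block

class WriteCombiner:
    def __init__(self, program_block):
        self.program_block = program_block  # callable that programs one block
        self.pending = []                   # buffered (lba, data) pairs
        self.mapping = {}                   # lba -> (block_no, page_no)
        self.next_block = 0

    def write_4k(self, lba, data):
        self.pending.append((lba, data))
        if len(self.pending) == PAGES_PER_BLOCK:
            self.flush()

    def flush(self):
        # One sequential block program instead of 64 scattered ones.
        for page_no, (lba, _) in enumerate(self.pending):
            self.mapping[lba] = (self.next_block, page_no)
        self.program_block(b"".join(d for _, d in self.pending))
        self.next_block += 1
        self.pending.clear()
```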

    edit
    Link to Anand
