Page 17 of 220
Results 401 to 425 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #401
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by chuchnit View Post
    If you do a fresh format, how much available space does Windows say you have?
    As I've got the OS on it, a format is out of the question. Not sure where you're going with this, though?
    Do you think the available space would shrink?
    If that's what you're getting at, then the spare area would be "consumed" first; as it stands, the spare area is still at 99 out of 100.

    --

    Had a little scare as I moved it back to the AMD rig and it suddenly disappeared; looks like it was just a bad connection, as it's back up again.
    Will move it back to the Intel computer as soon as I've upgraded the cooler.

    76.98TB Host writes
    MWI 57

    Nothing else has changed.
    -
    Hardware:

  2. #402
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    79.09TB Host writes
    MWI 56

    Re-allocated sectors up 1 from 5 to 6
    -
    Hardware:

  3. #403
    Xtreme Enthusiast
    Join Date
    Oct 2005
    Location
    Ottawa, Canada
    Posts
    573
    Quote Originally Posted by One_Hertz View Post
    99TB. 49% as of this morning. It sure is taking a long time to kill this thing. I still don't believe there is any way it will last close to 1PB.

    100TB is 50+ GB per day for 5 years. Kind of funny how many people still do things like putting page files and browser caches on their HDD to minimize writes to the SSD...
    dont u make fun of me fool. im gonna go to ur house and test the physical endurance of that ssd.

    and no, you're not allowed to kick me in the head.
    we going shh around the corner

  4. #404
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    and no, you're not allowed to kick me in the head.
    Actually he might. I think Hertz takes judo (some sort of self-defense) or something; he mentioned it once...
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  5. #405
    Registered User
    Join Date
    Mar 2010
    Posts
    63
    Quote Originally Posted by Anvil View Post
    67TB Host writes
    MWI 63

    Attachment 114967

    edit:

    At 67.01TB MWI changed to 62
    Anvil,

    Have you tried Secure Erase? Will it reduce the E9 Media Wearout Indicator value?

  6. #406
    Xtreme Enthusiast
    Join Date
    Feb 2010
    Posts
    701
    Quote Originally Posted by One_Hertz View Post
    100TB is 50+ GB per day for 5 years. Kind of funny how many people still do things like putting page files and browser caches on their HDD to minimize writes to the SSD...
    Yeah those write paranoid people are truly nutso. *cough* computurd *cough*
    slowpoke:
    mm ascension
    gigabyte x58a-ud7
    980x@4.4ghz (29x152) 1.392 vcore 24/7
    corsair dominator gt 6gb 1824mhz 7-7-7-19
    2xEVGA GTX TITAN
    os: Crucial C300 256GB 3R0 on Intel ICH10R
    storage: samsung 2tb f3
    cooling:
    loop1: mcp350>pa120.4>ek supreme hf
    loop2: mcp355>2xpa120.3>>ek nb/sb
    22x scythe s-flex "F"

  7. #407
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Wonder what would have happened with only small random files.

  8. #408
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    They have increased the percentage of small random files, with no discernible difference.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  9. #409
    Xtreme Enthusiast
    Join Date
    Jul 2008
    Location
    Prestonsburg, KY
    Posts
    545
    Quote Originally Posted by Anvil View Post
    As I've got the OS on it, a format is out of the question. Not sure where you're going with this, though?
    Do you think the available space would shrink?
    If that's what you're getting at, then the spare area would be "consumed" first; as it stands, the spare area is still at 99 out of 100.
    Well, after I made that post I wondered whether the OS would even pick up any lost usable capacity, since the cells should still be readable. My question, though, was how much storage space has been lost during this test? I mean, technically you guys have lost some NAND cells by now, right?


  10. #410
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by chuchnit View Post
    I mean technically you guys have lost some nand cells by now, right?
    Look at post #402:

    "Re-allocated sectors up 1 from 5 to 6 "

    It is not clear what unit of flash this represents. I doubt it is really a 512B sector; I think it would be either a 4KB page or possibly a full erase block (512KB?). Regardless, there should be plenty of spare flash, since Intel SSDs normally keep about 7% of the stated capacity in reserve.

  11. #411
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    81.06TB Host writes
    MWI 55

    --

    The smallest unit of space on an SSD is the page, so one "sector" can't be smaller than the page size. Even if one "sector" represents a full erase block, it would take quite a few of them to make an impact (2,048 per GiB if the erase block is 512KB).
    -
    Hardware:

  12. #412
    Xtreme Enthusiast
    Join Date
    Jul 2008
    Location
    Prestonsburg, KY
    Posts
    545
    Quote Originally Posted by johnw View Post
    Look at post #402:

    "Re-allocated sectors up 1 from 5 to 6 "

    It is not clear what unit of flash this represents. I doubt it is really a 512B sector; I think it would be either a 4KB page or possibly a full erase block (512KB?). Regardless, there should be plenty of spare flash, since Intel SSDs normally keep about 7% of the stated capacity in reserve.
    Gotcha. I'm no storage guru here; I just have a bag of popcorn and F5 to watch the results in this thread. Haven't Anvil and One_Hertz exceeded the expected life of these 40GB SSDs? Please correct me if I'm wrong. If so, wouldn't you expect a large amount of the NAND to be bad?

    Quote Originally Posted by Anvil View Post
    81.06TB Host writes
    MWI 55

    --

    The smallest unit of space on an SSD is the page, so one "sector" can't be smaller than the page size. Even if one "sector" represents a full erase block, it would take quite a few of them to make an impact (2,048 per GiB if the erase block is 512KB).
    So we don't know what unit the SMART data means by a "sector"?


  13. #413
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post
    actually he might, i think Hertz takes judo (some sort of self defense) or something, he mentioned it once....
    boxing + muay thai + mma

    107TB. 45%. My reallocated sectors are still at 5.

  14. #414
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Quote Originally Posted by chuchnit View Post
    Gotcha. I'm no storage guru here; I just have a bag of popcorn and F5 to watch the results in this thread. Haven't Anvil and One_Hertz exceeded the expected life of these 40GB SSDs? Please correct me if I'm wrong. If so, wouldn't you expect a large amount of the NAND to be bad?
    Assuming perfect wear leveling, such that every page/block is written once for every 40GB of host writes, and a write amplification of 1.0, one program/erase cycle across the whole drive represents 40GB. So for a 25nm NAND flash 40GB drive with a theoretical P/E count of 3,000, that is 3,000 x 40GB ≈ 117TB of data. For a 34nm 40GB drive it would be 5,000 x 40GB ≈ 195TB, and for a 50nm 40GB drive, 10,000 x 40GB ≈ 390TB.
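The arithmetic above is easy to script as a sanity check. A minimal sketch (the `endurance` helper is hypothetical; it assumes a 40GB drive, perfect wear leveling, and 1.0 write amplification, and note the 117/195/390 figures come from dividing GB by 1024 rather than 1000):

```python
# Theoretical host writes before the rated P/E limit is reached, assuming
# perfect wear leveling and a given write amplification.
def endurance(capacity_gb, pe_cycles, write_amp=1.0):
    total_gb = capacity_gb * pe_cycles / write_amp
    # Return decimal TB, plus the GB/1024 figure used in the post.
    return total_gb / 1000, total_gb / 1024

for node, pe in [("25nm", 3000), ("34nm", 5000), ("50nm", 10000)]:
    tb, tb_1024 = endurance(40, pe)
    print(f"{node}: {tb:.0f} TB decimal ({tb_1024:.1f} with /1024)")
```

At 1.0 WA the decimal figures are 120/200/400 TB; dividing by 1024 instead reproduces the 117/195/390 numbers quoted above.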

  15. #415
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    I'm watching this thread with great interest. I suspect people will be linking to these tests for a long time in any theoretical discussion of NAND flash SSDs. Since none of the drives has reached its P/E cycle limit yet, the endurance portion of the testing is still to come, but I'm very much looking forward to what will happen.

    So far the most I have learned from this thread is about SandForce-OCZ life throttling. Thanks Ao1! I was worried that life throttling would kick in well before 30TB of initial writes. It would be nice to get more testing to explore the parameters of the throttling. So, around 30TB in 7 days for a fresh drive. If we could work out the exact parameters, we could monitor our writes to stay just below the throttle curve. The algorithms do seem complicated, though, since Ao1 could write as much as he wanted for the first 30TB; it seems like the parameters change after that. Presumably this boundary would be higher on larger drives. I'd like to see more testing of the parameters of throttling after the boundary. Fascinating stuff.

    I so want to hack the Sandforce firmware and disable the throttling or just reset everything so it thinks it's a new drive again. Or maybe set the clock forward by 5 years. I did download some versions of mptool, but I don't really know what I'm doing, and I think you need a very specific Sandforce version of mptool which I don't have.

    I am quite disappointed by those incompressible data sequential write speeds which are much lower than even many older hard drives and frankly quite pathetic. I guess that's a limitation of flash NAND. I do realize that larger drives will be faster due to interleaving, but still. I guess SSDs have to live or die by random read/write speeds, which makes me interested in the question of exactly how much of my daily disk usage patterns is random, particularly when running games or other apps that I would put on an SSD.

    I think I bought too much into the whole Sandforce hype. With unrealistically compressible data the Sandforce sequential write numbers look very impressive, getting right up to the limit of the SATA interface. Using compression when your competition is not can give you a lot of at least somewhat bogus numbers. In fact, I wonder how a non-Sandforce drive would do with some kind of on-the-fly whole-partition compression software; that sort of thing is probably what gave Sandforce the idea in the first place. Nevertheless, Sandforce's on-the-fly hardware compression was a great idea, and I hope competitors like Intel and Marvell will follow, but with less fascist firmware. Not that I really blame Sandforce for this: no one is forcing the SSD vendors to leave the throttling enabled, and they deserve most of the blame here.

    I still think the Sandforce drives seem pretty good compared to the competition, but I am tempted to boycott them anyway due to the corporate fascism of life throttling. I don't like being told what I can or cannot do with a product I have paid so much money for. It really pisses me off. If you don't want to warranty the product then don't, but don't try to force me to use it in a certain way. I didn't rent the thing. I bought it. And I think the numbers we will see in this testing will definitively show how absurd the whole life throttling idea is. And at 6 MB/sec it isn't even slow enough to accomplish their purpose. At continuous usage a 25nm 40 GB drive would still reach its p/e limit in about a year. I guess they figured if they made it any slower it would be considered a complete disabling of the drive and that might have legal repercussions. That's about the only thing that seems to stop corporations nowadays from knowing when to stop in their single-minded pursuit of money at any cost: the threat of getting sued. Also, OCZ is either lying or mistaken about the secure erase reset since it clearly doesn't work.

  16. #416
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Quote Originally Posted by One_Hertz View Post
    boxing + muay thai + mma

    107TB. 45%. My reallocated sectors are still at 5.
    If you assume a 1.1 write amplification I think you have just surpassed theoretical lifetime writes for 25nm flash NAND. For 1.0 write amplification you still have 10 TB to go.

  17. #417
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by gojirasan View Post
    Assuming perfect wear leveling such that every page/block is written to once for every 40GB and a write amplification of 1.0 then 1 program/erase cycle for every page/block on the whole drive represents 40 GB. So for a 25nm flash NAND 40 GB drive with a theoretical p/e c. of 3000 then 3000 x 40 GB = 117 TB of data.
    I'd say that is a low-end estimate. The Intel SSD probably has 40GiB of flash on-board (not counting parity flash on the 320), and Intel has made some comments leading me to believe that they think their 25nm flash is specified at 5000 erase cycles, so we could estimate the specified writes at:

    40 * (1024)^3 * 5000 / 10^12 ≈ 214.7 TB

    With 1.1 WA, that comes to 195TB.

    Interestingly, One Hertz just reported 107TB and 45% wear indicator. That comes to:

    107 / (100% - 45%) = 194.55 TB
    Last edited by johnw; 06-15-2011 at 08:36 AM.
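The numbers above can be reproduced in a few lines (assuming, as the post does, 40 GiB of raw flash rated at 5,000 cycles and a 1.1 write amplification):

```python
GIB = 1024**3

# Rated NAND writes: 40 GiB of flash times 5000 P/E cycles, in decimal TB.
rated_tb = 40 * GIB * 5000 / 10**12        # ~214.7 TB of NAND writes
host_tb = rated_tb / 1.1                   # ~195.2 TB of host writes at 1.1 WA

# Extrapolating One_Hertz's SMART report: 107 TB written with 45% wear left.
extrapolated_tb = 107 / (1 - 0.45)         # ~194.5 TB

print(rated_tb, host_tb, extrapolated_tb)
```

The two estimates agree to within a terabyte, which is what makes the 5,000-cycle assumption plausible.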

  18. #418
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by johnw View Post
    I'd say that is a low-end estimate. The Intel SSD probably has 40GiB of flash on-board (not counting parity flash on the 320), and Intel has made some comments leading me to believe that they think their 25nm flash is specified at 5000 erase cycles, so we could estimate the specified writes at:

    40 * (1024)^3 * 5000 / 10^12 ≈ 214.7 TB

    With 1.1 WA, that comes to 195TB.

    Interestingly, One Hertz just reported 107TB and 45% wear indicator. That comes to:

    107 / (100% - 45%) = 194.55 TB
    Looking at our linear results, I think all the wear indicator does is count the erase cycles and compare them to the NAND spec...

    I think we should perhaps occasionally start checking the SSDs for errors. Create a 40GB file on some other array; do an MD5 of that file; write that file to our test SSDs; re-do the MD5 and see if it matches.

    At this point I don't think we would know if the SSD is corrupting things or not. All we know is that it at least somewhat works and that it accepts write commands.
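A minimal sketch of that check in Python (the function names are mine and the paths are placeholders to be pointed at the source array and the test SSD; MD5 is sufficient here since we are looking for corruption, not adversaries):

```python
import hashlib
import shutil

def md5sum(path, chunk_size=1 << 20):
    """MD5 of a file, read in 1 MiB chunks so a 40GB file fits in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source, destination):
    """Copy source to destination, then report whether the data survived intact."""
    reference = md5sum(source)
    shutil.copyfile(source, destination)
    return md5sum(destination) == reference
```

Any mismatch means a bit error was introduced somewhere on the write or the read-back.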

  19. #419
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by One_Hertz View Post
    Looking at our linear results, I think all the wear indicator does is count the erase cycles and compare them to the NAND spec...
    That's what it looks like to me, too.

    I like your idea about verifying the data written to see if any errors have been introduced in the writes (or reads).

  20. #420
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Are there any publicly available specs for the NAND flash used in these drives that might list P/E cycles? It does sound like Intel may be expecting 5,000 P/E, which seems on the high end for 25nm from what I have read. Either it is unusually good NAND, or the standard estimates I have seen of 3K/5K/10K for 25nm, 34nm, and 50nm are a bit low. That may well be, and these sorts of tests can show exactly how low they really are. It should be interesting to see how gradually the writes start failing; if wear leveling is good enough, I suppose it could all happen in a very short period of time at EOL.

  21. #421
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Potosi, Missouri
    Posts
    2,296
    Quote Originally Posted by gojirasan View Post
    Also, OCZ is either lying or mistaken about the secure erase reset since it clearly doesn't work.
    Don't confuse what Secure Erase is doing at the NAND level with what is happening at the firmware level. OCZ has never stated Secure Erase will reset DuraClass parameters.

  22. #422
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by One_Hertz View Post
    Looking at our linear results, I think all the wear indicator does is count the erase cycles and compare them to the NAND spec...

    I think we should perhaps occasionally start checking the SSDs for errors. Create a 40GB file on some other array; do an MD5 of that file; write that file to our test SSDs; re-do the MD5 and see if it matches.

    At this point I don't think we would know if the SSD is corrupting things or not. All we know is that it at least somewhat works and that it accepts write commands.
    Based on the current results, if there is no controller failure or total flash failure, I would dare say the drives will survive more than 500TB, maybe even more than 1PB. If that proves true, it would be more interesting to see whether the drive can retain data for, let's say, one month. What I would propose is to stop the endurance test at some milestone, write 1MB files (not a single complete 40GB one), do an MD5 on all of them, check their state after one month, and then continue the endurance test until the next milestone is reached. These milestones could come every 100TB after the first 500TB, for example.
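One way the milestone retention test could be scripted (a sketch under the assumptions above: 1MB files, an MD5 manifest written at the milestone and re-checked a month later; file names, counts, and the manifest format are illustrative):

```python
import hashlib
import json
import os

FILE_SIZE = 1 << 20  # 1 MiB per file, per the proposal above

def write_retention_set(directory, count, manifest_path):
    """Write `count` random 1 MiB files and record their MD5 digests."""
    manifest = {}
    for i in range(count):
        name = f"retention_{i:05d}.bin"
        data = os.urandom(FILE_SIZE)
        with open(os.path.join(directory, name), "wb") as f:
            f.write(data)
        manifest[name] = hashlib.md5(data).hexdigest()
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)

def check_retention_set(directory, manifest_path):
    """Return the names of files whose contents no longer match the manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    failed = []
    for name, digest in sorted(manifest.items()):
        with open(os.path.join(directory, name), "rb") as f:
            if hashlib.md5(f.read()).hexdigest() != digest:
                failed.append(name)
    return failed
```

An empty list from the check after a month of unpowered or idle storage would be good evidence of retention at that wear level.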

  23. #423
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm actually expecting the drives to go on for closer to 1PB. (more hoping than expecting)

    I can't see why they shouldn't; the NAND specs state minimum endurance, not maximum, so with a bit of "luck" we might get very far.

    As for error checking, sure, we can do some tests. The file system (OS) would report errors for writes, but as we don't check reads, I could fit in a test that selects some random number of files and reads them back before finally deleting them; that would give some assurance.
    I'll have a look at MD5 testing as well.
    -
    Hardware:

  24. #424
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    As for error checking, sure, we can do some tests. The file system (OS) would report errors for writes,
    The filesystem / OS cannot possibly know whether there is a bit error on write or read without computing some sort of checksum on the data written, then reading it back and verifying the checksum.

  25. #425
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The OS will report CRC errors; that doesn't mean it will catch every sort of "error", of course.
    -
    Hardware:
