
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #1051
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Quote Originally Posted by johnw View Post
    Yeah, what are you testing and how are you testing it?
    http://www.xtremesystems.org/forums/...=1#post4854810

    The short version is that I am using a kernel-level module to continuously write sectors to the SSDs and read them back to check for errors. Throwing failures means that a write/read failed (i.e., the data read does not match the data written). All sectors receive an equal number of writes. Once 90% of sectors fail, the drive is considered failed.

    Quote Originally Posted by Ao1 View Post
    Are the SF drives unthrottled?
    All drives are receiving an exactly equal distribution of writes at a constant speed of 50 MB/s.

    Quote Originally Posted by sergiu View Post
    What do you understand by "throwing failures but has yet to fail"? Could you give us some details about what is happening? Also, how did the Intel drive fail? Did it report SMART errors that would have indicated an imminent failure?
    SMART errors are not even looked at.
    A failed sector is one that cannot complete a successful write and read after 200 attempts to write to the sector.
    A failure means that the data read from the sector does not match the data written to it.
    The Intel drive was classified as failed the instant that 90% of all of the drive's sectors had failed.

    Closer analysis shows that less than 1% of the data read back from failed sectors matches what was actually written, and that errors tended to start accumulating rapidly near the end of the drive's life. It performed quite well until the first sector failure; then the drive died after only a couple more days of testing. The second drive is currently at 42% failed and is expected to die by tomorrow night.
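    For anyone who wants the bookkeeping spelled out, here is a minimal user-space sketch of the procedure described above. It is not the kernel module used in this test; the device path is a placeholder and the page cache is not bypassed, so treat it purely as an illustration of the write/verify/threshold logic.
    Code:
    # Minimal user-space sketch of the verify-on-write procedure described above.
    # NOT the kernel module used in the test; "/dev/sdX" is a placeholder.
    # Note: a rigorous test must bypass the page cache (e.g. O_DIRECT); this sketch does not.
    import os

    SECTOR = 4096            # "sector" = contiguous 4 KB block, as defined in this thread
    MAX_ATTEMPTS = 200       # write attempts before a sector counts as failed
    FAIL_THRESHOLD = 0.90    # drive counts as failed once 90% of sectors have failed

    def sector_failed(fd, offset, pattern):
        """Write a 4 KB pattern and read it back, retrying up to MAX_ATTEMPTS times."""
        for _ in range(MAX_ATTEMPTS):
            os.pwrite(fd, pattern, offset)
            os.fsync(fd)
            if os.pread(fd, SECTOR, offset) == pattern:
                return False          # write/read verified OK
        return True                   # never read back what was written

    def drive_failed(device, pattern):
        fd = os.open(device, os.O_RDWR | os.O_SYNC)
        try:
            sectors = os.lseek(fd, 0, os.SEEK_END) // SECTOR
            failed = sum(sector_failed(fd, i * SECTOR, pattern) for i in range(sectors))
            return failed / sectors >= FAIL_THRESHOLD
        finally:
            os.close(fd)

    # Example (destructive!): drive_failed("/dev/sdX", os.urandom(SECTOR))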
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was

  2. #1052
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Could you also post a screenshot with the SMART parameters for the failed drive and for the drive that is expected to fail in the next few hours? By sector, I guess you are referring to a 512-byte LBA, right? Also, have you tried letting the failed drive sit idle and then trying again? In a document posted somewhere near the beginning of the thread, I saw a wear model that predicted much higher endurance if some idle time is taken between consecutive writes.

  3. #1053
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Quote Originally Posted by sergiu View Post
    Could you also post a screenshot with the SMART parameters for the failed drive and for the drive that is expected to fail in the next few hours? By sector, I guess you are referring to a 512-byte LBA, right? Also, have you tried letting the failed drive sit idle and then trying again? In a document posted somewhere near the beginning of the thread, I saw a wear model that predicted much higher endurance if some idle time is taken between consecutive writes.
    The experiment is being done on a Linux box that does not have Xorg installed.
    I classify a sector as a contiguous 4 KB block of flash.
    I will retest the drive in a few moments to check whether the 90% sector failure still holds.
    However, the statement "much higher endurance if some idle time is taken between consecutive writes" just means endurance is better if you don't write to the drive much (duh, and your car will not run out of gas as quickly if you don't drive it much).
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was

  4. #1054
    mgoldshteyn
    Guest
    Intel 510 Series (Elm Crest) SSDSC2MH120A2K5 - One drive failed completely July 24 @ 9:14am and the second drive is throwing failures but has yet to fail.
    @nn_step, at what write size did your first Intel drive fail (the one in the quote above)?

  5. #1055
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    @nn_step... your data is very vague; a lack of data would be a better way to classify it.
    Can you give us some specifics? Amount of data written, time elapsed, etc.?
    It's hardly helping us come to some sort of understanding when all you say is: there was a failure.
    Where, when, how, under what conditions? After what duration?
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  6. #1056
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by johnw View Post
    230.010 TiB, 631 hours, sa177: 1/1/18925
    237.913 TiB, 652 hours, sa177: 1/1/19564

    Average speed reported by Anvil's app has been steady at about 112MB/s.

    The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.

  7. #1057
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by nn_step View Post
    The experiment is being done on a Linux box that does not have Xorg installed.
    So post the text from smartctl instead of a screenshot.

    What program are you using to do the test on the SSDs? If it is not open source, what has been done to debug and validate the program?

  8. #1058
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by nn_step View Post
    The experiment is being done on a Linux box that does not have Xorg installed.
    I classify a sector as a contiguous 4 KB block of flash.
    I will retest the drive in a few moments to check whether the 90% sector failure still holds.
    However, the statement "much higher endurance if some idle time is taken between consecutive writes" just means endurance is better if you don't write to the drive much (duh, and your car will not run out of gas as quickly if you don't drive it much).
    Assuming you have been writing data continuously since 24th May 2011 at 50 MiB/s to the Intel 510, that translates to 61*86400*50 MiB = ~251.3 TiB. If WA is around 1.1, like on the other models in the endurance test, then this means around 2150 cycles.
    Also, according to the endurance model posted by Ao1: http://www.xtremesystems.org/forums/...=1#post4861258 , if the theory about recovery time proves right, then we should see many more cycles (at 50 MiB/s it would take around 2000-2500 seconds between consecutive writes to each page).
    Could there be other factors that are breaking the Intel drives so early (like a faulty power supply or SATA issues)? It's hard for me to believe that both drives are failing so fast and so close to each other.
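    As a quick sanity check, the arithmetic above can be written out as follows. The 128 GiB of raw NAND behind the 120 GB drive is my assumption, not a figure from the thread, so the cycle count is only a ballpark.
    Code:
    # Rough endurance arithmetic for the Intel 510 figures discussed above.
    DAYS = 61             # 24 May 2011 to 24 July 2011, writing continuously
    RATE_MIB_S = 50       # constant host write rate quoted in the thread
    WA = 1.1              # write amplification assumed from the other drives

    MIB, GIB, TIB = 1024 ** 2, 1024 ** 3, 1024 ** 4
    RAW_NAND = 128 * GIB  # assumption: 128 GiB of raw NAND in a 120 GB drive

    host_bytes = DAYS * 86400 * RATE_MIB_S * MIB
    nand_bytes = host_bytes * WA

    print(f"host writes: {host_bytes / TIB:.1f} TiB")      # ~251.3 TiB
    print(f"P/E cycles : {nand_bytes / RAW_NAND:.0f}")     # ~2200, same ballpark as the ~2150 above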
    Last edited by sergiu; 07-25-2011 at 12:53 PM.

  9. #1059
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by nn_step View Post
    All drives are receiving an exactly equal distribution of writes at a constant speed of 50 MB/s.

    So presumably you are either using compressible data or your SF drives are not throttled...

  10. #1060
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Will have a Vertex 2 60GB with LTT removed entering the testing within a week or two. It's a V2 with 32nm Hynix NAND, though.

    C300 update and updated charts later today

  11. #1061
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    WOW, nice to see that this thread really is starting to pick up some speed, guys.

    Thanks for all the hard work, everyone!

  12. #1062
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    241TiB. 31 reallocated sectors. MD5 OK. I THINK I found the hidden drive wear variable using Anvil's app (his special build for me). It is 120 right now and it is going up linearly.

  13. #1063
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    C300 Update, charts next post

    103.64TiB, 1750 raw wear, 65 MWI, reallocated still at 1 event / 2048 sectors, speed back up to 61.75MiB/sec, MD5 OK.

  14. #1064
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Updated charts

    Host Writes So Far

    [Image: Jul25BarHost.png]

    [Image: Jul25BarNorm.png]
    (bars with a border = testing stopped/completed)


    Raw data graphs

    Writes vs. Wear:
    [Image: Jul25Host.png]

    MWI Exhaustion:
    [Image: Jul25MWIE.png]

    Writes vs. NAND Cycles:
    [Image: Jul25NAND.png]

    [Image: Jul25NANDlog.png]


    Normalized data graphs
    The SSDs are not all the same size; these charts normalize for available NAND capacity.

    Writes vs. Wear:
    [Image: Jul25NormHost.png]

    MWI Exhaustion:
    [Image: Jul25NormMWIE.png]


    Write-days data graphs
    Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.

    Writes vs. Wear:
    [Image: Jul25WDHost.png]

    MWI Exhaustion:
    [Image: Jul25WDMWIE.png]


    Approximate Write Amplification
    Based on reported or calculated NAND cycles from wear SMART values divided by total writes.

    [Image: Jul25WriteAmp.png]

  15. #1065
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by One_Hertz View Post
    I THINK I found the hidden drive wear variable using Anvil's app (his special build for me). It is 120 right now and it is going up linearly.
    If the current value is 120 and it's the same thing as 100 - MWI (and MWI has gone negative), then your WA has dipped below its normal ~1.015x... and has even dipped well below 1.00x.

    Reported NAND Cycles / Calculated NAND Cycles via manual writing:
    (120 / 100 * 5000) / (241 / 40 * 1024) = 0.9725x WA



    Reallocated sectors seem to have been moving linearly for the 320 recently, maybe it's related to that? Or maybe it is wear, but not comparable to MWI?
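    Spelling that estimate out in full (assuming the hidden attribute really is 100 - MWI, a 5000-cycle rating behind MWI, and 40 GiB of NAND in One_Hertz's 320; those are the figures used above, not confirmed specs):
    Code:
    # Back-of-the-envelope WA estimate from the figures quoted above.
    hidden_attr = 120        # current value of the hidden wear variable
    rated_cycles = 5000      # assumed P/E rating behind MWI
    host_writes_tib = 241    # host writes so far
    nand_gib = 40            # assumed usable NAND in the 40 GB Intel 320

    reported_cycles = hidden_attr / 100 * rated_cycles       # 6000
    calculated_cycles = host_writes_tib * 1024 / nand_gib    # ~6170
    print(f"approx. WA = {reported_cycles / calculated_cycles:.4f}")   # ~0.9725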

  16. #1066
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Quote Originally Posted by mgoldshteyn View Post
    @nn_step, at what write size did your first Intel drive fail (the one in the quote above)?
    What exactly do you mean by write size?

    Quote Originally Posted by Computurd View Post
    @nn_step... your data is very vague; a lack of data would be a better way to classify it.
    Can you give us some specifics? Amount of data written, time elapsed, etc.?
    It's hardly helping us come to some sort of understanding when all you say is: there was a failure.
    Where, when, how, under what conditions? After what duration?
    The amount of data written is easy to calculate given the posted times and the fixed data rate.

    My first post regarding this test lists the exact starting time of the test.

    Where: a 70 degree F basement (mine)
    When: see above
    How: a kernel module that I wrote
    Conditions: all drives are written the exact same data at the exact same time; the data is random with high entropy.
    Duration: see the posted times above

    Quote Originally Posted by johnw View Post
    So post the text from smartctl instead of a screenshot.

    What program are you using to do the test on the SSDs? If it is not open source, what has been done to debug and validate the program?
    One that I wrote myself; its only assumptions are the RAID cards being used and the timing chip that I am using.

    Quote Originally Posted by sergiu View Post
    Assuming you have been writing data continuously since 24th May 2011 at 50 MiB/s to the Intel 510, that translates to 61*86400*50 MiB = ~251.3 TiB. If WA is around 1.1, like on the other models in the endurance test, then this means around 2150 cycles.
    Also, according to the endurance model posted by Ao1: http://www.xtremesystems.org/forums/...=1#post4861258 , if the theory about recovery time proves right, then we should see many more cycles (at 50 MiB/s it would take around 2000-2500 seconds between consecutive writes to each page).
    Could there be other factors that are breaking the Intel drives so early (like a faulty power supply or SATA issues)? It's hard for me to believe that both drives are failing so fast and so close to each other.
    Yes, there certainly are other factors that could cause earlier failure:
    1) The Intel drives are closer to the ventilation than the other drives.
    2) The Intel drives received 3% less sunlight than the other drives.
    3) The Intel drives are connected to the leftmost power connector of the power supplies.
    4) The failed Intel drives have sequential serial numbers and could have been part of a bad batch.

    But I am continuing to check for additional reasons for the failures.

    Quote Originally Posted by Ao1 View Post
    So presumably you are either using compressible data or your SF drives are not throttled...
    The data is random with high entropy, and the RAID cards have no problem sustaining the write/read rates.



    After 12 hours of off time, the Intel drive still has in excess of 90% of its sectors failed.
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was

  17. #1067
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by nn_step View Post
    After 12 hours of off time, the Intel drive still has in excess of 90% of its sectors failed.
    Or your "kernel module" has bugs. Since you avoided my question about how you debugged and qualified it, I assume you have not done so. How about posting the source code so others can look at it and test it for bugs?

    Why won't you post the SMART attributes as read by smartctl?

    How have you managed to write continuously to a Vertex 2 at 50MB/s without encountering warranty throttle?

    And why exactly did you write a "kernel module" to do such a basic task as writing data to SSDs? Certainly a user-space C program would be more than sufficient.
    Last edited by johnw; 07-25-2011 at 07:43 PM.

  18. #1068
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Quote Originally Posted by johnw View Post
    Or your "kernel module" has bugs, since you avoided my question about how you debugged and qualified it, I assume you have not done so. How about posting the source code so others can look at it and test it for bugs?

    Why won't you post the SMART attributes as read by smartctl?

    How have you managed to write continuously to a Vertex 2 at 50MB/s without encountering warranty throttle?

    And why exactly did you write a "kernel module" to do such a basic task as writing data to SSDs? Certainly a user-space C program would be more than sufficient.
    The drive's firmware is causing errors when attempting the fundamental commands required for such work.
    What exactly is this "warranty throttle" you speak of?
    I made it a "kernel module" because I wanted to be exact in terms of timing and bandwidth. Also, I find Linux's user-space file-system interface akin to trying to type with boxing gloves on for this sort of testing.


    And the source code for the curious.
    Code:
    define test as lambda (list *device drives, int_64s write_speed)
    {
       block data, test
       number index := 1
       loop
       {
          data := read("testfile.txt", 4096, loop)
          index := index + 1
          
          map( *device x in drives)
          {
             write(x, 4096, index%x.blockcount(), data)
          }
    
          map( *device x in drives)
          {
             read(x, 4096, index%x.blockcount()) =: test
             if ( test != data)
             {
                throw( "block failure: " +  index%x.blockcount().tostring() + "/newline drive: " + x.name().tostring()) 
              }
           }
        }
    }

    Please feel free to point out any errors that exist in the program. (Yes, it is written in the Rook programming language; please don't complain about how different it is from C or Python.)
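    On the timing/bandwidth point, here is a minimal user-space sketch of one way to pace writes to a fixed average rate. It is not nn_step's module; the 50 MB/s target and 4 KB block size are just the figures quoted earlier in the thread, and "/dev/sdX" is a placeholder.
    Code:
    # Pace 4 KB writes so the long-run average rate stays at a fixed target.
    # Purely illustrative; not the kernel module used in the test.
    import os
    import time

    BLOCK = 4096
    TARGET_BPS = 50 * 1000 * 1000      # 50 MB/s

    def paced_writes(fd, data, count):
        blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK   # usable 4 KB blocks on the device
        start = time.monotonic()
        for i in range(count):
            os.pwrite(fd, data, (i % blocks) * BLOCK)
            # Sleep until the wall-clock deadline for (i + 1) blocks has passed.
            delay = start + (i + 1) * BLOCK / TARGET_BPS - time.monotonic()
            if delay > 0:
                time.sleep(delay)

    # Example (destructive!):
    # paced_writes(os.open("/dev/sdX", os.O_RDWR | os.O_SYNC), os.urandom(BLOCK), 10**6)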
    Last edited by nn_step; 07-25-2011 at 08:23 PM. Reason: added code
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was

  19. #1069
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Where is the rest of the source code?

    Why won't you post the SMART attributes?

    Is your whole "test" just a hoax?

  20. #1070
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    Will have a Vertex 2 60GB with LTT removed entering the testing within a week or two. It's a V2 with 32nm Hynix NAND, though.
    Great news

    How will it be tested? 50% compressible for static data and incompressible data for the test? (Seems fair.)
    Last edited by Ao1; 07-26-2011 at 12:38 AM.

  21. #1071
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    187.41TB Host writes
    Reallocated sectors : 6

    MD5 OK.

  22. #1072
    Registered User
    Join Date
    Jun 2011
    Posts
    87
    Quote Originally Posted by Vapor View Post
    If the current value is 120 and it's the same thing as 100 - MWI (and MWI has gone negative), then your WA has dipped below its normal ~1.015x... and has even dipped well below 1.00x.

    Reported NAND Cycles / Calculated NAND Cycles via manual writing:
    (120 / 100 * 5000) / (241 / 40 * 1024) = 0.9725x WA



    Reallocated sectors seem to have been moving linearly for the 320 recently, maybe it's related to that? Or maybe it is wear, but not comparable to MWI?
    I'm not sure, but I don't think it is MWI, according to what I found and posted in my original snip from the Intel pdf.
    One_Hertz's statement that it is now at 120, after starting at 0, is too fast an increase for the MWI parameters to be used in reverse.
    What it is based on is not explained, though.

    Percentage Used Endurance Indicator - percentage used of what? MWI?

    [Image: Post MWI-1 Logs.PNG]
    Last edited by Hopalong X; 07-26-2011 at 04:07 AM.

  23. #1073
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Quote Originally Posted by johnw View Post
    Is your whole "test" just a hoax?
    LOL that is what I believe as well. It seems like a marketing gimmick too.

  24. #1074
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    @Anvil...How's the app looking for release?
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GB, Win 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  25. #1075
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Quote Originally Posted by johnw View Post
    Where is the rest of the source code?

    Why won't you post the SMART attributes?

    Is your whole "test" just a hoax?
    There is no rest; learn to understand the power of good programming languages.

    Quote Originally Posted by bulanula View Post
    LOL that is what I believe as well. It seems like a marketing gimmick too.
    No, it is not a hoax, but feel free to completely ignore me.

    As for marketing, I didn't realize that the university I am working for was doing marketing in SSDs.


    This is nothing more than me testing, in a slightly more scientific fashion, to confirm or refute the anecdotal evidence that SSDs have shorter life spans than hard drives in a write-heavy environment.
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was
