
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #3076
    Xtreme Member
    Join Date
    Jul 2010
    Location
    california
    Posts
    150
I have a question: it looks like ASU's Endurance testing drastically increases the MFT size of NTFS, which is probably why WinHex is extremely slow to traverse a partition for a snapshot. How do I shrink the MFT if I no longer plan to run Endurance testing?
    Last edited by minpayne; 12-26-2011 at 05:44 PM.

  2. #3077
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Paragon looks to do the trick. (the free version should work)

If you ran the test without static files on a large-capacity drive then one can expect the MFT to grow beyond its normal size; for normal usage/size it should not be an issue.

    Link
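
    For reference, it's easy to check how large the MFT has actually grown before (and after) shrinking it. A minimal sketch, assuming a Windows box with Python; fsutil is built in but needs an elevated prompt, and the drive letter is a placeholder:

[CODE]
# Print the MFT-related fields reported by NTFS for a volume.
# "Mft Valid Data Length" is the space the MFT currently occupies.
import subprocess

# "D:" is a placeholder -- use the letter of the test partition.
info = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", "D:"],
    capture_output=True, text=True, check=True,
).stdout

for line in info.splitlines():
    if "Mft" in line:
        print(line.strip())
[/CODE]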

  3. #3078
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    588.58TB Host writes
    Reallocated sectors : 05 16 Increased by 1
    Available Reserved Space : E8 99
    POH 5326
    MD5 OK

    33.14MiB/s on avg (~69 hours)

    --

    Corsair Force 3 120GB

    01 92/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 46 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 599788 (Raw writes) ->586TiB
    F1 798326 (Host writes) ->780TiB

    MD5 OK

    106.05MiB/s on avg (~69 hours)

    power on hours : 2308
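
    For anyone following along: the SandForce raw write counters above are reported in GiB, so the TiB figures are simply raw/1024, and the E9/F1 ratio gives the effective write amplification (below 1.0 here thanks to compression). A quick sketch of the arithmetic, using the values from this update:

[CODE]
# SandForce SMART write counters are raw GiB; convert to TiB and
# estimate write amplification from the E9/F1 ratio.
nand_writes_gib = 599788   # E9 raw value (NAND writes)
host_writes_gib = 798326   # F1 raw value (host writes)

print(f"NAND writes: {nand_writes_gib / 1024:.0f} TiB")   # ~586 TiB
print(f"Host writes: {host_writes_gib / 1024:.0f} TiB")   # ~780 TiB
print(f"WA: {nand_writes_gib / host_writes_gib:.2f}")     # ~0.75
[/CODE]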

  4. #3079
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I’ve just been trying to figure out the Samsung 830 data.

    I’ve assumed:

    • 5K P/E cycles.
    • Host writes = F1*512/1,073,741,824 = GiB (see the sketch after this list).
    • B1 RAW = P/E cycle count
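
    To make the host writes formula concrete, a minimal sketch (the raw value below is made up purely for illustration):

[CODE]
# Samsung 830: attribute F1 counts 512-byte sectors written by the host.
F1_RAW = 183_500_000_000   # hypothetical raw sector count, for illustration

host_writes_gib = F1_RAW * 512 / 1_073_741_824   # 1 GiB = 2^30 bytes
print(f"Host writes: {host_writes_gib:,.0f} GiB")   # ~87,500 GiB here
[/CODE]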

    It seems strange that the Host Writes per P/E cycle ratio came out consistently implausible (~630GiB per cycle) until the theoretical P/E cycle count had expired. Once the theoretical count had expired the WA came out at an average of 3.93.

    It would be interesting to see how these results compare with another 64GB 830; however, it seems it is possible to calculate host writes and approximate the MWI/NAND writes & WA using the basic SMART data that is provided. WA is quite savage, but then again so are the sustained write speeds (at least until you get to MWI = 0).

    Did anything come of the 470 autopsy? Is the 320 still going?

    Last edited by Ao1; 12-27-2011 at 01:26 PM.

  5. #3080
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The 830 has a serious defect in the sense that P/E cycle counts do not increase until a reboot (essentially, though they do increase slightly). To get the 'true' P/E count I'd have to reboot, which I will do now. I'm not there physically, so if anything were to go wrong, it would be several days before I could be there to fix it.

    6672 is what it's at now, but a reboot did not do anything. It needs a power cycle, which I cannot perform until I get back.
    Last edited by Christopher; 12-27-2011 at 02:25 PM.

  6. #3081
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Christmas update:
    Kingston V+100
    315.6061 TiB
    1638 hours
    Avg speed 25.38 MiB/s
    AD still 1.
    168= 1 (SATA PHY Error Count)
    P/E?
    MD5 OK.
    Reallocated sectors : 00


    Intel X25-M G1 80GB
    178.5906 TiB
    19877 hours
    Reallocated sectors : 00
    MWI=152 to 151
    MD5 =OK
    46.71 MiB/s on avg


    m4
    135.3233 TiB
    494 hours
    Avg speed 78.39 MiB/s.
    AD gone from 24 to 21.
    P/E 2380.
    MD5 OK.
    Reallocated sectors : 00


    The power cut off again from 03:00 to 14:00 today but everything is back up now. The only strange thing is that the X25-M and the m4 have dropped their write speed by a couple of MiB/s.
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  7. #3082
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    320 is still going...

    7425 reallocated sectors (up by 1), 684.5TB.

  8. #3083
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Christopher View Post
    The 830 has a serious defect in the sense that P/E cycle counts do not increase until a reboot (essentially, though they do increase slightly). To get the 'true' P/E count I'd have to reboot, which I will do now. I'm not there physically, so if anything were to go wrong, it would be several days before I could be there to fix it.

    6672 is what it's at now, but a reboot did not do anything. It needs a power cycle, which I cannot perform until I get back.
    Did you have to power cycle every day to get the revised B1 values that you have reported? What I was trying to point out is that the host writes per P/E count came out consistently at ~630GiB at every reading, until the P/E count hit 5,000. From that point onwards it came out consistently at ~16GiB.
    The approx. theoretical write capacity = 64 x 5,000 = 320,000 GiB, which was exceeded by the time host writes had reached 87,476 GiB, giving a WA factor of 3.98 (avg WA after 5,000 P/E = 3.93).

  9. #3084
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    590.94TB Host writes
    Reallocated sectors : 05 17 Increased by 1 again, up 3 during the last week.
    Available Reserved Space : E8 99
    POH 5347
    MD5 OK

    33.03MiB/s on avg (~90 hours)

    --

    Corsair Force 3 120GB

    01 94/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 44 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 605659 (Raw writes) ->591TiB
    F1 806136 (Host writes) ->787TiB

    MD5 OK

    105.98MiB/s on avg (~90 hours)

    power on hours : 2329

    --

    @Ao1

    Any particular reason why you are using 5K P/E on the Samsung 830? The 470 series points to 1K or 3K, hard to tell though.
    It's an interesting drive; the 830s I've got are all for OS X, so I won't be doing any testing on the drives I currently have. (I think)

  10. #3085
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838


    Chart(s) are updated/placed in post #1.
    I'm looking into making some of the other charts that Vapor made.

  11. #3086
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post

    @Ao1

    Any particular reason why you are using 5K P/E on the Samsung 830? The 470 series points to 1K or 3K, hard to tell though.
    It's an interesting drive; the 830s I've got are all for OS X, so I won't be doing any testing on the drives I currently have. (I think)
    Hmm… it seems it uses 32nm NAND with 3K P/E cycles.

    “The 470 Series uses Samsung K8HDGD8U5M flash chips manufactured by Samsung on a 32-nano fabrication process. Moving to finer process technologies allows chip makers to squeeze more NAND dies out of a single wafer, but it also reduces the lifespan of those chips. Flash fabbed on a 50-nano process typically carries a write-erase endurance of 10,000 cycles, while 34-nano parts are generally rated for 5,000 cycles. According to Samsung, the 470 Series' flash chips have a write-erase endurance of only 3,000 cycles, which is on par with the longevity of 25-nano flash chips currently rolling off the line at Micron”.

    http://techreport.com/articles.x/20087

    If the WA for the endurance app workload is 3.93 the drive could “only” write ~50,000GiB of host writes before the MWI expired (see the sketch below). If the B1 value only changes after a power cycle it would be worth re-testing with frequent power cycles to more closely monitor changes to B1.
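
    The arithmetic behind that ~50,000GiB figure, assuming the nominal 64GiB capacity and the 3K rating quoted above:

[CODE]
# Expected host writes before MWI expiry = capacity * P/E cycles / WA
capacity_gib = 64      # nominal capacity of the 64GB 830
pe_cycles    = 3000    # 3K rated cycles for Samsung's 32nm NAND
wa           = 3.93    # average WA seen with the endurance app workload

host_writes_gib = capacity_gib * pe_cycles / wa    # 192,000 GiB of NAND writes / WA
print(f"~{host_writes_gib:,.0f} GiB host writes")  # ~48,855 GiB, i.e. ~50,000
[/CODE]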

  12. #3087
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post

    Chart(s) are updated/placed in post#1
    I'm looking into making some of the other charts that Vapor made.
    Nice work on the first post summary. If anyone has been logging attributes using SMARTLOG I would be happy to make a graph that plots all of the SMART attributes together, which might reveal some interesting correlations.

  13. #3088
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @Ao1
    I'll consider running the Endurance test on the 128GB Samsung 830 for a few hours if that could help shed some light on the Samsung 830 series.
    It should be the same NAND process but there might be unknown differences in e.g. page size.

    I'll send you a copy of the logged attributes on my drives.
    I have considered enabling SMART logging in ASU as well, but it won't work on system drives. (for some strange reason)
    The work is done, it just needs to be stored somewhere; it could be a supplement to the logging done by SMARTLOG.

  14. #3089
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a graph using SMARTLOG data collected by Anvil. It's not showing anything of interest yet as it only covers a couple of days of data; however, over time it should hopefully be possible to see correlated events, such as reallocation events and error counts.

    (BTW, I shifted the decimal place by 6 for On-the-fly ECC & Soft ECC values to keep a reasonable scale. Writes are on the second axis for the same reason).
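
    If anyone wants to reproduce this kind of chart from their own SMARTLOG output, here is a minimal sketch; the CSV filename and column names are assumptions, so adjust them to match the actual log:

[CODE]
# Plot logged SMART attributes together, with host writes on a second axis.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("smartlog.csv", parse_dates=["timestamp"])  # hypothetical log

fig, ax1 = plt.subplots()
# Scale the huge ECC counters down by 10^6 so everything shares one axis,
# mirroring the decimal shift described above.
ax1.plot(df["timestamp"], df["otf_ecc"] / 1e6, label="On-the-fly ECC / 1e6")
ax1.plot(df["timestamp"], df["soft_ecc"] / 1e6, label="Soft ECC / 1e6")
ax1.plot(df["timestamp"], df["realloc"], label="Reallocated sectors")
ax1.set_ylabel("Attribute value")
ax1.legend(loc="upper left")

ax2 = ax1.twinx()  # writes dwarf the other attributes: second axis
ax2.plot(df["timestamp"], df["host_writes_gib"], color="gray")
ax2.set_ylabel("Host writes (GiB)")

plt.show()
[/CODE]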


  15. #3090
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Today's update:
    Kingston V+100
    317.5828 TiB
    1682 hours
    Avg speed 25.37 MiB/s
    AD still 1.
    168= 1 (SATA PHY Error Count)
    P/E?
    MD5 OK.
    Reallocated sectors : 00


    Intel X25-M G1 80GB
    182.2931 TiB
    19900 hours
    Reallocated sectors : 00
    MWI=151 to 150
    MD5 =OK
    49.70 MiB/s on avg


    m4
    141.6388 TiB
    517 hours
    Avg speed 80.42 MiB/s.
    AD gone from 21 to 18.
    P/E 2489.
    MD5 OK.
    Reallocated sectors : 00

  16. #3091
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    This one is still in the works, as I need to verify the exact GiB values for some of the drives.


  17. #3092
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    I’ve just been trying to figure out the Samsung 830 data.

    I’ve assumed:

    • 5K P/E cycles.
    • Host writes = F1*512/1,073,741,824 = GiB
    • B1 RAW = P/E cycle count

    It seems strange that the Host Writes per P/E cycle ratio came out consistently implausible (~630GiB per cycle) until the theoretical P/E cycle count had expired. Once the theoretical count had expired the WA came out at an average of 3.93.

    It would be interesting to see how these results compare with another 64GB 830; however, it seems it is possible to calculate host writes and approximate the MWI/NAND writes & WA using the basic SMART data that is provided. WA is quite savage, but then again so are the sustained write speeds (at least until you get to MWI = 0).

    Did anything come of the 470 autopsy? Is the 320 still going?
    Could it be that WA is actually counting what was written in excess of 1? I.e. 0.10 meaning a real WA of 1.1?

  18. #3093
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Okay, I've been neglecting my SSD testing duties, at least in the reporting aspect -- I promise to rectify the situation shortly. I've been collecting log information and taking screenshots, but I've been travelling non-stop for a while now and it's getting old.

    Secondly, I have a new 64GB drive to test. I've been using it in my laptop to shake it down, and I have high hopes for it. Could this be the 64GB MLC drive that hits 1PB? I don't even know why I care, but I do.

    Lastly, I was going to test the Imation S-Class SLC too, but after waiting weeks for the drive to show up in the mail, I ended up getting the MLC-based M-Class instead (I was sent the wrong drive), which is Samsung NAND + a JMicron controller, no NCQ or TRIM. Very disappointed -- an older SLC drive would be interesting to test, as the high WA would help get the job done sooner. God only knows how long a good SLC drive would take to go tits-up under these conditions, but a Phison-controlled drive would have massive WA.

  19. #3094
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    593.59TB Host writes
    Reallocated sectors : 05 17
    Available Reserved Space : E8 99
    POH 5372
    MD5 OK

    32.53MiB/s on avg (~115 hours)

    --

    Corsair Force 3 120GB

    01 92/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 43 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 612226 (Raw writes) ->598TiB
    F1 814872 (Host writes) ->796TiB

    MD5 OK

    104.49MiB/s on avg (~115 hours)

    power on hours : 2353

    --

    I'll be rebooting and doing some cleaning shortly. (SSD Toolbox on the X25-V to regain some speed)

  20. #3095
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    595.71TB Host writes
    Reallocated sectors : 05 18
    Available Reserved Space : E8 99
    POH 5390
    MD5 OK

    35.02MiB/s on avg (~17 hours)

    --

    Corsair Force 3 120GB

    01 94/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 45 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 617211 (Raw writes) ->603TiB
    F1 821510 (Host writes) ->802TiB

    MD5 OK

    107.06MiB/s on avg (~17 hours)

    power on hours : 2372

    --

    The Force 3 is idling for a few hours ahead of a data retention test. (which will last through Sunday)

  21. #3096
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I'm jumping back in with an 830 (in a week or so)… but with a couple of twists, depending on whether I can establish exactly when the theoretical P/E cycle count expires. Twist one: the workload will be 4K random full span. Twist two: once the theoretical P/E count has expired I will fill the drive up with data and checksum it. I'm then going to unplug the drive and leave it, checking the data integrity every 3 months.
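
    A minimal sketch of the integrity check I have in mind: hash every file once when the drive is filled, then re-verify against the stored manifest at each 3-month checkpoint (the paths are placeholders):

[CODE]
# Build an MD5 manifest of every file on the test drive, or verify against it.
import hashlib, json, pathlib

DRIVE = pathlib.Path("E:/")               # placeholder: root of the 830 under test
MANIFEST = pathlib.Path("manifest.json")  # stored off the drive under test

def md5(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

if not MANIFEST.exists():
    # First run: record a checksum for every file.
    MANIFEST.write_text(json.dumps(
        {str(p): md5(p) for p in DRIVE.rglob("*") if p.is_file()}))
else:
    # Quarterly checkpoints: flag any file whose data has decayed.
    for name, digest in json.loads(MANIFEST.read_text()).items():
        if md5(name) != digest:
            print(f"CHECKSUM MISMATCH: {name}")
[/CODE]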

    As a quick recap of the JEDEC spec:

    The SSD manufacturer shall establish an endurance rating for an SSD that represents the maximum number of terabytes that may be written by a host to the SSD, while:

    1) The SSD maintains its capacity
    2) The SSD maintains the required UBER for its application class
    3) The SSD meets the required functional failure requirement (FFR) for its application class
    4) The SSD retains data with power off for the required time for its application class

    The functional failure requirement for retention of data in a powered-off condition is specified as 1 year for Client applications and 3 months for Enterprise (subject to temperature boundaries).

    I'm not sure of the basis for the different requirements of 3 months for enterprise and 12 months for client applications. Presumably the variation factors in the difference between how rapidly the SSD is written to, as opposed to how much is written.

  22. #3097
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll be entering two more drives as well, and both will have twists.

    One drive with an extra spare area, and one SF-based drive with a different compression ratio.

    Just for the record, I changed to 46% on the Intel some time ago; it shouldn't matter for endurance, as 0-fill is just as valid as any other value, but it means that all drives are performing the same test.
    I suggest that all new drives are set to 46%, so that all drives are using the same compression ratio/data pattern.

    I'll decide on how to perform retention testing on the new drives, but I won't stop when the MWI is exhausted (as it would take a year to verify that it sticks to JEDEC); I'll probably write n times NAND exhaustion so that the retention test (idle) period can be set to e.g. 1 month or 14 days.
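
    Roughly, the idea is to scale the required idle period down by the write multiple. A sketch of the numbers, assuming the decay really were linear in the multiple, which is by no means certain:

[CODE]
# Naive linear scaling of the JEDEC client retention requirement.
# CAUTION: assumes retention degrades linearly with writes past MWI
# exhaustion, which is unverified.
rated_retention_days = 365          # JEDEC client class: 1 year powered off
for target_idle_days in (30, 14):   # the idle periods suggested above
    multiple = rated_retention_days / target_idle_days
    print(f"{target_idle_days}-day idle test -> ~{multiple:.0f}x NAND exhaustion")
[/CODE]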

  23. #3098
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    .
    I’m not sure of the basis that determined the different requirements between 3 months for enterprise and 12 months for client applications. Presumably the variation is factoring the difference between how rapidly the SSD is written to as opposed to how much is written.
    I have the impression it is not so much about how fast the SSD is written to as simply how much. I think a significant portion of the extra erase cycles that are specified for e-MLC come from them only needing to have 3 months data retention. In other words, if the consumer MLC only had to have 3 months data retention, I think they could specify significantly higher erase cycles than they do.

    I'm not saying that there is no difference between standard MLC and e-MLC. But I think that if both were specified for 3 months data retention, then MLC might have 10,000 or 15,000 erase cycles, as compared to 30,000 for e-MLC.

    By the way, the test you propose sounds interesting. Too bad we have to wait a year to find out if it passes the JEDEC specification! Anvil has an interesting idea to speed things up by writing some multiple of what is required to exhaust the MWI. However, I seriously doubt the decay in data retention time is linear with the multiple of MWI written (I suspect it accelerates a lot at some point). Maybe it is roughly linear at first, but I would not go past a multiple of 4 if I were doing it. Then it would still take 3 months to test the data retention.

    Why did you decide on a Samsung 830? If the problems Christopher had are indeed a firmware bug, you may have trouble telling when you have exhausted the MWI on the Samsung.
    Last edited by johnw; 12-30-2011 at 01:17 PM.

  24. #3099
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm going for drives with rich and proven SMART attributes; a pity that the Samsungs are "hiding" some of the more interesting ones.
    I'll most likely go for 4-6x the spec. (both new drives are 25nm)

    Testing for data retention failures raises a few interesting questions. If one powers on the SSD every 3 months I'd expect it to interfere somewhat with the deterioration of the data and possibly help stabilize it. The data won't be reprogrammed unless one writes (of course), but it will most likely make a difference.
    If this weren't the case then the drive could just be left powered on.
    What about rotating data? I expect none of the consumer drives (probably none at all) do "auto rotation" except while doing GC. (meaning that it would have to be done manually)

    If I were to keep an offline copy of some data I'd most likely refresh the copy every 3-6 months; less frequent updates would make most data out of date.

  25. #3100
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I'd also guess that P/E exhaustion and data retention are not linear. The 830 is attractive for a few reasons. Selfishly, it would be an interesting SSD to play with for a few hours before testing, but more importantly it will only take a couple of days to exhaust the theoretical P/E count. The downside of course is that I might not know when that happens, but I can take that risk.

    Potentially waiting a year plus is a pain, but then again the 320 and X25-V have been running for what, nearly 7 months? Who knows, they might still be running before I'm done.

    Data retention is the big unknown for me. AFAIK data retention is established with accelerated testing using high temperatures. I will just leave the 830 in my PC case so it will always remain at normal operating temperatures.
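
    For reference, accelerated retention testing is usually modelled with an Arrhenius acceleration factor. A sketch of the idea; the activation energy is an assumed typical value, not anything Samsung publishes:

[CODE]
# Arrhenius acceleration factor for high-temperature retention bakes:
# AF = exp( (Ea/k) * (1/T_use - 1/T_stress) ), temperatures in kelvin.
import math

K_BOLTZMANN = 8.617e-5   # eV/K
EA = 1.1                 # eV; assumed typical activation energy for retention

def accel_factor(t_use_c, t_stress_c):
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((EA / K_BOLTZMANN) * (1 / t_use - 1 / t_stress))

# e.g. a bake at 85C versus sitting at 40C inside a PC case:
print(f"1 week at 85C ~ {accel_factor(40, 85):.0f} weeks at 40C")
[/CODE]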

