
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #1301
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    198
I would not expect to see any increase in writes during a secure erase (SE). I think a cycle for a NAND cell is an erase followed by a program operation, so during an SE you only get half of the cycle.

@johnw: could you run the endurance test for 20-30TB with TRIM disabled? I am curious to see what the WA would be when the drive has no clue about what has been erased.

Also, about general write performance: Ao1 mentioned earlier that programming a page normally takes 900 μs. 8 dies * 4 KiB * (1 s / 0.9 ms) ≈ 34.7 MiB/s, which is much lower than what a normal SSD can do (see the sketch below). Does anybody know how many pages can be programmed in parallel on one die?
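As a sanity check on that arithmetic, here is the same back-of-the-envelope number in a few lines of Python (a sketch, assuming one page in flight per die and the 900 μs figure quoted above):

```python
# Theoretical NAND program throughput with no interleaving:
# 8 dies, 4 KiB pages, ~900 us per page program, one page in flight per die.
dies = 8
page_kib = 4
t_prog = 900e-6                      # page program time in seconds

pages_per_sec_per_die = 1 / t_prog   # ~1111 programs/s per die
mib_per_sec = dies * page_kib * pages_per_sec_per_die / 1024
print(f"{mib_per_sec:.1f} MiB/s")    # ~34.7 MiB/s
```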

  2. #1302
    SynbiosVyse
    Guest
    18.40 hours
    18.8203 TiB written
    58.20 MB/s
    MD5 ok

    E9 8832
    EA/F1 19328

  3. #1303
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    Quote Originally Posted by SynbiosVyse View Post
    18.40 hours
    18.8203 TiB written
    58.20 MB/s
    MD5 ok

    E9 8832
    EA/F1 19328
    Also need B1's raw, 05's raw, and E7's normalized value. In exchange, I don't need hours or TiB written (from Anvil's app).

  4. #1304
    SynbiosVyse
    Guest
    You were right :p Health finally dropped to 97% today.

    05: 0
    B1: 6
    E7: 97%
    E9: 8960
    EA/F1: 19456

  5. #1305
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
SynbiosVyse, something does not seem right. Are you running at 46% fill? It seems like you have racked up a lot of writes since your last update. Even if the last screenshot was taken just before the MWI turned to 99%, it still seems a big drop to now be at 97%.

    Sorry, don't mean to doubt you, but the data is quite strange. Certainly very different to the OCZ drives.

  6. #1306
    SynbiosVyse
    Guest
Quote Originally Posted by Ao1 View Post
    SynbiosVyse, something does not seem right. Are you running at 46% fill? It seems like you have racked up a lot of writes since your last update. Even if the last screenshot was taken just before the MWI turned to 99%, it still seems a big drop to now be at 97%.

    Sorry, don't mean to doubt you, but the data is quite strange. Certainly very different to the OCZ drives.

    E9 (if correct) is only showing 5,312 GiB of writes to NAND. F1, host writes, is showing 14,976 GiB. Were you running 0 fill? Now 46% fill?
    You were right with your assessment previously. I was originally running 0-fill and changed to 46%.

My drive only has a few MiB of data, as I mentioned before, and I set min GiB free to 1 GiB, quite a different setup from what you guys were running. My goal was to kill this drive as fast as possible.

    I did see 100% health yesterday and when I first looked at it today it was at 97%. Unfortunately if it was ever set to a value in between, I missed it.

  7. #1307
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    Quote Originally Posted by Ao1 View Post
SynbiosVyse, something does not seem right. Are you running at 46% fill? It seems like you have racked up a lot of writes since your last update. Even if the last screenshot was taken just before the MWI turned to 99%, it still seems a big drop to now be at 97%.

    Sorry, don't mean to doubt you, but the data is quite strange. Certainly very different to the OCZ drives.
I don't think it's that far off the V2 40GB. The F40-A had ~3.56TiB of NAND writes between 100 and 97, while the V2 40GB had ~6TiB (5.5TiB host writes * 1.1x) of NAND writes over the same span. Assuming the V2-40 has 34nm 5000-cycle NAND and the F40-A has 25nm 3000-cycle NAND, it works out pretty well (see the check below).

    At this rate, it seems the F40-A is only 2 days away from LTT activating (if it's set to 1 year lifetime).
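For what it's worth, the ratio checks out numerically. A quick sketch using only the figures quoted in this post (the 1.1x WA for the V2 is Vapor's assumption):

```python
# NAND writes consumed between MWI 100 and 97, per the post:
f40a_nand_tib = 3.56          # F40-A, assumed 25nm 3000-cycle NAND
v2_nand_tib = 5.5 * 1.1       # V2 40GB: host writes * assumed 1.1x WA ~= 6 TiB

print(f40a_nand_tib / v2_nand_tib)  # ~0.59 -- ratio of writes consumed
print(3000 / 5000)                  # 0.60 -- ratio of rated cycles, a close match
```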

  8. #1308
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by johnw View Post
    415.193 TiB, 1125 hours, sa177: 1/1/33990, sa178: 60/60/398
    423.894 TiB, 1149 hours, sa177: 1/1/34684, sa178: 58/58/418

    Average speed reported by Anvil's app has been steady at about 113MB/s.

    The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.

    64GB Samsung 470

  9. #1309
    SynbiosVyse
    Guest
    05: 0
    B1: 6
    E7: 97%
    E9: 9472
    EA/F1: 20096

    22.26 hours
    19.5930 TiB
    58.23 MB/s avg
    MD5 Ok

    What are some indicators of LTT? Slow write speed?

Has anyone doing these tests ever reached the point where they could not write to the drive at all?

    I have another one of these drives (virgin) ready to go again if you guys want to see another test with more controlled parameters.

  10. #1310
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
Whilst I don't think the erase time is an issue for the WA calculation, I'm still not sure how WA has been calculated.

    It's interesting to see the difference TRIM has made to bluestang. Significantly faster write speeds and reduced WA.
The write amplification is calculated as (sa177 raw * flash capacity) / (writes to the SSD); there's a short sketch below. This of course assumes that sa177 raw is counting the average number of erase cycles of the flash in the SSD.

    I'm not surprised that TRIM helped write speed and reduced WA. That is exactly what TRIM is supposed to do. By increasing the number of invalid blocks for GC to work with, performance is increased and write amplification is reduced since collecting invalid pages is more efficient when there is more "scratch" space. That is almost exactly the same thing as increasing over-provisioning in order to increase lifetime and help performance.
    Last edited by johnw; 08-14-2011 at 12:24 PM.
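For reference, johnw's estimate above fits in a one-liner. A minimal sketch, assuming sa177 raw really is average erase cycles and that the 64GB drive carries 64 GiB of raw flash, so treat the absolute number with caution:

```python
def write_amplification(avg_erase_cycles, flash_capacity_gib, host_writes_gib):
    """WA = (average erase cycles * raw flash capacity) / writes sent to the SSD."""
    return avg_erase_cycles * flash_capacity_gib / host_writes_gib

# Samsung 470 figures from this page: sa177 raw 34684 at 423.894 TiB host writes.
print(write_amplification(34684, 64, 423.894 * 1024))  # ~5.1 under these assumptions
```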

  11. #1311
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    johnw, your reallocations are accelerating

    Aug14HostLine.png

(the disparity between the bottom of the pack and your reallocations made me switch to a logarithmic scale for that axis... and the line still appears nearly linear with a positive slope, which on a log scale means exponential growth, i.e. acceleration)

  12. #1312
    SynbiosVyse
    Guest
    How do you know the reallocations? Is that C4?

  13. #1313
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by sergiu View Post
I would not expect to see any increase in writes during a secure erase (SE). I think a cycle for a NAND cell is an erase followed by a program operation, so during an SE you only get half of the cycle.

    @johnw: could you run the endurance test for 20-30TB with TRIM disabled? I am curious to see what the WA would be when the drive has no clue about what has been erased.

    Also, about general write performance: Ao1 mentioned earlier that programming a page normally takes 900 μs. 8 dies * 4 KiB * (1 s / 0.9 ms) ≈ 34.7 MiB/s, which is much lower than what a normal SSD can do. Does anybody know how many pages can be programmed in parallel on one die?
I don't think a program automatically follows an erase. My understanding is that "program" is a page operation (basically a write to a page), and it can only be done on a page in a block that has been erased (you cannot re-write or re-program a page in place). So it would make no sense to program the pages in a block after erasing the block, unless the SSD had actual data to write to them.

    I would have tried it with TRIM disabled if we had thought of it a couple hundred TiB ago, but now I think the Samsung 470 is in deterioration (with sa178 moving quickly) and I do not want to disturb the conditions of the experiment now. Maybe someone else with a Samsung 470 can try that experiment.

As for flash write speed: writes can be interleaved, possibly up to 5 per channel, but I think that requires more dies than channels. For example, if there are 8 channels and 32 dies, the writes can be interleaved 4 ways, effectively quadrupling the write speed over the number you computed there (see the sketch below). I think this may only be possible with synchronous flash, but I am not certain that it cannot be done with async flash (although async flash is slower at writes than sync flash, so there must be a reason for that).

    Even with interleaving at 5 ways, it does not explain the 250-300 MB/s write speeds that can be achieved on 240-256GB SSDs using 8GiB flash dies. There must be additional tricks beyond interleaving to increase the write speed.
    Last edited by johnw; 08-14-2011 at 12:59 PM.
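A rough model of that interleaving argument, using the 34.7 MiB/s baseline from post #1301 (a sketch; real controllers overlap transfers and programs in more complicated ways than this):

```python
# With more dies than channels, programs to different dies on the same
# channel can overlap, raising effective parallelism.
dies, channels = 32, 8
interleave = min(dies // channels, 5)   # up to ~5-way per channel, per the post

base_mib_s = 34.7                       # 8 dies, one page in flight each (#1301)
print(base_mib_s * interleave)          # ~138.8 MiB/s -- still short of 250-300 MB/s
```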

  14. #1314
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    (disparity between the bottom of the pack and your reallocations made me switch to a logarithmic scale for that axis....and the line still appears nearly linear with a slope greater than 0, meaning acceleration)
    Can you set the minimum for the reallocated sectors axis as 1 instead of 0.125?

  15. #1315
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    johnw, your reallocations are accelerating
The last part of the curve looks like a doubling every 40 or 50 TiB. If that holds, then sa178 should cross 1000 (normalized 1) in about 2.5 more doublings, or about 100 to 120 TiB from now (sketched below), which would be a total of 525 to 545 TiB.
    Last edited by johnw; 08-14-2011 at 12:41 PM.
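The extrapolation johnw describes is simple enough to sketch; note his ~2.5 doublings is read off the curve, not derived from any single data point quoted here:

```python
# Remaining writes = doublings still needed * TiB written per doubling.
def tib_remaining(doublings_needed, tib_per_doubling):
    return doublings_needed * tib_per_doubling

print(tib_remaining(2.5, 40))   # 100 TiB at a 40 TiB doubling time
print(tib_remaining(2.5, 48))   # 120 TiB at a 48 TiB doubling time
```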

  16. #1316
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    Quote Originally Posted by johnw View Post
    Can you set the minimum for the reallocated sectors axis as 1 instead of 0.125?
Might as well... I was holding off until the reallocations of a drive (any drive) climbed, but there's no sense having that awkward dead space at the bottom when the room can be useful, rather than awkward, up top.

    Aug14HostLine.png

    Quote Originally Posted by SynbiosVyse View Post
    How do you know the reallocations? Is that C4?
    05 and C4 raw values for your F40-A, I believe
    Quote Originally Posted by SynbiosVyse View Post
    What are some indicators of LTT? Slow write speed?
    I may be wrong, but I think that's the only indicator on an SF-1200 drive.

  17. #1317
    SynbiosVyse
    Guest
    Quote Originally Posted by Vapor View Post
    05 and C4 raw values for your F40-A, I believe
    Okay both are still 0 for me. I will start reporting them once I see them go up.

  18. #1318
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    C300 Update

    204.177TiB host writes, 31 MWI, 3452 raw wear, 2048/1 reallocations, MD5 OK, 62.5MiB/sec


    SF-1200 nLTT Update

    79.69TiB host writes, 50.69TiB NAND writes, 77 MWI, 811 raw wear (equiv), wear range delta 3, MD5 OK, 56.3MiB/sec
    Last edited by Vapor; 08-14-2011 at 01:55 PM. Reason: SF-1200 nLTT added

  19. #1319
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    198
    Quote Originally Posted by johnw View Post
I don't think a program automatically follows an erase. My understanding is that "program" is a page operation (basically a write to a page), and it can only be done on a page in a block that has been erased (you cannot re-write or re-program a page in place). So it would make no sense to program the pages in a block after erasing the block, unless the SSD had actual data to write to them.
Correct. What I referred to as a "cycle" is the idea that you cannot have a program unless the page is part of a block that was erased. Because there is normally no program operation during an SE, there is no "cycle" and no write counting. Now what would be interesting is to have counters for both block erases and page writes, because that would give us a 100% accurate write amplification number (see the sketch below). Unfortunately, I have not seen an SSD that counts both.
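If a drive did expose both counters, the exact figure would be trivial to compute. A hypothetical sketch; no SSD in this thread reports these counters, and the example values are made up:

```python
# Hypothetical counters: pages the controller programmed vs. bytes the host wrote.
def exact_wa(pages_programmed, page_bytes, host_bytes_written):
    return pages_programmed * page_bytes / host_bytes_written

# e.g. 1.5e9 programmed 4 KiB pages against 4 TiB of host writes:
print(exact_wa(1.5e9, 4096, 4 * 1024**4))  # ~1.40
```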

  20. #1320
    Xtreme Addict bluestang's Avatar
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,405
    M225->Vertex Turbo 64GB Update:

    75.24 TiB
    373 hours
    MWI 54 (drops by 1 for every 50 raw wear)
    2337 Raw Wear
    114.26 MB/s avg for the last 66.5 hours (on W7 x64)
    MD5 OK

    CDI-M225-OCZ-VERTEX-TURBO-08.15.2011.PNG
    Home PC -- Cruncher #1
    GA-P67A-UD4-B3 BIOS F8 modded, i7-2600k (L051B138) @ 4.5 GHz, 1.260v full load, HT Enabled, Corsair H70 exhausted @ 1600rpm
    Samsung Green 2x4GB @2133 C10, Gigabyte 7950 @1200/1250, Vertex 4 128GB, 2x3TB WD Red, F4EG 2TB, BR Burner, Win7 Ult x64, CM690, HX750

    Work PC -- Cruncher #2 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Sapphire 6970 @955/1475, Vertex 2 60GB, 2x500GB Hitachi R1, Win7 Ent x64, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->S/PDIF->Kenwood Sovereign VR-4090B->JBL Studio Series Floorstanding Speakers

    BTC: 1K91nTPceMcap66AhDBgMx8t87TomgAABH LTC: LNqbVqebzpMwuZHq95qRTfP73kR2FRZWS4

  21. #1321
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by johnw View Post
    423.894 TiB, 1149 hours, sa177: 1/1/34684, sa178: 58/58/418
    433.397 TiB, 1175 hours, sa177: 1/1/35458, sa178: 55/55/452, 111.30 MB/s

    The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.

    64GB Samsung 470

Yesterday I noticed that the Avg MB/s field had been creeping down very slowly, in the 0.01 digit each day. It was still 113.13, but it seemed like the speed was decreasing. I think ASU was averaging the speed from the time the program was last started, which was many days ago. So after yesterday's data, I restarted ASU so that the reported average would only cover the last day; today it read 111.30 MB/s. I'll keep restarting ASU after each day's data and reporting the speed, so the average is only for the last day (a sketch of the equivalent calculation is below).
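The same daily figure can be recovered without restarting anything, from two cumulative snapshots. A sketch using the totals quoted in this post (MB/s is decimal, as ASU reports it):

```python
def interval_mb_s(tib_before, hours_before, tib_after, hours_after):
    """Average speed over the interval between two cumulative snapshots."""
    bytes_written = (tib_after - tib_before) * 1024**4
    return bytes_written / ((hours_after - hours_before) * 3600) / 1e6

# Yesterday's vs. today's totals for the Samsung 470:
print(interval_mb_s(423.894, 1149, 433.397, 1175))  # ~111.6 MB/s for the last day
```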

  22. #1322
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    Very interesting on the last two updates.

M225->Vertex Turbo 64GB's recent WA is still shrinking. It was 1.89x for the initial chunk of the TRIM-enabled test, 1.228x for the 2nd chunk, and 1.14x for the most recent chunk. It makes sense that net WA falls as the proportion of low-WA writes increases, but the actual recent/instantaneous WA is still shrinking too (a sketch of the chunk calculation is below).

    I don't want to get too spammy with the charts (full chart update later today, I think), but the Samsung 470's reallocated sector count slope is starting to turn upward even with the logarithmic scale....death march has turned into a jog.
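The per-chunk WA Vapor quotes falls out of cumulative counters the same way as the daily speed above. A sketch with hypothetical snapshot values, not the actual chart data:

```python
# Recent/instantaneous WA over a chunk, from cumulative (host, NAND) totals
# at the chunk boundaries:
def chunk_wa(host_tib_0, nand_tib_0, host_tib_1, nand_tib_1):
    return (nand_tib_1 - nand_tib_0) / (host_tib_1 - host_tib_0)

# Hypothetical boundary snapshots that would yield a ~1.14x recent chunk:
print(chunk_wa(60.0, 30.0, 75.24, 47.37))  # ~1.14
```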

  23. #1323
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    m4 update:

    Been away for the weekend but my m4 has been working like a busy bee.

    304.3266 TiB
    1004 hours
    Avg speed 89.46 MiB/s.
    AD gone from 193 to 179.
    P/E 5329.
    MD5 OK.
    Still no reallocated sectors

    M4-CT064 M4SSD2 SATA Disk Device_1GB-20110816-0051.PNG
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  24. #1324
    SynbiosVyse
    Guest
    05: 0
    B1: 9
    E7: 94%
    E9: 13952
    EA/F1: 25728

    49.32 hours
    25.0284 TiB
    58.38 MB/s avg
    MD5 Ok

  25. #1325
    Admin Vapor's Avatar
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    13,107
    C300 Update

    210.195TiB host writes, 29 MWI, 3553 raw wear indicator, 2048/1 reallocations, 62.45MiB/sec, MD5 OK


    SF-1200 nLTT Update

    84.938TiB host writes, 54.75TiB NAND writes, 75 MWI, 876 raw wear (equiv), 56.3MiB/sec, MD5 OK
