Page 180 of 220
Results 4,476 to 4,500 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #4476
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Samsung 830 256GB Day 68

    1,645,138.28GiB
    1,606.58 TiB

    284.20 MB/s Average
    1632 Hours

    7146 Wear Leveling Count
    MWI 1

    6/0 Erase/Program Fail
    Used Reserved Block Count: 12/24572 sectors

  2. #4477
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Endurance_cr_20120526.png

    @canthearu

    391.26TiB is the latest recording I've got for the V4, how much did the last effort add to this?

    A pity about the V4, much too early.

    --

    The 330's MWI changed to 98 this afternoon; looks like it'll take a few readings before it stabilizes.

    MWI 99
    Host Writes F1 (241) 691942*32=22142144 = 21.11TiB
    NAND Writes F9 (249) 15230GB = 14.87TiB

    MWI 98
    Host Writes F1 (241) 957613*32=30643616 = 29.22TiB
    NAND Writes F9 (249) 21086GB = 20.59TiB
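    (For reference, the F1 raw value on these drives counts 32 MiB units; a minimal sketch of the conversion used above, helper name mine:)

```python
def f1_raw_to_tib(raw):
    # Intel SMART attribute F1 (Total LBAs Written) increments per 32 MiB
    return raw * 32 / 1024 / 1024  # MiB -> TiB

print(round(f1_raw_to_tib(691942), 2))  # MWI 99 reading
print(round(f1_raw_to_tib(957613), 2))  # MWI 98 reading, ~29.22 TiB
```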

    Intel 330 120GB

    30.64TB Host writes
    Reallocated sectors : 05 0
    Available Reserved Space : E8 100
    MWI 98
    [F1] Total LBAs Written 1003997
    [F2] Total LBAs Read 22623
    [F9] Total NAND Writes 22110GB
    POH 106
    MD5 OK

    122.55MiB/s on avg (~9 hours)

    --

    Kingston SSDNow 40GB (X25-V)

    985.91TB Host writes
    Reallocated sectors : 05 46 up 1
    Available Reserved Space : E8 99
    POH 8831
    MD5 OK

    36.92MiB/s on avg (~9 hours)

    --

    Both drives had a short break 9 hours ago, I had to check that MD5 testing was correctly configured for the 330. (Host reads were stuck, all was OK)


    ... and Thanks!
    Last edited by Anvil; 05-26-2012 at 11:31 AM.

  3. #4478
    Registered User
    Join Date
    Nov 2008
    Location
    Canada
    Posts
    60
    RIP V4.

    I guess there is not much more to do with current SATA technology; the SSD market is going to stabilize until SATA 12 Gbps arrives. The question is, which of the companies selling SSDs will survive until then? All of them?
    ---------------------------------
    Cooler Master HAF912
    Kingston Twister bearing 120mm fans
    Sunbeam Rheosmart, fans controlled with Speedfan
    Asrock Z68 Extreme3 Gen3, modded BIOS OROM 11.6
    2500K @ 4.5 GHz
    OCZ Vendetta 2
    Visionteck HD7850
    4 x 4GB Gskill 1600MHz 1.5V
    1680GB of SSD: Mushkin Chronos Deluxe 240GB, Sandisk Extreme 480GB, 2 x Mushkin Chronos 480GB RAID0
    LG 10x Blu-ray burner and Lite-On DVD burner

  4. #4479
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    @Anvil

    The V4 did virtually nothing while in zombie mode, so I'm going to take it from the time of the first error (maybe 100GB overall was added after the first error, which is virtually nothing).

    Vertex 4 Death Certificate:

    Date of Death: 25/5/2012 - 8:15am (UTC +8 Timezone)
    Days of Operation: Approx day 39
    Cause of Death: Excessive bad blocks caused the drive to enter a panic lock state during an erase phase.

    Total ASU Writes are 400,650.26 GiB (391.25 TiB)
    Total Host writes as per SMART 403,316.99 GiB (393.86 TiB)
    Remaining life: 31 (61 new - 30 used on old firmware)

    Disappointing result for a 128GB drive!

  5. #4480
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Oops, the poor old Intel 520 got left behind in all this commotion:

    Intel 520 60GB - Day 93.5

    Drive hours: 2,188
    ASU GiB written: 693,156.65 GiB (676.91 TiB)
    Avg MB/s: 93.47 MB/s
    MD5: OK

    Host GB written (F1): 698,113.12 GiB (681.75 TiB, 22339620 raw)
    NAND writes (F9): 494,215 GiB (482.63 TiB)

    Reallocated sectors (05): 0
    Failure count (AB, AC): 0 program, 0 erase
    Raw Error Rate (8B): 120 normalised
    Available Reserved Space (AA): 100 normalised
    Media Wearout Indicator (E9): 1

  6. #4481
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The V4 did slightly better than the Octane... not that either one performed
    anywhere near acceptably.

  7. #4482
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Christopher ... I'm now less inclined to believe that you simply got a drive with dodgy NAND. Something else is killing the NAND prematurely.

  8. #4483
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by canthearu View Post
    Christopher ... I'm now less inclined to believe that you simply got a drive with dodgy NAND. Something else is killing the NAND prematurely.
    What could kill just one part of one NAND device? Who knows?

    I've prepped up a new drive, the M3P 128GB. I should start it up soon, but preliminary testing shows that it should be able to just about match the 830 256GB's speed at half the capacity, and that pleases me.

    UPDATE
    The M3P is baking in the oven now... it doesn't look like the MWI works (I'm terribly angered by this), unless the MWI is based on >5K P/E cycles or something. Either that, or it's based solely on reallocations (very possible). It's averaging around 270MB/s for the first couple of hours.
    Last edited by Christopher; 05-26-2012 at 11:09 PM.

  9. #4484
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    @Christopher/ canthearu

    The V4 and Octane both used IMFT 25nm? The first post does not indicate the NAND part number. Do you know what it is?

    Edit:

    According to the info in the first page of the thread there is a consistent premature failure of all drives that have used Indilinx controllers, regardless of model or NAND type used. Only two of the Indilinx drives managed to get past the MWI, but considering the NAND that was used in those drives they should have lasted longer. The other three failed well before the MWI expired.

    There is nothing wrong with the Marvell controller. It has been used problem free by Intel & Micron for years. I’d say that this is clear evidence that Indilinx firmware is the problem and not the NAND.

    • Crucial M225 64 GB – Samsung 51nm (SLC?) = 880 TB (Exceeded)
    • OCZ Vertex Turbo 64 GB – Samsung 51nm (SLC?) = 116 TB (MWI – 58)
    • OCZ Vertex Turbo 64 GB – Samsung 51nm (SLC)? = 499 TB (Exceeded)
    • Octane 128 GB – Intel 25nm = 303.82 TB (MWI – 85)
    • Vertex 4 128GB – Intel 25nm = 393 TB (MWI – 31)
    Last edited by Ao1; 05-27-2012 at 12:54 AM.

  10. #4485
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I split the failed-drives chart so that the drives that did not meet their MWI specification are separated into a new chart.
    (the original was getting crowded)

    Endurance_failed_drives_mwi_tib.png

    Endurance_failed_drives_mwi_pecount.png

    @Christopher
    My 330 does not update Host Reads on its own (it needs assistance); not sure how to handle it, but it appears to be stuck until the drive is given a small break. (weird)

    Could be the same thing with the M3P, give it some time, it might start moving.
    270 MB/s is awesome for a 128GB drive, what are your "settings"?

  11. #4486
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Anvil, if I’m reading the table in the first post correctly it looks like the Kingston V100+ reached MWI 0 @ 196 TB. It then notched up a total of 368 TB before it died. It used 32nm Toshiba NAND, so I guess the P/E spec was 5K (?). (Actual = 6K+)

  12. #4487
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I noticed, and did find it a bit strange.

    Host Writes/Capacity is not the same as MWI though, so it could be that the MWI still had some left.
    (will have to go back and find some screenshots or look at the SMART attributes and values)

  13. #4488
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I can’t remember if P/E specs are based on the minimum or average count. It’s in the thread somewhere and I believe Intel stated it was the minimum, but I can’t be 100% sure.

    If it is the minimum then relocations should (in theory) not occur until after the MWI has expired. If it is average it would be reasonable for relocations to occur.

    It would be interesting to do a chart to see when relocations first appeared for each drive tested and map them to the P/E cycle count. I started to do it, but it's way too much work to go through all the posts to find out. (Not too bad to do if the log files are still available.)
    Anyway, let's assume that the P/E count is based on the minimum count. A relocation event before the P/E count has expired should not occur; if it does, it should be a limited and isolated occurrence. Relocations past the MWI should be expected, but as the X25-V and other drives have shown, writes can go significantly past the MWI without resulting in major relocations. Compare that to the Octane and V4, which both used IMFT NAND (AFAIK).

    OCZ Octane (relocations started at MWI 97 and then rapidly accelerated to 226 before total failure of the drive)

    OCZ Vertex 4 128 GB (relocations started at MWI 52 and then rapidly accelerated to 243 before total failure of the drive)

    Samsung 830 256 GB – First relocation occurred past the MWI (1,160.90 TB) and has slowly increased to 12 having written a further 446 TB of data.


    Edit:
    As the MWI is based on the theoretical P/E count, it should be possible to determine the combined product of WA and WL for each drive tested, at the point the MWI expired (assuming the drive reported decent SMART info).

    Capacity * P/E spec / (WA * WL) = Total theoretical write capability
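    As a rough sketch of that calculation (the function name and GiB units are my assumptions; host writes are taken at the point the MWI expired):

```python
def combined_wa_wl(capacity_gib, pe_spec, host_gib_at_mwi_expiry):
    # Theoretical raw NAND write capability divided by observed host
    # writes gives the combined WA * WL overhead implied by the MWI.
    return capacity_gib * pe_spec / host_gib_at_mwi_expiry

# e.g. a 256GiB drive rated for 5K P/E that expired its MWI at ~827,911 GiB:
print(round(combined_wa_wl(256, 5000, 827911), 2))  # ~1.55
```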
    Last edited by Ao1; 05-27-2012 at 08:13 AM.

  14. #4489
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Ao1,

    V4 128GB Intel 29F64G08ACME3
    Octane 128GB Intel 29F64G08ACME2

    The 830 really has 6, and not 12, reallocation events. Somehow, the 830's Used Reserved Block Count attribute is double the runtime bad block count, and Reallocated Sector Count is 2048 x Used Reserved.

    Not sure why this is, exactly, as I don't think it's using 2MB blocks, but I don't think it's had 12 reallocations regardless.
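    A hedged arithmetic check of the attribute relationship described above (assuming 512-byte sectors; the figures are the ones reported for the 830 in this thread):

```python
used_reserved_blocks = 12   # Used Reserved Block Count, as reported above
runtime_bad_blocks = 6      # actual reallocation events per the post
reallocated_sectors = used_reserved_blocks * 2048
print(reallocated_sectors)  # 24576 -- close to the 24572 sectors reported
# Per real event that is 4096 sectors, i.e. 2 MiB at 512 bytes/sector,
# which is the implied block size the post is doubtful about:
print(reallocated_sectors // runtime_bad_blocks * 512 // 2**20)  # 2
```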


    Anvil,

    I just think Plextor couldn't add a traditional MWI. Like the Octane, the M3P is just using reallocations to determine MWI (the pre-1.03 FW already possessed a reallocated sector count). This would have been much easier to add than a "real" MWI counter. All in all, I was very excited to see all the new attributes in 1.03, but it looks as though they're mostly useless (aside from the write counter).
    Last edited by Christopher; 05-27-2012 at 07:46 AM.

  15. #4490
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Plextor M3P 128GB Day 0

    12,267.39 GiB
    11.98 TiB

    268.31 MB/s Average

    10 Hours

    Reallocated Event Count 0

    Attachments: m3p day 0 a 2.PNG, m3p day 0 s 2.PNG

    Samsung 830 256GB Day 69

    1,665,964.47 GiB
    1,626.91 TiB

    299.17 MB/s Average
    1653 Hours

    7260 Wear Leveling Count
    MWI 1

    6/0 Erase/Program Fail
    Used Reserved Block Count: 12/24572 sectors

  16. #4491
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The M3P is using the 9174-BLD2 controller with Toshiba TH58TEG702HBA4C 24nm Toggle NAND in 8 packages x 4 die x 32 Gbit.

    I'm using 39.5 GB (42,479,930,379 bytes) static data.

    Test officially started 27-May-2012 02:22:32 hours.
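    A quick sanity check on that die configuration (plain arithmetic, assuming 8 bits per byte):

```python
packages, dies_per_package, gbit_per_die = 8, 4, 32
raw_gb = packages * dies_per_package * gbit_per_die // 8  # Gbit -> GB
print(raw_gb)  # 128, matching the M3P's 128GB capacity
```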

  17. #4492
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I've updated the info in post #1 on the M3P, an extremely exciting drive; let's see how it compares to the Samsung.

    @Ao1
    I've been looking for data on the Kingston V100+ and it is there. It looks like we couldn't decide on whether the attribute in question really was an MWI counter; anyway, I did put it in the table. TiB written when it ran out of MWI would mean ~3300, so it looks to be correct.

    I've got logs for my drives from the day I started using the log, January 9th this year.
    I'll find the missing "points" for the X25-V, shouldn't be that much of a job if we do our own drives.

  18. #4493
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    An overview of all drives that are not running anymore.

    Endurance_failed_drives.png

    The 460 is one of the drives for which we never really decided/agreed on when the MWI was exhausted.
    (Johnw, looking back and vs the 830 attributes, 60TiB looks to be a very low figure for that drive)

    Endurance_failed_drives_mwi_tib.png

    Endurance_failed_drives_mwi_pecount.png
    Last edited by Anvil; 05-27-2012 at 08:52 AM.

  19. #4494
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Christopher View Post
    Ao1,

    V4 128GB Intel 29F64G08ACME3
    Octane 128GB Intel 29F64G08ACME2
    I can’t find a data sheet for either of these NAND types, but onethreehill has done an excellent job of collecting data on controllers and NAND.

    http://forums.storagereview.com/inde...eviews-thread/

    According to that link the NAND in the V4/Octane has been used in a number of different drives. The Octane also used Hynix NAND, and there have been widespread reports of premature relocated sectors with Hynix NAND. So the issue is evident on NAND from two different suppliers.

    Intel 29F64G08ACME3
    Kingston SSDNow V+200
    Zalman F1 120GB
    Team Xtreem S3

    Intel 29F64G08ACME2
    Corsair Force Series GT 120GB
    Mach Xtreme DS Turbo 120GB
    Silicon Power V30 60GB
    Silicon Power V30 120GB

  20. #4495
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post

    I've got logs for my drives from the day I started using the log, January 9th this year.
    I'll find the missing "points" for the X25-V, shouldn't be that much of a job if we do our own drives.
    That would be great; there is a lot of info missing from the tables in the first post. It would be good to see a chart showing how relocated sectors occurred for each drive, and also a calculation of WA/WL based on when the MWI expired (if sufficient SMART data is available).

  21. #4496
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    I can’t find a data sheet for either of these NAND types, but onethreehill has done an excellent job of collecting data on controllers and NAND.

    http://forums.storagereview.com/inde...eviews-thread/
    The same info is listed in the storage section here as well.

  22. #4497
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Lol, that will teach me for only hanging around in the storage section.

    The X25-M G1 used 50nm NAND and Intel claimed a write amplification factor of 1.1. The best possible is 1, so from 2008, when the X25-M G1 came out, there has been near-zero scope for improvement in f/w algorithms to further reduce WA.

    The only way to get below 1 is to use compression. Otherwise, to get closer to 1 the only option is to use DRAM to optimise how writes are placed on the NAND.

    Looking at it that way it makes more sense to me now why Intel have switched to SF, and why drives with no compression are having to use large DRAM caches to offset reduced P/E cycles (which Intel have always avoided).

    It could be that drives with large DRAM caches are negatively impacted by the sustained writes in the endurance workload, as opposed to the normal workloads that occur in real life. That does not explain the V4 or Octane, however.
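    For what it's worth, the 330 figures posted earlier in the thread already show WA below 1 thanks to SandForce compression; a minimal sketch (helper name is mine, and the F1 raw value is assumed to count 32 MiB units as on other Intel drives):

```python
def sandforce_wa(nand_f9_gib, host_f1_raw):
    # WA = NAND writes / host writes; F1 raw assumed to count 32 MiB units
    return nand_f9_gib / (host_f1_raw * 32 / 1024)

# Intel 330 120GB reading from earlier: F9 = 22110 GiB, F1 raw = 1003997
print(round(sandforce_wa(22110, 1003997), 2))  # ~0.7
```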

  23. #4498
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The 830 has a relatively large DRAM cache, and it doesn't seem to be adversely affected. I think Intel just went with SF as a hedge against smaller lithography. When they started with SF, they probably had no idea what 20nm was going to look like in production form (with respect to longevity). At least in Intel's case, SF seems to be doing its job in that respect, but I'm not super stoked about 20/19nm regardless of what controller it's bolted to.

  24. #4499
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Assuming P/E cycles are calculated correctly with the 830, your drive hit 827,911 GiB @ MWI 1.

    Capacity * P/E spec / (WA * WL) = Total theoretical write capability

    256 * 5,000 = 1,280,000 GiB; 1,280,000 / 827,911 ≈ 1.55

    WA * WL ≈ 1.55

    I'll work it out for a drive with no cache later.

  25. #4500
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    This was the post where the MWI reached 1 for the Kingston V+100. Link. Read Error Rate and Reallocated Sectors Count were at 0; they didn't move when the SSD died.

    M4v1 reached MWI=1 without a change in Read Error Rate or Reallocated Sectors Count at 172.3671 TiB. The first read error came at 700.4629 TiB.
    M4v2 reached MWI=1 without a change in Read Error Rate or Reallocated Sectors Count at 174.3919 TiB. At 951 TiB it still has none.
    Intel X25-M G1 reached MWI=1 without a change in Read Error Rate or Reallocated Sectors Count at 147.5687 TiB. The first reallocated sector came at 308.22 TiB.
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

