Results 2,676 to 2,700 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #2676
    SSDabuser Christopher's Avatar
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,433
It's really a shame that SSD parts aren't user-interchangeable... You can't switch out processors or NAND. How great would it be if they were socketed? I could drop in a SF controller in place of a Marvell or Indy, then swap some 51nm for the 34nm IMFT. Basically like my sound card is made to swap op-amps.

  2. #2677
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    482.24TB Host writes
    Reallocated sectors : 05 12
    Available Reserved Space : E8 99

    MD5 OK

    33.08MiB/s on avg (~88 hours)

    --

    Corsair Force 3 120GB

    01 82/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 54 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 383450 (Raw writes) ->374TiB
    F1 510394 (Host writes) ->498TiB

    MD5 OK

    106.58MiB/s on avg (~88 hours)

    power on hours : 1476

    B1 has dropped from 55 to 54.

    Looks like a data retention test for the F3 is due tonight
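The E9/F1 conversions above work out if one assumes the SandForce raw values count GiB written — a quick sketch of that assumption, not a documented SandForce fact:

```python
# Assumption: SandForce E9/F1 raw SMART values are GiB written
# (this matches the ->374TiB / ->498TiB conversions in the update).
def gib_to_tib(gib):
    return gib / 1024

nand_writes_tib = gib_to_tib(383450)  # E9, raw (NAND) writes -> ~374 TiB
host_writes_tib = gib_to_tib(510394)  # F1, host writes       -> ~498 TiB

# SandForce compresses data before writing, so NAND writes can be lower
# than host writes, i.e. write amplification below 1.0:
write_amplification = 383450 / 510394  # ~0.75
```

With the endurance app's compressible data, WA well below 1 is expected; incompressible data would push it back toward (or above) 1.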
    -
    Hardware:

  3. #2678
    Xtreme Addict bluestang's Avatar
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,405
    M225->Vertex Turbo 64GB Update:

    779.39 TiB (856.91 TB) total
    2016 hrs (Torture), 2835 hrs (Power-On)
    15272 Raw Wear
    118.63 MB/s avg for the last 25.51 hours (on W7 x64)
    MD5 OK
    C4-Erase Failure Block Count (Realloc Sectors) at 15.
    1=Bnk 6/Blk 2406
    2=Bnk 3/Blk 3925
    3=Bnk 0/Blk 1766
    4=Bnk 0/Blk 829
    5=Bnk 4/Blk 3191
    6=Bnk 7/Blk 937
    7=Bnk 7/Blk 1980
    8=Bnk 7/Blk 442
    9=Bnk 7/Blk 700
    10=Bnk 2/Blk 1066
    11=Bnk 7/Blk 85
    12=Bnk 4/Blk 3192
    13=Bnk 7/Blk 280
    14=Bnk 3/Blk 2375
    15=Bnk 7/Blk 768
    Home PC -- Cruncher #1
    GA-P67A-UD4-B3 BIOS F8 modded, i7-2600k (L051B138) @ 4.5 GHz, 1.260v full load, HT Enabled, Corsair H70 exhausted @ 1600rpm
    Samsung Green 2x4GB @2133 C10, Gigabyte 7950 @1200/1250, Vertex 4 128GB, 2x3TB WD Red, F4EG 2TB, BR Burner, Win7 Ult x64, CM690, HX750

    Work PC -- Cruncher #2 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Sapphire 6970 @955/1475, Vertex 2 60GB, 2x500GB Hitachi R1, Win7 Ent x64, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->S/PDIF->Kenwood Sovereign VR-4090B->JBL Studio Series Floorstanding Speakers

    BTC: 1K91nTPceMcap66AhDBgMx8t87TomgAABH LTC: LNqbVqebzpMwuZHq95qRTfP73kR2FRZWS4

  4. #2679
    Xtreme Addict
    Join Date
    Feb 2007
    Posts
    1,674
    you can't really have interchangeable controllers if they all have a different number of channels.

  5. #2680
    SSDabuser Christopher's Avatar
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,433
    Quote Originally Posted by Boogerlad View Post
    you can't really have interchangeable controllers if they all have a different number of channels.
    I get that. I was just staring at two dead drives and wishing I could assemble one working drive out of them.

    Though there is one company in Austria, or maybe Australia, that had a pseudo-modular PCIe SSD.

  6. #2681
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Today's update:
    Kingston V+100
    245.2253 TiB
    982 hours
    Avg speed 76.17 MiB/s
    AD still 1.
    168= 0 to 1 (SATA PHY Error Count)
    P/E?
    MD5 OK.
    Reallocated sectors : 00


    Intel X25-M G1 80GB
    54.7732 TiB
    18938 hours
    Reallocated sectors : 00
    MWI=07 to 01 to 115 to 113
    MD5 =OK
    52.17 MiB/s on avg
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  7. #2682
    SSDabuser Christopher's Avatar
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,433


    The 311 has a nice heavy feel to it, which I wasn't expecting, and it's a little faster too. I'm still on the fence about endurance testing it, though. I was running some numbers and I think it would just take too damn long. I think it would take somewhere near 2 years, maybe not even then. Who knows? Surely it would last a little longer than the Mushkin, and that's a start.

    It has more of a stainless steel top, but the same stupid 2mm spacer on it.





    --------------------------------------------------------------------------

    I've been digging in a little deeper as to why the Mushkin went T.U.

    Raw Read Error Rate is ECC + uRAISE errors.
    Retired block count is not actual blocks, but 100 * (retired blocks / minimum acceptable free space in blocks). Not sure how the value of 11 stacks up, but I believe that each retired block may be much more significant than I initially believed -- but I still don't think 11 is high.
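If that reading is right, the attribute can be sketched like this (the 256-block spare pool is purely hypothetical; the real minimum is controller/firmware specific):

```python
# Hypothetical sketch of the Retired Block Count normalization described
# above: the attribute is read as the percentage of the minimum
# acceptable spare pool consumed, not a raw block count.
def retired_block_value(retired_blocks, min_spare_blocks):
    return int(100 * retired_blocks / min_spare_blocks)

# With an assumed 256-block minimum spare pool, a value of 11 would
# correspond to roughly 29 physical blocks retired:
retired_block_value(29, 256)  # -> 11
```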
    Last edited by Christopher; 11-16-2011 at 07:19 PM.

  8. #2683
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Both drives have been disconnected for 11 hours, I had to do a quick check and they are both alive.
    (nothing read or written except for SMART attributes)
    They will stay disconnected for another 10-12 hours.

    Attachments are still not working ?!

  9. #2684
    Registered User Hopalong X's Avatar
    Join Date
    Jun 2011
    Posts
    87
    Quote Originally Posted by Christopher View Post
    The 311 has a nice heavy feel to it, which I wasn't expecting, and it's a little faster too. I'm still on the fence about endurance testing it, though. I was running some numbers and I think it would just take too damn long. I think it would take somewhere near 2 years, maybe not even then. Who knows? Surely it would last a little longer than the Mushkin, and that's a start.

    It has more of a stainless steel top, but the same stupid 2mm spacer on it.
    You could run Anvil's Endurance app for, say, 2 days to 2 weeks in order to get some generalized data.
    That info could be interesting for comparison to other Intels used here in their early usage.

    My opinion is that all info gathered could be useful.
    The first Vertex tested gave us useful info even though throttling eliminated it from further testing.

    Just a thought.

  10. #2685
    Xtreme Addict bluestang's Avatar
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,405
    M225->Vertex Turbo 64GB Update:

    788.68 TiB (867.16 TB) total
    2038 hrs (Torture), 2858 hrs (Power-On)
    15441 Raw Wear
    118.32 MB/s avg for the last 22.84 hours (on W7 x64)
    MD5 OK
    C4-Erase Failure Block Count (Realloc Sectors) from 15 to 16.
    1=Bnk 6/Blk 2406
    2=Bnk 3/Blk 3925
    3=Bnk 0/Blk 1766
    4=Bnk 0/Blk 829
    5=Bnk 4/Blk 3191
    6=Bnk 7/Blk 937
    7=Bnk 7/Blk 1980
    8=Bnk 7/Blk 442
    9=Bnk 7/Blk 700
    10=Bnk 2/Blk 1066
    11=Bnk 7/Blk 85
    12=Bnk 4/Blk 3192
    13=Bnk 7/Blk 280
    14=Bnk 3/Blk 2375
    15=Bnk 7/Blk 768
    16=Bnk 7/Blk 765


    Bank 7 now has 8 bad Blocks.
    Last edited by bluestang; 11-17-2011 at 07:57 AM. Reason: Typo

  11. #2686
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Hopalong X View Post
    You could run Anvil's Endurance app for, say, 2 days to 2 weeks in order to get some generalized data.
    That info could be interesting for comparison to other Intels used here in their early usage.

    My opinion is that all info gathered could be useful.
    The first Vertex tested gave us useful info even though throttling eliminated it from further testing.

    Just a thought.
    I’d agree. Based on observations to date, expiry of the MWI can be projected with precision, so you only really need to know the extent of writes it takes to diminish the MWI by a couple of percentage points to be able to establish how long the drive will last.

    Sadly, it looks like if you don’t want to risk losing your data you need to replace the SSD when the MWI expires. No doubt the MWI is conservative relative to the chances of losing data once it has expired, but testing that would be extremely time consuming.
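The linear projection described above is straightforward; a sketch with hypothetical numbers (not taken from any drive in this thread):

```python
# Linear MWI projection: assume writes consumed per MWI point stay
# constant and extrapolate over the full 100-point scale.
def projected_total_writes_tib(writes_so_far_tib, mwi_start, mwi_now):
    per_point = writes_so_far_tib / (mwi_start - mwi_now)
    return per_point * mwi_start

# Hypothetical: 10 TiB of host writes dropped the MWI from 100 to 98,
# so the drive should reach MWI 0 around 500 TiB total.
projected_total_writes_tib(10, 100, 98)  # -> 500.0
```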

  12. #2687
    SSDabuser Christopher's Avatar
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,433
    Ao1,

    I think once MWI expires you have a choice to make. How fast did you get to 0 MWI? If the answer is 1 week, then you're going to have problems sooner rather than later. If it takes you 19 months to get to MWI 0, you're probably still golden. Somehow, I'm only writing about 0.3 to 0.7GB a day to my main system drive in my primary desktop over the course of the 8 to 12 hours a day that this particular system is on. But really, if you can write under 100GB a day to your drive, I think it should last quite some time. Should you find yourself writing 10+TB a day to a drive, it's not going to last long in any event. Had I spread out the 486TB host writes of the Mushkin over the span of 12-24 months, it wouldn't really be a problem and the drive would most likely make it to the end of warranty. Large numbers of host writes probably aren't a problem unless they pile up quickly.

    Some of the drives, like the 40GB Intel units, just can't write fast enough to commit suicide, while others seem to be heading inexorably towards disaster. Maybe there is some sound logic to OCZ's insistence that SF drives have LTT.
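The write-budget arithmetic above works out as follows (decimal TB/GB, ignoring write amplification):

```python
# How long a fixed write budget lasts at a given daily write rate.
def days_to_write(total_tb, gb_per_day):
    return total_tb * 1000 / gb_per_day

days_to_write(486, 100)     # 4860.0 days (~13 years) at 100GB/day
days_to_write(486, 10_000)  # ~49 days at 10TB/day -- "not going to last long"
```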

  13. #2688
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    My first planned data retention test is finally over

    The first test lasted for 11 hours with no issues; the next test lasted for 24 hours. Both drives look to be fine, no issues so far

    I'll think about how and when the next test will be performed, but I expect that in about a week or so I'll leave one drive at a time off for 48 hours, and then the next week for 72.
    We can't really expect to find out whether the drive will last disconnected for months or years, so I think I'll settle for finding out when the drive can't take the weekend off.

  14. #2689
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Christopher View Post
    How fast did you get to 0 MWI? If the answer is 1 week, then you're going to have problems sooner rather than later. If it takes you 19 months to get to MWI 0, you're probably still golden.
    No, it is the number of erases that are primarily responsible for degrading the flash and reducing data retention time, not the frequency of erases. While there might be a slight additional degradation from a large number of erases in a short time, the effect is most likely small compared to degradation from number of erases.

    Unfortunately, data retention will be reduced significantly based primarily upon the number of erases performed on the flash. Assuming anything else is just wishful thinking.

  15. #2690
    SSDabuser Christopher's Avatar
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,433
    I was speaking in terms of bit errors, the kind that I believe killed the Mushkin. I don't believe it was a data retention issue since the drive had not been turned off in two weeks by the time it died. Flash degradation plays a part, but I do not think that the flash wore out all at once. The Raw Read Error Rate has everything to do with how long and how fast you write. The more rest time in between writes, the fewer ECC + uRAISE errors.

  16. #2691
    Uber Raid King Computurd's Avatar
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Quote Originally Posted by johnw View Post
    No, it is the number of erases that are primarily responsible for degrading the flash and reducing data retention time, not the frequency of erases. While there might be a slight additional degradation from a large number of erases in a short time, the effect is most likely small compared to degradation from number of erases.
    Primarily would be true, but there is a big impact of idle time on the condition of NAND. I believe that Ao1 posted a whitepaper that shows NAND endurance can be increased exponentially by having idle time. I will try to find it, but I believe, if I recall correctly, that some went from a 5k P/E rating to a 30k P/E rating.
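Taking the recalled 5k vs 30k figures at face value, the effect on time-to-wearout is simply proportional — a rough sketch with hypothetical numbers:

```python
# Rough lifetime model: total P/E budget divided by daily NAND writes.
# All numbers below are hypothetical, for illustration only.
def lifetime_days(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    total_nand_gb = capacity_gb * pe_cycles
    return total_nand_gb / (host_gb_per_day * write_amp)

# Hypothetical 64GB drive, WA of 1.5, 50GB host writes/day:
lifetime_days(64, 5_000, 1.5, 50)   # ~4267 days at the rated 5k P/E
lifetime_days(64, 30_000, 1.5, 50)  # 6x that with the relaxed 30k figure
```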
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  17. #2692
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    If I read that paper correctly, it states that NAND specs are based on continual charge events, which do not leave time for charges to de-trap. The specs are therefore worst-case scenarios.

    The paper points out that the P/E cycle (stress event) causes two types of problem, one relating to data retention and the other relating to endurance.

    Data retention is impacted as the oxide layer weakens, which eventually results in the inability to retain data. Endurance is impacted by charges that get trapped in the oxide. If you allow time between charges there is a potential for the charges to become untrapped, which can greatly extend life. The longer the time between charges, the more chance that trapped charges can dissipate.

    Obviously, if an SSD is writing all the time, as in the case of running the endurance app, it does not mean that all of the NAND is being charged at the same time, so even within that scenario the NAND within the SSD will have a chance to recover. Idle time can, however, make a significant difference.




    Link to the paper:
    http://www.usenix.org/event/hotstora...pers/Mohan.pdf

    The paper also mentions that the retention period of flash is 10-20 years. This is reduced as P/E cycles are incurred. For a client SSD once the MWI expires that should in theory be reduced to 1 or 2 years.

  18. #2693
    Uber Raid King Computurd's Avatar
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    This is definitely why I believe that desktop SSDs will basically last forever. They have tons of rest time.

  19. #2694
    Xtreme Addict bluestang's Avatar
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,405
    M225->Vertex Turbo 64GB Update: Another Milestone...

    800.06 TiB (879.67 TB) total
    2065 hrs (Torture), 2886 hrs (Power-On)
    15648 Raw Wear
    118.54 MB/s avg for the last 27.52 hours (on W7 x64)
    MD5 OK
    C4-Erase Failure Block Count (Realloc Sectors) from 16 to 17.
    1=Bnk 6/Blk 2406
    2=Bnk 3/Blk 3925
    3=Bnk 0/Blk 1766
    4=Bnk 0/Blk 829
    5=Bnk 4/Blk 3191
    6=Bnk 7/Blk 937
    7=Bnk 7/Blk 1980
    8=Bnk 7/Blk 442
    9=Bnk 7/Blk 700
    10=Bnk 2/Blk 1066
    11=Bnk 7/Blk 85
    12=Bnk 4/Blk 3192
    13=Bnk 7/Blk 280
    14=Bnk 3/Blk 2375
    15=Bnk 7/Blk 768
    16=Bnk 7/Blk 765
    17=Bnk 7/Blk 182

    Bank 7 now has 9 bad Blocks.

  20. #2695
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @Christopher

    The Intel 311 looks pretty good for a small drive.
    How about a few loops of testing?

    --

    My drives are doing fine. There was an issue, though, as it BSOD'd with a 101 stop, which normally means that one needs more vcore; if it happens again I'll have a look at it.
    (I caught the issue within 30 minutes)

    I'll make a summary in post #1 for each drive this weekend, so some input would be appreciated.

  21. #2696
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    Data retention is impacted as the oxide layer weakens, which eventually results in the inability to retain data. Endurance is impacted by charges that get trapped in the oxide. If you allow time between charges there is a potential for the charges to become untrapped, which can greatly extend life. The longer the time between charges, the more chance that trapped charges can dissipate.
    Yes, data retention is primarily affected by number of erases, since each erase tends to result in oxide degradation. Write endurance may be greater if the erases are more spaced out, but the data retention is minimally affected by erase frequency.

  22. #2697
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @Ao1

    I've read that article some time ago, will reread it.

    This PDF (The Inconvenient Truths of NAND Flash Memory) from Micron is rather old (2007), but it still looks relevant (the most interesting stuff starts on page 15).
    I'm not sure if this one has been linked before: Link (it covers laboratory testing)

  23. #2698
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    484.05TB Host writes
    Reallocated sectors : 05 12
    Available Reserved Space : E8 99

    MD5 OK

    36.49MiB/s on avg (~5 hours)

    --

    Corsair Force 3 120GB

    01 92/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 52 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 387764 (Raw writes) ->379TiB
    F1 516133 (Host writes) ->504TiB

    MD5 OK

    107.47MiB/s on avg (~5 hours)

    power on hours : 1497

    B1 has dropped from 54 to 52.

  24. #2699
    Xtreme Member
    Join Date
    May 2009
    Posts
    201
    Quote Originally Posted by bluestang View Post
    M225->Vertex Turbo 64GB Update: Another Milestone...

    800.06 TiB (879.67 TB) total
    Can we declare Vertex Turbo > M4 now?... The Indilinx controller is older and probably has more WA. Although it's aided by 34nm NAND, this is definitely a great result!

  25. #2700
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    485.53TB Host writes
    Reallocated sectors : 05 12
    Available Reserved Space : E8 99

    MD5 OK

    34.07MiB/s on avg (~18 hours)

    --

    Corsair Force 3 120GB

    01 85/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 52 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 391452 (Raw writes) ->382TiB
    F1 521041 (Host writes) ->509TiB

    MD5 OK

    107.45MiB/s on avg (~18 hours)

    power on hours : 1510

    B1 hasn't changed. (still at 52)

