Page 85 of 220
Results 2,101 to 2,125 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #2101
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    My drive doesn't ever get a chance to idle since it's either writing 120+ MB/s or disconnected. I only have 17 percent static data anyway, as it has a fresh Win7 install from when I was using it in a laptop. Today WRD has advanced to 11, so perhaps it will go back up to 20+.

  2. #2102
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Mushkin Chronos Deluxe 60 Update, Day 18

    05 2
    Retired Block Count

    B1 11 Going up
    Wear Range Delta

    F1 168250
    Host Writes

    E9 129720
    NAND Writes

    E6 100
    Life Curve

    E7 32
    Life Left

    Average 128.23MB/s Avg
    RST drivers, Intel DP67BG P67

    415 Hours Work (23hrs since the last update)
    Time 16 days 7 hours

    12GiB Minimum Free Space 11720 files per loop

    SSDlife expects 7 days to 0 MWI
    [Attachment: Mushkin Day 18.JPG]
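A quick check one can do with the F1/E9 pair above: SandForce reports both host writes (F1) and NAND writes (E9) in the same raw units, so their ratio approximates write amplification. A sketch using the Day 18 values (assuming both raw values are in the same GiB units, as the thread treats them; variable names are mine):

```python
# Rough write-amplification estimate from SandForce SMART attributes.
# F1 = host writes, E9 = NAND writes (raw values, same units).
host_writes = 168250   # F1, Day 18 update above
nand_writes = 129720   # E9, Day 18 update above

write_amplification = nand_writes / host_writes
print(f"WA ~ {write_amplification:.2f}")  # below 1.0 thanks to SF compression
```

A ratio below 1.0 is the SF controller's compression at work: it writes less to NAND than the host sends it.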

  3. #2103
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Most likely controller failure for the Corsair drive. Too bad, because it even had LTT removed. So I guess with the Mushkin disconnecting, we won't get to see how SF performs!? Maybe SF is just not suitable for this kind of testing, or maybe Anvil's drive will give us some results without the controller crapping out.

  4. #2104
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I read a note that wear levelling via static data rotation can potentially impact performance, so one could assume that idle time is not required. However, I also read that the SF controller is supposed to maintain the wear delta within a few % of the maximum lifetime wear rating of the NAND.

    Vapor indicates a value of 3 in post #1,899, which seems about right for how the SF is supposed to behave. AFAIK Vapor’s SF has run continuously, so how could the F3 get to 58? Whether the P/E rating is 5K or 3K, the difference between least and most worn blocks is huge and would take a lot of writes (not just idle time) to rebalance.

    3 = (166.7 / 5,000) x 100 - Vapor
    11 = (550 / 5,000) x 100 - The highest I saw before aborting
    58 = (2,900 / 5,000) x 100 - Anvil

    It seems there are two triggers for static data rotation:

    • Length of time that data remains static before it is moved
    • The maximum threshold between least and most worn blocks

    If SF works on the first trigger then it would be reasonable for the delta to get quite high if the drive is written to heavily. That does not seem to match up with what Vapor reports.

  5. #2105
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by bulanula View Post
    Most likely controller failure for the Corsair drive. Too bad because it even had LTT removed. So I guess with the Mushkin disconnecting, we won't get to see how SF does perform !? Maybe SF is just not suitable for this kind of testing. Or maybe Anvil's drive will give us some results without the controller crapping out.
    I don't think you should worry about results with the Mushkin or the Force 3. It's not like total drive failures are common, but I'm not really surprised one drive (the F40-A) died from something other than wearing out the NAND. The chances of it happening to another drive in the near future are minuscule.

    I've bought a new motherboard; it should be here later this week. While I think it's ridiculous to have to buy hardware that works with the SF2281 rather than just having a drive that works with anything (you know, like it's supposed to), I'm committed to figuring out what the hell is going on. I could always reboot once a day, as this should stop the drop-outs from occurring, but that's not a very good solution (I'm not always going to be here to babysit the Mushkin). I've made some other changes this weekend, and as a result I'm already past my own personal record for consecutive time between drive failures. If I can make it to 40-50 hours I may just be on to something.

    The new motherboard is a Maximus IV Gene-Z, so that makes one P67, one H67, and one Z68 board I'll end up testing on, with the only hardware difference being the motherboards. The P67 and H67 were basically the same at no more than 30 hours of running. Either the Mushkin lasts no more than 30 hours with the new motherboard, or it just works. Like I said, in the meantime I'm trying something else, but it would be hilarious if the Mushkin worked for the next 3 days until the Gene-Z gets here. But it would also be hilarious if people have been swapping out the wrong components the whole time...
    Last edited by Christopher; 10-10-2011 at 01:32 AM.

  6. #2106
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    382.22TB Host writes
    Reallocated sectors : 11
    MD5 OK

    36.12MiB/s on avg (~21 hours)

    --

    Corsair Force 3 120GB

    01 90/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 55 (Wear range delta)
    E6 100 (Life curve status)
    E7 64 (SSD Life left)
    E9 141282 (Raw writes)
    F1 188176 (Host writes)

    106.62MiB/s on avg (~21 hours)

    power on hours : 561

    --

    Wear Range Delta is slowly decreasing; nothing else of importance, except that it's still running.

    Ao1,
    We'll just have to monitor the Force 3; the next few days should tell us the trend, although it looks like a slow process.
    Some parameters are surely related to the capacity and free space.
    Maybe the continual creating and deleting of "fresh" files disturbs the advanced logic on the SF controller.

  7. #2107
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Christopher View Post
    ..
    I've made some other changes this weekend, and as a result, I'm already past my own personal consecutive time between drive failures. If I can make it to 40 - 50 hours I may just be on to something.
    ...
    What changes and are you running a no "Power save" setup?

    I'll know in 12 hours or so; the speed is OK for a 3Gb/s port.

  8. #2108
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by Anvil View Post
    What changes and are you running a no "Power save" setup?

    I'll know in 12 hours or so; the speed is OK for a 3Gb/s port.
    All power saving technologies enabled, same RST 10.6 drivers. It's pulling 46.9w from the wall right now.

    34 hours straight... still too early to tell, but still a record for me.



    I don't want to jinx myself by proclaiming that I've found a fix, so the most I'll say at the moment is it SHOULD have already crashed, or SHOULD crash any moment...


  9. #2109
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838


    Let's hope it works out and is reproducible!

  10. #2110
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    M225->Vertex Turbo 64GB Update:

    466.85 TiB (513.31 TB) total
    1245.34 hours
    9609 Raw Wear
    118.46 MB/s avg for the last 64.76 hours (on W7 x64)
    MD5 OK
    C4-Erase Failure Block Count (Realloc Sectors) from 6 to 7.
    (Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937; Bank 7/Block 1980)

    [Attachment: CDI-M225-OCZ-VERTEX-TURBO-10.10.2011.PNG]


  11. #2111
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The boot drive just disconnected, a Force GT on fw 1.3.

    I've got a screenshot from 30 min ago, so avg speed etc. is pretty close; the other values are the current ones, taken from CDI.


    Kingston SSDNow 40GB (X25-V)

    382.91TB Host writes
    Reallocated sectors : 11
    MD5 OK

    35.33MiB/s on avg (~27 hours)

    --

    Corsair Force 3 120GB

    01 90/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 55 (Wear range delta)
    E6 100 (Life curve status)
    E7 64 (SSD Life left)
    E9 143026 (Raw writes)
    F1 190497 (Host writes)

    MD5 OK

    106.65MiB/s on avg (~27 hours)

    power on hours : 568

  12. #2112
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a summary of B1 data posted on SF drives. There is not enough info on what happened with Vapor’s SF drive. Vapor also did a lot of testing on compression before getting started on the endurance app, which would have made a difference; plus it has modified f/w.

    Anyway, when the endurance app is running all the time, B1 appears to only increase. I’m wondering if SynbiosVyse’s drive failed due to the fact that B1 got too high, i.e. the difference between least and most worn blocks exceeded the P/E cycle capability.

    130 = (6,500 / 5,000) x 100
    130 = (3,900 / 3,000) x 100

    Edit: Looking at SynbiosVyse’s posts, he mentions he had hardly any static data (a few MB?), yet B1 was still able to increase significantly above the target SF value.

    [Attachment: Untitled.png]

    [Attachment: Untitled.png]
    Last edited by Ao1; 10-10-2011 at 11:25 PM.

  13. #2113
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Update:
    m4
    641.5496 TiB
    2331 hours
    Avg speed 90.95 MiB/s.
    AD gone from 245 to 240.
    P/E 11165.
    MD5 OK.
    Still no reallocated sectors
    [Attachment: M4-CT064 M4SSD2 SATA Disk Device_64GB_1GB-20111010-1958-2.png]
    [Attachment: M4-CT064 M4SSD2 SATA Disk Device_64GB_1GB-20111010-1958-3.png]

    Kingston V+100
    The drive is up again but I need to restore the log before I can restart the test.

  14. #2114
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    The real hero here is the M4. Too many problems and headaches with the SF. I wonder if any SSD will beat it in the future. Probably the C300.

  15. #2115
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    Here is a summary of B1 data posted on SF drives.
    Nice compilation; maybe Vapor has more info on his drive(s).

    It looks like mine hit some boundary at 59; it's still at 55. (The drive is not back online yet.)

  16. #2116
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Now I'm starting to get nervous... 45 hours straight without disconnects.

  17. #2117
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Well, I give my 320 another week or two. Reallocations are rising at a rate of about 150 per day now and the SSD is slowing down a bit. I had a 2-day setback due to a failed part in my testing rig (RAM).

    4694 reallocated sectors. Reserved space is at 16. Erase fail count is at 86. 487.5TB written.

  18. #2118
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Mushkin Chronos Deluxe 60 Update, Day 19

    05 2
    Retired Block Count

    B1 10
    Wear Range Delta

    F1 179537
    Host Writes

    E9 138428
    NAND Writes

    E6 100
    Life Curve

    E7 27
    Life Left

    Average 127.74MB/s Avg
    RST drivers, Intel DP67BG P67

    440 Hours Work (25hrs since the last update)
    Time 18 days 8 hours

    12GiB Minimum Free Space

    SSDlife expects 6 days to 0 MWI
    [Attachment: Mushkin Day 19.JPG]

    47.36 Consecutive Hours, A new record!

    About two minutes after I posted the update, I received an email from Mushkin about a new FW release. The new 3.30 release is already available, or will be soon, for drives that use standard SF FW.

    I think I have to skip this FW after finally finding stability. If the Mushkin can make it another 24 hours, I think it's stable in my slightly revised system.
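For what it's worth, the 6-day SSDlife estimate is consistent with a simple linear extrapolation of E7 (Life Left) between the Day 18 and Day 19 updates (32 to 27 over roughly 25 hours). A sketch, with a function name of my own choosing:

```python
def days_to_zero_mwi(mwi_now: float, mwi_prev: float, hours_between: float) -> float:
    """Linearly extrapolate a media wear indicator to zero."""
    rate_per_day = (mwi_prev - mwi_now) / (hours_between / 24)
    return mwi_now / rate_per_day

# Day 18 -> Day 19 above: E7 went 32 -> 27 in ~25 hours.
print(round(days_to_zero_mwi(27, 32, 25)))  # ~6 days, matching SSDlife
```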
    Last edited by Christopher; 10-10-2011 at 05:15 PM.

  19. #2119
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by One_Hertz View Post
    well, I give my 320 another week or two. Reallocations are rising at the rate of about 150 per day now and the SSD is slowing down a bit. I had a 2 day setback due to a failed part in my testing rig (ram).

    4694 reallocated sectors. Reserved space is at 16. Erase fail count is at 86. 487.5TB written.
    How long have you been getting 150 reallocations a day?

    If you really think it's going to die soon, maybe you should post more frequent updates (if possible).

  20. #2120
    Xtreme Addict
    Join Date
    Nov 2005
    Location
    Where the Cheese Heads Reside
    Posts
    2,173
    Quote Originally Posted by bulanula View Post
    The real hero here is the M4. Too many problems and headaches with the SF. I wonder if any SSD will beat in in the future. Probably the C300
    Indeed, the champ continues. Watch, they mislabeled some SLC NAND in that drive... oops.

  21. #2121
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by deathman20 View Post
    Indeed, the champ continues. Watch, they mislabeled some SLC NAND in that drive... oops.
    Maybe we have a ringer here...

    Maybe Crucial actually gets the best NAND out of the IMFT venture. Maybe Intel gets the bottom of the barrel, the dregs of NAND production.

    I think the C300 and the M4 are going to peter out around the same point, but the C300 will just last longer due to its slowness. That point might be 20K or 30K P/E cycles though, so don't think it's happening next Thursday at 04:37 GMT. I think it's going to take most of a year to kill it.

    It's hard not to be impressed by the M4 though. I'm really looking forward to the 320 giving up the ghost soon, but maybe it's just another false alarm.

  22. #2122
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Christopher View Post
    How long have you been getting 150 reallocations a day for?

    If you really think it's going to die soon, maybe you should post more frequent updates (if possible).
    It was 150 for the last day. It is speeding up a lot, so actually I think this has a few days left. Reallocations are now at 4719; it rose by 25 over the last 3 hours. Won't make it over 500TB, probably.

  23. #2123
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    At 500TB the 320 40GB will have written its own capacity 12,500 times -- not too shabby. The M4 has almost done that 10,000 times. Perhaps the difference between async and sync NAND is more important for longevity than just 25nm vs 34nm. The NAND in the 320 should be really similar to the Micron *AAA 25nm async, while the M4 uses *AAB sync NAND. Undoubtedly the IMFT sync NAND is higher quality, not just faster.
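The capacity multiples above are simple division; a sketch using the loose decimal TB/GB units the thread uses (function name is mine):

```python
def capacity_multiples(total_written_tb: float, capacity_gb: float) -> float:
    """How many times the drive has rewritten its own capacity
    (decimal units: 1 TB = 1000 GB)."""
    return total_written_tb * 1000 / capacity_gb

print(capacity_multiples(500, 40))    # Intel 320 40GB at 500TB: 12500.0
print(capacity_multiples(641.5, 64))  # m4 64GB at ~641.5TB: ~10,023
```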

  24. #2124
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by One_Hertz View Post
    well, I give my 320 another week or two. Reallocations are rising at the rate of about 150 per day now and the SSD is slowing down a bit...
    Both 40GB Intel drives have proven to be exceptional value; 500TB is not bad at all, although I'm aiming/hoping for more.

    --

    Running on the X58, Power savings Enabled, C-States Disabled.
    Boot drive is the Corsair Force GT 120GB FW 1.3.2, online for 16 hours.

    Kingston SSDNow 40GB (X25-V)

    384.81TB Host writes
    Reallocated sectors : 11
    MD5 OK

    33.99MiB/s on avg (~12 hours)

    --

    ASRock Extreme4 Z68
    Power savings Enabled, C-States Disabled

    Corsair Force 3 120GB

    01 89/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 54 (Wear range delta)
    E6 100 (Life curve status)
    E7 63 (SSD Life left)
    E9 146890 (Raw writes)
    F1 195641 (Host writes)

    104.21MiB/s on avg (~12 hours)
    As a result of Power savings the avg is down a bit. (normal)

    power on hours : 583

    --

    As a test I have increased the delay while deleting files; it is now 500ms per 500 files.
    So far it looks OK; the result is that it is able to write the set amount of random writes every time.

    I have made the pause a user setting, so if the drive has no issue with deleting thousands of files it can be set as low as 0.5 seconds; by doing that one can sort of regain the time lost to the added delay while deleting.
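Anvil's pacing change amounts to a throttled delete loop. A minimal sketch of the idea (this is my illustration, not his actual endurance-app code; batch size and pause mirror the 500ms-per-500-files test above):

```python
import os
import time

def delete_with_pause(paths, batch=500, pause_s=0.5):
    """Delete files in batches, sleeping between batches so the
    drive gets a breather instead of an uninterrupted delete storm."""
    for i, path in enumerate(paths, start=1):
        os.remove(path)
        if i % batch == 0:
            time.sleep(pause_s)
```

Making `pause_s` a user setting, as described, lets a drive that tolerates mass deletes run with effectively no throttling.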
    Last edited by Anvil; 10-11-2011 at 02:13 AM.

  25. #2125
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by One_Hertz View Post
    It is on its way to me via USPS.
    Hi One_Hertz, did you find anything interesting on John’s SSD? Are you able to determine the P/E cycle count at a block level? (I’d be really interested to find out how the wear was distributed over the blocks).

