It was only online for about 2 months. Keep in mind though that also my drive did not have very much static data (just a few MB). I agree that this was probably a case of the controller pooping out.
Update:
m4
632.6673 TiB
2303 hours
Avg speed 91.02 MiB/s.
AD gone from 254 to 245.
P/E 11010.
MD5 OK.
Still no reallocated sectors
Attachment 121000, Attachment 121001
Kingston V+100
It dropped out during the night. I won't be able to reconnect it until tomorrow since I'm away this weekend. Anvil was talking about an updated ASU so we can restore the log ourselves when this happens. I'll ask him to help me unless the new version of ASU is finished.
Attachment 120999
Both drives are now running on the X58 which is configured w/o power savings and runs OROM 11 + RST 11 alpha.
I'll play a bit with the Z68 setting during this session.
Both drives were allowed to idle for 12 hours and so there isn't much progress.
Wear Range Delta stuck at 57 so no decrease while idling.
Kingston SSDNow 40GB (X25-V)
380.19TB Host writes
Reallocated sectors : 11
MD5 OK
38.69MiB/s on avg (~4.5 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 56 (Wear range delta)
E6 100 (Life curve status)
E7 66 (SSD Life left)
E9 136592 (Raw writes)
F1 181934 (Host writes)
106.33MiB/s on avg (~4.5 hours)
It has dropped a bit in avg speed, but that is mainly because I have activated MD5 for this session.
power on hours : 544
I had a Fusion-io ioXtreme that had 307.8 TB writes. DOM was Jan 4 2010, and I posted it on Nov 9 2010.
See thread: http://www.xtremesystems.org/forums/...=1#post4619346
My current SSDs that I'm testing (Vertex 3 240 GB and RevoDrive 3 240 GB) are at about 1.4 TB written already and I've only had them for 5 days, so I'm at about 300 GB/day (at least for the RevoDrive 3). If I didn't have unexpected power losses, I'm sure it would be higher by now.
So....14 TB of data is NOTHING to me.
*edit*
a) The Fusion-io ioXtreme was only 80 GB. And b) I think it was on a PCIe x4 connector (same as my RevoDrive 3 now).
Anvil, I've been trying to find out a bit more about how the static data levelling process works. This white paper is the only one I can find that talks about how static wear levelling is implemented, but even here it is not discussed in great detail. If I read the following correctly, the process requires idle time before it can start. (I guess you could also read the 1st trigger as the period the static data had sat idle, rather than how long the drive had sat idle.)
The reason I think static data levelling is not working (as well as intended) on the SF drive is the high difference between the most worn and least worn blocks. The SF controller is supposed to keep this to a very low threshold, and the only reason I can think of why it is not working is a lack of idle time. Obviously to get back to a low threshold you would need to write, pause, write, pause etc. until the wear could be evenly distributed, which would take a lot of time once the threshold has got beyond a certain point, as your drive appears to have done.
“Static wear levelling addresses the blocks that are inactive and have data stored in them. Unlike dynamic wear levelling, which is evaluated each time a write flush buffer command is executed, static wear levelling has two trigger mechanisms that are periodically evaluated. The first trigger condition evaluates the idle stage period of inactive blocks. If this period is greater than the set threshold, then a scan of the ECT is initiated.
The scan searches for the minimum erase count block in the data pool and the maximum erase count block in the free pool. Once the scan is complete, the second level of triggering is checked by taking the difference between the maximum erase count block found in the free pool and the minimum erase count block found in the data pool, and checking if that result is greater than a set wear-level threshold. If it is greater, then a block swap is initiated by first writing the content of the minimum erase count block found in the data pool to the maximum erase count block found in the free pool.
Next, each block is re-associated in the opposite pool. The minimum erase count block found in the data pool is erased and placed in the free pool, and the maximum erase count block, which now has the contents of the other block’s data, is now associated in the data block pool. With the block swap complete, the re-mapping of the logical block address to the new physical block address is completed in the FTL. Finally, the ECT is updated by associating each block to its new groups”.
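To make the two-trigger flow in that excerpt easier to follow, here is a minimal sketch in Python. Everything in it, including the thresholds and the pool/FTL structures, is an assumption for illustration rather than anything taken from the white paper or the SF firmware:

```python
# Minimal sketch of the two-trigger static wear-levelling flow described in the
# quoted white paper. Thresholds, Block objects, pools and the FTL dict are all
# illustrative assumptions; a real controller works on physical flash and its
# own firmware tables.
import time
from dataclasses import dataclass

IDLE_THRESHOLD_S = 3600        # trigger 1: idle-stage period for inactive blocks (assumed)
WEAR_DELTA_THRESHOLD = 50      # trigger 2: allowed erase-count spread (assumed)

@dataclass
class Block:
    erase_count: int
    last_write: float
    data: bytes = b""

def static_wear_level(data_pool, free_pool, ftl, now=None):
    """data_pool/free_pool are lists of Blocks; ftl maps LBA -> Block."""
    now = time.time() if now is None else now

    # Trigger 1: has any inactive block sat idle longer than the threshold?
    if not any(now - b.last_write > IDLE_THRESHOLD_S for b in data_pool):
        return False

    # Scan the ECT: minimum erase-count block in the data pool,
    # maximum erase-count block in the free pool.
    cold = min(data_pool, key=lambda b: b.erase_count)
    hot = max(free_pool, key=lambda b: b.erase_count)

    # Trigger 2: only act if the spread exceeds the wear-level threshold.
    if hot.erase_count - cold.erase_count <= WEAR_DELTA_THRESHOLD:
        return False

    # Copy the static data onto the most-worn free block, then erase the
    # least-worn block and swap the two between pools.
    hot.data, hot.last_write = cold.data, now
    cold.data, cold.erase_count = b"", cold.erase_count + 1
    data_pool.remove(cold); free_pool.append(cold)
    free_pool.remove(hot); data_pool.append(hot)

    # Re-map the logical block address to the new physical block in the FTL.
    for lba, blk in ftl.items():
        if blk is cold:
            ftl[lba] = hot
    return True
```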
Anyway, I was going to experiment with wear levelling today, but my V2 got hit with the time warp bug. First I noticed that files within folders had lost their integrity. It was as if I had run a SE from within Windows but had not then rebooted. The file structure remained, but I could not copy or open files.
The drive then "disappeared". It could not be seen in the BIOS or by the OS. After physically disconnecting and then reconnecting the drive it reappeared. The folder structure was complete and all files could be opened/copied. Only one problem: the file structure had reverted to a previous time, i.e. recently created folders had disappeared.
Static data rotation might just be working, but in a totally different way than everybody expects for SF. I interpret B1 (WRD) as the difference in percent between the most worn and least worn block. For 136592GB written, if we consider 128GB as one complete cycle, we have ~1067 P/E cycles on average. For these values there might be a block with 1300 P/E cycles and another one with 572 cycles: (1300-572)/1300 = 0.56.
Now, the controller might be programmed to postpone data rotation as much as possible to avoid increased wear, but to achieve a wear range delta of 5% (or any other value) at the end of estimated P/E cycles. This would explain why the value increased suddenly and now is slowly decreasing.
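sergiu's arithmetic checks out; a quick sketch using his illustrative per-block erase counts (the 1300/572 figures are his guesses, not values read off the drive):

```python
# Rough check of the B1 (Wear Range Delta) interpretation above.
nand_writes_gb = 136_592            # E9 raw value reported for the Force 3
cycle_gb = 128                      # one full-capacity pass counted as one P/E cycle
avg_pe = nand_writes_gb / cycle_gb  # ~1067 average P/E cycles

most_worn, least_worn = 1300, 572   # hypothetical per-block erase counts
wrd = (most_worn - least_worn) / most_worn * 100
print(round(avg_pe), round(wrd))    # 1067 56
```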
My Mushkin dropped from about 26 down to 9 - 10, but maybe we're misinterpreting. Maybe even 50+ is nothing on a 120GB drive. If my drive peaked at 27, and the 120GB Force 3 peaks at around twice that (as in that's the high water mark before it drops back down), then perhaps that's normal.
I haven't seen much in the way of interpreting Wear Range Delta.
Ao1,
I've read that paper before (or something similar). Not sure how to interpret the lack of WRD movement while idling; I even restarted the computer and there was nothing monitoring the drives, no CDI or SSDLife, so it was definitely idling. (And it was a secondary/spare drive.)
And, if it wasn't doing static wear leveling the WRD would make no sense imho.
I think it's somewhere along the lines of what sergiu explains, and there should be no problem in addressing static wear leveling on the fly.
Let's see if it keeps on decreasing; it's still at 56 and it's been writing for >9 hours since moving it to the other rig.
I've read about the time warp, have you been using the drive or has it been sitting there just idling?
Keep us updated on that issue.
My drive doesn't ever get a chance to idle since it's either writing at 120+ MB/s or disconnected. I only have 17 percent static data anyway, as I used a fresh Win7 install on it from when I was using it in a laptop. Today WRD has advanced to 11, so perhaps it will go back up to 20+.
Mushkin Chronos Deluxe 60 Update, Day 18
05 2 (Retired Block Count)
B1 11 (Wear Range Delta), going up
F1 168250 (Host Writes)
E9 129720 (NAND Writes)
E6 100 (Life Curve)
E7 32 (Life Left)
128.23MB/s Avg
RST drivers, Intel DP67BG P67
415 Hours Work (23hrs since the last update)
Time 16 days 7 hours
12GiB Minimum Free Space 11720 files per loop
SSDlife expects 7 days to 0 MWI
Attachment 121022
Most likely controller failure for the Corsair drive. Too bad, because it even had LTT removed. So I guess with the Mushkin disconnecting, we won't get to see how SF performs!? Maybe SF is just not suitable for this kind of testing. Or maybe Anvil's drive will give us some results without the controller crapping out.
I read a note that wear levelling via static data rotation can potentially impact performance, so one could assume that idle time is not required; however, I also read that the SF controller is supposed to maintain the wear delta within a few % of the maximum lifetime wear rating of the NAND.
Vapor indicates a value of 3 in post #1,899, which seems about right for how the SF is supposed to behave. AFAIK Vapor's SF has run continuously. So how could the F3 get to 58? If the P/E is 5K or 3K, the difference between least and most worn is huge and would take a lot of writes (not just idle time) to rebalance.
3 = [(166.7) / 5,000] x 100 - Vapor
11 = [(550) / 5,000] x 100 – The highest I saw before aborting
58 = [(2,900) / 5,000] x 100 - Anvil
It seems there are two triggers for static data rotation:
• Length of time that data remains static before it is moved
• The maximum threshold between least and most worn blocks
If SF works on the first trigger then it would be reasonable that the delta would get quite high if the drive is written to heavily. That does not seem to match up with what Vapor reports. :shrug:
I don't think you should worry about results with the Mushkin or the Force 3. It's not like total drive failures are common, but I'm not really surprised one drive (the F40-A) died from something other than wearing out the NAND. The chances of it happening to another drive in the near future are minuscule.
I've bought a new motherboard; it should be here later this week. While I think it's ridiculous to have to buy hardware that works with the SF2281 rather than just having a drive that works with anything (you know, like it's supposed to), I'm committed to figuring out what the hell is going on. I can always reboot once a day, as this should stop the drop outs from occurring, but that's not a very good solution (I'm not always going to be here to babysit the Mushkin). I've made some other changes this weekend, and as a result, I'm already past my own personal record for consecutive time between drive failures. If I can make it to 40 - 50 hours I may just be on to something.
The new motherboard is a Maximus IV Gene-z, so that makes one P67, H67, and Z68 board I'll end up testing on with the only hardware difference being motherboards. The P67 and H67 were basically the same at no more than 30 hours of running. Either the Mushkin lasts no more than 30 hours with the new motherboard, or it just works. Like I said, in the meantime I'm trying something else, but it would be hilarious if the Mushkin worked for the next 3 days until the Gene-z gets here. But it would also be hilarious if people have been swapping out the wrong components the whole time...
Kingston SSDNow 40GB (X25-V)
382.22TB Host writes
Reallocated sectors : 11
MD5 OK
36.12MiB/s on avg (~21 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 55 (Wear range delta)
E6 100 (Life curve status)
E7 64 (SSD Life left)
E9 141282 (Raw writes)
F1 188176 (Host writes)
106.62MiB/s on avg (~21 hours)
power on hours : 561
--
Wear Range Delta is slowly decreasing, nothing else of importance except for that it's still running.
Ao1,
We'll just have to monitor the Force 3; the next few days should tell us the trend, although it looks like a slow process.
Some parameters are surely related to the capacity and free space.
Maybe the continual creating and deleting of "fresh" files disturbs the advanced logic on the SF controller.
All power saving technologies enabled, same RST 10.6 drivers. It's pulling 46.9w from the wall right now.
34 hours straight... still too early to tell, but still a record for me.
I don't want to jinx myself by proclaiming that I've found a fix, so the most I'll say at the moment is it SHOULD have already crashed, or SHOULD crash any moment...
:D
:)
Let's hope it works out and is reproducible!
M225->Vertex Turbo 64GB Update:
466.85 TiB (513.31 TB) total
1245.34 hours
9609 Raw Wear
118.46 MB/s avg for the last 64.76 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) from 6 to 7.
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937; Bank 7/Block 1980)
Attachment 121053
The boot drive just disconnected, a Force GT on fw 1.3.
I've got a screenshot from 30min ago so avg speed etc is pretty close, the other values are the current ones, taken from CDI.
Kingston SSDNow 40GB (X25-V)
382.91TB Host writes
Reallocated sectors : 11
MD5 OK
35.33MiB/s on avg (~27 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 55 (Wear range delta)
E6 100 (Life curve status)
E7 64 (SSD Life left)
E9 143026 (Raw writes)
F1 190497 (Host writes)
MD5 OK
106.65MiB/s on avg (~27 hours)
power on hours : 568
Here is a summary of B1 data posted on SF drives. There is not enough info on what happened with Vapor’s SF drive. Vapor also did a lot of testing on compression before getting started on the endurance app, which would have made a difference; plus it has modified f/w.
Anyway, when the endurance app is running all the time, B1 appears to only increase. I'm wondering if SynbiosVyse's drive failed due to the fact that B1 got too high. The difference between least and most worn blocks exceeds the P/E cycle capability.
130 = [(6,500) / 5,000] x 100
130 = [(3,900) / 3,000] x 100
Edit: Looking at SynbiosVyse’s posts he mentions he had hardly any static data (a few MB?) yet B1 was still able to significantly increase above the target SF value. :shrug:
Attachment 121054
Attachment 121087
Update:
m4
641.5496 TiB
2331 hours
Avg speed 90.95 MiB/s.
AD gone from 245 to 240.
P/E 11165.
MD5 OK.
Still no reallocated sectors
Attachment 121056, Attachment 121055
Kingston V+100
The drive is up again but I need to restore the log before I can restart the test.
The real hero here is the M4. Too many problems and headaches with the SF. I wonder if any SSD will beat it in the future. Probably the C300 :)
Now I'm starting to get nervous... 45 hours straight without disconnects.
Well, I give my 320 another week or two. Reallocations are rising at the rate of about 150 per day now and the SSD is slowing down a bit. I had a 2 day setback due to a failed part in my testing rig (RAM).
4694 reallocated sectors. Reserved space is at 16. Erase fail count is at 86. 487.5TB written.
Mushkin Chronos Deluxe 60 Update, Day 19
05 2 (Retired Block Count)
B1 10 (Wear Range Delta)
F1 179537 (Host Writes)
E9 138428 (NAND Writes)
E6 100 (Life Curve)
E7 27 (Life Left)
127.74MB/s Avg
RST drivers, Intel DP67BG P67
440 Hours Work (25hrs since the last update)
Time 18 days 8 hours
12GiB Minimum Free Space
SSDlife expects 6 days to 0 MWI
Attachment 121074
47.36 Consecutive Hours, A new record!
About two minutes after I posted the update, I received an email from Mushkin about a new FW release. The new 3.30 release is already available or will be soon for drives that use standard SF FW.
I think I have to skip this FW after finally finding stability. If the Mushkin can make it another 24hr, I think it's stable in my slightly revised system.
Maybe we have a ringer here... ;)
Maybe Crucial actually gets the best NAND out of the IMFT venture. Maybe Intel gets the bottom of the barrel, the dregs of NAND production.
I think the C300 and the M4 are going to peter out around the same point, but the C300 will just last longer due to its slowness. That point might be 20K or 30K P/E cycles though, so don't think it's happening next Thursday at 04:37 GMT. I think it's going to take most of a year to kill it.
It's hard not to be impressed by the M4 though. I'm really looking forward to the 320 giving up the ghost soon, but maybe it's just another false alarm.
At 500TB the 320 40GB will have written its own capacity 12,500 times -- not too shabby. The M4 has almost done that 10,000 times. Perhaps the difference between async and sync NAND is more important for longevity than just 25nm vs 34nm. The NAND in the 320 should be really similar to the Micron *AAA 25nm async, while the M4 uses *AAB sync NAND. Undoubtedly the IMFT sync NAND is higher quality, not just faster.
Both 40GB Intel drives have proven to be exceptional value, 500TB is not bad at all although I'm aiming/hoping for more :)
--
Running on the X58, Power savings Enabled, C-States Disabled.
Boot drive is the Corsair Force GT 120GB FW 1.3.2, online for 16 hours.
Kingston SSDNow 40GB (X25-V)
384.81TB Host writes
Reallocated sectors : 11
MD5 OK
33.99MiB/s on avg (~12 hours)
--
ASRock Extreme4 Z68
Power savings Enabled, C-States Disabled
Corsair Force 3 120GB
01 89/50 (Raw read error rate)
05 2 (Retired Block count)
B1 54 (Wear range delta)
E6 100 (Life curve status)
E7 63 (SSD Life left)
E9 146890 (Raw writes)
F1 195641 (Host writes)
104.21MiB/s on avg (~12 hours)
As a result of Power savings the avg is down a bit. (normal)
power on hours : 583
--
As a test I have increased the delay while deleting files; it is now 500ms per 500 files.
So far it looks OK; the result is that it is able to write the set amount of random writes every time.
I have made the pause a user setting, so if the drive has no issue with deleting thousands of files it can be set as low as 0.5 seconds; that way one can regain some of the time lost to the added delay while deleting.
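This is not Anvil's actual app, just a sketch of the configurable pause described above; the 500 ms per 500 files figures come from the post, while the function name and everything else is assumed:

```python
# Sketch of a user-configurable pause while deleting files. The drive gets a
# short breather after every batch so it can keep up with the set amount of
# random writes; not the real endurance app.
import os
import time

def delete_with_pause(paths, pause_ms=500, files_per_pause=500):
    for i, path in enumerate(paths, start=1):
        os.remove(path)
        if i % files_per_pause == 0:
            time.sleep(pause_ms / 1000.0)   # e.g. 500 ms per 500 files
```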
^^ It is still not here...
Reallocations at 4760 this morning.
M225->Vertex Turbo 64GB Update:
473.55 TiB (520.67 TB) total
1259.19 hours
9732 Raw Wear
117.99 MB/s avg for the last 16.45 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 7.
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937; Bank 7/Block 1980)
Attachment 121095
I'm up to almost 63 hours stable on the Mushkin. I think tonight I'm going to stop it at 75 hours.
If it doesn't crash after 75hrs straight, it's not going to crash.
75 hours should cover most usage patterns except for those who standby/hibernate and those who never power off.
How long should I let it run? I want to undo the changes and let it run until it crashes to verify that my fix is legitimate.
I don't think that I would have a problem sleeping the system, but I've got to keep pushing on with the endurance testing.
EDIT
I looked over at the Patriot SF2 forums. There are many unhappy campers over there as well, even with the new 3.3.0 FW (not that I think FW is going to help most users).
It really looks like SF2 drives just aren't compatible with many system configurations. Say what you want about RST drivers and oroms and voodoo magic, but I've found a fix that works for me, though I wish I had more known-problematic drives to test.
I find it hard to believe that I would be the only one to stumble upon this, but if I'm right, everyone is barking up the wrong tree.
I doubt he has it yet. I didn't overnight it. It is going by USPS priority mail, which I don't think gets particularly expedited through Customs. I shipped it on Oct 6, but as of Oct 9, it had only made it to LA. I'm not sure if it is going through Customs in LA, (does Canada even have an LA Customs facility?), but if it is not going through Customs in LA, it is certainly traveling slowly.
Update:
m4
649.1909 TiB
2355 hours
Avg speed 90.75 MiB/s.
AD gone from 240 to 236.
P/E 11291.
MD5 OK.
Still no reallocated sectors
Attachment 121112, Attachment 121113
Kingston V+100
141.0880 TiB
558 hours
Avg speed 76.59 MiB/s
AD gone from 61 to 54.
P/E?
MD5 OK.
Still no reallocated sectors
Attachment 121110, Attachment 121111
So, is the 320 giving in? I wonder how much longer the X25-V will last compared to the 320. This should show whether 34nm NAND is that much better, assuming the same controller etc. Same story with the M4 and C300. We will see.
As long as it's reproducible and it lasts 3-4 days, I'd say it's OK.
You need to tell us what you have stumbled upon :), it will be easier/quicker to tell if *this* is it.
Kingston SSDNow 40GB (X25-V)
385.95TB Host writes
Reallocated sectors : 11
MD5 OK
33.21MiB/s on avg (~22 hours)
The GT has been online for 26+ hours.
--
Corsair Force 3 120GB
01 89/50 (Raw read error rate)
05 2 (Retired Block count)
B1 53 (Wear range delta)
E6 100 (Life curve status)
E7 62 (SSD Life left)
E9 149683 (Raw writes)
F1 199355 (Host writes)
104.14MiB/s on avg (~22 hours)
power on hours : 594
I'm posting the update now, I've had trouble with XS all day.
Mushkin Chronos Deluxe 60 Update, Day 20
05 2 (Retired Block Count)
B1 9 (Wear Range Delta)
F1 189474 (Host Writes)
E9 146094 (NAND Writes)
E6 100 (Life Curve)
E7 22 (Life Left)
127.25MB/s Avg
RST drivers, Intel DP67BG P67
391 Hours Work (24hrs since the last update)
Time 19 days 7 hours
12GiB Minimum Free Space 11720 files per loop
SSDlife expects 5 days to 0 MWI
Attachment 121114
Hot Damn!
69 Hours and Counting!
I am most pleased with myself.
Anvil,
Be patient. I'm about to stop the Mushkin, and I'll PM you tomorrow with the story about my "SandForce Fix". You're going to laugh.
:D
I'd like to see if you can replicate my 'results' if you're interested. Maybe my super secret SF fix can work for you as well.
alright chris now we have to kill you. fill us all in if it works, eh?
(was the fix unplugging it? LOLOLOL)
I had to do a lot of work. First I had to open the Mushkin's metal chassis up, and get out the PCB. Next I had to swap in the PCB from a drive that doesn't just give up after a few hours of working. Step three was screwing it back together and putting it back in the system. Now it works like a charm!
Not exactly. Really, once you find out what I did, it'll sound like a joke too :rofl:. Hold your horses.
I've undone the changes, and am testing it like it was before. If the instability comes back, after some testing on a different rig, all will be revealed.
The Mushkin was stopped (by me for a change) after 74.13 hours at 437GB/hr and 10.5TB/day. The psychological effects of having a flaky drive are almost worse than the instability itself. For me it's a minor annoyance, but if the Mushkin was my main system drive and was crapping out every day, I'd be super pissed. So much of the wailing and teeth gnashing is justified in my opinion. The new 3.3.0 firmware has been released and I don't think it's helping -- and it seems to be making things worse over at the Patriot forums. I'll be skipping that FW for certain.
4904 reallocated sectors. I can sit there and watch it rise now. 491.4TB. 12 reserve space left. 82 erase fail count.
man it has put on a good show~ :)
It looks like it's really getting close to the end for the valiant 320. After the major event a couple weeks ago it seemed that it was on its last legs, but it's been a trooper since.
It still might have another surprise in store though, I can't wait to see what happens.
@Christopher
NP :), my current run will continue until it either disconnects or it gets past 3-4 days.
--
I had forgotten to disable windows update and so the X58 restarted last night, ~6 hours gone.
Kingston SSDNow 40GB (X25-V)
386.52TB Host writes
Reallocated sectors : 11
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 53 (Wear range delta)
E6 100 (Life curve status)
E7 61 (SSD Life left)
E9 153034 (Raw writes)
F1 203815 (Host writes)
104.17MiB/s on avg (~35 hours)
power on hours : 607
So if the 320 dies at around 500, how long will the X25-V last!?
Geez, MWI on the Mushkin is dropping fast it seems, but maybe that's just what happens when it runs without disconnecting all the time. 12 hours ago it was at 23, and now it's about to hit 19 - any minute now.
Kingston SSDNow 40GB (X25-V)
388.54TB Host writes
Reallocated sectors : 11
MD5 OK
35.87MiB/s on avg (~16 hours)
Force GT "up time" ~23 hours.
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 55 (Wear range delta)
E6 100 (Life curve status)
E7 60 (SSD Life left)
E9 157471 (Raw writes)
F1 209722 (Host writes)
104.19MiB/s on avg (~51 hours)
power on hours : 624
What is going on there, 1Hz?
Is this the end?
Mushkin Chronos Deluxe 60 Update, Day 21
05 2 (Retired Block Count)
B1 10 (Wear Range Delta)
F1 201273 (Host Writes)
E9 155199 (NAND Writes)
E6 100 (Life Curve)
E7 17 (Life Left)
127.28MB/s Avg
RST drivers, Intel DP67BG P67
490 Hours Work (25 hrs since the last update)
Time 16 days 7 hours
10GiB Minimum Free Space 11720 files per loop
SSDlife declines to speculate about when the Mushkin will hit 0 MWI
Attachment 121195
Attachment 121203
Everything is still going ok, MD5 is passing, just windows started :banana::banana::banana::banana::banana:ing at me (most likely due to the reserve space SMART attribute going below threshold).
Maybe you should contact your reseller ;) like the Toolbox says.
The next few days/weeks should be very interesting for the 320.
Ha Ha... it's time to get the 320 switched out under warranty.
Call your retailer back and tell them that Intel's Magic Toolbox says to give you a new drive and see what they say...
???
So if you were to short stroke the partition to 32GB, would reserve space go back above threshold?
"All I did was plug it in." :shrug: :rofl:
I have no idea what happened! I bought this shiny new SSD and it is dead in less than 3 months! I thought SSDs were supposed to be reliable?!
"Well it was that way when I got it."
"All I did was plug it in and it wouldn't work." :shocked:
I have a special tool that makes reading smart info difficult.
It's called a hammer!
Take that, Intel!
Dear Intel,
I regret to inform you that your 320 series of Solid State Drives are terrible.
It couldn't even last four months in my Chamber of Doom™.
What the hell are you guys doing over there?? This isn't rocket science, people.
Much love,
One_Hertz
Here is a graph that shows the relationship between WRD (B1) & MWI (E7) for each of the SF drives being tested (excluding Vapor’s as there is not enough info). Anvil’s drive is holding up significantly better than SynbiosVyse & Christopher’s drives.
On SynbiosVyse’s drive the B1 value is more or less a perfect inverse of E7. I don’t know what NAND is being used in each drive, but it looks like SynbiosVyse & Christopher have NAND with 3K P/E specs and Anvil has 5K. It looks like the “disconnects” that Christopher experienced allowed better wear levelling, which has decreased the MWI burn out rate in comparison to SynbiosVyse's drive.
I can't help but think that SynbiosVyse's drive failed due to the difference between least and most worn blocks, which resulted in an insufficient reserve block count. I think that is why the drive failed, but why, with hardly any static data, was SynbiosVyse's drive unable to level out the wear? :shrug:
@ Anvil, a summary in the first post of each drive being tested, along with the NAND P/E specs, would be really helpful. Also key findings and calculation methods would be great.
SynbiosVyse WRD (B1) & MWI (E7)
Attachment 121214
Anvil/SynbiosVyse/ Christopher
Attachment 121241
Here is yesterday's update, because XS was down:
m4
657.6115 TiB
2382 hours
Avg speed 90.81 MiB/s.
AD gone from 236 to 231.
P/E 11432.
MD5 OK.
Still no reallocated sectors
Attachment 121218, Attachment 121219
Kingston V+100
148.1625 TiB
585 hours
Avg speed 76.52 MiB/s
AD gone from 54 to 48.
P/E?
MD5 OK.
Still no reallocated sectors
Attachment 121216, Attachment 121217
Thanks for the chart!
I'll try to gather what I can find this weekend
Kingston SSDNow 40GB (X25-V)
389.68TB Host writes
Reallocated sectors : 11
MD5 OK
34.57MiB/s on avg (~26 hours)
Force GT "up time" ~35 hours.
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 57 (Wear range delta)
E6 100 (Life curve status)
E7 59 (SSD Life left)
E9 160317 (Raw writes)
F1 213510 (Host writes)
104.17MiB/s on avg (~61 hours)
power on hours : 635
Really interested to see the results of the Samsung.
M225->Vertex Turbo 64GB Update: Milestone reached...
488.58 TiB (537.20 TB) total
1290.04 hours
10005 Raw Wear
119.06 MB/s avg for the last 20.48 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) from 7 to 8.
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937; Bank 7/Block 1980; Bank 7/Block 442)
Attachment 121223
Well, the Mushkin is 25% faster than the Force 3 but only half the capacity. Remember, the Force 3 is 120GB(111GB) while the Mushkin is 60GB(55.5GB).
The 32nm Toggle NAND in the Mushkin is rated at 5000 P/E and the async NAND in the two Corsairs is rated at 3000 P/E.
EDIT
B.A.T.,
is the M4 running with no static data?
Attachment 121231
You have 100 percent free space there.
Nope. SSDLife Pro is not showing the correct info. This is a screenshot during the test at the end of the loop. I got around 40GB of static data.
Attachment 121232
New update
m4
665.5526 TiB
2407 hours
Avg speed 90.81 MiB/s.
AD gone from 231 to 227.
P/E 11563.
MD5 OK.
Still no reallocated sectors
Attachment 121235, Attachment 121236
Kingston V+100
154.8108 TiB
610 hours
Avg speed 76.34 MiB/s
AD gone from 48 to 39.
P/E?
MD5 OK.
Still no reallocated sectors
Attachment 121233, Attachment 121234
Kingston SSDNow 40GB (X25-V)
391.10TB Host writes
Reallocated sectors : 11
MD5 OK
33.91MiB/s on avg (~39 hours)
Force GT "up time" ~46 hours.
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 56 (Wear range delta)
E6 100 (Life curve status)
E7 58 (SSD Life left)
E9 163829 (Raw writes)
F1 218183 (Host writes)
104.16MiB/s on avg (~74 hours)
power on hours : 648
The Mushkin was down for several hours today as I switched out the Intel DP67BG mobo for the new Asus Maximus IV Gene-Z.
Mushkin Chronos Deluxe 60 Update, Day 22
05 2 (Retired Block Count)
B1 12 (Wear Range Delta)
F1 209632 (Host Writes)
E9 161646 (NAND Writes)
E6 100 (Life Curve)
E7 14 (Life Left)
129.02MB/s Avg
MSAHCI drivers, Asus M4g-z
511 Hours Work (24hrs since the last update)
Time 21 days 7 hours
10 GiB Minimum Free Space
SSDlife declines to speculate on the death of the Mushkin
Attachment 121238
AFAIK the NAND in the C300 is 34nm which is rated for 5000 program cycles. It is also ONFI 2.0 compliant and synchronous.
I'll be out of town until Sunday night, so I won't have any updates until probably late Sunday night.
I hope the 320 doesn't die while I'm gone...
Thanks for confirming that. The differences in MWI and B1 are huge. It's annoying that the F3 and Mushkin experienced issues, as it's now hard to know why the outcomes are so different.
Here are comparative B1 and MWI values at ~146K writes to NAND:
Anvil: MWI: 63, B1: 54
(Corsair F3 120GB (25nm) SF2281)
Christopher: MWI: 22, B1: 9
(Mushkin Chronos 60GB (34nm toggle) SF2281)
SynbiosVyse: MWI: 10, B1: 102
(Corsair F40-A 40GB (25nm) SF1222)
M225->Vertex Turbo 64GB Update:
495.21 TiB (544.49 TB) total
1303.67 hours
10126 Raw Wear
118.57 MB/s avg for the last 16.21 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 8.
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937; Bank 7/Block 1980; Bank 7/Block 442)
Attachment 121246
499TB. 3% wear leveling area left... As far as I understand, once this is zero, the SSD becomes read-only.
Yes, but as Anvil pointed out, when the drive disconnects it's effectively off. It can't wear level, it can't do anything because it's off.
The Mushkin hasn't spent more than 70 - 90 minutes doing anything other than endurance testing -- or disconnecting.
To be sure, the Toggle NAND equipped models are fantastic (though the Micron Sync NAND equipped models are pretty good too).
Maybe, but then I'm at a loss to understand why the B1 values are so high. Let's look at it another way :)
Intel gave Tech Report the following formula:
Cycles = (Host writes) * (Write amplification factor) * (Wear levelling factor) / (Drive capacity)
From post #1927 it looks like the theoretical P/E count expires when the MWI gets to 10 with SF drives. SynbiosVyse posted the following when he got to MWI 10:
05: 0
B1: 102
E7: 10%
E9: 146,240
EA/F1: 195,264
3,000 = (F1) * (E9/F1) * (X) / (40)
3,000 = (195,264) * (0.75) * (X) / (40)
3,661 cycles at X = 1 vs. the 3,000 spec, so the Wear Levelling Factor X = 0.81
Intel's X25-E has a Wear Levelling Factor of 1.1, so 0.81 is not bad at all.
Let’s see how your drive pans out when you get to 10.
I found a Micron TN on wear levelling here btw
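As a sanity check, plugging SynbiosVyse's numbers into the formula above (the 0.75 WAF is just E9/F1 rounded, as in the post; nothing here is new data):

```python
# Back-of-the-envelope check of the wear-levelling-factor calculation above,
# using SynbiosVyse's reported SMART values.
def wear_levelling_factor(rated_cycles, host_writes_gb, waf, capacity_gb):
    # Cycles = Host writes * WAF * WLF / Capacity, solved for WLF
    return rated_cycles * capacity_gb / (host_writes_gb * waf)

waf = 146_240 / 195_264                                # E9 / F1, ~0.75
print(wear_levelling_factor(3_000, 195_264, waf, 40))  # ~0.82 (0.81 in the post)
```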
Kingston SSDNow 40GB (X25-V)
393.43TB Host writes
Reallocated sectors : 11
MD5 OK
33.44MiB/s on avg (~60 hours)
Force GT "up time" ~67 hours.
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 55 (Wear range delta)
E6 100 (Life curve status)
E7 57 (SSD Life left)
E9 169535 (Raw writes)
F1 225775 (Host writes)
104.16MiB/s on avg (~94 hours)
power on hours : 670
My trip got cancelled, so here's an update:
MWI just hit 10
Mushkin Chronos Deluxe 60 Update, Day 23
05 2 (Retired Block Count)
B1 12 (Wear Range Delta)
F1 217697 (Host Writes)
E9 167875 (NAND Writes)
E6 100 (Life Curve)
E7 10 (Life Left)
129.71MB/s (?) Avg
RST drivers, Intel DP67BG P67
533 Hours Work (23 hrs since the last update)
Time 22 days 5 hours
10GiB Minimum Free Space
Attachment 121260
EDIT
It really seems to be getting faster over time.
Three weeks ago I was pulling down ~122MB/s. Now it's averaging 129 - 130MB/s.
Attachment 121262
500.71 TB. 1 wear leveling reserve left! 5478 reallocated sectors. Unfortunately I must go to bed now. When I get up, the SSD should be finished.
That's IF it doesn't just miraculously keep on going... like Lazarus, that SSD :) And to think they are only 85 bucks at Newegg now!
Weird. So it looks like B1 does not tell the full story. SynbiosVyse's drive has around the same WA factor. Considering your drive has 5K P/E cycles and is 60GB, the host writes needed to get to MWI 10 do not compare that well against SynbiosVyse's drive. If your drive has 5K P/E cycles the wear levelling factor is 1.79, which compares to 0.81 for SynbiosVyse with no static data.
B1 (Wear Range Delta): 12 (103)
F1 (Host Writes): 217,697 (195,264) - 89%
E9 (NAND Writes): 167,875 (146,240) - 87%
Assuming 5K P/E
5,000 = (217,697) * (0.77) * (1.79) / (60)
Assuming 3K P/E
3,000 = (217,697) * (0.77) * (1.07) / (60)
EDIT
Looking at the Intel drives, they appear to have performed slightly better than spec'd. (Although the 320 uses 25nm, I believe Intel NAND spec sheets quote 5K P/E.)
X25-V
5,000 = (184,934) * (1.1) * (1.0) / (40)
320
5,000 = (194,560) * (1.1) * (0.95) / (40)
It looks like SF drives are not as efficient as Intel in static data rotation, although that is offset by WA due to compression.
EDIT 2:
SF drives use spare area for wear levelling, so if you calculate SynbiosVyse's drive as 48GB instead of 40GB, the wear levelling factor comes out as 1 (which makes more sense than 0.81).
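Re-running the earlier back-of-the-envelope check with 48GB as the effective capacity:

```python
# Same formula as the earlier sketch, but counting the SF drive's spare area
# as wear-levelling capacity (48GB instead of 40GB), per EDIT 2 above.
waf = 146_240 / 195_264
wlf = 3_000 * 48 / (195_264 * waf)
print(round(wlf, 2))   # ~0.98, i.e. roughly 1
```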
New update
m4
676.2078 TiB
2442 hours
Avg speed 90.84 MiB/s.
AD gone from 227 to 221.
P/E 11746.
MD5 OK.
Still no reallocated sectors
Attachment 121272, Attachment 121273
SSDLife has decided that the m4 won't die, so it predicts another 8 years of service :D
Kingston V+100
163.5195 TiB
645 hours
Avg speed 75.70 MiB/s
AD gone from 39 to 33.
P/E?
MD5 OK.
Still no reallocated sectors
Attachment 121270, Attachment 121271
The Kingston is back on the Z68 rig. (since last night)
Kingston SSDNow 40GB (X25-V)
395.14TB Host writes
Reallocated sectors : 11
MD5 OK
36.18MiB/s on avg (~12 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 54 (Wear range delta)
E6 100 (Life curve status)
E7 55 (SSD Life left)
E9 173779 (Raw writes)
F1 231419 (Host writes)
104.16MiB/s on avg (~110 hours)
power on hours : 686
I guess I'm going to stop the Force 3 as it's now close to 5 days and re-test with C-States Enabled.
Attachment 121274
It looks like SSDlife predicts 8 years lifetime or so on all drives on BAT's rig. (the Kingston as well)
I'm wondering what's next for the Intel 320, hopefully it stays readable.
This 320 does not seem to care that it has no reserve space left. Still going strong...
502.35TB. 5588 reallocated sectors. Reserve space at 1.
I know, all they need to do is look at the Wear Leveling Count; it tells the story.
--
Kingston SSDNow 40GB (X25-V)
396.19TB Host writes
Reallocated sectors : 11
MD5 OK
34.57MiB/s on avg (~23 hours)
--
Corsair Force 3 120GB
01 85/50 (Raw read error rate)
05 2 (Retired Block count)
B1 53 (Wear range delta)
E6 100 (Life curve status)
E7 55 (SSD Life left)
E9 176376 (Raw writes)
F1 234877 (Host writes)
104.16MiB/s on avg (~119 hours)
power on hours : 696
You could auction the drive on Ebay -- Intel 320 40GB NO RESERVE!
Still, I'm a little confused... if the drive would theoretically become RO at Reserve Space = 0, couldn't you keep over provisioning the drive until nothing is left? Like a popsicle melting?
You could OP by a few GB, then when that gets trashed, you could OP again. Lather, rinse, repeat. Because of the wear on every cell, each time you'd get less and less additional time, but you should still be able to extend the drive's life considerably.
QUESTION?
Why does Crystal Disk Info report an Intel drive's life as being 100% when the MWI is at 99 or 98 (or possibly lower)?
It reports E8 (Available Reserved Space)
E9 is based on the NAND P/E count and as we know the P/E count is worst case.
Attachment 121278
It is an interesting question though, extra OP should in theory have a positive effect on endurance and increased random write performance (IOps) on some controllers.
The extra space in over-provisioning should increase Available Reserved Space as well, right? If the ARS = 1, you should be able to OP by 4GB and get it back to 100...
I've heard that some controllers don't use the extra space in over-provisioning in this manner. With the Intel's, you should get some performance benefit just from not using all the available space for your partition, though I'm unclear on whether the drive makes some sort of distinction between real spare area and unallocated space on the drive.
Since I don't really need much space, every drive I'm using gets OP'd now (even though I surely don't need to, but it's probably not a bad practice). I could OP my 128GB drives down to 40GB and still have plenty of space for most of my system images.
Mushkin Chronos Deluxe 60 Update, Day 24
05 2 (Retired Block Count)
B1 13 (Wear Range Delta)
F1 225336 (Host Writes)
E9 173772 (NAND Writes)
E6 100 (Life Curve)
E7 10 (Life Left)
126.02MB/s Avg
Intel RST drivers, Asus M4g-z
551 Hours Work (24hrs since the last update)
Time 22 days 23 hours
12 GiB Minimum Free Space
Attachment 121282