Yes, MD5 testing is also very useful !
I understand, but you said you were "setting it up" for the Force 3, which is why I was confused.
It's not really something that needs to be done at 98%, unless something terrible is happening "under the bonnet".
Good news! I finally hit 98%.... I might make it through MWI before the apocalypse, assuming the apocalypse isn't three months from now.
My Raw read error rate has plummeted to 86, but the RAW value also matches (C3/C9/CC), as you pointed out.
I think this is something completely different from what it looks like. If this number (01) starts approaching 0, perhaps it's some precursor to throttling? I noticed it jumped back up after stopping the endurance test for a few moments, so that's not likely, but it'd be nice to know what's going on. Raw Read Error Rate sounds pretty self-explanatory for mechanical HDDs, but isn't really needed for SSDs; maybe it's one of those attributes that's always included but has many uses.
This Mushkin was taken out of the box and then almost immediately started endurance testing. I'll start including 01 in updates along with power on hours, and maybe some pattern will make itself known between the two SF2281s.
EDIT
Now it's jumped back to 94.
What the hell?
I was hoping there was some correlation with another attrib. like power on hours, but sadly there is not.
Tonight's Update:
Mushkin Chronos Deluxe 60
F1 Host Writes 13088
E9 NAND Writes 10086
E6 Life Curve 100
E7 Life Left 98
Average 123.77MB/s.
Attachment 120370
Finally past the 10,000GB NAND write mark.
Nothing special happening here. 4082 reallocated sectors. 437.5TB. Erase fail count down to 98 from 99. Average speed 44.26MB/s and still rising. It was 41.8MB/s when I began the test.
Kingston SSDNow 40GB (X25-V)
337.10TB Host writes
Reallocated sectors : 7
MD5 OK
32.19MiB/s on avg (~15 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 23 (Wear range delta)
E6 100 (Life curve status)
E7 93 (SSD Life left)
E9 36446 (Raw writes)
F1 48573 (Host writes)
107.79MiB/s on avg (~15 hours)
Uptime 140 hours. (power on hours)
SSDLife estimates lifetime to be 1 month 20 days.
As I was moving to another rig I had to pick a new file for MD5 testing as the Kingston has been doing MD5 for quite some time.
No errors reported on the Kingston and so MD5 testing might have been started a bit early.
Unless there is something strange going on with the Force 3, I might not enable MD5 testing for another 200-300TB.
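For anyone wondering what the MD5 step boils down to: hash a static file once when it's written, then re-hash and compare on later passes to catch silent corruption. A minimal sketch in Python (the file path is hypothetical, and this is just the idea, not how the test app necessarily does it internally):

Code:
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large test files don't have to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical static reference file; record its hash once, then re-check on each pass.
REFERENCE = "D:/endurance/static_testfile.bin"
baseline = md5sum(REFERENCE)

def verify():
    return "MD5 OK" if md5sum(REFERENCE) == baseline else "MD5 MISMATCH - possible corruption"

print(verify())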
The 01 (Raw read error rate) on the F3 is back at 94 for me as well. I'll try to find out what is going on; it will most likely change over the next few days.
I watched 01 jump back and forth quite a bit. Since the RAW is the ECC numbers, I guess it just changes based on part of the RAW number. I kept refreshing the SMART data with CDI, and it jumps around from minute to minute.
My guess is that what matters is the RAW ECC numbers, which a HDD doesn't have, so the normalized interpretation is wrong. I wish I knew what it was, but it looks like some worthless appendage carried over from mechanical drives.
It looks that way, it has now changed to 82. (15-20 minutes ago it was 94)
I'll still report it for a few more days in case it starts making sense. (can't see any connections to other things though)
The Intel 320 really is putting up a fight. Speed is rising !?
M225->Vertex Turbo 64GB Update:
348.22 TiB (382.87 TB) total
1000.57 hours
7445 Raw Wear
117.84 MB/s avg for the last 16.17 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 4
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829)
I think the added pauses in the new Beta8 have helped speed a lot, especially consistency of speed through the loop.
I'm now doing loops at 98-100 seconds compared to 114-117 seconds with the old settings. :up:
Attachment 120376
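(Side note on the two totals above: TiB vs. TB is just binary vs. decimal units, about a 10% difference. A quick check in Python:)

Code:
# 1 TiB = 1024^4 bytes, 1 TB = 10^12 bytes
tib_written = 348.22
tb_written = tib_written * (1024 ** 4) / 1e12
print(round(tb_written, 2))   # ~382.87, matching the update above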
These Indilinx drives are quite good, it seems ! I thought they were pretty bad at first. Nice to see they are fast and endurant BUT I think their 4K scores suck ?
C300 Update
408.408TiB host writes, 1 MWI, 6895 raw wear indicator, 2048/1 reallocations, 62.8MiB/sec, MD5 OK
SF-1200 nLTT Update 1
249.75TiB host writes, 178.844TiB NAND writes, 10 MWI, 2861.5 raw wear (equiv), 55.7MiB/sec, MD5 OK
SF-1200 nLTT Update 2
262.5TiB host writes, 193.844TiB NAND writes, 10 MWI, 3101.5 raw wear (equiv), 55.65MiB/sec, MD5 OK
Seems both SF-1200s are 'stuck' at 10 MWI right now. :confused:
Charts will be updated when B.A.T returns from his vacation in a few days...both SF-2200s will be new additions :)
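For what it's worth, the 'raw wear (equiv)' figures above look like nothing more than NAND writes divided by the raw flash capacity, i.e. average P/E cycles. Assuming 64GiB of raw NAND on these drives (an assumption on my part), the numbers line up:

Code:
# Equivalent raw wear ~= average P/E cycles = total NAND writes / raw NAND capacity.
# 64 GiB of raw capacity is assumed here, not confirmed.
nand_writes_tib = 178.844
raw_capacity_gib = 64
print(round(nand_writes_tib * 1024 / raw_capacity_gib, 1))   # ~2861.5
print(round(193.844 * 1024 / raw_capacity_gib, 1))           # ~3101.5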
The 4K throughput for 60GB - 120GB Indilinx drives is about 12MB/s reads and 7MB/s writes. Or is it the other way around...? Bluestang's M225 --> Vertex Turbo is an excellent specimen, but probably won't be much past that. Some Indilinx drives were just better than others.
EDIT.....
Speaking of the Bluestang special, look at how few bit errors it has. My Indy drives have 1000x as many, and two of them are brand new (maybe...).
I have been following these parameters for almost a year on my drive and have found no direct link with anything. For example, I watched the evolution of the value very closely (refreshing every 20-40s) both when the drive was idle and when it was running Anvil's storage utility; the raw value increased at around the same rate either way. I also have not noticed any upward trend over the year, but then again my drive is "brand new" compared to what we have here. As the raw value resets to 0 when the drive goes through a power cycle, it might be useful to monitor and log its evolution over the entire test. We might notice an upward trend towards the end of the drive's lifetime.
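If anyone wants to log it automatically instead of refreshing CDI by hand, something along these lines should do it. This is only a sketch: it assumes smartctl is installed, the device path is right, and that the raw value is the last column of the attribute line (tweak the parsing for your drive/tool):

Code:
import csv
import subprocess
import time

DEVICE = "/dev/sda"        # adjust to the drive under test
LOGFILE = "rrer_log.csv"   # hypothetical output file

def read_rrer():
    """Pull attribute 01 (Raw_Read_Error_Rate) out of 'smartctl -A' output."""
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Raw_Read_Error_Rate" in line:
            fields = line.split()
            # Typical columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
            return fields[3], fields[-1]   # normalized value, raw value
    return None, None

with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        value, raw = read_rrer()
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), value, raw])
        f.flush()
        time.sleep(60)     # one sample per minute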
Here's tonight's update.
Now that the Bit error count is up in the 8 digit range, Raw read error rate has stabilized.
Mushkin Chronos Deluxe 60
F1 Host Writes 20015
E9 NAND Writes 15402
E6 Life Curve 100
E7 Life Left 95
Average 122.57MB/s last 12 hours.
Attachment 120388
Here's AS SSD for my Indilinx 60 with Intel NAND
Attachment 120390
I think those M225's are just awesome.
Once you can start doing 30 to 40 MB/s 4K reads it feels a lot less like a HDD.
Yes, in general it helps the OS/filesystem keep up with the writes; without the small pauses there were small freezes while deleting files.
(especially noticeable on the F3 due to the number of files and the SF TRIM "bug")
For how long?
(most likely a side-effect of the nLTT FW, let's see what happens to mine)
I save a screenshot of CDI or SSDLife every day so if there is some correlation to other attributes it might be possible to pick up, not sure that it is important though.
With those speeds I'd be surprised if others aren't picking up on the toggle mode NAND on the 60GB drives :)
Well, it's super impressive at the 120GB size, but don't forget that it's still only writing 75% of that. In reality, it's not much faster at these endurance loads than my Agility 60.
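That write ratio is easy to check straight from the SMART data: E9 (NAND writes) divided by F1 (host writes). Taking the Mushkin numbers from the update above as an example, roughly three quarters of host writes actually hit the NAND thanks to SandForce compression:

Code:
# E9 / F1 from the Mushkin Chronos Deluxe update above (raw SMART values).
nand_writes = 15402   # E9
host_writes = 20015   # F1
print(round(nand_writes / host_writes, 2))   # ~0.77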
For a 60GB SF drive, there's no question that it's the champion of the universe. I'm surprised that it's the ONLY 60GB SF2281 toggle NAND drive out there. It's impossible to find the 60 Deluxe at the moment, but hopefully it'll catch on with higher availability.
Plextor makes a Marvell with Toshiba NAND at the 60GB mark as well -- Corsair used to -- It's just not price competitive vs. the 120GB versions.
Here's the ASU benchmark I meant to post the other day.
Attachment 120398
I like fast, smaller drives.
UPDATE
After a few days of flawless performance, I seem to have had some kind of crash. I was away for a little while when it happened, and when I came back, the system had restarted. Not sure what happened, but I'm going to err on the side of caution and make some changes to RAM timings to ensure that it was not the cause.
It could be the mysterious SF2281 instability, which is why I want to ensure that there can be no question about system stability... not that I think my system idling caused the crash in any event.
EDIT.
I started back up again, and started getting a couple of these Write Error 0s
Attachment 120433
Look at this
Attachment 120434
Those ECC numbers were in 8 figures earlier, and are now at 0
What the hell?!?
And now, like magic, they're back, but not at the same level.
I bet this SMART attribute is really some kind of timer or something (who knows?).
I don't think it's anything like the label would imply.
Attachment 120435
weirdness...that is very curious. wth is going on?
It's been a few days now; normally each MWI tick was ~2.5TiB, and it's been stuck at 10 MWI for over 14TiB. SynbiosVyse's last update had 10 MWI when it could/should have been 9 (based on extrapolation). I would be very surprised if this were due to the nLTT setting; I think it's just a Sandforce oddity (like their first 3-5% of expected lifetime being stuck at 100 MWI).
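The extrapolation is just straight division on the numbers above:

Code:
# With ~2.5 TiB per MWI tick, how many ticks should ~14 TiB of writes have produced?
tib_per_tick = 2.5          # observed average before the counter stalled
tib_since_last_tick = 14.0
print(tib_since_last_tick / tib_per_tick)   # 5.6 -- the counter is clearly lagging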
I've noticed RRER jumping around on my 4 Sandforce drives, as well. Never thought much of it though--normalized threshold is at 50 and I don't think I've ever seen the normalized value worse than 85. :shrug:
I wonder if there's some way to log it every minute or so and see whether drastic dips line up with later crashes.
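Something like this would do it, assuming a simple timestamp/value CSV log along the lines of the smartctl sketch earlier in the thread:

Code:
import csv

THRESHOLD = 10   # flag any drop of 10+ points between consecutive samples (arbitrary)

def find_dips(logfile="rrer_log.csv"):
    """Scan a timestamp,value,... log and report sudden drops in the normalized value."""
    previous = None
    with open(logfile, newline="") as f:
        for timestamp, value, *rest in csv.reader(f):
            value = int(value)
            if previous is not None and previous - value >= THRESHOLD:
                print(f"{timestamp}: dropped from {previous} to {value}")
            previous = value

find_dips()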
@Anvil and @Christopher, what kind of static data are you using? Your wear range deltas are already way past mine (3) and seem to be scaling like SynbiosVyse's (which is over 100, as of last update). SF-1200 and SF-2200 aren't exactly the same beast, but I would be surprised if wear leveling performance were reduced on the SF-2200 (unless it were for a tangible performance gain). The odd thing with my WRD vs. SynbiosVyse's is that he has just a few MB of static data whereas over 50% of my drive is filled with static data--one would expect wear leveling to be more effective on an empty drive.