C300 Update
71.65TiB, 76 MWI, 1208 raw wear indicator, 61.8MiB/sec, MD5 OK
If those SMART data are to be trusted, it's got 5000 P/E NAND and data written = C7 * 512 bytes, so it looks to have some of the needed attributes.
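If anyone wants to double-check that conversion on their own drive, here's a minimal sketch (Python). It assumes smartctl is installed, that the raw value of attribute 0xC7 really is a count of 512-byte sectors as described above, and the helper name and device node are purely illustrative.
Code:
import subprocess

def host_writes_tib(device: str, attr_id: int = 0xC7) -> float:
    """Hypothetical helper: read a SMART attribute's raw value with smartctl
    and convert it to TiB, assuming the raw count is in 512-byte units
    (the interpretation quoted above; verify for your own drive)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # smartctl prints the attribute ID in decimal in the first column
        if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
            raw = int(fields[-1])        # last column is the raw value
            return raw * 512 / 2**40     # 512-byte sectors -> TiB
    raise ValueError(f"attribute {attr_id:#x} not found on {device}")

# Example (hypothetical device node):
# print(f"{host_writes_tib('/dev/sdb'):.2f} TiB written")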
I don't know, the missing history due to the reset makes it hard to include in charts.
It could be interesting if something pops up that could fit as a "partner" to that SSD, so keep it in the drawer for now. (if that is an option)
hi!
I'm max from italy.
I want to ask you who are experts:
in terms of SSD life cycle, is it better to buy a 34nm Kingston SSDNow 64GB (a rebranded Intel X25-V) and wait for the next generation in 2012, or to buy a recent 25nm SATA II drive (like the Crucial M4, OCZ Agility 3, or Corsair Force 3)?
Thanks, and please excuse my intrusion
max
I think it's awesome that you flashed it over like that and it worked :) Always love to see stuff like that work! Too bad it won't be a good candidate for testing, but it would be fun to play with :)
Quote:
Don't forget that even though it says OCZ Vertex Turbo it is a Crucial M225. Let me know what you guys think.
@max - welcome to the forum.
*from the looks of it*, either will do just fine.
170.59TB Host writes
MWI 7
Reallocated sectors : 6
No errors reported.
Ok guys, I figured I'd offer it up. P/E used to be listed as 10k under FW 1916 and was changed to 5k with FW 2030 (Crucial's FW anyway, not sure about OCZ's FW). I'll probably just put one in my lappy. And if I can't get someone at my work to buy the other one, maybe I'll just beat on it myself for personal curiosity's sake (would need your app for that, Anvil, if possible? :shrug:).
@bluestang
Nothing wrong with your offer :)
I'm still thinking of a match for your drive, I'll do some checks.
10K P/E drives are of course of interest, a shame that flashing the drive reset the SMART values.
The app will hopefully go public within a few days so if you can wait then that would be the easiest, otherwise, fire me a PM and we'll make the arrangements.
^ Thanks Anvil...I'll wait. BTW, today UPS delivered 2x 64GB M4's for me to set up in RAID0 on my new build. Got them for $99.49/ea :up:.
I think it would be a good addition. Less than 1TB = a new drive more or less (strange it dropped a couple of MWI points for so little data, but that should sort itself out over the long run)
EDIT: @bluestang, look forward to seeing your results :up:
Well, I saw Indilinx drives with less than 4TB written with only ~50% left. The WA was around 30-50 with several firmwares.
Nevertheless, I would love an Indilinx drive in those graphs :)
I've got an X25-V 40GB that is not aligned and it has written about 1TB as well; MWI is still 98%, so that doesn't seem to matter for the Intel.
(it's intentionally unaligned, running W7 so TRIM is active)
C300 Update
77.3TiB, 1302 raw wear indicator, 74 MWI, 61.5MiB/sec, MD5 OK
171.94TB Host writes
MWI 6
Reallocated sectors : 6
No errors, 34.56MiB/s on avg for the last 23.3 hours.
This interests me :) Why? I don't find a huge boost with alignment, and on arrays sometimes the gains are minuscule. Just wondering if you have a reason to leave it that way?
Quote:
it's intentionally unaligned
Amazing that the Indilinx have such terrible WA, or DO they? I guess that is a question. I have an 8x array of 30GB Vertexes that have seen the rain for sure; I have beat the hell out of them. Wondering if the SMART data is actually accurate with the Indys....
Just posted and I gave you guys a little plug for all your hard work (see bottom of page)
http://thessdreview.com/our-reviews/...tilities-beta/
The Indilinx drives really did have such terrible WA. You can also read some sort of "changed WA algorithm" note in pretty much every FW update they ever released. In German forums there are dozens of people with broken Indilinx drives. Even today the old thread occasionally pops up with someone posting about it. With the newest firmware the WA should go down to around 2-3 (though you need to open the Ultradrive to flash it if you don't have the external switch on your drive...). But well, it's a bit late for that ;)
I've got 4 X25-V's (2 are Kingstons), so why not experiment a bit.
The one running the Endurance test was my "hot-spare" drive, one is running 24/7 in a server as the boot drive and the two others are boot drives running in "workstations".
The one that is not aligned works just fine, but one can easily spot the performance drop on writes in benchmarks; still, there's no extra wear vs the other drives that are running under normal circumstances.
I originally had 3 Indilinx drives, it's now down to two (one was replaced by a V2), they are both doing fine and I haven't noticed anything "wrong" with any of them, will check the SMART attributes.
Thanks Les, great work!
Is this drive comparable to the Force 3/Agility 3 and most importantly, can one order one w/o LTT ;)
173.30TB Host writes
MWI 5
Reallocated sectors : 6
MD5 OK
33.72MiB/s avg (35.6 hours)
LTT...the new question eheheh. I found all 3 similar, but there were definite benefits with this drive. Perhaps I might mention this thread to my contact to see if they would be interested in donating a drive to the cause, although it doesn't look promising, as OWC is the only company that enforces a strict sample return policy. I will ask though!
That way you could enter the Endurance test :)
Imho, they can only gain interest/popularity from such a test.
So, > 200TiB on the Samsung :) looks like there's no stopping that thing.
--
174.93TB Host writes
MWI 4
Reallocated sectors 6
No errors.
C300 Update:
83.255TiB, 72 MWI, 1404 raw wear indicator, MD5 OK, 61.85MiB/sec, no reallocated anything.
176.14TB Host writes
MWI 4
Reallocated sectors : 6
MD5 OK
Getting closer to the point where MWI stops reporting, a few more days left :)
Vapor, F8 (vendor specific) on your C300 is recording writes to NAND :)
In what units?
86.323 TiB written in Anvils app, so maybe 86.45TiB total in its lifetime. F8 reads 947652212, raw wear indicator is at 1456 (71MWI).
F6 is still 8x F7 (almost exactly, makes me think bits vs. bytes), and F5 is still 1.04x F7. Haven't paused the load app yet to see if any of those are a timer. F7 is 23310291875, F5 and F6 can be found from that.
^ Don't know, but it is defiantly recording writes to nand. (Got that direct from Crucial)
Defiantly? Ok, I'll record these writes, but I'll have you know that I am doing it under protest! ;)
Does Crucial have a list of SMART attributes? Micron does, and there is no F8 -- the highest number listed is 0xCE. So if Crucial has a 0xF8, then either it is Crucial specific, or else Micron does not document all of the SMART attributes.
AFAIK, I'm the only one with access to F2-F8. Got them when I thought I messed up the flash to 0007 (flashed in AHCI, which isn't supposed to be possible...then retried in IDE and it said it was already on 0007). 0xCE was the highest number I had access to before the pseudo-goofed flash.
If F8 is NAND writes then its number should be post-WA. 1456 raw wear (which should also align with NAND writes) was what it read at the time; not sure how far along into 1456 it was (could be anywhere between 1456.0 and 1456.99), so let's say 1456.5. 1456.5 * 64GiB = 93,216GiB (~91TiB) of NAND writes compared to the 947652212 F8 value, and it's suddenly a lot harder to piece it together :eh:
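Just to show why it won't line up, a quick sanity-check sketch (Python). It assumes the raw wear count is an average P/E cycle count over 64GiB of raw NAND, and the candidate units for F8 are just guesses.
Code:
GIB = 2**30
est_nand_writes = 1456.5 * 64 * GIB      # ~91 TiB of NAND writes (estimate)
f8 = 947652212                           # raw F8 value read at the same time

# Try a few plausible units for the F8 counter
for name, unit in [("bytes", 1), ("512B sectors", 512),
                   ("4KiB pages", 4096), ("32KiB chunks", 32 * 1024)]:
    print(f"F8 as {name:13s}: {f8 * unit / 2**40:8.2f} TiB")

print(f"implied bytes per F8 count: {est_nand_writes / f8:,.0f}")
# None of the obvious units get close to ~91TiB, and the implied unit
# (~105KB per count) isn't a natural size either, hence the confusion.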
And then I'm back again:
130.9436 TiB
442 hours
Avg speed 90.96 MiB/s.
AD gone from 26 to 24.
P/E 2283.
MD5 OK.
Attachment 118031
I'm really wondering how far these will go before they are degraded beyond usable, and what that will look like. I wish these test files were real, usable files that could later be inspected to see if they are still readable once the SSD starts showing severe symptoms of degradation. Because to date, I don't think anyone has seen a drive that was degraded that much. Anyway, very interesting test; too bad there aren't any SandForce 22xx series drives in it (like the Corsair Force 3 and OCZ Vertex 3)...
179.08TB Host writes
MWI 2
Reallocated sectors : 6
MD5 OK, 33.1MiB/s on avg. (28 hours)
C300 Update
91.914TiB, 69 MWI, 1551 raw wear indicator, 1021590229 0xF8 value, 61.85MiB/sec, MD5 OK.
New: 2048 reallocated sectors; 1 reallocation event count; 1 grown failing block count; 1 erase fail count.
Attachment 118047
Just thought I had to post this milestone :)
Attachment 118048
excellent!!
been a long time coming, that kingston is a champ :)
219.480 TiB, 603 hours, sa177: 1/1/18053
Average speed reported by Anvil's app has been steady at about 112MB/s.
The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.
10*365.25*64e9/1024^4 = 212.603TiB
Past the point where the writes are equivalent to writing the entire 64GB capacity of the SSD every day for 10 years.
Well that's a stat you can sink your teeth into! Very impressive! Unbelievable actually :)
Quote:
Past the point where the writes are equivalent to writing the entire 64GB capacity of the SSD every day for 10 years.
Glad to see this moving along...getting a tad boring with no failures, and this has been going on for two months or so....
(don't get me wrong, it's a good type of boring :) )
Finally a little action... yet I'm not sure how I can interpret the data:
- when we have a reallocated sector, does it mean the block can still be erased but a certain page cannot be programmed?
- when we have a reallocated block, does it mean that an erase failed?
If this is correct, it means that you have one failed block and 1920 failed pages spread across at least 15 blocks (assuming 4KB pages / 128 pages per block). I would expect a rapid failure also for the other blocks that contain failed pages, maybe in the next TB written.
Samsung 470 pseudo-passed the Intel 320 in writes, damn it's fast.
I'm pretty sure the 320 is still ahead, but it's been a few days since the last update on the 320 40GB and it's capable of racking up more than 2.5TiB in that time frame.
Very impressive from the 470 considering it appears to have a WA of ~5.1x
(very impressive of all the drives so far, really, with a partial exception of the V2 40GB's LTT)
I think Crucial/Micron considers an LBA sector (512B) as their 'sector' (as compared to Intel doing something else, it seems). So 2048 reallocated sectors is just 1MB. I don't know the specs of the NAND in the C300, but that's probably only 1 or 2 blocks--probably 1 based on "1 grown failing block count".
It makes sense, so probably we could consider this one block as one "Intel sector".
Nice to see these SSDs lasting as long as they are!
M4 update:
143.1145 TiB
481 hours
Avg speed 89.77 MiB/s.
AD gone from 24 to 17.
P/E 2493.
MD5 OK.
Attachment 118101
183.23TB Host writes
MWI 1
Reallocated sectors : 6
MD5 OK.
C300 Update
98.07 TiB, 67MWI, 1656 raw wear, MD5 OK, reallocated event still 1 and 2048 sectors. Switched to an updated version of Anvil's app, average speed over the past 8 hours is down to 60.5MiB/sec from 61.85ish.
First update:
OCZ Agility 3 AGT3-25SAT3-120G - a couple minor sector failures on one of the drives.
Corsair Force CSSD-F120GB2-BRKT - Nothing yet to report
OCZ Vertex 2 OCZSSD2-2VTXE60G - Nothing yet to report
Corsair Performance 3 Series CSSD-P3128GB2-BRKT - Both drives are throwing failures but drives are still functional
Crucial RealSSD C300 CTFDDAC064MAG-1G1 - Nothing yet to report
SAMSUNG 470 Series MZ-5PA128/US - Nothing yet to report
Intel 510 Series (Elm Crest) SSDSC2MH120A2K5 - One drive failed completely July 24 @ 9:14am and second drive is throwing failures but has yet to fail.
Intel X25-M SSDSA2MH160G2K5 - Nothing yet to report
Kingston SSDNow V+ Series SNVP325-S2B/128GB - a couple minor sector failures on one of the drives.
-------- End of line ----------
Any further questions?
Today's update:
147.1145 TiB
494 hours
Avg speed 89.77 MiB/s.
AD gone from 17 to 15.
P/E 2564.
MD5 OK.
Attachment 118134
184.88TB Host writes
MWI 1 (stopped moving)
Reallocated sectors : 6
MD5 OK, 35.62MiB/s avg (~17 hours)
http://www.xtremesystems.org/forums/...=1#post4854810
The short version is that I am using a kernel-level module to continuously write sectors to SSDs and read them back to check for errors. Throwing failures means that a write/read failed (i.e. the data read does not match the data written). All sectors received an equal number of writes. Once 90% of sectors fail, the drive is considered failed.
All drives are receiving an exactly equal distribution of writes at a constant speed of 50MB/s.
SMART errors are not even looked at.
A failed sector is one that is unable to have a successful write and read operation after 200 attempts to write to the sector.
A failure means that the data read from the sector is not the same as the data written to the sector.
The Intel drive was classified as failed the instant that 90% of all of the drive's sectors had failed.
Closer analysis shows that less than 1% of the data read back from failed sectors matches what was actually written, and that errors tended to start accumulating rapidly near the end of its life. It performed quite well until the first sector failure; then the drive died after a mere couple more days of testing. The second drive is currently at 42% failed and is expected to die by tomorrow night.
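For reference, a minimal user-space sketch (Python) of the failure criterion as stated above: write a 4KB sector, read it back, compare, retry up to 200 times, and call the drive dead at 90% failed sectors. This is not nn_step's actual code, just an illustration of the stated rules; the function names are made up.
Code:
import os

SECTOR = 4096          # the "sector" here: a continuous 4KB block of flash
MAX_ATTEMPTS = 200     # a sector fails after 200 unsuccessful write/read tries
FAIL_THRESHOLD = 0.90  # the drive is considered failed at 90% failed sectors

def sector_failed(fd: int, offset: int) -> bool:
    """True if the sector never reads back exactly what was written."""
    for _ in range(MAX_ATTEMPTS):
        data = os.urandom(SECTOR)            # high-entropy test pattern
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, data)
        os.fsync(fd)
        os.lseek(fd, offset, os.SEEK_SET)
        if os.read(fd, SECTOR) == data:
            return False
    return True

def drive_failed(device: str, num_sectors: int) -> bool:
    # A real test would need O_DIRECT with aligned buffers (or a kernel
    # module) so reads actually hit the NAND rather than the page cache.
    fd = os.open(device, os.O_RDWR | os.O_SYNC)
    try:
        failed = sum(sector_failed(fd, i * SECTOR) for i in range(num_sectors))
        return failed / num_sectors >= FAIL_THRESHOLD
    finally:
        os.close(fd)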
Could you also post a screenshot with SMART parameters for the failed drive and the drive which is expected to fail in the next hours? By "sector" I guess you are referring to a 512-byte LBA, right? Also, have you tried leaving the failed drive idle and then trying again? In a document posted somewhere at the beginning of the thread I saw a wear model that stated a much higher endurance if some idle time is taken between consecutive writes.
The experiment is being done on a Linux box that does not have Xorg installed.
I classify a sector as a continuous 4KB block of flash.
I will retest the drive in a few moments to check whether the 90% sector failure is still true.
However, the statement "much higher endurance if some idle time is taken between consecutive writes" would mean endurance is better if you don't write to the drive much (duh, and your car will not run out of gas as quickly if you don't drive it much).
@nn_step, at what write size did your first Intel drive fail (the one in the quote, below)?
Quote:
Intel 510 Series (Elm Crest) SSDSC2MH120A2K5 - One drive failed completely July 24 @ 9:14am and second drive is throwing failures but has yet to fail.
@nn_step...your data is very vague; a lack of data would be a better way to classify it.
Can you give us some specifics? Amount of data written, time elapsed, etc.?
It's hardly helping us come to some sort of understanding when all you say is: there was a failure.
Where, when, how, under what conditions? After what duration?
Assuming you wrote data continuously since 24th May 2011 at 50MiB/s to the Intel 510, that would translate to 61*86400*50MiB = ~251.3TiB. If WA is around 1.1, like on other models in the endurance test, then that means around 2150 cycles.
Also, according to the endurance model posted by Ao1: http://www.xtremesystems.org/forums/...=1#post4861258 , if the theory about recovery time proves right, then we should see many more cycles (it would take around 2000-2500 seconds between each page write at 50MiB/s).
Could there be other factors that are breaking the Intel drives so early (like a faulty power supply or SATA issues)? It's hard for me to believe that both drives are failing so fast and so close to each other.
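Spelled out as a quick sketch (Python): the 61 days, 50MiB/s and 1.1x WA are the figures above, and the 128GiB of raw NAND is an assumption for a typical 120GB drive rather than a confirmed spec.
Code:
MIB, GIB, TIB = 2**20, 2**30, 2**40

days     = 61          # 24 May -> 24 July of continuous writing
rate     = 50 * MIB    # the constant 50MiB/s write speed stated above
wa       = 1.1         # WA seen on other drives in the endurance test
raw_nand = 128 * GIB   # assumed raw NAND in a 120GB drive (not confirmed)

host_bytes = days * 86400 * rate
cycles     = host_bytes * wa / raw_nand

print(f"host writes : {host_bytes / TIB:.1f} TiB")  # ~251 TiB
print(f"est. cycles : {cycles:.0f}")                # ~2200, same ballpark as the ~2150 above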
Will have a Vertex 2 60GB with LTT removed entering the testing within a week or two :) It's a V2 with 32nm Hynix NAND though :eh:
C300 update and updated charts later today :)
WOW nice to see that this thread really is starting to pick up some speed guys.
Thanks for all the hard work everyone !
241TiB. 31 reallocated sectors. MD5 OK. I THINK I found the hidden drive wear variable using Anvil's app (his special build for me). It is 120 right now and it is going up linearly.
C300 Update, charts next post
103.64TiB, 1750 raw wear, 65 MWI, reallocated still at 1 event / 2048 sectors, speed back up to 61.75MiB/sec, MD5 OK.
Updated charts :)
Host Writes So Far
Attachment 118204
Attachment 118205
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 118206
MWI Exhaustion:
Attachment 118207
Writes vs. NAND Cycles:
Attachment 118208
Attachment 118209
Normalized data graphs
The SSDs are not all the same size; these charts normalize for available NAND capacity.
Writes vs. Wear:
Attachment 118210
MWI Exhaustion:
Attachment 118211
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 118212
MWI Exhaustion:
Attachment 118213
Approximate Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 118214
If the current value is 120 and it's the same thing as =100-MWI (and MWI has gone negative), then your WA has dipped below its normal ~1.015x...and even dipped well below 1.00x.
Reported NAND Cycles / Calculated NAND Cycles via manual writing:
( 120 / 100 * 5000 ) / ( 241 / 40 * 1024) = .9725x WA
:(
Reallocated sectors seem to have been moving linearly for the 320 recently, maybe it's related to that? Or maybe it is wear, but not comparable to MWI? :shrug:
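For anyone following along, the implied-WA arithmetic above as a small helper (Python). It assumes the mystery attribute behaves like 100 - MWI, 5000 rated P/E cycles, and 40GiB of raw NAND, i.e. the same figures used in the calculation above.
Code:
def implied_wa(wear_value: float, host_tib: float,
               rated_pe: int = 5000, nand_gib: int = 40) -> float:
    """WA = NAND cycles implied by the wear attribute / host 'drive fills'.
    Assumes wear_value behaves like 100 - MWI (so 120 => 120% of rated P/E
    used) and nand_gib GiB of raw NAND."""
    nand_cycles = wear_value / 100 * rated_pe     # e.g. 120 -> 6000 cycles
    host_fills  = host_tib * 1024 / nand_gib      # host TiB -> full-drive fills
    return nand_cycles / host_fills

print(f"{implied_wa(120, 241):.4f}x")   # ~0.9725x, the suspiciously low figure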
What exactly do you mean by write size?
Amount of data written is easy to calculate given the posted times and fixed data rates
My first post in regards to this test lists exactly the starting time of the test.
Where : 70 degree F basement (mine)
When : see above
How : Kernel module that I wrote
Conditions : All drives are written the exact same data at the exact same time, the data is random with high entropy.
Duration : see posted times above
One that I wrote myself, its only assumptions are the RAID cards being used and the timing chip that I am using
Yes there certainly are other factors that could cause earlier failure:
1) Intel drives are closer to ventilation than other drives.
2) Intel drives received 3% less sunlight than the other drives
3) Intel drives are connected to the leftmost power connector of the power supplies
4) The failed Intel drives have sequential serial numbers and could have been part of a bad batch
But I am continuing to check for other additional reasons for the failures.
the data is random with high entropy and the RAID cards have no problems sustaining the write/read rates
After 12 hours of off time, the Intel drive still has in excess of 90% of sectors failed.
Or your "kernel module" has bugs, since you avoided my question about how you debugged and qualified it, I assume you have not done so. How about posting the source code so others can look at it and test it for bugs?
Why won't you post the SMART attributes as read by smartctl?
How have you managed to write continuously to a Vertex 2 at 50MB/s without encountering warranty throttle?
And why exactly did you write a "kernel module" to do such a basic task as writing data to SSDs? Certainly a user-space C program would be more than sufficient.
The drive's firmware is causing errors when attempting the fundamental commands required for such work.
What exactly is this "warranty throttle" you speak of?
I made it a "kernel module" because I wanted to be exact in terms of timing and bandwidth. Also I find Linux's user-space file-system akin to trying to type with boxing gloves on for this sort of testing.
And the source code for the curious.
Code:
define test as lambda (list *device drives, int_64s write_speed)
{
block data, test
number index := 1
loop
{
data := read("testfile.txt", 4096, loop)
index := index + 1
map( *device x in drives)
{
write(x, 4096, index%x.blockcount(), data)
}
map( *device x in drives)
{
read(x, 4096, index%x.blockcount()) =: test
if ( test != data)
{
throw( "block failure: " + index%x.blockcount().tostring() + "/newline drive: " + x.name().tostring())
}
}
}
}
Please feel free to point out any errors that exist in the program (Yes it is written in the Rook programming language, please don't complain about how different it is from C or python)
Where is the rest of the source code?
Why won't you post the SMART attributes?
Is your whole "test" just a hoax?
187.41TB Host writes
Reallocated sectors : 6
MD5 OK.
I'm not sure, but I don't think it is MWI, according to what I found and posted in my original snip from the Intel PDF.
OneHertz's statement that it is now at 120 after starting at 0 is too fast an increase for the MWI parameters to be used in reverse.
What it is based on is not explained though. :confused:
Percentage Used Endurance Indicator - % used over what? MWI?
Attachment 118223
@Anvil...How's the app looking for release?
There is no rest; learn to understand the power of good programming languages.
No it is not a hoax but feel free to completely ignore me.
As for marketing, I didn't realize that the university I am working for was doing marketing in SSDs. :rolleyes:
This is nothing more than me, in a slightly more scientific fashion, testing to confirm/refute the anecdotal evidence that SSDs have shorter life spans than hard drives given a write rich environment.
Probably 46% for static data and 67% for test data. With the E9/233 SMART value, I can see how NAND writes directly relate to wear, so no absolute need for 101% incompressible data for the test. Many pages back I found the 67% data had a compression curve similar to the types of data I would put on an SSD (apps, documents). The 46% is most similar to the OS and apps compression curve, but 67% is a more conservative version (less compressible than OS/Apps, but more compressible than documents), so I might do 67/67 or 46/67, not entirely sure yet.
That exact quote is what made me think it was just MWI counting upward and past 100 :p:
If anything, a 120 is too low a value for it to be MWI based...to maintain the 1.015x WA, it needs a value of 125 at 241TiB.
So Vapor, is the Samsung 470 good or bad drive? Some of the charts make it look amazing to me while some make it look like the worst drive ever.
It burned through its MWI SMART attribute very quickly through an apparent write amplification of ~5x (which is why it looks 'bad' in many charts), but it's still going strong having written its formatted size (64GB) over 4000 times (enough for writing 64GB/59.6GiB per day for ~11 years) :up:
@Enigma
I'm using the Samsung 470 128GB as the boot drive right now and it's performing just like any other top shelf SSD, as for Endurance I'd say that it's doing exceptionally well. (based on the 64GB drive in this test)
The one thing I do miss is some more SMART attributes, not a big issue though.
I'll post some benchmarks later tonight or tomorrow.
Today's update:
156.5918 TiB
525 hours
Avg speed 89.04 MiB/s.
AD gone from 15 to 10.
P/E 2731.
MD5 OK.
The only thing that has changed is that CDI is reporting BAD health status; why, I don't know, but AD is marked red and CA is marked yellow.
Attachment 118227
Here is also a screenshot from GSmartControl
Attachment 118226
Pretty cool that CDI and Gsmartcontrol turn red when you pass the manufacturer's threshold :p:
And the most exciting has yet to come :p:
2-3 more days and I'll cross MWI=1
I wonder if CA will scale past 100 and still act like MWI? Although MWI is easy enough to calculate, it has stuck to steps of 30raw.
We'll know soon enough. And how far it will endure.
247.432 TiB, 678 hours, sa177: 1/1/20345
Average speed reported by Anvil's app has been steady at about 112MB/s.
The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.
So, attribute 177 just passed 20,000. If it is counting the average number of erases each flash block has undergone, that is impressive, since most 2Xnm or 3Xnm flash is only rated for 5000 cycles. Even 34nm eMLC (enterprise MLC) is only rated for 30,000. Either this Samsung flash is very good, or attribute 177 is not what we thought it was. I guess we will have to wait and see if it passes 30,000 next.
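A quick cross-check (Python) against the ~5.1x WA estimated for the 470 earlier in the thread, assuming 64GiB of raw NAND: the reported raw value is at least numerically consistent with an "average erase count" reading. Just a sketch based on those two thread figures, not a confirmation of what 177 actually counts.
Code:
host_tib = 247.432     # host writes at the point attribute 177 read 20,345
wa       = 5.1         # approximate WA estimated earlier in the thread
nand_gib = 64          # assumed raw NAND capacity of the 64GB 470

est_avg_erases = host_tib * 1024 / nand_gib * wa
print(f"estimated average erases: {est_avg_erases:.0f}")  # ~20,200 vs 20,345 reported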
Prototype chart, thoughts?
Attachment 118230
Realized almost all of the SSDs were already at MWI = 1, so I took inspiration from subway maps to re-include moving parts on the main charts. All lines should be visible at all lengths, even if there isn't much conflict right now :)
(m4 already 'owns' the vertical slot between the Samsung and the X25-V)
nice chart Vapor! that does a very good job of illustrating both the expected lifetime, and the *after* lifetime :)
You guys do realize that the Intel 320 series probably has over a thousand blocks it can use for reallocation. So, at the seemingly linear rate this test is currently moving, we could be well into multiple petabytes before the 40GB 320 runs out of blocks it can use for reallocation. I mean, what's a page on a modern SSD, 8KB? The page size is actually exactly 8KB on the 320, IIRC. If I remember correctly, the block size on the 320 is 2MB, and we all know erases are done in blocks and not pages. How much spare area is there? 8-12%? So, worst case, we're talking 40GB * 8% / 2MB = ~1,525 blocks. Since we're really most likely reallocating blocks and not sectors, I am not sure how to read the graphs to see where we're at in terms of reallocated blocks.
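The spare-block arithmetic above as a tiny sketch (Python), using the same figures: 40GB capacity, 8-12% spare area, and the recollected 2MiB erase block size (none of these are confirmed specs).
Code:
capacity_bytes = 40e9          # 40GB Intel 320
block_bytes    = 2 * 2**20     # 2MiB erase block (IIRC figure from the post)

for spare in (0.08, 0.12):
    blocks = capacity_bytes * spare / block_bytes
    print(f"{spare:.0%} spare area -> ~{blocks:,.0f} spare blocks")
# 8%  -> ~1,526 blocks (the "worst case" ~1,525 above)
# 12% -> ~2,289 blocks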