Hmmm, I've been trying to get an X25-E for a few years now, they just aren't available in SA. Is anyone willing to sell one at a good price? :D
It's been a while since the last update for the M4. We are spoiled little brats over here... :D
Actually, I bought the X25-E 32GB for $88. It only had 280GB of host writes on it when the post office dropped it off Thursday.
We call that a win where I'm from.
The "new" Agility 60 comes in the cheaper plastic case, and in my testing it scales terribly in software RAID with my older one: writes scale great, but reads don't. (There were only a couple in stock, but the 30GB models are still around.) My older Agility has only about 1.46TB of host writes, but an average P/E count of >1600 (which would be equivalent to around ~100,000GB of NAND writes). There are a couple of reasons for that, but mainly it's been abused in various un-Indilinx-friendly conditions -- Win7 with TRIM is the only way to go.
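The "~100,000GB" equivalence above is just average P/E count times raw NAND capacity. A quick sketch, assuming the 60GB-class Agility carries 64GiB of raw NAND (an assumption for illustration, not a spec-sheet figure):

```python
# Estimate total NAND writes from the average P/E count reported by SMART.
# raw_nand_gib=64 is an assumed raw capacity for a 60GB Indilinx drive.

def nand_writes_gib(avg_pe_count: float, raw_nand_gib: float) -> float:
    """Total data written to flash = average erase count x raw capacity."""
    return avg_pe_count * raw_nand_gib

estimate = nand_writes_gib(avg_pe_count=1600, raw_nand_gib=64)
print(f"~{estimate:,.0f} GiB of NAND writes")  # ~102,400 GiB, i.e. roughly 100,000GB
```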
On the other hand, my 120GB Vertex Turbo should arrive in the mail on Tuesday. I'm waiting to get it before buying another one, but after seeing the impressive results of the M225 > Vturbo, I couldn't pass up the opportunity to buy one new for $1/GB.
I bought a new X25-V cheap last weekend at a brick-and-mortar store, but my feeling is that endurance testing another one of those won't really say much. And I'm in the mood for destruction.
I do have a 510 120GB, and it would certainly put down high average numbers as well, but my plan is to use my laptop for endurance testing. I live in a tiny urban apartment, and I can just close the lid, stick it under the couch, and pull it out to check its progress (it's only SATA II with a C2D, but it has AHCI).
Besides the two 6Gbps controllers, I don't really see much out there that's different, but I want to do something. The Phison-controlled Torqx 2 might have a decent average speed under endurance testing as well (and it's certainly different), but I wanted some group consideration before jumping in. When do the new Samsung 6Gbps drives come out at retail? I think they're shipping in OEM laptops right now.
Maybe we can see if anyone is willing to donate an Intel X25-E to find out if SLC is really that much better. Or maybe start a separate thread for SLC drives?
They both use Intel 34nm NAND, but have different product numbers. RyderOCZ was kind enough to tell me which NAND they used:
New
JS29F32G08AAMDB
Old
JS29F64G08CAMDB
I'm not sure what those bolded numbers represent
The old one had higher correctable bit errors and much higher WA, but that was due in large part to my using it in sub-optimal conditions. I recently tried updating to the 1.6 FW to try to reduce those. I'm not sure what causes those SMART-reported bit errors, but the count seems excessive. I have an Excel spreadsheet of SMART attributes that converts the raw read/write sector counts to GiB of host reads/writes, etc. I was really surprised once I started looking at it in detail -- I used that drive to clone drives, installed random Linux distros on it, and then used it in a laptop with Vista/no TRIM for quite some time.
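The sector-to-GiB conversion that spreadsheet does is simple enough to sketch. Note the unit of the raw SMART value varies by vendor: 512-byte LBAs are assumed here, but some drives count in 32MiB units, so check your drive's documentation first.

```python
# Convert a SMART "LBAs written" raw value to GiB, assuming the raw value
# counts 512-byte sectors (vendor-specific; some controllers use 32MiB units).

SECTOR_BYTES = 512

def lbas_to_gib(raw_lbas: int, sector_bytes: int = SECTOR_BYTES) -> float:
    return raw_lbas * sector_bytes / 2**30

# Example: 2,147,483,648 sectors x 512 bytes = 1024 GiB (1 TiB)
print(lbas_to_gib(2_147_483_648))  # 1024.0
```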
If the throttling situation was sorted out, I'd pick up a 60GB SF2200 25nm drive in a heartbeat for endurance testing.
The Intel 311 20GB is a readily available, low-ish priced SLC drive if anyone is really intent on putting an SLC drive through its paces. It would probably be fast to die too (for an SLC device), considering how little NAND it has.
C300 Update
348.1TiB host writes, 1 MWI, 5872 raw wear, 2048/1 reallocations, 63.05MiB/sec, MD5 OK
SF-1200 nLTT Update
208.688TiB host writes, 151.406TiB NAND writes, 27 MWI, 2442.5 raw wear (equiv), wear range delta 3, 56.15MiB/sec, MD5 OK
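Those SF-1200 numbers make the write-amplification arithmetic easy to show: WA is NAND writes divided by host writes, and SandForce compression can push it below 1.0, as here.

```python
# Write amplification = NAND writes / host writes, using the SF-1200
# figures reported above (208.688 TiB host, 151.406 TiB NAND).

def write_amplification(nand_tib: float, host_tib: float) -> float:
    return nand_tib / host_tib

wa = write_amplification(nand_tib=151.406, host_tib=208.688)
print(f"WA = {wa:.3f}")  # ~0.726, i.e. compression wrote less to flash than the host sent
```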
The write speed is half that of the X25-Es, but that's still pretty high, especially when you consider average write speed relative to capacity. It's basically the X25-V of the SLC world, with a partially populated controller taking a big chunk out of performance. I just don't think it's possible to wear the drive out in any sort of reasonable time frame.
It would be killer to have a triplet of Larson Creeks in RAID 0... You'd have something like 600MB/s reads and 300MB/s writes in 60GB of inexhaustible awesomeness. That's a commitment though - it would take decades to wear them out (probably).
I think you are correct.
From post #1362 it looks like erase cycles get slower and programming gets faster as the P/E cycles move toward the endgame. Consistent with that, this chart from SF shows the controller overhead incurred as the P/E cycle count increases. I'd guess it would be the same for all SSDs that are good at reducing WA, so when the blocks with high wear are replaced, write speed should increase.
Attachment 119990
Nice idea with the RAID array, but maybe just one 20GB Larson Creek will be enough for this test.
'Hey everyone here (everyone but me) I think it would be great if someone here (anyone else but me) could test the endurance of anything I suggest.'
Sorry, hate to be the a$$hole here, but I had to get it off my chest. If I'm out of line, then I apologize and will take the punishment. :(
Anyways, thanks to all the "Testers" here and everyone else who has contributed and helped out tremendously. And Anvil for his awesome Utility. :up:
It takes a lot of time and effort to do all this, and I say thanks! :clap:
261.48 hours
164.5770 TiB written
40.95 MB/s
MD5 ok
05: 0
B1: 80
E7: 28%
E9: 115328
EA/F1: 169344
F2: 256
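A quick unit conversion on the SMART dump above, assuming the large E9 and EA/F1 raw values are GiB counters (a common convention on drives of this era, but an assumption; verify against your vendor's SMART documentation). The EA/F1 value converts to ~165.4 TiB, roughly in line with the 164.577 TiB written figure.

```python
# Convert the raw E9 and EA/F1 counters to TiB, assuming they are in GiB
# (vendor-specific assumption, not guaranteed for every drive).

def gib_to_tib(gib: float) -> float:
    return gib / 1024

ea_f1_tib = gib_to_tib(169344)  # ~165.4 TiB
e9_tib = gib_to_tib(115328)     # ~112.6 TiB
print(f"EA/F1 ~{ea_f1_tib:.2f} TiB, E9 ~{e9_tib:.2f} TiB")
```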
Well, if you were (like me) a student in the UK with potentially 50,000 GBP of debt from university and a 20% chance of being unemployed after finishing your degree, you would understand why I cannot test anything myself :)
One link for you to consider : http://www.telegraph.co.uk/education...60k-debts.html
Yes, the actual P/E cycles come out as a bell-curve distribution.
When a part is rated at 5000 P/E cycles, that rating covers the vast majority of devices at a stated ECC level and data recoverability.
The NAND is rated at 5000 P/E cycles for a given ECC level.
If you use more bit-error correction than spec'd, you get 'higher' P/E cycles.
If you use less bit-error correction than spec'd, you get 'lower' P/E cycles.
Note that the NAND doesn't just quit working; each cycle simply increases its raw bit error rate, and with it the probability of the NAND returning data that is uncorrectable by the controller's ECC/data recovery algorithms.
That being said, it is possible for an SSD to lose data on any P/E cycle prior to its NAND rating; the probability of that occurring is just very low.
Most of the same error-rate probability reasoning applies to HDDs, only their bit error rate progresses more linearly over time.
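The bell-curve idea above can be sketched with a quick Monte Carlo: sample per-block endurance from a normal distribution and see what fraction of blocks survives the 5000-cycle rating. The mean and spread below are made-up illustrative numbers, not measured NAND data.

```python
# Illustrative sketch of endurance as a bell-curve distribution.
# mu=7500 and sigma=1500 are invented parameters for demonstration only.
import random

random.seed(0)
RATED = 5000
samples = [random.gauss(mu=7500, sigma=1500) for _ in range(100_000)]
surviving = sum(s >= RATED for s in samples) / len(samples)
print(f"{surviving:.1%} of simulated blocks exceed the {RATED}-cycle rating")
```

With these parameters the rating sits about 1.7 sigma below the mean, so roughly 95% of simulated blocks exceed it, which is the sense in which a rating "covers the vast majority of devices."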
32Gbit/chip and 64Gbit/chip; both use 32Gbit dies.
The first is a single-die package with 1 CE, the second a dual-die package with 2 CEs.
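For what it's worth, the density difference is visible right in the part numbers. The sketch below assumes the digits between "29F" and "G08" encode total Gbit density, which is how these two parts line up; treat it as an informal reading, not an official decoder.

```python
# Informal reading of the density field in the two part numbers above.
# Assumes "29F<digits>G08" encodes total density in Gbit, which matches
# the 32Gbit single-die vs 64Gbit dual-die parts in this thread.
import re

def density_gbit(part_number: str) -> int:
    m = re.search(r"29F(\d+)G08", part_number)
    if not m:
        raise ValueError(f"unrecognized part number: {part_number}")
    return int(m.group(1))

print(density_gbit("JS29F32G08AAMDB"))  # 32
print(density_gbit("JS29F64G08CAMDB"))  # 64
```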
Or you could leave the test to the SSD engineers... :wave:
If you have low-level access to the drive, you can write your own basic firmware that has no wear leveling and records certain ECC correction information.
Then you can just hit logical flash block 0 with 100,000 P/E cycles (or up to the actual data failure point), then block 1, block 2, etc.
You can also pull raw bit error rate data versus PE cycles.
There are engineering flash testers on the market today that do this for you, but they're not within an enthusiast's budget.
However, we actually end up doing this testing with fully built SSDs, so that we can test the flash in extreme conditions (industrial temp conditions -40C to 85C, thermal cycling, thermal shock, voltage margining, EM interference, radiation bombardment..)
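The block-hammering procedure described above can be sketched in a few lines. The erase/program/read calls and the in-memory FakeFlash model below are hypothetical stand-ins for illustration, not a real flash driver API.

```python
# Sketch of cycling one block to failure, per the procedure above.
# The device interface (erase_block/program_block/read_block) is hypothetical.

def cycle_block_to_failure(dev, block: int, pattern: bytes, max_pe: int = 100_000) -> int:
    """Hammer one block with P/E cycles until the data fails to read back;
    return the cycle count reached."""
    for pe in range(1, max_pe + 1):
        dev.erase_block(block)
        dev.program_block(block, pattern)
        if dev.read_block(block) != pattern:
            return pe  # first read-back failure
    return max_pe  # survived the whole run

class FakeFlash:
    """In-memory stand-in whose blocks 'wear out' after a fixed endurance."""
    def __init__(self, endurance: int):
        self.endurance = endurance
        self.cycles = {}
        self.data = {}
    def erase_block(self, block):
        self.cycles[block] = self.cycles.get(block, 0) + 1
        self.data[block] = b""
    def program_block(self, block, pattern):
        # Past its endurance, the block silently corrupts a byte.
        if self.cycles.get(block, 0) > self.endurance:
            pattern = bytes([pattern[0] ^ 0xFF]) + pattern[1:]
        self.data[block] = pattern
    def read_block(self, block):
        return self.data[block]

dev = FakeFlash(endurance=3000)
print(cycle_block_to_failure(dev, block=0, pattern=b"\xA5" * 16))  # 3001
```

Real NAND fails probabilistically rather than at a hard threshold, which is why the engineering testers mentioned above also log raw bit error rate versus P/E cycles instead of a single pass/fail point.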
I'm surely not suggesting that striping three of those would be an effective test (rather, just fun to play with), but with every passing day I get less and less concerned about effective MLC lifespan. Even Indilinx controllers, which started out with a shaky track record, have become more and more effective with every firmware release. That's why I think it would be years before you could put a dent in a Larson Creek -- unless everything we've been told about SLC is wrong (and it could be wrong the other way -- 2x as many P/E cycles in practice).
No, you're right. It does get old really fast (like, months ago). Considering that an SSD for testing can be purchased new for $100, just about anyone who has internet access should be able to afford one by saving up their spare change for a few months, or skipping eating out or a movie once in a while.
404TiB. 4072 reallocated sectors. Looks like it's not dying any time soon :(
yeah, no kidding
I am also incredibly impressed with both the results and the willingness of the participants; it has taught me a lot about not needing to baby my SSDs nearly as much as I have been, lol. On a slightly OT note: does anyone know of any good methods of running drive maintenance on RAID-0 X25-Vs, or is pulling them out of the RAID config to run TRIM pretty much the only viable way to restore "factory fresh" running conditions?
m4 update:
511.3357 TiB
1666 hours
Avg speed 91.01 MiB/s.
AD gone from 77 to 58.
P/E 8964.
MD5 OK.
Still no reallocated sectors
Attachment 120008Attachment 120009
Kingston V+100
I'm still trying to figure out why it drops out, so the test is still halted.
311.23TB Host writes
Reallocated sectors : 6
MD5 OK
22234.2GB written (Last 7 days)
I short-stroked mine, but I use the array just for a couple of Steam games like Civ 5, Deus Ex, and New Vegas. The Intel controller is really robust and good at handling life without TRIM. If you have much free space on there at all, it shouldn't really get bad, but one option is to copy some very large files to the drives. The sequential writes will level everything off. You could copy over some digital videos or perhaps .ISO files, then delete them (but you'd know if your performance was in the toilet, so save this for a rainy day). If you are running your OS on the drive, it will get beaten up more without TRIM, and the X25-V is disadvantaged due to its size, but the Vs are pretty tough.
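A minimal sketch of that "copy big files, then delete them" trick: write one large file sequentially until the target is nearly full, fsync it, then delete it. The target path, chunk size, and free-space reserve below are illustrative choices, not recommendations; point it at the array you want to refresh.

```python
# Sequentially fill most of the free space on a drive, then clean up.
# CHUNK and RESERVE are illustrative values; adjust for your system.
import os
import shutil

CHUNK = 64 * 2**20    # 64 MiB sequential writes
RESERVE = 2 * 2**30   # leave 2 GiB free so the OS stays comfortable

def sequential_fill(target: str, reserve: int = RESERVE, chunk: int = CHUNK) -> int:
    """Fill free space on `target` with sequential writes; return bytes written."""
    path = os.path.join(target, "fill.tmp")
    buf = b"\0" * chunk
    written = 0
    try:
        with open(path, "wb") as f:
            while shutil.disk_usage(target).free > reserve + chunk:
                f.write(buf)
                written += chunk
            f.flush()
            os.fsync(f.fileno())
    finally:
        if os.path.exists(path):
            os.remove(path)  # without TRIM the freed LBAs stay "dirty",
                             # but the sequential layout still helps the GC
    return written
```

Note the delete at the end frees space at the filesystem level only; on a non-TRIM array the benefit comes from the large sequential writes tidying up the controller's block layout, not from the deletion itself.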