+1, and it's not a little worse performance but total turd performance :(
http://www.pcper.com/article.php?aid=669
Having to "reset" the drive periodically to get the performance the drive is rated at is not what you'd call ideal ;)
But that market is incomprehensibly small... a mainstream solution with good price:performance would sell much better. Without turning this into an economic theory flame thread, I think we can agree supply and demand will always stay in effect. There are a lot of different SSDs coming out, and the market is very small. There's almost a new SSD every week; price drops need to speed up.
Buying an SSD in general doesn't make sense at all; it'll be slightly outdated in a week or two. :rolleyes:
So what? Other SSDs don't have such problems. What's the big deal?
SSDs are evolving fast and if you're willing to pay the price (-> no jMicron crap) you get good performance.
And? The technology is evolving. I don't care if there'll be a new SSD with read/write speeds of 200/200 MB/s that is therefore twice as fast as the Mtron I've got. When do you (when you're a "normal" user) need such raw power? What I find most important is the access time, not the MB/s...
I'd guess all those who say SSDs aren't worth it yet haven't used a good SSD yet. It's plain and simple astonishing, believe me ;)
Yeah, any information/benches going against your beloved Intel X25-M will be questionable...
It seems you have never experienced issues with SSDs, so go ahead and buy one to see it yourself ;)
What are you talking about? This affects ALL SSDs, and Intel is the best among all of them. Re-read the article because your response makes no sense whatsoever.
Actually I only read the conclusion:
What did I miss? How does this affect ALL SSDs as you stated? Especially when you go for SLC, where write combining is not as severely needed as with MLC.
Quote:
It is likely that other manufacturers will employ similar write combining techniques in the future, and with those new devices may come similar real world slowdowns.
Everything dies. If someone here is getting a kick out of MTBF ratings on SSDs, they are delusional. MTBF ratings on mechanical hard drives are equally nonsensical at > 1 million hours. I somehow doubt my hard drives, be they HDD or SSD, are going to last 114 years of being on and used.
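Just to put that figure in context, here's a rough back-of-envelope conversion of an MTBF rating into years of continuous use (assuming the drive is powered on 24/7, which is the simplest reading of the rating; MTBF is really a statistical fleet figure, not a promise about any single drive):
Code:
# Rough back-of-envelope: convert an MTBF rating into years of 24/7 use.
mtbf_hours = 1_000_000
hours_per_year = 24 * 365
print(mtbf_hours / hours_per_year)  # ~114 years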
All I know is that right now the price:perf/size ratio is firmly in the traditional hard drive's favor, and I am endlessly pleased with my 15K drives. SSDs will probably make my next round of boot drive upgrades, but this one went mechanical again.
And for the reliability part:
http://www.anandtech.com/cpuchipsets...spx?i=3403&p=4
Quote:
SSD lifespans are usually quantified in the number of erase/program cycles a block can go through before it is unusable, as I mentioned earlier it's generally 10,000 cycles for MLC flash and 100,000 cycles for SLC. Neither of these numbers are particularly user friendly since only the SSD itself is aware of how many blocks it has programmed. Intel wanted to represent its SSD lifespan as a function of the amount of data written per day, so Intel met with a number of OEMs and collectively they came up with a target figure: 20GB per day. OEMs wanted assurances that a user could write 20GB of data per day to these drives and still have them last, guaranteed, for five years. Intel had no problems with that.
Intel went one step further and delivered 5x what the OEMs requested. Thus Intel will guarantee that you can write 100GB of data to one of its MLC SSDs every day, for the next five years, and your data will remain intact. The drives only ship with a 3 year warranty but I suspect that there'd be some recourse if you could prove that Intel's 100GB/day promise was false.
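For a sense of how much headroom that guarantee implies, here's a rough back-of-envelope sketch using only the figures from the quote (assuming the 10,000-cycle MLC number and ignoring write amplification and wear-levelling overhead, so it's an upper bound on paper, not a real-world figure):
Code:
# Back-of-envelope endurance check for a 160GB MLC drive.
# Assumes 10,000 erase/program cycles per cell and no write amplification.
capacity_gb = 160
cycles = 10_000
raw_endurance_gb = capacity_gb * cycles      # ~1,600,000 GB (~1.6 PB) of raw writes

guaranteed_gb = 100 * 365 * 5                # 100GB/day for 5 years = 182,500 GB
print(raw_endurance_gb / guaranteed_gb)      # ~8.8x headroom on paper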
And a lot of people write a lot less. Even when downloading quite a bit, I only write about 3-4GB a day on average on my drives.
It also scales with size. A 32GB SLC is 7TB/day, a 64GB SLC is 14TB/day. A 160GB MLC is 200GB/day. All for 5 years.
Dude come on. That comparison is laughable. There is a world of difference between zones of varying speed that remain static at that speed regardless of read/write action and passage of time, and a phenomenon that drastically reduces write speed over time and requires a complete write cycle of the drive area (precluding OS drive usage) to fix, and in some cases is irreversible.
I'll defend SSD life cycles and speed advantages to any who'll listen, but this is a huge issue and doesn't deserve to be cheapened by farcical comparisons.
How in the world does it not? Who would put their warranty above the average lifetime of a product?
Shintai: all that 100GB/day writes for 5 years stuff is not relevant at all because that is measuring the lifespan of the NAND CHIPS. What fails are the CONTROLLERS.
It does affect all SSDs. The cause is fragmentation of free space and all have the issue. There have been reports about this since forever, but mostly just single statements. Before this article there wasn't much data on this. I've seen numbers only once (for FusionIO, IIRC running under MFT).
Unlike with JMicron, it doesn't make your computer stall, so it's a much smaller issue. That's why it gets less attention.
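As an illustration of why fragmented free space hurts (the geometry here is assumed for the sake of the example, not taken from the article): with 512KB erase blocks and 4KB pages, writing new data into blocks whose free pages are scattered forces the controller to rewrite far more flash than the host actually sent:
Code:
# Illustrative write-amplification estimate with fragmented free space.
# Assumed geometry: 512KB erase blocks, 4KB pages (128 pages per block).
block_kb, page_kb = 512, 4

host_write_kb = 512                          # host wants to write 512KB (128 pages)
free_pages_per_block = 1                     # worst case: free pages scattered 1 per block
blocks_touched = (host_write_kb // page_kb) // free_pages_per_block   # 128 blocks
flash_written_kb = blocks_touched * block_kb # each block gets read, erased, rewritten
print(flash_written_kb / host_write_kb)      # write amplification ~128x in this worst case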
Common sense says you don't. Yet I don't see your point, because in general products live several times longer than the warranty period. Heck, we've still got a Sony hi-fi in the house that is older than I am and a ThinkPad that was built in 1998 and still runs fine.
A warranty is nothing but a bonus that depends on the company's decision, and therefore a shorter warranty period proves nothing.
Has anyone here had a USB stick fail? The German publisher Heise (c't and others) tested a 2GB USB stick. They wrote it nearly full and compared the checksums after 50 writes.
http://www.heise.de/ct/08/21/122/
Up to that date each cell had been written and erased 12,240 times, 23.5 TB of data had been written, and still the checksum was OK. Of course some cells might've died and the controller reallocated the data, but it still shows that 10,000 cycles is not the end. Furthermore, I'd guess that in USB sticks you find cheap-ass chips...
Too bad I didn't find any more info how the test went on.
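The figures in that test are at least self-consistent; a quick sanity check (assuming each pass wrote the stick nearly full, as described):
Code:
# Sanity check on the Heise figures: 23.5 TB total over 12,240 write passes.
total_tb = 23.5
passes = 12_240
gb_per_pass = total_tb * 1000 / passes
print(gb_per_pass)   # ~1.92 GB per pass, i.e. a 2GB stick written "nearly full" each time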
I've only heard of this issue in connection with Intel's SSDs and the FusionIO, which is why I'm surprised it should affect each and every SSD.
If you people won't take the time to read what you are commenting about, I kindly suggest you stop commenting here with irrelevant stuff about reliability and MTBF.
What are you talking about? Just READ the article Stargazer posted, and then comment. It's a good read as well. You can't deny the vast problem that is being uncovered here. And what is wrong with their testing?
I'm left with one question: what is the status of the Vertex under this fragmentation problem with the Indilinx controller?
Yes, I think things may take a dive if you fill them up, but I have everything I need for now on the array with about 20% used (4x64GB drives).
People should read before posting indeed.
We still don't know about the new controller, they say that in the full review they'll include some data regarding this issue.
Then read the whole article... why, and I mean why in the world, are you talking about something you don't understand first? On the first page there are two ATTO screenshots for lazy people; if you had seen them you'd have read the whole article :shakes:
Why would the controller fail? The controller is basically a CPU (ASIC) and nothing else. Plus, the amount of data going through the controller doesn't matter.
Controller to die first..lol...
And people talking about defragmenting SSDs. Great. Better run your memory defragmenters too!
4K random writes at over 1MB/s are extremely good, as measured here:
http://www.xtremesystems.org/forums/...d.php?t=167857
You can see for yourself that 7200rpm SATA drives would struggle with that even on a caching controller, and 10,000rpm and 15,000rpm SCSI/SAS drives can manage 3MB/s at those sizes.
If a Vertex can pull even 5MB/s with those settings, it's nothing short of flat-out amazing for that file size.
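For reference, converting those 4K throughput figures into I/O operations per second (assuming 4KiB transfers, nothing more) shows why numbers like these are so hard for mechanical drives to reach:
Code:
# Convert 4K random-write throughput (MiB/s) to IOPS, assuming 4KiB transfers.
def iops(mib_per_sec, transfer_kib=4):
    return mib_per_sec * 1024 / transfer_kib

print(iops(1))   # ~256 IOPS  (the 1MB/s figure above)
print(iops(3))   # ~768 IOPS  (the 15K SCSI/SAS figure on a caching controller)
print(iops(5))   # ~1280 IOPS (the hypothetical 5MB/s Vertex figure)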