Double Post
The big announcement that OCZ made makes me more confident that, contrary to what had been said in the past, firmware alone can fix the problem. It won't require Intel to intervene with new OROMs, or any of the other non-SF solutions.
When it releases for the Mushkin, it's going in the SandForce torture chamber.
If the new FW is really the cure-all solution, you would think every vendor would at least acknowledge its existence.
UPDATE
I finally broke the 200TB NAND writes on the Mushkin, but WRD seems to be spiking again due to more static data and less minFreeSpace.
Yes, the greys are initial bad blocks (B8) and the reds are the program (C3) and erase (C4) failures. I don't think it's as simple as High or Low.
And at the top under the white lettering "BAD BLOCK LIST" there is a legend of what the letters mean in grey, at least for the second part of the initials.
It works under Windows. Just double-click "SmartViewer_2_21.exe" and a DOS-style window appears with instructions.
m4
708.2702 TiB
2545 hours
Avg speed 89.16 MiB/s.
AD gone from 207 to 203.
P/E 12287.
Value 01 (raw read error rate) has changed from 4 to 5.
MD5 OK.
Still no reallocated sectors
Attachment 121431Attachment 121432
I've updated to Beta v9 of ASU.
Kingston V+100:
Will be online again from Monday.
Wow, 4x the lifespan. Now I wouldn't even know where to start guessing when the M4 will start failing. 850-900TB sounds good for a start, maybe 1PB!
Good luck with the other drives there BAT. The G1 drive will probably suffer what I had with my G2's, not being able to TRIM properly in RAID. That SLC drive though, hope you've got a PC that will be running for the next few years to test that one :)
Bluestang,
I just forgot that the Indilinx wiper and smartviewer don't work with RST on my ICH8m C2Duo Lappy.
I have 66 bad blocks on the older Agility, but they're all initial bad blocks.
EDIT
That Mtron might have really high WA/questionable wear leveling, so while it might last a long, long time, it might not be optimized in the same way an X25-E is.
That's why you have to do the test... if they lasted as long as the manufacturer said, there wouldn't really be a need to endurance test them.
Very curious to see how they work out. I know the G1 Intel drives aren't quite as good as the G2 (without trim), but should still be very interesting as well.
I've been thinking of another drive to test with, so I'm taking suggestions.
I'll probably put the Mtron in my folding rig where it can work for as long as it takes.
I think the write speed will suffer, how it will affect the WA I don't know. We'll just have to wait and see. :)
According to their spec sheet they claim 50GB a day for 140 years. I wonder what the avg write speed will be.
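Out of curiosity, that spec claim implies a concrete total-write figure; a quick back-of-envelope in Python (using the spec's 50GB/day and 140 years, decimal units):

```python
# Back-of-envelope check of the spec-sheet endurance claim.
# Figures are the ones quoted above: 50 GB/day for 140 years.
gb_per_day = 50
years = 140
total_tb = gb_per_day * 365 * years / 1000  # decimal TB
print(f"Implied endurance: {total_tb:.0f} TB written")
```

So the claim works out to roughly 2.5PB of host writes, which would put it well past anything in this thread so far.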
The lack of TRIM will indeed make the G1 an interesting drive.
I expect it will settle at 40-50MiB/s.
I'm not sure about the Mtron, iirc it was generally pretty good at QD1.
@Christopher
What about the Vertex Plus? The 120GB is cheap and the Indilinx was/is a pretty good sequential performer.
(not sure about the current firmware though)
or, you can pick up another Mushkin and put 10GB (or more) of extra over-provisioning on it; it might make a lot of difference vs the standard one.
(it could be any drive of course but over-provisioning is interesting and none of the current drives are over-provisioned)
or, you can get one of the synchronous based drives and compare vs the async one I've got.
a lot of options, I'll be looking for another drive within a month or two, and there's always the new Intel :)
The spec sheet says 16500 IOPS 4K sequential write and 12000 IOPS 4K random write.
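Those IOPS figures translate into throughput if you assume each I/O is exactly 4KiB (an assumption on my part, the spec sheet doesn't spell it out):

```python
# Rough conversion of the quoted 4K IOPS figures into throughput,
# assuming each I/O is exactly 4 KiB (4096 bytes).
def iops_to_mib_s(iops, io_bytes=4096):
    return iops * io_bytes / (1024 ** 2)

print(f"4K seq write: {iops_to_mib_s(16500):.1f} MiB/s")   # ~64.5
print(f"4K rnd write: {iops_to_mib_s(12000):.1f} MiB/s")   # ~46.9
```

So even the sequential figure only implies around 64MiB/s at 4K, which lines up with it not being a speed demon.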
I was even mulling over an overprovisioned 470...
The new vertex plus is not spectacular. The way I understand it, a vertex plus 60 is equivalent to a 30GB Vertex Original flavor.
The 60GB Vertex Plus weighs in at 180MB/s reads and 90MB/s writes. The 120GB would be closer to a 60GB first-gen Indilinx in performance.
Kingston SSDNow 40GB (X25-V)
406.67TB Host writes
Reallocated sectors : 12
MD5 OK
35.38MiB/s on avg (~22 hours)
--
Corsair Force 3 120GB
01 88/50 (Raw read error rate)
05 2 (Retired Block count)
B1 67 (Wear range delta)
E6 100 (Life curve status)
E7 48 (SSD Life left)
E9 201786 (Raw writes)
F1 268723 (Host writes)
106.19MiB/s on avg (~22 hours)
power on hours : 793
edit:
@Christopher
I had a look at it, that's why I suggested the 120GB.
An over-provisioned drive would be great, well, you've still got some time to think about it.
Mushkin Chronos Deluxe 60 Update, Day 28
05 2
Retired Block Count
B1 24
Wear Range Delta
F1 265051
Host Writes
E9 204440
NAND Writes
E6 100
Life Curve
E7 10
Life Left
127.61MB/s Avg
Intel RST drivers, Asus M4g-z
642 Hours Work (24hrs since the last update)
Time 26 days 18 hours
6 GiB Minimum Free Space
Attachment 121459
The 470 has an updated version. I'm not positive, but I think it uses the new Samsung 2xnm NAND, so once the old versions go out of stock they'll be hard to find. If a drive should be chosen to test with overprovisioning, I was thinking it should be the same model of another test participant.
I have one of those new SB Celeron Dual Cores on the way to set up an "under the couch rig". I may put the Mushkin on that, or keep it on my main system and run another drive on that eventually.
Still not so much as a peep on the new SF firmware.
Hey guys, after reading a :banana::banana::banana::banana: load of pages I have come to one conclusion: the Crucial M4 is one of the most reliable drives of the lot, if not the most reliable?
We are about to build a new media server with 16 x 256GB SSDs, and now you guys have got me thinking on which drives to select.
516TB. 6556 reallocated sectors.
Speed used to peak at 46.5MB/s when the SSD was brand new. Then the speed started rising and it peaked at around 48.5. Now, it is 41.5 max. Really slowing down...
1_Hz,
Are you using Beta9, or are you still on Beta8?
If the 320 keeps falling in speed, it might last forever if it drops down to 1MB/s... maybe it's Intel's revenge.
Well, I'm not really sure what your story is, but if it's a media server, you'd best be served by a non-SF drive. I don't really see how you could go wrong with an M4 (or other Marvell-controlled drives like the Intel 510/C300/Plextor M2/Performance3). Putting aside the reliability issues associated with SandForce drives, already-compressed media files will be slower on SF drives than compressible/non-random data. The penalty is most severe for asynchronous NAND drives (Corsair Force 3, Patriot Pyro, OCZ Agility 3). It's less severe for synchronous NAND SF drives (OCZ Vertex 3, Corsair Force GT). The Toggle NAND SF2200 drives are the best in my opinion, but if you're really serious about putting 16 SATA III SSDs in a server, the Crucial M4 is pretty damn good. And they make a reasonably priced 512GB drive, so maybe just get 8 of those and call it a day. I'm not really sure why a media server needs 16 SSDs (some regular-ass HDDs would probably give you the space and sequential speeds for a media server, but whatever -- it's probably not a Windows Home Server).
Christopher,
Thank you for that reply, it cleared a lot of things up :)
Well, it's for a content server for a few hundred users at any given time, and they want to run 720p content streamed.
Seems 8 x 512GB RAID 5 on an LSI 9265-8i card + BBU with 2 hot spares will work a dream. The only reason we are going with RAID 5 is that the LSI card's RAID 5 performance is almost on par with RAID 10, based on a few reviews, specifically this: http://thessdreview.com/our-reviews/...ron-c300-ssds/
If you think I can better that let me know, open to suggestions.
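For what it's worth, a quick sanity check on that layout. The per-stream bitrate and concurrent-user count below are my own ballpark assumptions (4Mbit/s per 720p stream, 300 simultaneous viewers), not figures from your post, and I'm assuming all 8 drives are RAID 5 members with the hot spares counted separately:

```python
# Sanity check on an 8 x 512GB RAID 5 array serving 720p streams.
# ASSUMPTIONS: all 8 drives are array members (hot spares extra),
# ~4 Mbit/s per 720p stream, ~300 concurrent streams.
drives, size_gb = 8, 512
usable_tb = (drives - 1) * size_gb / 1000   # RAID 5 loses one drive to parity

streams, mbit_per_stream = 300, 4
read_mb_s = streams * mbit_per_stream / 8   # Mbit/s -> MB/s aggregate

print(f"usable: {usable_tb:.2f} TB, aggregate read: {read_mb_s:.0f} MB/s")
```

Around 3.5TB usable and only ~150MB/s of aggregate read under those assumptions, so the array would be loafing; capacity and reliability matter more than raw speed here.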
That's a great review :)
You will need FastPath to replicate those results, and if you have some money for enterprise you can get some amazing results with SLC drives, a la the SanDisk Lightning series. With four drives, around 289,000 4K random IOPS is possible in RAID 5, iirc.
If it turns out that you need the speed more than capacity, you can always over-provision 512GB drives by a metric f**k tonne and double the endurance. I'm certainly not qualified to speak on these matters, but most of the accesses in a media server should be reads -- possibly 90% reads and 10% writes? Endurance may not be a huge issue, but it is something to think about. Intel's new 700 series enterprise drives using HET-MLC NAND (and a substantial amount of over-provisioning too) are sort of the middle ground [between an M4 and higher-end SLC solutions], but the M4 512 is roughly the same price as the 100GB Intel 710. I say that if you can determine how much storage you need and what kind/how many writes per day, and find that a high-capacity MLC drive with some extra provisioning could be cost effective, then that's the way to go. If you only needed 300GB per 512GB drive, that extra space could double (or possibly more than double) endurance by itself and help keep the drive performance from degrading without TRIM. If your server ends up being super write-heavy, an enterprise solution might be what you need, especially if the writes are random and/or small.
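To make the "OP can double endurance" intuition concrete, here's a toy model: host TBW is roughly NAND capacity times P/E cycles divided by write amplification, and heavy OP mostly works by cutting WA. The 3000 P/E figure and the WA values below are illustrative guesses, not measured values from this thread:

```python
# Toy endurance model: TBW ~= NAND capacity * P/E cycles / write amplification.
# ASSUMPTIONS: 3000 P/E cycles for 2x-nm MLC, and that heavy OP roughly
# halves write amplification -- illustrative numbers only.
def tbw(nand_gb, pe_cycles, write_amp):
    return nand_gb * pe_cycles / write_amp / 1000  # GB -> TB of host writes

for wa in (3.0, 1.5):  # stock vs heavily over-provisioned
    print(f"WA {wa}: ~{tbw(512, 3000, wa):.0f} TB host writes")
```

Halving WA doubles the host-write figure in this model, which is the whole argument for giving a write-heavy drive extra spare area.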
Yeah, writes will be very minimal, mostly reads after the initial write of the data. The data will be completely rewritten when new content becomes available every 90 days. So it will be completely rewritten 4-5 times a year, hardly anything.
It is going to be accessed by schools streaming content around the country.
The over provisioning is something I will look into but how would one go about calculating the benefits of say 412GB out of the 512GB?
tbh I do not think you would need the over-provisioning for the usage model you describe.
Over-provisioning might be more necessary just for keeping performance consistent over time. 20 to 28 percent is fairly normal for an enterprise drive, so that will give you more endurance and better steady-state performance to boot. The 512GB would really be a 480GB, as the Crucial uses about 7% spare area. So really anything below that as additional area would just be gravy. I can't really say what kind of performance degradation happens to 8 drives together over time, but I imagine over the course of a year it could get pretty messy, and just going to 14% OP could stop quite a bit of it from happening. That's not my department though, perhaps CT could elaborate.
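To put numbers on the 412-out-of-512 question: one common convention is OP% = (raw NAND - user capacity) / user capacity. Assuming 512GiB of raw NAND behind a nominal 512GB drive (which is where the ~7% built-in spare comes from), a quick sketch:

```python
# Over-provisioning as commonly expressed: (raw - user) / user.
# ASSUMPTION: 512 GiB of raw NAND behind a nominally 512 GB drive,
# which is the usual source of the ~7% built-in spare area.
raw_gb = 512 * 1.073741824      # 512 GiB in decimal GB (~549.8)
for user_gb in (512, 412):
    op = (raw_gb - user_gb) / user_gb * 100
    print(f"{user_gb} GB exposed -> {op:.1f}% total OP")
```

So partitioning down to 412GB takes the drive from roughly 7% total OP to roughly 33%, which is right in the enterprise-drive range mentioned above.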
The whole OP issue is pretty interesting, and that's why I wanted to try a second endurance-testing drive set up OPed. I believe much of the additional TBW rating of the new 710 drives is due to the heavy OP, so it stands to reason that if I were to get a 470 and added some extra OP, I might be able to add 50 percent to the lifetime.