Yeah, well at 3:00 AM you get me, and not necessarily a sober me at that.
johnw,
Can you please explain why? :) Not here to start a heated discussion but would be great to hear your side :)
Kingston SSDNow 40GB (X25-V)
407.94TB Host writes
Reallocated sectors : 12
MD5 OK
34.33MiB/s on avg (~34 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 69 (Wear range delta)
E6 100 (Life curve status)
E7 47 (SSD Life left)
E9 205009 (Raw writes)
F1 273020 (Host writes)
106.19MiB/s on avg (~22 hours)
power on hours : 805
Wear Range delta is still increasing!
I'll consider giving it a break for some hours; last time that did no good, though.
(won't happen until it's 2 days or more or when the new firmware is released)
Yeah, in essence that's what the Intel drives do (increase random write endurance and speed).
Over-provisioning can still make a difference, as it's hard to tell whether (some of) the writes are converted to random writes even if they are sequential by nature.
What impact over-provisioning has on other controllers is more of a mystery.
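To put rough numbers on it, here's a back-of-the-envelope sketch of how the OP percentage is usually figured (the capacities are made-up examples, not from any of the drives in this thread):

```python
# Back-of-the-envelope sketch of how over-provisioning is usually figured.
# The capacities below are made-up examples, not measurements from any drive here.

def op_percent(raw_nand_gib, user_capacity_gb):
    """Spare NAND as a percentage of the user-visible capacity."""
    user_gib = user_capacity_gb * 1000**3 / 1024**3   # decimal GB -> GiB
    return (raw_nand_gib - user_gib) / user_gib * 100

# 64 GiB of NAND exposed as 60 GB to the host:
print(f"{op_percent(64, 60):.1f}% OP")   # ~14.5%
# the same NAND exposed as 64 GB (the "free" ~7% from the GB/GiB difference):
print(f"{op_percent(64, 64):.1f}% OP")   # ~7.4%
```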
Funny you guys are talking about SSDs in a server. I need to build a new server in the next few to six months (waiting on the new LGA-2011 Xeons) and also want to use SSDs. It will be a SQL 2008 R2 DB server with a minimum of 6 SSDs total in 3 separate RAID 0 arrays, plus a couple of enterprise SAS HDDs for backups. DB size could be anywhere from 500MB to 2GB with up to 10 concurrent users. I was thinking the M4s or the Intel 520 (depending on the controller used).
It is off topic for this thread. Why not start a thread with your question? Also, it would be a good idea to put in the first post of the thread as much information as possible about your expected usage of the server (budget, capacity, peak and average load, access patterns, needed reliability and uptime, server hardware, network connection, OS, software, etc.)
M225->Vertex Turbo 64GB Update:
541.24 TiB (595.10 TB) total
1475.04 hours
10966 Raw Wear
118.74 MB/s avg for the last 15.98 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 10.
(1=Bank 6/Block 2406; 2=Bank 3/Block 3925; 3=Bank 0/Block 1766; 4=Bank 0/Block 829; 5=Bank 4/Block 3191; 6=Bank 7/Block 937; 7=Bank 7/Block 1980; 8=Bank 7/Block 442; 9=Bank 7/Block 700; 10=Bank 2/Block 1066)
Attachment 121466
I will stick with the C300 as the most endurant for now. Maybe something will beat it, but for now (seeing as the SF drives can't even write a bit before dying out) I bet it will stay on top.
Sober me says, "I think it's too early to make much of the SF test performance to date."
The OCZ Everest product slides and press release (posted on AnandTech and a few other places) today take a dig at SandForce compression technology, which is hilarious, but I'm actually stoked about the Octane drives.
SF drives are still pretty good though, and if some new FW can magically fix most of the problems with them -- then great. We need more drives and controllers on the market, and if OCZ can come out of the gate swinging with their own controller, then that's even better. Furthermore, if OCZ can get them out at a competitive price it should have a positive effect on the market.
And if Intel's Cherryville and the new OCZ drives come out in early November as they are said to, I'm going shopping... unless they both get marked up way over MSRP, or end up being a relatively poor value next to the incumbent SATA III drives.
Today's update:
m4
716.7737 TiB
2572 hours
Avg speed 89.30 MiB/s.
AD gone from 203 to 198.
P/E 12427.
Value 01 (raw read error rate) has changed from 5 to 7.
MD5 OK.
Still no reallocated sectors
Attachment 121474, Attachment 121475
Kingston V+100:
Will be online again from Monday.
Kingston SSDNow 40GB (X25-V)
409.37TB Host writes
Reallocated sectors : 12
MD5 OK
33.79MiB/s on avg (~47 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 71 (Wear range delta)
E6 100 (Life curve status)
E7 46 (SSD Life left)
E9 208592 (Raw writes)
F1 277791 (Host writes)
106.17MiB/s on avg (~46 hours)
power on hours : 819
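For anyone who wants to pull the same attributes on their own drive, a quick sketch using smartmontools (assuming smartctl is installed and the drive shows up as /dev/sda; attribute names and which IDs exist vary by controller and firmware):

```python
# Rough sketch: pulling the SMART attributes quoted above (05, B1, E6, E7, E9, F1)
# with smartmontools. Assumes smartctl is installed and the drive is /dev/sda;
# attribute names and which IDs exist vary by controller and firmware.
import subprocess

WANTED = {"5", "177", "230", "231", "233", "241"}   # 05, B1, E6, E7, E9, F1 in decimal

out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = line.split()
    if fields and fields[0] in WANTED:
        attr_id, name, value, raw = fields[0], fields[1], fields[3], fields[-1]
        print(f"{int(attr_id):02X} {name}: value={value} raw={raw}")
```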
Mushkin Chronos Deluxe 60 Update, Day 29
05 2 (Retired Block Count)
B1 23 (Wear Range Delta)
F1 274383 (Host Writes)
E9 211645 (NAND Writes; rough WA math in the sketch below)
E6 100 (Life Curve)
E7 10 (Life Left)
127.45 MB/s on avg
MSAHCI drivers, Asus M4g-z
663 hours of work
Time: 27 days 15 hours
6 GiB minimum free space
Attachment 121476
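From the F1 (Host Writes) and E9 (NAND Writes) raws above you can get a rough write-amplification figure; a quick sketch, assuming both raw values are counted in GiB as SandForce reports them:

```python
# Rough write-amplification estimate from the SandForce SMART raw values above.
# Assumes the E9 and F1 raw values are both counted in GiB, as SF reports them.
host_writes_gib = 274383   # F1 (Host Writes)
nand_writes_gib = 211645   # E9 (NAND Writes)

wa = nand_writes_gib / host_writes_gib
print(f"Write amplification ~ {wa:.2f}")   # ~0.77, i.e. compression saves NAND writes
```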
Might be a reason for that :) There is a bit of posturing/jockeying going on here.
Quote:
The OCZ Everest product slides and press release (posted on AnandTech and a few other places) today take a dig at SandForce compression technology,
Another thing they are quick to point out is device latency, whereas they have completely ignored that with the SF drives. SF latency tends to be very high, but nary a word from them on that, until now!
With respect to latency and access times, there is a pretty comprehensive 120GB vs 240GB drive comparison at TechReport. The SF drives did well in their tests. It's an analysis of SandForce, Intel, and Crucial drives at 120GB and 240GB on their two trace-based tests and Iometer. It includes access times expressed in various ways (they use service time). It's primarily concerned with how each drive performs relative to its higher-capacity version, but it's worth checking out.
http://techreport.com/articles.x/21843/7
Well, offhand I would comment on the parameters of the trace used, including the compressibility of the data, in which SF will look great. Also, the intentionally high loadings set up by the tester (by his own admission): he did a two-week trace and intentionally loaded it up in the first place.
Then he runs it back fast, creating even higher loadings in my opinion (well, for sure, you're running two weeks of stuff in a short time).
So unrealistic would be my initial impression. Same methodology as Anand.
I will read further shortly.
http://www.behardware.com/articles/8...l-510-320.html
Select the correct test from the list above the chart; it will change the results so you can see them.
http://i517.photobucket.com/albums/u...ssbledata2.png
Here is another one:
http://i517.photobucket.com/albums/u...essbledata.png
Here is another set of tests that I find telling:
http://www.techau.tv/blog/ssd-shaked...vs-crucial-m4/
Look at the access time.
Latency as follows:
SF: 0.284
M4: 0.076
So... seriously off topic here, but at low QD the SF drives are not great with latency, and normal people will be at low QD almost constantly. Just brain candy... latency is measured as a function of 4K@QD1, and the M4 is much, much better, not only with incompressible data but with compressible as well.
Interesting that there are very few tests with low-QD 4K on SF drives out there. I mean very, very few, at least published, of course. Even Anand will only test 4K @ QD3, which is ridiculous.
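If anyone wants to reproduce that kind of number at home, here's a rough sketch of the workload being talked about, run through fio (assuming fio is installed; file name, size and runtime are placeholders):

```python
# Rough sketch of the workload being discussed: 4KiB random reads at a true QD of 1,
# run through fio. Assumes fio is installed; the test file name, size and runtime
# are placeholders. Just an illustration, not a substitute for AS-SSD or IOMeter.
import subprocess

cmd = [
    "fio", "--name=4k-qd1-read",
    "--filename=testfile", "--size=1G",
    "--rw=randread", "--bs=4k",
    "--iodepth=1", "--numjobs=1",   # one worker, one outstanding IO = real QD1
    "--direct=1",                   # bypass the OS cache so the SSD is actually hit
    "--runtime=30", "--time_based",
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
# The "clat" (completion latency) figures in the output are the per-IO access
# times being argued about above.
```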
I still am getting crashes on deletes with Beta9. I'm not really sure why, but it did take about 27hrs to crash. I'm setting up the new endurance testing rig tomorrow afternoon, so the Mushkin will be offline for a short time.
CT,
That's a good point.
Tom's Hardware does Iometer at QD1, and their comprehensive M4 review at every capacity point shows how super low the average and max response times are. The C300 occasionally had very high max response times, but the 510 and M4 seem to have that sewn up. At least Anand has the light and heavy trace-based tests, but the avg QD for the light one is still 2+. I find my 510 120GB to be a lot faster in practice than you would assume from looking at its results compared to 2281s, but the M4 is even better in some ways.
On a side note, it's absurd that the 120GB 510 is still $280 -- for that kind of money you could get an M4 120GB, 80 supreme tacos AND a large Mt. Dew.
Any new chart updates or C300 updates, Vapor? Thanks.
@ Anvil. So idle time did not make a difference to B1? How about deleting and then reinstating the static data to see what happens then?
Attachment 121492
Ao1,
Yes, I'll do that when the new firmware is ready. I haven't decided whether to do a Secure Erase though; possibly just a quick format.
That is AS-SSD access time, which is measured with 512B IOs, not 4KiB. The only SSDs that do really well with 512B IOs are Intel and Samsung. The Crucials do well with 512B read access time, but much worse for 512B writes than Intel and Samsung. It looks like Everest will join Intel and Samsung. But I'm not sure how important 512B access time is. If you look at IO traces (e.g., Ao1 has posted some), 512B IOs are quite rare with Windows 7.
Much more important than 512B access time is 4KiB access time. The Crucials are a bit ahead of the V3s for 4KiB reads, but the V3s are a bit ahead of the Crucials for 4KiB writes (even incompressible data). It will be interesting to see where the Everest SSDs fit in for 4KiB access times. My guess is that they will be similar to Intel 320s.
You have to be careful with Tom's IOMeter data. Some of the data in their reviews is labeled as QD1, but it really is not (for example, they give 100+ MB/s 4KB QD1 reads, which is absurd). I think the problem is that they set IOMeter for QD1 but also start up 4 or 8 or more worker threads. Each worker thread may only let one IO queue up, but since the worker threads are submitting IOs in parallel, the queue fills up higher than 1 anyway. I think the only thing that should be labeled QD1 is when there is only one worker submitting IOs, and it limits its queue depth to 1.
I don't think they have realized how bad/wrong those IOMeter tests are.
IIRC it was the workstation, web, and server tests that were the worst.
They should update those tests to show the actual number of outstanding IOs (workers * outstanding IOs per worker).
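To put a number on it, a trivial sketch (the worker counts are just examples):

```python
# Trivial sketch of the point above: the queue depth IOMeter actually generates
# is workers * outstanding IOs per worker, not the "QD1" printed in the label.
def effective_qd(workers, outstanding_per_worker=1):
    return workers * outstanding_per_worker

for workers in (1, 4, 8):
    print(f"{workers} worker(s) at 'QD1' -> effective QD {effective_qd(workers)}")
# Only a single worker limited to one outstanding IO is really QD1; 4 or 8 workers
# each at "QD1" look like QD4/QD8 to the drive.
```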
M225->Vertex Turbo 64GB Update: Milestone Reached...
549.07 TiB (603.71 TB) total
1492.56 hours
11109 Raw Wear
117.77 MB/s avg for the last 19.25 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 10.
(1=Bank 6/Block 2406; 2=Bank 3/Block 3925; 3=Bank 0/Block 1766; 4=Bank 0/Block 829; 5=Bank 4/Block 3191; 6=Bank 7/Block 937; 7=Bank 7/Block 1980; 8=Bank 7/Block 442; 9=Bank 7/Block 700; 10=Bank 2/Block 1066)
Attachment 121500
Ao1,
I've changed the static data around 3 times on the Mushkin, and it didn't seem to matter. I didn't secure erase, but WRD didn't seem to be affected.