What are people expecting the outcome between the C300 and the M4?
Should I open my C300 and ensure it's 34nm? I know there's been no controversy, but 34nm supply has to dry up some time....
Last edited by deathman20; 06-29-2011 at 05:41 PM.
-=The Gamer=-
MSI Z68A-GD65 (G3) | i5 2500k @ 4.5Ghz | 1.3875V | 28C Idle / 65C Load (LinX)
8Gig G.Skill Ripjaw PC3-12800 9-9-9-24 @ 1600Mhz w/ 1.5V | TR Ultra eXtreme 120 w/ 2 Fans
Sapphire 7950 VaporX 1150/1500 w/ 1.2V/1.5V | 32C Idle / 64C Load | 2x 128Gig Crucial M4 SSD's
BitFenix Shinobi Window Case | SilverStone DA750 | Dell 2405FPW 24" Screen
-=The Server=-
Synology DS1511+ | Dual Core 1.8Ghz CPU | 30C Idle / 38C Load
3 Gig PC2-6400 | 3x Samsung F4 2TB Raid5 | 2x Samsung F4 2TB
Heat
Just saying hello for now.
I've been lurking and reading the thread the last few weeks.
I finally got the registration button to work on the forum.
I bought my first SSD about the same time the testing started.
Went with the Intel 320 120GB.
The longer the testing goes the more I like it.
Enjoying the thread immensely, and it looks like there's more fun to come.
wow.. I've had my X25-V drives for closing in on 15 months now. Mine are reporting 8142 and 8138 hours power-on time, and 0.99 and 1.09TB writes, which is probably about 1/10 of what I thought I had done on them so far. They have an MWI of 98, with available reserved space at 100 and a threshold of 10.
Current System:
eVGA 680i SLi "A2" P30 BIOS
intel Core 2 Quad Q6600 (currently at stock)
OCZ ReaperX 4GB DDR2 1000 (running at DDR2 800 Speeds with cas4)
320GB Seagate 7200.10
XFX 8800GT XXX 512MB (stock clocks)
auzentech X-Fi Prelude
PC Power and Cooling Silencer 750 Quad Copper
Win XP Pro
Thanks for the warm welcome
Before we start the Crucial test we need to agree on the config.
- How much of the SSD should be filled with static data? 40 vs 64 GB
- What parameters should we use in Anvil's app?
- How much random vs. sequential?
Anything I missed?
1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
3: Asus U31JG - X25-M G2 160GB
I'm expecting them to exceed the 72TBW guarantee that Crucial has put on them, hopefully they'll get up there with the Intels.
As there is more NAND, they should match the Intels, but as we know, the controller can make the difference.
I'm pretty sure there is 34nm NAND in it, but maybe we should all open the drives that enter the endurance test?
Appreciated
I'm not shocked at all; I've got plenty of SSDs that are low on writes.
As long as it's used for normal tasks, an SSD just doesn't write as much as one thinks.
My default setup has the pagefile, System Restore, and hibernation turned off as well; these things will make a difference.
But, as we can all see, you shouldn't really worry.
--
119.49TB Host writes
MWI 34
Still at 6 reallocated sectors.
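The extrapolation implied by those numbers can be sketched in a few lines of Python. This is a rough projection only: it assumes MWI falls linearly with host writes, which is not guaranteed, and the function name is my own.

```python
def projected_writes_at_mwi_zero(host_writes_tb: float, mwi_now: int, mwi_start: int = 100) -> float:
    """Linearly extrapolate total host writes at MWI exhaustion (MWI = 0).

    Assumes each MWI point consumed corresponds to a fixed amount of writes.
    """
    consumed = mwi_start - mwi_now
    if consumed <= 0:
        raise ValueError("MWI has not decreased yet; nothing to extrapolate")
    return host_writes_tb * mwi_start / consumed

# 119.49TB written while MWI dropped from 100 to 34 (66 points consumed)
print(round(projected_writes_at_mwi_zero(119.49, 34), 1))  # prints 181.0 (TB at MWI = 0)
```

So on a straight-line basis this drive would pass roughly 181TB of host writes before the wear indicator bottoms out, well past the rated figure.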
I mentioned in my post that I put a ~40GB static file on the SSD. To be precise, it is 41,992,617,078 bytes (I imagine that is a typical amount of static data for a 64GB SSD). Anvil's app and its data are on the SSD as well. All settings in Anvil's app are the defaults, except that I checked the box for keeping running totals of GB written (an option just added yesterday).
For reference, the md5sum of the 42GB file is: 0d1c4ec44d9f4ece86e907ab479da280
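For anyone else setting up a static file, a checksum like johnw's can be computed without loading the whole 42GB into memory by hashing in chunks. A minimal Python sketch (the function name is my own):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so even a ~42GB file needs almost no RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Re-running this periodically against the recorded digest is a cheap way to verify the static data hasn't silently corrupted during the endurance run.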
Last edited by johnw; 06-30-2011 at 05:47 AM.
Now that we have more drives joining in, who will track all the data?
This chart can go into the negative for wear-out, but a negative MWI value can only be estimated from the average writes per MWI point to date.
It's also hard to see all the data when the chart is small; I had to take out all the hard numbers as they were too small to read. The Y axis only represents TB for One_Hertz. With more drives it will get a bit harder, but it would be good if all drives could stay on one chart.
Alright, got a file ready for myself, weighing in at 42,022,123,868 bytes. C300 64GB should start tomorrow so long as UPS sticks to their delivery date.
I'm also going to test a SF-1200 drive now that the prospect of no-LTT has emerged (and if anyone wants to test a 25nm vs. my 34nm, let me know...easier to arrange testing and setup in pairs!). With a Sandforce back on the scene, I wanted to examine the compression settings in Anvil's app and see if any were suited to mimic 'real' data. With the discovery of the 233 SMART value, we can now see NAND writes in addition to Host writes, so if we can also write 'real' data we can kill two birds with one stone: see how long a drive lasts with 'real' use and how much the NAND can survive.
So what did I do?
First, I took two of my drives, C: and D:, which consist of OS and applications (C:) and documents (D:; .jpg, .png, .dng, and .xlsx probably make up 95% of the data on it), and froze them into separate single-file, zero-compression .rar archives. I then took those two .rar files (renamed to .r files, since WinRAR wasn't too happy RARing a single .rar file) and ran them through six different compression settings: WinRAR Fastest, Normal, and Best RAR, and 7-zip Fastest, Normal, and Ultra LZMA. I then normalized the output file sizes.
Doing this created two 'compression curves' showing how my real data responds to various levels of compression. My thinking was that if any of Anvil's data-compressibility settings had similarly shaped and similarly sized (after normalization) outputs, it would be a good candidate for mimicking real data, allowing the use of 'real' data in SF testing. Real data != 'real' data; 'real' data is just the best attempt to generate gobs of data that walk, talk, and act like real data. A great candidate would be a generated data set whose compression curve sits between the two real-data curves across the entire range.
Once I had those curves mapped out, I made ~8GB files of each of the various settings with Anvil's app (0-fill, 8%, 25%, 46%, 67%, and 101%) and made curves for each of them.
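The curve-building step can be sketched in Python. Here zlib's effort levels stand in for the RAR/LZMA settings used in the actual tests (WinRAR and 7-zip), so the numbers won't match Vapor's, but the compress-at-several-levels-and-normalize idea is the same:

```python
import zlib

def compression_curve(data: bytes, levels=(1, 6, 9)) -> list:
    """Compress the same input at several effort levels and return the
    normalized output sizes (1.0 means incompressible)."""
    return [len(zlib.compress(data, level)) / len(data) for level in levels]

# Highly repetitive data compresses to a small fraction of its size at any level;
# already-compressed data (like the frozen .rar archives) would stay near 1.0.
curve = compression_curve(b"the same block of text " * 2000)
```

Plotting one such curve per data source (real archives vs. each of Anvil's fill settings) gives exactly the kind of overlay described above.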
All put together, they look like this:
The green zone is where the potential candidates should show up. Only one candidate was in that range, however: 67%. Unfortunately, it fell out pretty aggressively with stronger compression algorithms. So I turned off the "Allow Deduplication" setting and generated another 8GB file and compression curve and it was a little better.
While dedicated hardware can be orders of magnitude more efficient than a CPU at an intensive task, I doubt the SF-1200 controller's ability to out-compress and out-dedup even low-resource LZMA/RAR (R-Fastest and 7-Fastest), so the left-most part of the green zone is a stronger green, as I feel that's the most important section of the curve. Unfortunately, I don't have a way to get more granular compression curves at the low end (left side), so I'll have to make do with overall compression curves and just an emphasis on the low end.
Of all the data I have available, the 67% compression setting with "Allow Deduplication" unchecked looks like the best fit for a 'real' data setting when I start testing the SF-1200. Hopefully anybody else who plans to test a controller with compression and deduplication will find this useful as well.
I'll PM you the Excel file. If nothing else it has the raw data. It will be nice to join the peanut gallery. I've been spending way too much time on that V2.
If you run at 67% it will be interesting to see if you get throttled at some stage. I suspect you will.
EDIT:
How will you record writes? 231 & 241?
Last edited by Ao1; 06-30-2011 at 09:49 AM.
@Vapor
Superb job on collecting compression data and yes, it's based on 7Zip Fast compression ratio. (could be Fastest, will check)
Looking at my tests on the SF2 controller it couldn't keep up with the ratio that 7Zip Fast(est) produces, not sure how the SF1 handles vs the SF2.
For reference, I 7-Zipped one of my VMs earlier today (Windows Server 2008 R2, SQL Server, plus some data) and it ended up being ~50% of the original size using 7-Zip Fastest.
Still, it took 40 minutes to produce that file using a W3520 @ 4GHz on an Adaptec 3805 hosting a 3R5 volume; there is no way the SF controller can achieve that sort of compression on the fly, as 40GB is written at a rate of ~100MB/s on a 60GB SF1 drive (based on steady state).
I'll do some more tests when I get a few more of the items off my to-do list.
Nice job, and beautiful graphs!
There are two more data points I'd be interested in seeing: the compression your SandForce drive achieves on your C: and D: uncompressed archive files.
I guess you can measure it by noting the SMART values on the drive, copying one of the files over, then reading the SMART values again to work out the compression (assuming the attribute for actual flash writes is accurate). Maybe you'd have to delete the file and re-copy it several times to get an accurate measurement?
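That suggestion boils down to differencing the SMART counters before and after the copy. A minimal sketch of the arithmetic (the function name and the unit size are my assumptions; the coarse counter granularity mentioned in the thread, reportedly 64GB units on SF1 and 1GB on SF2, is exactly why several re-copies would be needed):

```python
def sf_compression_ratio(host_before: int, host_after: int,
                         nand_before: int, nand_after: int,
                         unit_gb: int = 64) -> float:
    """Estimate on-drive compression from SMART counter deltas.

    host_*: host-writes attribute readings (e.g. 241), in unit_gb-sized units.
    nand_*: flash-writes attribute readings (e.g. 233), same units.
    Returns NAND-written / host-written; < 1.0 means the data compressed.
    """
    host_delta = (host_after - host_before) * unit_gb
    nand_delta = (nand_after - nand_before) * unit_gb
    if host_delta == 0:
        raise ValueError("no host writes observed between samples")
    return nand_delta / host_delta

# e.g. host counter advanced 10 units, NAND counter only 5: ratio 0.5
print(sf_compression_ratio(10, 20, 10, 15))  # prints 0.5
```

With 64GB granularity, a single 42GB copy might not move either counter at all, hence the repeated delete-and-recopy idea.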
I have zero doubt hardware designed for compression/dedup could do twice (at least) what our CPUs do with just a 1W power envelope....but that doesn't mean the SF1 and SF2 controllers can do it. It's a safe bet they can't and their compression levels are weaker than the weakest RAR/7zip setting--too bad there's no way of running their compression levels on our CPUs to see what they can do with more precision than the 64GB (or 1GB SF2) resolution the SMART values give.
Almost done with the charts of all the drives so far (minus the V2 40GB; not sure whether to include that, as its testing essentially errored out). I'm including a new chart of normalized writes vs. wear, which is kind of necessary now that drives of different sizes are entering testing; writes will be normalized to the amount of NAND on the drive, not the advertised size.
Working on bar charts with writes from 100-to-0 wear as well as total writes done so far. 100-to-0 wear will be extrapolated until MWI = 0 and then frozen...so when MWI > 0, total writes will be less than 100-to-0 but after MWI hits 0, total writes will be greater than 100-to-0. Would "MWI Exhaustion" be a better name for the 100-to-0 bar?
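The two chart rules described above, normalizing writes to raw NAND and extrapolating the 100-to-0 bar until MWI hits 0 and then freezing it, can be sketched as follows (function names are mine):

```python
def writes_100_to_0(total_writes_tb: float, mwi: int, frozen_value: float = None) -> float:
    """'100-to-0 wear' bar value: extrapolate while MWI > 0, then freeze at
    the value recorded when MWI first reached 0."""
    if mwi >= 100:
        raise ValueError("no wear consumed yet; nothing to extrapolate")
    if mwi > 0:
        return total_writes_tb * 100 / (100 - mwi)
    return frozen_value if frozen_value is not None else total_writes_tb

def normalized_writes(total_writes_tb: float, nand_gb: int) -> float:
    """Host writes expressed as multiples of the drive's raw NAND capacity,
    so 40GB, 64GB and 256GB drives land on a comparable scale."""
    return total_writes_tb * 1024 / nand_gb
```

For example, a drive at MWI 50 with 50TB written extrapolates to 100TB for the 100-to-0 bar, and 1TB written to a 64GB drive is 16 drive-writes of NAND.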
Whoa, that is still running fast. It took One_Hertz 9 days to write that much. (longer still for Anvil)