160GB G2 used in my main machine since launch
all user folders are mapped to a RAID0 hard drive array
Agreed, but it suggests the PC being monitored was more or less being used as a typewriter. It would have been much better if they had used a trace from a modern PC/OS with a wider range of applications. Using an SSD for the trace would also have been more helpful.
At the end of the day, however, it is still going to be mostly small xfers, though maybe some larger xfers would have shown up.
One_Hertz
ETA for your drive?
-
Hardware:
Today apparently. NCIX is very fast here in Canada.
I guess we need to come to a consensus on our test parameters so we do the same thing. Are we going to TRIM the drive or not? Some users will use TRIM, others will not due to RAID or due to an OS that doesn't support TRIM.
The whitepaper I linked recommends using QD 1 for endurance testing as higher QD is not necessarily realistic (as Ao1's testing showed as well).
For the size distribution, maybe 50% static data, 25% test file, 25% empty? What mix of block sizes do you want to use? As for sequential vs random, I am thinking 70% random?
Are we using IOMeter or are you writing us a utility?
Last edited by One_Hertz; 05-13-2011 at 09:06 AM.
So we have a 320 and an X25-V head to head?
My vote is for TRIM.
For comparison, here are the JEDEC xfer sizes for their proposed SSD Enterprise Endurance Workload.
That doesn't seem too far off the mark from what I have seen when monitoring, although I do see larger xfers of ~1MB and above from time to time. Larger xfers are obviously more application-centric, whereas those below are more OS-related. I'd vote for a bit of a mix.
512 bytes (0.5k) 4%
1024 bytes (1k) 1%
1536 bytes (1.5k) 1%
2048 bytes (2k) 1%
2560 bytes (2.5k) 1%
3072 bytes (3k) 1%
3584 bytes (3.5k) 1%
4096 bytes (4k) 67%
8192 bytes (8k) 10%
16,384 bytes (16k) 7%
32,768 bytes (32k) 3%
65,536 bytes (64k) 3%
http://www.jedec.org/standards-documents/docs/jesd219
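To make the discussion concrete, here's a minimal sketch (Python, not the actual test utility either of us would run) of how a workload generator could pick transfer sizes from that distribution; the weights are taken straight from the JEDEC list above, everything else is made up for illustration.

```python
import random

# Transfer-size distribution from the JEDEC list above: (size in bytes, weight in percent).
JEDEC_XFER_SIZES = [
    (512, 4), (1024, 1), (1536, 1), (2048, 1), (2560, 1), (3072, 1),
    (3584, 1), (4096, 67), (8192, 10), (16384, 7), (32768, 3), (65536, 3),
]

SIZES, WEIGHTS = zip(*JEDEC_XFER_SIZES)

def next_xfer_size():
    """Pick one transfer size according to the weighted distribution."""
    return random.choices(SIZES, weights=WEIGHTS, k=1)[0]

if __name__ == "__main__":
    # Sanity check: the weighted average works out to roughly 7.8KB per transfer.
    samples = [next_xfer_size() for _ in range(100_000)]
    print("average xfer size: %.0f bytes" % (sum(samples) / len(samples)))
```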
This test will take forever. Such a workload will be very, very slow at QD1 on these 40GB SSDs. 10MB/s? I guess we'll see.
Last edited by One_Hertz; 05-13-2011 at 09:43 AM.
There are a lot of things to take into consideration but I'm sure we'll find some middle ground.
We can't really recreate normal operations, so the question is: what do we want to find out?
First of all, should we do the same test or should we take different routes?
Is there a point in doing the same test? It would, of course, tell us what we can expect from 25nm vs 34nm NAND.
Keep in mind that this is a very small sample size, a different batch of NAND could perform better or worse.
50% static is too much imho; ~12GB would be sufficient (about the same size as the W7 OS).
So, I suggest using:
- 12.5-15 GB of static data
- A single 2.5GB "static" random datafile for random writes (TRIM wouldn't work on the random datafile as it wouldn't be deleted/recreated)
- 12.5-15 GB for creating/deleting or copying files
That would leave 25% of the drive empty.
No extra over-provisioning, just the standard OP set from the factory.
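As a quick sanity check on the split (back-of-the-envelope only, treating the figures as GiB on a 40GB drive that formats to roughly 37.3GiB):

```python
# Rough space budget for the proposed layout on a 40GB (~37.3 GiB usable) drive.
usable_gib  = 40 * 1000**3 / 1024**3   # ~37.3 GiB after the GB/GiB conversion
static_gib  = 14.0                     # 12.5-15 GiB of static data (midpoint-ish)
random_gib  = 2.5                      # the single "static" random datafile
dynamic_gib = 14.0                     # 12.5-15 GiB for creating/deleting/copying
free_gib    = usable_gib - static_gib - random_gib - dynamic_gib

print("free: %.1f GiB (%.0f%% of the drive)" % (free_gib, 100 * free_gib / usable_gib))
# -> ~6.8 GiB (18%) free at the top of the ranges, closer to 25% if we stay at 12.5 GiB
```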
The most realistic route would be to dynamically create/delete files, and I do think we need to make use of TRIM.
Then there is the matter of collecting and reporting data; there are several ways to report and publish, either manually or via the utility I'm creating.
I can put a "screenshot" of the collected metrics on an FTP server and we could just link to it from the new thread. Lots of options; it could be updated every hour or whatever we decide on.
I haven't decided what computer to run the test on yet, I've got a couple of options though.
I suggest we settle on something and run it for a few days just to see what happens, and then make adjustments based on that "test" period.
We could be in for a surprise; I don't know what to expect...
-
Hardware:
I think we should do the same test... if we do a different test and see different results then we won't know whether it is due to NAND differences or the different test.
What software do you propose we use for this? I've got an old crappy P4 rig I can throw W7 onto and run 24/7 for this test.
OK, then we go for doing the same test.
W7 would be perfect. I'll see what computer I can find; it might be a laptop, but that shouldn't make a difference.
I'll PM you some more info on my utility later tonight and we'll take it from there.
We'll just have to make sure that we're able to somehow retrieve SMART info from the SSDs/disk controllers; it might work from my utility, not sure yet.
edit:
It shouldn't be a problem using the file sizes that Ao1 suggested; 4K writes at QD1 run at 35-40MB/s on a fresh X25-V, though that could change dramatically within a few hours.
As part of the test we could also copy/delete files from the Windows directory over and over again.
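For the copy/delete idea, something along these lines would do it; a rough sketch only, the source and target paths are placeholders, and in practice it would live inside the test utility rather than a standalone script.

```python
import shutil
import time
from pathlib import Path

SOURCE = Path(r"C:\Windows\System32\drivers")   # placeholder: any smallish read-only tree
TARGET = Path(r"E:\endurance\copytest")         # placeholder: somewhere on the SSD under test

while True:
    if TARGET.exists():
        shutil.rmtree(TARGET)        # delete the previous copy so TRIM gets something to do
    shutil.copytree(SOURCE, TARGET)  # write a fresh copy to the SSD
    time.sleep(1)                    # brief pause between iterations
```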
Last edited by Anvil; 05-13-2011 at 10:41 AM.
-
Hardware:
Sure.
We'll just need some way to monitor that it's still running and to handle reporting; depending on how we can get at the SMART info, it should be fully automatic, including reporting.
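In case it helps, here's one way the polling side could look; a minimal sketch that shells out to smartmontools rather than talking to the controller directly, and the attribute IDs assume Intel's layout (E9 = 233 wear-out indicator, E1 = 225 host writes in 32MiB units), with the device path just an example.

```python
import subprocess

DEVICE = "/dev/sda"   # example only; on Windows smartctl takes a physical-drive path instead
WATCHED = {
    "233": "Media_Wearout_Indicator",   # E9 on Intel SSDs, normalized value counts down from 100
    "225": "Host_Writes_32MiB",         # E1 on the X25-V/320, raw value in 32 MiB units
}

def read_smart(device=DEVICE):
    """Return normalized and raw values for the watched SMART attributes."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] in WATCHED:
            # smartctl -A columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
            values[WATCHED[fields[0]]] = {"normalized": int(fields[3]), "raw": int(fields[9])}
    return values

if __name__ == "__main__":
    print(read_smart())   # poll this on a schedule and push the result to the FTP/thread
```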
-
Hardware:
I think that using the JEDEC standards would be great. Looks to be very exciting, can't wait to see this test!
"Lurking" Since 1977
Jesus Saves, God Backs-Up *I come to the news section to ban people, not read complaints.* -[XC]Gomeler | Don't believe Squish, his hardware does control him!
The MFT reserved area is smaller on an SSD than on an HDD of a similar size, thus more of an SSD is used, whereas on an HDD certain areas of the drive (10% even) might never be touched unless the drive gets filled >90%.
The log does not record last access time on an SSD by default, whereas it does on an HDD (W7 only).
The area where data is written on an SSD is different from that on an HDD because the starting format is not the same. There is quite a bit of difference: not 50%, and probably not even 20%, in terms of amount, but with random I/O it can be >10%.
You guys up and running yet?
If not, could you run the test file for an hour first and then use Intel's method to calculate the projected wear rate based on SMART values?
This could then be compared to what happens as the experiment continues.
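For reference, my reading of Intel's extrapolation is basically: scale the host writes so far by how much of the wear-out indicator has been consumed. A small sketch of that (the linear-E9 and 32MiB-unit assumptions are mine, so treat the details as illustrative):

```python
def projected_endurance_tib(host_writes_32mib, wearout_normalized):
    """Project total host writes the drive can take before E9 reaches 0.

    Assumes E9 (Media Wearout Indicator) counts down linearly from 100 and that
    host writes are reported in 32 MiB units (Intel's E1 attribute).
    """
    written_tib = host_writes_32mib * 32 / (1024 * 1024)   # 32 MiB units -> TiB
    wear_used = 100 - wearout_normalized                   # percent of rated wear consumed
    if wear_used <= 0:
        return float("inf")                                # no measurable wear registered yet
    return written_tib * 100 / wear_used

# Example: 1.5 TiB written and E9 down from 100 to 98 -> ~75 TiB projected endurance.
print(projected_endurance_tib(host_writes_32mib=49_152, wearout_normalized=98))
```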
The worst possible endurance scenario for the 40GB 320 is 5TB, which could be racked up in 3 days @ 20MB/s 24/7. Once speeds slow down to a crawl maybe it would last a couple of months.
I can see this lasting at least 6 months, maybe a year even.
Yeah, we're up and running.
I started a bit ahead of One_Hertz; his 25nm 320 looks to be a bit faster than my 34nm.
I'll start a new thread shortly.
It could take some time, but E9 "Wear-out" has already moved down a few notches on my drive; don't know about One_Hertz's yet.
-
Hardware: