
View Full Version : Anvil's Storage Utilities




Anvil
07-27-2011, 07:31 AM
So, what is Anvil's Storage Utilities?

First of all, it's a storage benchmark for SSDs and HDDs where you can check and monitor drive performance.

The Standard Storage Benchmark performs a series of tests; you can run the full test, just the read or the write test, or a single test such as 4K QD16.

Here is a screenshot, made by clicking the Screenshot button in the application, of a full test I have just performed on the Samsung 470 SSD.

118263

We'll walk through the main functions of the standard benchmark in the following screenshot.

At the top of the app you select the test size and which drive to test; the Settings menu is where you set the default preferences.

To the left there are buttons for running a single benchmark, e.g. 4K QD16.

To the right there are buttons for running the read tests, the write tests or the full test, along with the scores.

At the bottom left of the screen is information about the OS, motherboard, processor and memory.

At the bottom right is information about the selected drive as well as the compressibility of the data used in the test.
The compressibility option is mainly of interest for drives that compress data, like the SandForce-based SSDs.

Most of the information displayed at the bottom of the app is collected using WMI; the rest is found in the Registry.
WMI is short for Windows Management Instrumentation and has been a part of Windows since Windows 2000.
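As a rough illustration of the kind of query involved, here is a minimal sketch using Python's third-party "wmi" package (an illustration only; ASU itself is not written in Python and may query different classes):

# Minimal sketch of WMI queries similar to the info shown at the bottom of the app.
# Assumption: the third-party "wmi" package (pip install wmi) on Windows.
import wmi

c = wmi.WMI()
for board in c.Win32_BaseBoard():
    print("Motherboard:", board.Manufacturer, board.Product)
for cpu in c.Win32_Processor():
    print("CPU:", cpu.Name)
for osinfo in c.Win32_OperatingSystem():
    print("OS:", osinfo.Caption, osinfo.Version)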

118264

TheSSDReview will be hosting the download and it can be found at this link (http://thessdreview.com/latest-buzz/anvil-storage-utilities-releases-new-storage-and-ssd-benchmark/)

Preliminary download link for Beta5 (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=452318)

Fixes
There will be a fix for Windows XP in the next beta. (beta 4)
- WMI class missing for Windows XP.
Drivers
-Fixed iaStorV detection.

Beta5 2011-August-01
-Will now show if Volume is compressed.
-Option to recreate testfiles on every Full run
-Option for setting runtime length of MixedIO in ms.
-Cosmetic fixes (some are still open)
-Display IRST version for 32bit OS.
-Settings have been rearranged.
-Save Screenshot in Threaded read/write now defaults to PNG

Beta7 2011-August-25
-Fixed detection of memory cards. (resulted in detection errors)
-A few minor fixes.
-Beta expires 2011-September-30. (I expect it will be released before it expires)

Preliminary download link for Beta7 (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=455691)


Beta9 2011-October-18
- Lists Intel Option ROM version.
- Pause on Endurance test is now user configurable.
- There is now a 500ms pause for every 500 files deleted.
- Expires 27th of January 2012

Link to download of beta9 (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=464000)
new download link for Beta9 (http://www.ssdaddict.com/apps/AnvilBenchmark_Beta9.zip)

Beta11 2012-January-12
- Expires May 2012
- Next beta will include more on full-span testing, expected within a few weeks
new download link for Beta11 (http://www.ssdaddict.com/apps/AnvilBenchmark_Beta11.zip)


Release Candidate 1 (2012-05-19)
- Expires September 2012

Fixes
-Fixed bug when USB drives are connected
-Fixed bug when there are unpartitioned drives

What's new
-ASU now requires Administrative rights.
-USB drives are now supported
-TRIM can be triggered if TRIM is supported by the OS; this will work on most drives that support TRIM.
-Default Compression is set to 100% (Incompressible)
-New setting for Enabling/Disabling testing with "Write-Cache Buffer Flushing" on the X79 when using the 3 series RSTe driver.

-Endurance : Layout changed

Temporary test folder renamed from _AP_BENCH to _ASU_BENCH; the folder is at the root of the drive.
In case the system shuts down and ASU is not able to clean up the test folder, the contents can be deleted.
Files produced by ASU Benchmark have the following extensions: TRM and TST

Make sure that you don't have any folders that are in conflict with the test folder.
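If ASU could not clean up after an unexpected shutdown, the leftover test files can be removed by hand or with a small script. A minimal sketch (the drive letter is an assumption; the folder name and extensions are the ones mentioned above):

# Hedged sketch: remove leftover ASU test files if the app could not clean up after itself.
import glob, os

bench_dir = r"D:\_ASU_BENCH"      # assumption: the drive under test is D:
for pattern in ("*.TST", "*.TRM"):
    for path in glob.glob(os.path.join(bench_dir, pattern)):
        os.remove(path)
        print("deleted", path)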

new download link for RC1 (http://www.ssdaddict.com/apps/AnvilBenchmark_RC1.zip)

Release Candidate 2 is now available. (2012-06-04)

Download link (http://www.ssdaddict.com/apps/AnvilBenchmark_RC2.zip)

Fixes

- It now detects OS's using WMI. (it wasn't handling new OS's properly)
- Drive-name is now being displayed top right.
- Changes to colors, more theme friendly.

The color/theme changes might still need adjustments so feedback is welcome!

---

Release Candidate 6 is now available. (2013-01-03)

Download link (http://www.ssdaddict.com/apps/AnvilBenchmark_RC6.zip)

Fixes
- Expiry date set to 2012-December-31

--

Version 1.1.0 is now available. (2014-01-03)

Download link (http://www.ssdaddict.com/apps/AnvilBenchmark_V110_B337.zip)

Fixes
- Removed Expiry date
- Minor fixes

--

more info to follow in the next few posts...

Anvil
07-27-2011, 07:31 AM
Menu and Settings

118265


118267

...

Anvil
07-27-2011, 07:32 AM
Threaded IO tests

118268

118269

...

Anvil
07-27-2011, 07:32 AM
Endurance testing

So, what does this test do?
It creates files of random sizes: every second file is between 1KB and 128KB, and the other files can be anywhere from 1KB up to the size of a typical digital photograph (10MB+).
The point is that it creates files of random sizes, just like we do in real life.

First off, the application has to be placed on the drive where the test is to be performed; otherwise it will run the Endurance test on whatever drive the executable is on.
Once the test is started it will create a TEST folder where all the files are held, so make sure to copy the application to the drive you are testing.

By clicking Start you will, by default, fill the drive until there is 12GiB of free space left; you can change this by modifying the "Min GiB Free" value at the top left.
Max # of files to create regulates the maximum number of files to create per loop.
Loops lets you set a specific number of loops, unless you want the test to run "forever".

When each loop is finished it will (see the sketch after this list):
-delete the files created in the loop
-perform random writes on a designated file, this file is not deleted between loops.
-take a 5 second pause
-optionally perform an MD5 test
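
Here is a rough sketch of that loop in Python, following the behaviour described above (file sizes, per-loop deletion, the designated random-write file, the pause and the optional MD5 check); it is an illustration only, not ASU's actual code:

# Rough sketch of the Endurance loop described above; not ASU's actual implementation.
import hashlib, os, random, shutil, time

TEST_DIR = "TEST"                  # folder created on the drive under test
MIN_FREE_GIB = 12                  # "Min GiB Free"
MAX_FILES_PER_LOOP = 1000          # "Max # of files to create"
MD5_EVERY = 5                      # MD5 frequency, every n loops (default from the settings)
SOURCE_FILE = "md5_source.bin"     # hypothetical source file chosen for the MD5 test

os.makedirs(TEST_DIR, exist_ok=True)
loop = 0
while True:                        # or stop after a specific number of loops
    loop += 1
    created = []
    while (shutil.disk_usage(TEST_DIR).free > MIN_FREE_GIB * 2**30
           and len(created) < MAX_FILES_PER_LOOP):
        if len(created) % 2 == 0:
            size = random.randint(1024, 128 * 1024)        # every second file: 1KB-128KB
        else:
            size = random.randint(1024, 10 * 2**20)        # the rest: up to ~10MB (photo-sized)
        name = os.path.join(TEST_DIR, f"loop{loop}_{len(created)}.tst")
        with open(name, "wb") as f:
            f.write(os.urandom(size))
        created.append(name)
    for name in created:                                   # delete the files created in this loop
        os.remove(name)
    target = os.path.join(TEST_DIR, "random_target.tst")   # designated file, kept between loops
    if not os.path.exists(target):
        with open(target, "wb") as f:
            f.write(os.urandom(64 * 2**20))                # assumed 64MB target size
    end = time.time() + 5.0                                # "Random write duration", default 5000ms
    with open(target, "r+b") as f:
        size = os.path.getsize(target)
        while time.time() < end:
            f.seek(random.randrange(0, size - 4096))
            f.write(os.urandom(4096))
    time.sleep(5)                                          # pause between loops
    if loop % MD5_EVERY == 0 and os.path.exists(SOURCE_FILE):
        digest = hashlib.md5(open(SOURCE_FILE, "rb").read()).hexdigest()
        print(f"loop {loop}: MD5 of source file is {digest}")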

118270

There are a few settings that relate to the Endurance test

Randomize compressibility (default Off)
Allow deduplication of random data (default On)
Keep running totals (default Off)

Random write duration (default 5000ms)

MD5 options
Perform MD5 testing (default Off)
The frequency of MD5 testing, every n loops (default 5)
Select a source file for the MD5 test
Tell the app what checksum to compare the test to

Preparing for an Endurance test

You'll have to fill the drive with static data until there is adequate free space left.
Approximate figures:
40GB drive : 12GB static data, 25GB free space left
60GB drive : 32GB static data, 25GB free space left
64GB drive : 35GB static data, 26GB free space left
...

Ao1
07-27-2011, 07:35 AM
Anvil, I would like to thank you for the hard work and thought you have put into this app. Kudos :up:

Great to see it go live.

:party:

Anvil
07-27-2011, 07:53 AM
Thanks!

Still a bit to do but we'll get there :)

One_Hertz
07-27-2011, 07:56 AM
A large improvement over AS SSD and CDM that a lot of people use these days. Well done!

flamenko
07-27-2011, 08:00 AM
Have used it in the past few reviews and it's a world away from the frustrations most had experienced trying to get through IOMeter. It's also good to be able to quickly test SSDs with different configurations of read, write or mixed IO to see where the best IOPS result can be achieved.

Quite frankly, this is my favorite synthetic benchmark program. Great job Anvil and I think this software will do absolutely great. You might as well contact www.download.com for their inclusion right away.

johnw
07-27-2011, 08:15 AM
Suggestion:

Please include the version number or date in the filename of the zip archive or folder in the zip archive. That way we do not have to download and run the app to find out if we have the latest version.

Hopalong X
07-27-2011, 08:18 AM
Great benchmark tool. :up:
I just tried it out.

I would "Thank you" as others have if I knew how.

Anvil
07-27-2011, 08:24 AM
Suggestion:

Please include the version number or date in the filename of the zip archive or folder in the zip archive. That way we do not have to download and run the app to find out if we have the latest version.

Will do, a bit hectic the last few days :)

bluestang
07-27-2011, 08:55 AM
Thanks for the app and the tutorial :up:

TV Addict#2
07-27-2011, 09:53 AM
runs reliably on these two systems
you do good work man

Ao1
07-27-2011, 10:07 AM
I'll be the first to post a benchmark :)

Mixed I/O is a little-known performance metric and it's great to be able to test it easily.

Here are a couple of runs at QD2. (I've got a separate instance of Anvil's app running on the V3 at the same time, so it could be that the results took a hit.)

118277

118278

Vapor
07-27-2011, 12:04 PM
Getting some buggy behavior on the benchmark part of the newest beta :(

Was working fine earlier, but now it's bugging out, not sure why. I loved it when it was working though :D

Sequential reads/writes are stopping at just 4MB for some reason. With my 2R0 V2 50GB array, I'm getting divide by zero errors with sequentials (probably related if it dips below 8ms and timing chunks are 1/64th of a second). And with my Intel 80GB G1, I get this:

118284

Anvil
07-27-2011, 12:19 PM
I'll have a look at the Intel G1, I haven't forgotten the issue :)

It's a bit strange that it can't read or write more than 4MB during the sequential IO part of the benchmark; are there other tasks running on that computer?

Could you try disabling real-time AV scanning while running one test?
(I've changed the extension on the last few betas, it could lead to the anti-virus reacting differently to the benchmark)

Vapor
07-27-2011, 12:31 PM
Oh, everything I normally do is still running :p:

No AV, but Photoshop, Lightroom, Excel, Skype, Spotify, Pidgin, and Chrome are all running. They were all running (except maybe Lightroom) when I ran it yesterday and it worked though (G1 worked just fine yesterday too) :confused:

The "Preparing the testfile" step seems to be the issue (it usually takes 15-20 sec, but now it's taking 47-48 ms), so it's effectively being skipped.

deathman20
07-27-2011, 01:27 PM
Oh sweet! Thanks for this program. It has a lot more functions than some other ones.

felix_w
07-27-2011, 01:41 PM
Well... I tried it... I need to read carefully about the program's various settings and stages of testing... this is my result:

http://i256.photobucket.com/albums/hh200/felix_w/LSIMR9260-4iSCSIDiskDevice_125GB_2GB-20110728-0026.png

4x Vertex 30GB mod to Turbo on LSI 9260-4i w/ FP

One_Hertz
07-27-2011, 04:32 PM
118287

80gb iodrive + 320gb mlc iodrive

SteveRo
07-28-2011, 02:00 AM
Mr. Anvil - Much thanks for a great app! This should become very popular, looks to me to be much better than ASSSD or CDM alone for sure.
Mr. 1hz - wow - I suspect this is the high end scores for the bench for sure! Is this standard format or fast write format on the iodrives? Standard NTFS, 4K cluster windows softraid? I will post some benches - hopefully later today.

Anvil
07-28-2011, 02:18 AM
Thanks to all of you and especially to the ones that have been part of the preview/beta, some for a couple of months and last but not least to the guys at TheSSDReview for hosting the download.

There is a new build (Beta4) at TheSSDReview (http://thessdreview.com/latest-buzz/anvil-storage-utilities-releases-new-storage-and-ssd-benchmark/); it fixes a WMI-related issue on Windows XP. It looks like some are still using good old XP, and the benchmark works just fine on XP with the new Beta.

@SteveRo

Looking forward to a duel of the ioDrives :)

One_Hertz
07-28-2011, 05:09 AM
Mr. Anvil - Much thanks for a great app! This should become very popular, looks to me to be much better than ASSSD or CDM alone for sure.
Mr. 1hz - wow - I suspect this is the high end scores for the bench for sure! Is this standard format or fast write format on the iodrives? Standard NTFS, 4K cluster windows softraid? I will post some benches - hopefully later today.

Standard format... The fast write format doesn't seem to do anything after they implemented TRIM. It used to reduce degradation, but now there isn't any to begin with so it is kind of useless in normal environments.

P.S. PM me when you are done playing with that iodrive and want to sell it :)

aintz
07-28-2011, 06:39 AM
stop buying iodrive and buy a house so i can move in. thank you.

bluestang
07-28-2011, 07:15 AM
@Anvil
Thanks for the XP fix, works fine now :up:
I'll try and start the Endurance Testing on that 64GB M225/Vertex Turbo FW drive real soon. Need to take some baseline screenshots first.

EDIT: Can I use the "Stop" button, run the wiper tool, and then start again without losing track of totals?

EDIT 2: Nevermind. Figured it out myself, it still keeps track if you stop and restart (I didn't close app either) so I could run Wiper.

No TRIM is killing me, first 2-3 loops run @ ~150MiB/s then drops to ~50-60 on loops 4-5 and on.

Sirakuz
07-28-2011, 11:12 AM
Anvil, I join all the others in thanking you for your excellent work!

Mitchb
07-28-2011, 11:25 AM
Thank you.

Mitch

SteveRo
07-28-2011, 12:23 PM
1st gen SSD, imation (mtron mobi) slc 16GB on ich10 -

http://img192.imageshack.us/img192/6475/imationmtronmsdsata15gb.png (http://img192.imageshack.us/i/imationmtronmsdsata15gb.png/)

SteveRo
07-28-2011, 12:25 PM
4xC30064 R0 on z68 pch wbc enabled -

http://img818.imageshack.us/img818/1041/4xc30064volume01gb20100.png (http://img818.imageshack.us/i/4xc30064volume01gb20100.png/)

SteveRo
07-28-2011, 12:27 PM
6xAcard9010 on z68 pch wbc enabled - nice little increase -

http://img703.imageshack.us/img703/9618/6xacard9010pchwbcvolume.png (http://img703.imageshack.us/i/6xacard9010pchwbcvolume.png/)

SteveRo
07-28-2011, 12:31 PM
Iodrive 80GB SLC formatted for fast writes (50% reduced capacity) - just a bit lower score vs the acards -

http://img683.imageshack.us/img683/505/iodrive80gbslcquickwrit.png (http://img683.imageshack.us/i/iodrive80gbslcquickwrit.png/)

One_Hertz
07-28-2011, 12:34 PM
Now dynamic RAID the ACARD array with the iodrive :)

Should be over 10k.

SteveRo
07-28-2011, 12:37 PM
Lastly - Frankenstein softraid 6xacard with iodrive :D.
Mr 1Hz - I ran this with 2600k at 50x also with iodrive formatted for fast writes :rolleyes:
64k cluster scored 10464 - a little faster than this - 4k -

http://img64.imageshack.us/img64/4325/4ksoftraid6xacardonpchw.png (http://img64.imageshack.us/i/4ksoftraid6xacardonpchw.png/)

One_Hertz
07-28-2011, 12:40 PM
Now that's an array :)

2x SLC iodrives should be a tad quicker though :)

If only the chipset wasn't holding your ACARDs back...

SteveRo
07-28-2011, 12:51 PM
^^ there is so much that is configurable, I am sure we can dial in higher numbers.
Have you played with iodrive custom sector sizes?

Sirakuz
07-28-2011, 12:54 PM
PQI S525
:D

http://img52.imageshack.us/img52/9041/dk9128gd6rasu1026b4.png (http://imageshack.us/photo/my-images/52/dk9128gd6rasu1026b4.png/)

Ourasi
07-28-2011, 02:17 PM
4xC30064 R0 on z68 pch wbc enabled -



I suspect you can get a bit more out of your 4xC300 64gb setup, even if two are on the SATAII ports, here is my 2xC300 64gb raid0:

http://bildr.no/thumb/937012.jpeg (http://bildr.no/view/937012)

Anvil
07-28-2011, 02:51 PM
Amazing scores SteveRo, I'll try to give you some competition this weekend. (got a few 6Gb/s drives :))

Now, where is the Areca :)

@Ourasi

Great score using the 1st gen 6Gb/s drives, the C300's really are great drives.

Anvil
07-28-2011, 02:58 PM
Here's one score using the Vertex 3 240GB w/ new fw 2.11

118306

Compression is set at 67% and the test size is 4GB, the next one is using 8% "compression" and a 1GB test file

118307

Will put a few of these in raid tomorrow on the PCH. (Z68)

johnw
07-28-2011, 03:16 PM
Interesting. The Vertex 3 gets about 20MB/s on 4K random read with ASU, which is about the same with AS-SSD (at least with the old firmware) and IOMeter. It is still only CDM that gets over 30MB/s. Although you got pretty close to 30MB/s on your 8%/1GB run.

Anvil
07-28-2011, 03:38 PM
Yeah, the Vertex 3 will do 30ish with easily compressible data.

One should always run benchmarks on the SF drives when they are in steady-state, otherwise the results can be somewhat unpredictable. (and optimistic)
This drive was upgraded this morning and was secure erased as well and so it needs some time to settle.

One_Hertz
07-28-2011, 04:12 PM
^^ there is so much that is configurable, I am sure we can dial in higher numbers.
Have you played with iodrive custom sector sizes?

Yes, it breaks a lot of benchmarks. Using 4K sectors makes things a few percent faster in some tests I think.

mak1skav
07-29-2011, 05:43 AM
Here goes my benchmark for a non-SSD system

118348

P.S. When I try to re-run the benchmark I get the error message "Test file was not created!" and I have to exit the program and then reload it. I don't know if this is a known problem or if it is normal.

One_Hertz
07-29-2011, 06:03 AM
Anvil - how about another column showing CPU usage during each test? Perhaps add your mixed IO test to the default benchmark?

Anvil
07-29-2011, 06:03 AM
I'm aware of that issue, already fixed but it has not been published yet.
(Will make a post when the next beta is available for download)

@One_Hertz

Working on some more info (like cpu usage) and the MixedIO module in general, I'm not sure it will make it into the default benchmark as I'm trying to keep writes down to a minimum.
It will definitely make it into one of the more "advanced" benchmarks.

chispy
07-29-2011, 10:00 AM
Awesome software Anvil, well done. I'll post some results later. Thank you.

chispy
07-29-2011, 12:03 PM
@ Anvil, you asked for some Areca results, here's mine :up: 6x Acards Raid0 on Areca 1231ML 4GB Raid Card.

11,042.79 Total

http://img546.imageshack.us/img546/2751/arecaarc1231vol00scsidi.png
By chispy (http://profile.imageshack.us/user/chispy) at 2011-07-29

One_Hertz
07-29-2011, 12:09 PM
You are running on cache. Increase test size to 8GB ;)

flamenko
07-29-2011, 05:53 PM
Anvil... You were asking for these...

118363118364

Anvil
08-01-2011, 02:14 PM
Beta5 2011-August-01
-Will now show if Volume is compressed.
-Option to recreate testfiles on every Full run
-Option for setting runtime length of MixedIO in ms.
-Cosmetic fixes (some are still open)
-Display IRST version for 32bit OS.
-Settings have been rearranged.
-Save Screenshot in Threaded IO read/write now defaults to PNG

I'll post an update when the beta is available for download. (it was sent for publishing 10 minutes ago)

This shows the effect of compressing the drive/volume :)
(hence it is highlighted)

118538

Vapor
08-02-2011, 08:31 AM
Intel X25-M G1 80GB in a used state (hasn't seen a SE in well over a year):

118575


2R0 Vertex 2 50GB in a used state (hasn't seen TRIM or SE in over a year):

118574

Anvil
08-02-2011, 11:19 AM
Pretty good!
Doesn't look like there's much performance loss on either of the "drives".

Some more nostalgia :)

118577

lowfat
08-02-2011, 12:44 PM
My ioXtreme.
http://hostthenpost.org/uploads/43430357d6362c408d07514db6524b84.jpg (http://hostthenpost.org)

lowfat
08-02-2011, 12:51 PM
Iodrive 80GB SLC formatted for fast writes (50% reduced capacity) - just a bit lower score vs the acards -

iodrive80gbslcquickwrit.png (http://img683.imageshack.us/i/iodrive80gbslcquickwrit.png/)

Hot diggity those are some nice 4k qd1 results. Funny how they actually decrease when you frankenraid w/ the Acards.

I still have hopes for a pair of SLC ioDrives myself if I ever luck out and find a good eBay auction or two.

EDIT: Kind of weird how your ioDrive 4k QD1 writes aren't any better than my ioXtreme.

SteveRo
08-03-2011, 06:44 AM
EDIT: Kind of weird how your ioDrive 4k QD1 writes aren't any better than my ioXtreme.

Could just be the way I have it formatted.

deathman20
08-03-2011, 10:07 AM
Any word on the update link Anvil?

Anvil
08-03-2011, 10:25 AM
I'm sorry for the delay!

I have put up an preliminary download link to Beta5 in the first post.

One_Hertz
08-03-2011, 10:35 AM
Kind of weird how your ioDrive 4k QD1 writes aren't any better than my ioXtreme.

The MLC Iodrive is identical to the SLC one in terms of writes. It is the reads and mixed read+write workloads where there are large differences.

bluestang
08-03-2011, 11:30 AM
I'm sorry for the delay!

I have put up an preliminary download link to Beta5 in the first post.

Getting this with Windows extract...

118608

And Winrar says "unexpected end of archive"

Anvil
08-03-2011, 11:39 AM
I'll have a look at it, I recall errors of that kind when linked from the Norwegian forum.

edit;
I just downloaded using both IE9 and Chrome and both are OK when unpacked using 7Zip

Could you try 7Zip and report back what happens?

bluestang
08-03-2011, 06:13 PM
Download of 7Zip is fine...thanks!

Here is my contribution...

118629

Vapor
08-03-2011, 06:56 PM
Download of 7Zip is fine...thanks!

Here is my contribution...

118629

Very impressive use of $200 :)

Hondacity
08-03-2011, 07:30 PM
here is my bloated win7 + supertalent ssd

Anvil
08-04-2011, 02:15 AM
Download of 7Zip is fine...thanks!

Here is my contribution...


They are just great, reminds me of the C300 64GB when it launched.


Very impressive use of $200 :)

I don't think there are any alternatives in this price range.


here is my bloated win7 + supertalent ssd

Looks to be good on reads, a bit slow on small block writes though. (still outperforming HDDs where it counts)

cx-ray
08-04-2011, 03:05 AM
2R0 C300 256GB. Seq. read held back by SATA II.

118637

Anvil
08-04-2011, 03:10 AM
The C300's are really something.

That line of 4K is just awesome. (both reads and writes)

bluestang
08-04-2011, 04:05 AM
Very impressive use of $200 :)


They are just great, reminds me about the C300 64GB when they were launched.

I don't think there are any alternatives in this price range.

Yeah guys, really happy I went the 2x64GB RAID0 route instead of a single 128GB :up:

And even happier I didn't go the SF route :)

deathman20
08-04-2011, 06:05 AM
Yeah guys, really happy I went the 2x64GB RAID0 route instead of a single 128GB :up:

And even happier I didn't go the SF route :)

Love my m4's as well. I'll have to post my results tonight. Mine's just running on SATA II and is a huge improvement over what I had before.

DooRules
08-04-2011, 01:33 PM
great bench Anvil, many thanks bud...

2 c300 128gb R0 on sata 3

118647

NapalmV5
08-04-2011, 03:39 PM
nice thanx Anvil :toast:

http://img811.imageshack.us/img811/6896/41476911.png

F@32
08-04-2011, 05:00 PM
X18-M 2R0 4K strip on PCH after SE

http://i15.photobucket.com/albums/a397/reieev/RAID/Anvilbeta52R0P67.png

NapalmV5
08-04-2011, 05:10 PM
highest 4k
http://img232.imageshack.us/img232/5774/202h.png

highest score
http://img855.imageshack.us/img855/4770/198r.png

Tiltevros
08-04-2011, 05:28 PM
ARC-1880, R0 x8 Intel X25-M G2 80GB

118649
118650

Tiltevros
08-04-2011, 05:59 PM
almost 12.000 points with 4GB test file

118651

Sniper
08-04-2011, 06:21 PM
Asus G73SW with Intel 320 300Gb

http://img.techpowerup.org/110804/INTEL SSDSA2CW300G3_300GB_1GB-20110805-0417.png

One_Hertz
08-04-2011, 06:52 PM
Single HDD + fancycache. I think I win.
118655

Hondacity
08-04-2011, 06:54 PM
ultimate rams :D

One_Hertz
08-04-2011, 07:07 PM
ultimate rams :D

Hey man it's just cache. Totally legit.:rolleyes:

Hondacity
08-04-2011, 07:08 PM
yeah :D super score bro! lol totally blew me away ..hahahahha

deathman20
08-04-2011, 07:13 PM
My Results.. Not the best, not the worst but it works :) I should get my Intel's hooked back up after the secure erase and try them again.

118656

Tiltevros
08-05-2011, 04:01 AM
Hey man it's just cache. Totally legit.:rolleyes:

mine is not cache :/

Anvil
08-05-2011, 06:48 AM
X18-M 2R0 4K strip on PCH after SE


Could you check back in this thread for a special build, I'm trying to find out why some are failing on WMI.
Meanwhile could you post the MB, OS, tweaks (if any) and connected drives (all sorts of media)


almost 12.000 points with 4GB test file


Hi Tilt, long time no see :)
Same goes for you, I need to find out why it fails on WMI on your system as well so could you list just the basics.


Single HDD + fancycache. I think I win.


It's fully legal to use cache, as long as it's mentioned :)
Not very hard to tell anyways.

I'll display it as part of the info, shouldn't be that hard to detect.

So, what are the most known ones?

One_Hertz
08-05-2011, 07:53 AM
mine is not cache :/

Yes, your result is largely cached. Run an 8GB testfile and watch your score drop. The larger the testfile, the smaller the portion of the test that is cached.

Anvil - it shouldn't be too hard to detect cache. For example, results over 92-ish MB/s in QD1 4KB reads must be cached since nothing can do more than that. If it is not an Iodrive/ACARD then over 40MB/s is not possible unless it is a cached result. You could make the read results cache-resistant by creating the test file, then making another 4GB testfile, and then testing the reads on the original testfile. This way the results should be real and represent the actual storage device instead of DRAM.

Or make testfile, ask for a reboot, then test the reads after reboot but that would be annoying as hell.

There isn't an easy way.
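
As a rough illustration of that first idea (a hedged sketch only, nothing ASU actually does): write the test file, write a second large file to push the first one out of the OS cache, then time reads on the original. Whether the eviction actually works depends on RAM size and the cache in question.

# Hedged sketch of the cache-defeating read test suggested above; not part of ASU.
import os, time

TEST_FILE  = "readtest.bin"
FLUSH_FILE = "flush.bin"
SIZE  = 4 * 2**30              # 4GB, as suggested above
CHUNK = 8 * 2**20

def write_file(path, size):
    with open(path, "wb") as f:
        written = 0
        while written < size:
            f.write(os.urandom(CHUNK))
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())

write_file(TEST_FILE, SIZE)     # 1) create the test file
write_file(FLUSH_FILE, SIZE)    # 2) another large file, hopefully evicting the first from RAM
start, total = time.time(), 0   # 3) time a sequential read of the original file
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
print(f"{total / (time.time() - start) / 1e6:.1f} MB/s sequential read")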

Anvil
08-05-2011, 08:23 AM
I was simply thinking of listing it *if* a supplementary cache was installed.
(It is detectable in the registry)
I might do some extra level of testing for cache later though.

So besides Fancy-Cache what other popular supplementary caches are used?

One_Hertz
08-05-2011, 08:46 AM
I was simply thinking of listing it *if* a supplementary cache was installed.
(It is detectable in the registry)
I might do some extra level of testing for cache later though.

So besides Fancy-Cache what other popular supplementary caches are used?

There isn't really any difference between controller cache and software cache like Fancycache; controller cache is just slower but it performs the same function. I doubt there is a simple way to detect controller cache, and how much of it is present. Or is there?

zads
08-05-2011, 09:33 AM
Great work Anvil, looks like it has tons of potential!
But it seems like my results are strangely jumping around in performance compared to something like IOMeter..
Could use a few usability tweaks, and it always rewrites a test file when I run the test..
Do you have any more info on the program settings?

Anyway here's a result from single SF-2500:
https://lh4.googleusercontent.com/-0i9VA0QfevY/TjwoY2RAGkI/AAAAAAAAB34/E-MlduwLlNE/s800/UGB88RBB200HEX%252520ATA%252520Device_200GB_TH-W-IO_20110804-1545.png

F@32
08-05-2011, 10:14 AM
Could you check back in this thread for a special build, I'm trying to find out why some are failing on WMI.
Meanwhile could you post the MB, OS, tweaks (if any) and connected drives (all sorts of media)

Here you go. I will get the make and model for HDD and DVDRW later.

ASRock P67 Extreme4 1.90
Core i5 2500K / no OC
G.Skill Ripjaws 1600 4x2Gb / no OC
HD6950 2GB / no OC
Intel Gigabit CT PCIe
M-Audio Delta 2496
Intel x-18M 2x80Gb
Hitachi 2TB SATA2 internal on Marvell
DVDRW SATA2 internal on Marvell
Fantom USB 2TB
Antec Quattro 850W
Antec 1200
Win7 64 bit


So, what are the most known ones?
http://www.superspeed.com/ is another one

Computurd
08-05-2011, 05:25 PM
If it is not an Iodrive/ACARD then over 40mb/s is not possible unless it is a cached result.

9265 can do that @ QD1 R5. actually 54 @ QD1 :)
these results with C300. I bet these 8 x 128 wildfires can beat this. only 512 cache on this card when test was done, and it is direct i/o anyways (FP uses no cache)
LINK (http://thessdreview.com/our-reviews/lsi-9265-8i-6gbps-megaraid-card-raid-5-tested-as-ssd-testing/)
when i mention caching coming into play in the header above that, i am speaking to the seq write speed.

@Anvil- i would like the option to retain a static test file as well. This would be helpful. Even with the *now* well-known uber longevity of these drives, no need to speed along degradation in raid arrays.
-also one last wish 512B @ QD 128 so i can show 465,000 IOPS

Doing some long-run tests this week for an article, but I will put the toys up and do some playing soon, so I will post some results.



results over 92ish MB/s in QD1 4KB reads must be cached since nothing can do more than that.

The architecture of RAID controllers and how they fundamentally operate, and thus low-QD performance, are about to be turned on their heads... the 12Gb/s plugfest is going to bring about a change in RAID controllers that is going to be just unbelievable. :)

I will say this...The Fusions/ I/O Extremes, etc, will soon see the playing field changed very dramatically.

One_Hertz
08-05-2011, 07:53 PM
9265 can do that @ QD1 R5. actually 54 @ QD1 :

That result is partially cached. Iometer with a very large test file will not show those numbers. You can see from the latency that the real QD1 random reads are around 32MB/s for that config of yours. Edit: you even show yourself that the real QD1 RR number is 31MB/s later in the review...

RAID controllers are not able to increase 4K QD1 random read performance above what the SSDs themselves are capable of. It is impossible, since we are just talking about raw latency here, i.e. the time it takes your devices to respond to a small block read command. The absolute best a controller can do is not add any overhead on top of that. It cannot make any device respond faster than it is capable of, no matter the kinds of voodoo magic you believe in :)
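
To put illustrative numbers on this (simple arithmetic, not measurements from this thread): at QD1 the throughput is just the block size divided by the per-request latency, so the ~92 MB/s ceiling mentioned earlier corresponds to a device response time of roughly 43-45 microseconds per 4KB read.

# Illustrative arithmetic only: QD1 throughput is bounded by per-request latency.
block_bytes = 4 * 1024      # 4KB random read
latency_s   = 45e-6         # assumed ~45 microsecond device response time
iops = 1 / latency_s
print(f"{iops:,.0f} IOPS  ->  {iops * block_bytes / 1e6:.1f} MB/s at QD1")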

johnw
08-05-2011, 09:20 PM
RAID controllers are not able to increase 4k QD1 random read performance above what the SSDs themselves are capable of because it is impossible since we are just talking about raw latency here; i.e. the time it takes your devices to respond to a small block read command. The absolute best a controller can do is not add any overhead on top of that. It can not make any device respond faster than they are capable of no matter the kinds of voodoo magic you believe in :)

Nicely explained. It is surprising how often people forget that simple fact.

Computurd
08-06-2011, 05:11 AM
It can not make any device respond faster than they are capable of no matter the kinds of voodoo magic you believe in

even if i kill chickens?:p:

TV Addict#2
08-06-2011, 08:37 AM
use a heavier beater will get more better results maybe

zads
08-07-2011, 05:11 PM
https://lh5.googleusercontent.com/-7bxJM8rlV10/Tj83IOP5DXI/AAAAAAAAB4k/1akfnSfY5aE/s800/UGB88RBB200HEX_200GB_8GB-20110805-1644.jpg

Single SF-2500 drive

squick3n
08-07-2011, 09:04 PM
3x m4 128GB R0 settled state

I ran the test twice. Both times I got a "Not Responding" error right at the beginning, but the test started back up after about 10 seconds. Not sure what numbers I "should" be getting, so I don't know the effect of that brief pause.

Great tool otherwise. Amazing work Anvil.

http://www.abload.de/img/anvil-2f3sy.jpg

bot@xs
08-08-2011, 01:00 AM
here is mine

118744

bot@xs
08-08-2011, 09:06 AM
updating to the latest intel driver has a bit of a wow factor

thanks anvil for making this great tool

118751

bigretard21
08-09-2011, 08:18 AM
Just tried out the beta and had no issues running the benchmark with default settings. Nice benching utility Anvil.

C300 128GB running on Dell E6420 laptop:
118769

Computurd
08-09-2011, 05:21 PM
nice result you big retard!

sorry...had to say it once :)

bluestang
08-09-2011, 06:18 PM
^^ :rofl:

bigretard21
08-10-2011, 09:28 AM
nice result you big retard!

sorry...had to say it once :)

I'm a tard you're a turd. We're practically related.:)

Anvil
08-10-2011, 10:53 AM
@Anvil- i would like the option to retain a static test file as well. This would be helpful. Even with the *now* well-known uber longevity of these drives, no need to speed along degradation in raid arrays.
-also one last wish 512B @ QD 128 so i can show 465,000 IOPS

doing some long-run tests this week for a article, but i will put the toys up and do some playing soon so i will post some results.



I've been too busy lately, I've already made the necessary adjustments to retain the test files and will send you that build later tonight.
Will need some feedback on that though as SF based drives are behaving "differently" from other drives, especially in this regard. (static test files)

Computurd
08-10-2011, 04:07 PM
Anvil - you're awesome man, I really appreciate your time, you are the guru of the gurus :worship:

vxr
08-11-2011, 03:33 PM
how can I use the iometer???:confused:
118842

sanctified
08-11-2011, 05:35 PM
Great utility! Nice work.

Anvil
08-12-2011, 01:18 AM
how can I use the iometer???:confused:
118842

The IOmeter menu option is not what you think; it was initially an option for importing IOmeter result files :)
Having imported the result files, one can export to Excel / compare results based on a lot of options.

The IOmeter-like benchmarks in my app are found in the Benchmarks menu (Threaded IO)

Nizzen
08-12-2011, 01:24 PM
16gb testfile ;)

118889

One_Hertz
08-12-2011, 04:04 PM
There you go, those are mostly real results. Except the 32k and 128k randoms show a lot higher than they really are for some reason. (real results for your array should be less than 228MB/s for 32k and less than 920MB/s for 128k.)

Anvil - any idea why? QD1 RR 32k and 128k read iops must be less than QD1 RR 4k iops by definition but his result is showing much more? Is it a bug in the software?

Vapor
08-16-2011, 07:07 AM
Bit of a twist for this run....

Parallels VM of Win7 SP1 on top of OS X 10.7 with the VHD on an Intel 80GB G1, default settings across the board. Definitely looks like there's some caching going on :lol:

119028

*Might* be some NTFS-like compression too, have had a hard time turning off that setting :mad:

EDIT: yeah, definitely some compression going on (especially on writes), re-ran with incompressible:

119029

Computurd
08-16-2011, 10:17 PM
8x 128 gb wildfire R0 9265-8i with FastPath. no cache :)
http://i517.photobucket.com/albums/u332/paullie1/10109.png

SteveRo
08-17-2011, 01:07 AM
^^ wow, very nice Paul!

Nizzen
08-17-2011, 09:14 AM
Nice Computurd, but try even larger like 16GB test :)

I have to try with more than 4x corsair gt. 120gb. This benchmarks is THE ONE :D

NapalmV5
08-17-2011, 07:19 PM
lol @ no cache/cached statements.. mesmerizing

if you havent noticed every result posted in the thread is cached regardless of controller

no cache?? then physically remove the ssd cache/raid controller cache/ram/cpu cache/all system cache/all os cache/every bit of cache

so what you left with then ?? may as well throw your system out the window

johnw
08-18-2011, 08:06 PM
Anvil:

Any chance of adding a test similar to the SNIA conditioning test? I'm thinking of something like:

1) User chooses random or sequential, IO block size, read/write mix, test file size (span), and test duration

2) Program runs with selected parameters while measuring IOPS, for the duration specified

3) Program produces a graph of IOPS vs. time

After that functionality is working, it would be cool to be able to define macros to repeat the test with different parameters (or cycle through all possible parameters)
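
As a rough illustration of what such a conditioning run could measure (a sketch of just the random-write case under assumed parameters, not a proposal for ASU's internals): issue random writes of a fixed block size against a preallocated span for a set duration and record IOPS per interval, which can then be graphed as IOPS vs. time.

# Rough sketch of an IOPS-vs-time conditioning run as described above; not ASU code.
import os, random, time

path       = "condition.bin"
span_bytes = 8 * 2**30       # test file size ("span"), assumed value
block      = 4 * 1024        # IO block size
duration_s = 600             # test duration
interval_s = 1               # IOPS sampling interval

with open(path, "wb") as f:  # preallocate the span once
    f.truncate(span_bytes)

samples = []
with open(path, "r+b") as f:
    t_end = time.time() + duration_s
    while time.time() < t_end:
        t0, ios = time.time(), 0
        while time.time() - t0 < interval_s:
            f.seek(random.randrange(0, span_bytes - block))
            f.write(os.urandom(block))
            ios += 1
        samples.append(ios)
        print(f"{len(samples):4d}s  {ios} IOPS")
# "samples" now holds one IOPS value per interval and can be graphed (IOPS vs. time).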

Computurd
08-19-2011, 04:01 PM
I concur. We definitely need a set of tests that could emulate what SNIA is doing, in a manner that can be easily replicated by everyone. Would be great; I know you are extremely busy though!

Anvil
08-21-2011, 07:40 AM
I'll have a look at the SNIA document.

It is interesting and I have been playing with specific parameters that look to rapidly degrade performance.
(specific block sizes, on both sequential and random I/O, look to do the trick)

SteveRo
08-23-2011, 10:21 AM
Security just let us back into the bldg, i spent about 30 seconds under my desk before we all ran for the doors. 5.8 earthquake in central VA. My tummy is a little upset. :(

Anvil
08-23-2011, 11:13 AM
Hope you are all OK!

Computurd
08-23-2011, 02:16 PM
holy crap Steve-o! i hope everything is fine and, more importantly, that your house and loved ones are fine as well!

trans am
08-23-2011, 04:50 PM
Holy crap! omg Paul!
@ Anvil I saw a thread about HWbot hdd category and I was wondering how you would go about making a version dedicated to HWbot that could be the end all be all hdd bench? What needs to happen? Why is this such a touchy subject nobody wants to step in?
I think Anvil has the best one so far. Why not elaborate on this? how the hell can this make it into hwbot without cheat accusations and BS? There must be some way to get this on the Bot.

@ steve I was on the 17th floor in my nyc office and I started feeling this dizzy feeling like I was tripping or this vertigo feeling. when the painting started wobbling I thought to myself "Ny is on a fault line and I think this was my 1st earthquake experience" sure enough. Man you were at the source!!! It was crazy!

just got 2 m4 64gb yesterday and added my c300 (3 drive 128k partition R0 lsi 9260-4i fpKey)
I never just benched the 2 m4 as a matched set so maybe I should. Or just get 4 m4. I just went to the 3 drive array so maybe the c300 is slowing me down. Or its the lsi on the x4 slot that is taxing me. anyway. I tried going with pauls 4gb method to eliminate the cache. this is write through / direct io and no read ahead. going write back and read ahead gave worse results. 128k stripe.

SteveRo
08-24-2011, 01:02 AM
All is well on the home front, no breakage or damage to the house but lots of stuff moved around on tables and stuff that fell to the carpet. Very unsettling experience, I hope we don't have anymore :(

@ transam - yes, I was in on the 6th floor in our bldg - I hope we are done with earthquakes!

edit - home is 59 miles due north of the epicenter, and the epicenter was very near the Lake Anna Nuclear Power Plant :(

Anvil
08-24-2011, 06:47 AM
@ Anvil I saw a thread about HWbot hdd category and I was wondering how you would go about making a version dedicated to HWbot that could be the end all be all hdd bench? What needs to happen? Why is this such a touchy subject nobody wants to step in?
I think Anvil has the best one so far. Why not elaborate on this? how the hell can this make it into hwbot without cheat accusations and BS? There must be some way to get this on the Bot.


I haven't had the time to contact hwbot but it is of course interesting.
Cheating is detectable!

squick3n
08-24-2011, 10:47 AM
3x m4 128GB R0 settled state

I ran the test twice. Both times I got a "Not Responding" error right at the beginning, but the test started back up after about 10 seconds. Not sure what numbers I "should" be getting, so I don't know the effect of that brief pause.

Great tool otherwise. Amazing work Anvil.

http://www.abload.de/img/anvil-2f3sy.jpg

http://www.abload.de/img/anvil0fill1gb5p3j.jpg

Running Vertex 3 Max IOPS x2 on an Intel system. Much better. Wonder what was different.

Boogerlad
08-24-2011, 12:51 PM
There you go, those are mostly real results. Except the 32k and 128k randoms show a lot higher than they really are for some reason. (real results for your array should be less than 228MB/s for 32k and less than 920MB/s for 128k.)

Anvil - any idea why? QD1 RR 32k and 128k read iops must be less than QD1 RR 4k iops by definition but his result is showing much more? Is it a bug in the software?

it is not a bug, you're just misinterpreting the results. mb/s is iops*transfer size.

One_Hertz
08-24-2011, 02:26 PM
it is not a bug, you're just misinterpreting the results. mb/s is iops*transfer size.

Nope didn't misinterpret anything.

Anvil
08-24-2011, 02:47 PM
I'll do a few tests using my Areca, didn't notice that comment until now :)

I can't remember seeing other setups resulting in anything like it, a bit weird.

Pauls review of the Wildfire shows no such effect on the 9265 using lots of drives. Link (http://thessdreview.com/raid-enterprise/patriot-memory-wildfire-120-gb-ssd-review-anvil-storage-solutions-and-atto/)

Boogerlad
08-24-2011, 05:53 PM
Nope didn't misinterpret anything.

29,161.85 IOPS x 32KB / 1024 = 911.31 MB/s
12,356.5 IOPS x 128KB / 1024 = 1,544.56 MB/s
7,286.99 IOPS x 4KB / 1024 = 28.46 MB/s
Seems right on.

lutjens
08-24-2011, 06:55 PM
Did a run on each array with no changes to default settings other than test file size increased to 16GB.

lutjens
08-24-2011, 07:20 PM
1 GB test file, changed array to No Read Ahead...

johnw
08-24-2011, 07:42 PM
29161.85iopsx32kb/1024=911.31mb/sec
12356.5iopsx128kb/1024=1544.56mb/sec
7286.99iopsx4/1024=28.46mb/sec
seems right on.

You are not paying attention to what One_Hertz wrote. The problem is that the IOPS do not make sense. It makes no sense to have IOPS be higher for 32KB or 128KB blocks as compared to 4KB blocks (QD=1 for all cases).

trans am
08-25-2011, 01:20 PM
I haven't had the time to contact hwbot but it is of course interesting.
Cheating is detectable!

I think it's about time. There are some great minds in here and I think you have a solid program that's HWBot worthy.

@ uncle steve. I'm glad you are okay.
Now we have a hurricane to deal with.. :(

NapalmV5
08-29-2011, 03:04 PM
http://img594.imageshack.us/img594/5756/unledwip.png

Nizzen
08-29-2011, 11:34 PM
Why not try a 16 gb testfile? ;)

NapalmV5
08-30-2011, 12:25 AM
1gb or 16gb, it's still "cached" ;)

trans am
08-30-2011, 02:55 PM
4gb, 8gb, 16gb
how did I do?

Anvil
08-30-2011, 03:12 PM
The 4GB test looks fine :up:

The other two tests (8GB/16GB) show that there is something weird going on with 32K and 128K random reads in some configurations.
(I'll have a look at it asap)

One_Hertz
08-30-2011, 03:46 PM
^^^

Just like Nizzen. Hopefully you find the culprit!

trans am
08-30-2011, 04:07 PM
I knew the 8gb and 16gb tests were too good to be true. I thought I was a genius for a second. I didnt screw with any settings in Anvils bench utility except select the ssd test and changed the test size. Other than drive cache there is no write caching enabled or any read ahead or any write back. here is a msm screenshot of my settings. This is 2x 64gb m4 09 FW paired with a c300 64gb in (3 drive)raid0 on 9260-4i with a fastpath key. no cachecade. Let me know if you want me to try some other test parameters to help diagnose the problem. In the meantime I want to score another pair of 64gb m4's so I have 4 m4's on the lsi. They are going for $100 bux ea. now. Hey does anyone have 4x 64gb m4 09 FW in raid0 on the lsi or on z68 I can compare to?

Anvil
08-31-2011, 02:16 AM
^^^

Just like Nizzen. Hopefully you find the culprit!

I'll surely find out; it looks like it shows its face with large test files + caching controllers.


I knew the 8gb and 16gb tests were too good to be true. I thought I was a genius for a second. I didnt screw with any settings in Anvils bench utility except select the ssd test and changed the test size. Other than drive cache there is no write caching enabled or any read ahead or any write back. here is a msm screenshot of my settings. This is 2x 64gb m4 09 FW paired with a c300 64gb in (3 drive)raid0 on 9260-4i with a fastpath key. no cachecade. Let me know if you want me to try some other test parameters to help diagnose the problem. In the meantime I want to score another pair of 64gb m4's so I have 4 m4's on the lsi. They are going for $100 bux ea. now. Hey does anyone have 4x 64gb m4 09 FW in raid0 on the lsi or on z68 I can compare to?

Thanks for the detailed report, will surely make it easier to reproduce :up:

I might be able to make a comparison using 4x m4's. (sometime over the weekend)

trans am
08-31-2011, 06:24 PM
I'll surely find out; it looks like it shows its face with large test files + caching controllers.



Thanks for the detailed report, will surely make it easier to reproduce :up:

I might be able to make a comparison using 4x m4's. (sometime over the weekend)

sweet! thanks Anvil. I want to see those 4x m4 results on your 9260. You have 4x 64gb m4 or 4x 128? Should I get those 2 m4 64gb? They are 99.00 ea new on ebay free shipping.

Anvil
08-31-2011, 11:50 PM
At $99 ea I'd personally get them :), they are great value.

I'll most likely have both 64GB and 128GB available early/mid next week, and both my 9260 and the 9265 are available, so it could be interesting, especially with the new fw.

bluestang
09-01-2011, 04:22 AM
64GB M4 is $89.99 w/FS right now at the Egg.

trans am
09-01-2011, 06:43 AM
64GB M4 is $89.99 w/FS right now at the Egg.

sweet! I just saw it too. I just bought 2 more so I have 4 matched now.

vxr
09-01-2011, 03:20 PM
sweet! I just saw it too. I just bought 2 more so I have 4 matched now.
119602

119603

What do you want to know????

squick3n
09-01-2011, 06:05 PM
http://i.imgur.com/MV6mF.jpg

M4256GB on 009

trans am
09-07-2011, 01:09 PM
hey guys the 2 64gb m4 came from newegg today. the 89.00 deal was amazing!
I got them all matched now on the lsi card and I am stoked!

SteveRo
09-08-2011, 03:44 AM
^^ wow! great numbers!

vivithemage
09-08-2011, 08:48 AM
Love this software. If you want, I can host the new versions for you for free...just PM me.

vivithemage
09-08-2011, 08:50 AM
sick, how do you like cachecade 2.0? Is that a server environment, or personal?

Computurd
09-08-2011, 11:58 AM
dude M4 is the SSD to beat. i can't tell you how much i love them. i will show you in pictures in a few days LOL.
Don't fret TA, there are some inconsistencies with your results, but that doesn't mean inconsistency with your performance. that is a helluva array you have going. Dude, even if they were double the price they are now i would recommend them over several of the other current gen ssds.
I set out to see how the current gen of SSDs shook out, and the M4 is just IT.

vivithemage
09-08-2011, 12:45 PM
Yeah, I love my M4 in my x220 too...so fast.

felix_w
09-08-2011, 02:19 PM
Same controller config as trans am; should I notice any difference between 64GB and 128GB M4's? I am now on 4x Vertex 30GB Turbos.

INFRNL
09-08-2011, 02:54 PM
Same controller config as trans am; should I notice any difference between 64GB and 128GB M4's? I am now on 4x Vertex 30GB Turbos.

In benchmarks, yes, but in the real world you probably will not notice any difference, depending on actual use.

RealTelstar
09-09-2011, 05:51 AM
Loving my new M4... need to build the new pc, it's crippled in sata2 mode :)

Nizzen
09-10-2011, 12:27 PM
Daily spam :p

2xVertex3 MI R0 64k stripe @ PCH sata6G, with win7
Just for comparing.
119905

TheLostSwede
09-11-2011, 02:14 AM
Would it be possible to add an option that just copies the data as plain text? Much in the way that you can in Crystal Diskmark, HD Tune Pro or AS SSD.

Cheers

Anvil
09-11-2011, 09:44 AM
^
There is a very simplistic copy text feature but it copies only the points. (beta7)

I'll have a look at copying the other info as well.

bluestang
09-20-2011, 12:15 PM
Beta7 link is corrupted, won't unzip. Please check and fix.
Thanks!

Anvil
09-20-2011, 01:01 PM
I'll check the file, it used to work :), however it is obsolete as there is a new beta with some minor changes to the Endurance test.

Beta 8
------
- 250ms delay for every 1000 files deleted.
- 10 second pause between loops. (used to be 5 seconds)
- files per loop is changed from 9999 to 99999

Link to download (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=458977)

bluestang
09-21-2011, 04:46 AM
Still can't unpack. 7zip still says unsupported compression method for AnvilPro.exe

Edit: All sorted out now, Anvil is great. :up:

Anvil
09-21-2011, 07:14 AM
Strange, it works for me using both 7Zip and W7's extract (shell) method. (tried seconds ago)

Will get it sorted out asap. (I'll send you a link to a preliminary site)

lowfat
10-04-2011, 05:25 PM
It seems the beta 5 has expired. Where can I download a newer beta?

EDIT: Missed the second download link in first post.

EDIT 2: Beta 7 is expired too it seems.

Christopher
10-04-2011, 07:05 PM
You need beta 8 now, the link for it is somewhere in the endurance thread:

http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm/page54

between pages 54 and 80...

sorry. I can't seem to find it again.

lowfat
10-04-2011, 07:39 PM
Thanks. Found it.


Kingston 40GB
330.38TB Host writes
Reallocated sectors : 7

(it's been offline most of the day, still doing some tests on the new Endurance test-rig)

--

Note

There is a new beta that can be downloaded. Link (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=458977) (or from the ASU thread in this forum)
(it expires late November)

Christopher
10-04-2011, 07:50 PM
Nice. I was going crazy looking for it and couldn't seem to find it.

lowfat
10-15-2011, 02:53 PM
The MLC Iodrive is identical to the SLC one in terms of writes. It is the reads and mixed read+write workloads where there are large differences.

Don't suppose you could post results for the MLC ioDrive? I am not sure if I should buy another ioXtreme and RAID them. Or to continue trying to sell off my ioXtreme and buy a 320GB MLC ioDrive.

Anvil
10-17-2011, 01:58 PM
A small teaser for the next beta.
Next beta will list the OROM version for the Intel RAID Controller.

121366

One_Hertz
10-17-2011, 05:09 PM
Don't suppose you could post results for the MLC ioDrive? I am not sure if I should buy another ioXtreme and RAID them. Or to continue trying to sell off my ioXtreme and buy a 320GB MLC ioDrive.

Sorry I didn't see this. Here you go:
121380

Mr. Anvil - any progress on non-testfile read tests?

Anvil
10-18-2011, 01:31 PM
@One_Hertz
It's close to the top of my list, I'm currently working on full-span endurance.

New beta
---------

Expires 27th of January 2012

-Lists OROM version of Intel RAID controller

Link to download (http://www.diskusjon.no/index.php?app=core&module=attach&section=attach&attach_id=464000)

DooRules
10-19-2011, 01:04 AM
Intel® Rapid Storage Technology 10.6.2.1001

http://www.station-drivers.com/page/intel%20raid.htm

Anvil
10-19-2011, 01:05 AM
Thanks for the Link :)

Downloading right now...

felix_w
10-19-2011, 01:49 AM
Does anyone have any list of bugs/fixes for that ?

Thanx

Nvrmind, found it (http://www.station-drivers.com/telechargement/intel/sata/ReleaseNotes_10.6.htm)

Sirakuz
10-21-2011, 08:52 AM
Hi Anvil
Is it possible to fix it?

121499
121498

Anvil
10-21-2011, 08:57 AM
I've seen the issue, most likely you've got unpartitioned volumes/drives.

I'll try to reproduce the issue!

bluestang
10-25-2011, 03:29 PM
Hey Anvil,
Why am I the only one who ever has trouble extracting your .zip files?
They always say Anvil.pro is corrupt.
Can you please check it out and/or PM me with Beta9...Thanks!

Anvil
10-26-2011, 07:45 AM
Not sure why you're having issues, I'll arrange something tonight.
(I checked just seconds ago and both 7zip and Explorer worked just fine)

bluestang
10-26-2011, 08:19 AM
Ok, thanks. Wonder what gives as this happens on both of my systems.

Anvil
10-27-2011, 03:05 PM
This should work while I decide on what domain name to choose.

Download link for Beta9 (http://www.ssdaddict.com/apps/AnvilBenchmark_Beta9.zip)

bluestang
10-27-2011, 06:01 PM
Thanks! And can't wait to see your site :clap:

Kallenator
11-01-2011, 05:31 AM
Hi Anvil! Very useful program you have written here! ;)

I was wondering, is there any possibility for this program to run an endurance test on removable devices?
(The reason I ask is that the CF card I am going to test, even though it's connected via IDE, won't set itself to Fixed disk, and it does not seem to support doing so either.)

Thanks! ^^

Anvil
11-01-2011, 04:26 PM
I have disabled removable devices, but I'll have a look at it.
I might need some details on how the device identifies itself in "WMI".

The workload is probably not ideal for such a device; it might be interesting to test CF or other typical memory cards for DSLRs.

Kallenator
11-02-2011, 01:57 AM
Great!

Whenever you have time to look at it, just let me know and I will access the WMI ;)

95blackz26
12-12-2011, 12:48 PM
thank you for the App and thank you for taking the time to develop it.

love the level of detail it provides.

Anvil
12-12-2011, 03:55 PM
Thanks!

There is more to be included, just need some time to get them finished.

95blackz26
12-13-2011, 11:52 AM
Just wondering what difference choosing different compression levels makes for the benchmark.

What would be the optimal test size to run? I have been using the 1GB test size.

Anvil
12-13-2011, 03:32 PM
It depends on the controller on the SSD.

If you've got a "normal" SSD controller w/o compression then you can run any compression ratio you like.
There might be a very small performance penalty as generating random data can have a tiny impact on the score but in general the scores should be very close if not identical.

If you are running an SSD that utilizes compression (SandForce based) then you should go by 46% or more for the most realistic/true-to-life scores.
(it really depends a lot on your data)

Arctucas
12-17-2011, 06:48 PM
4x ADATA S599 RAID0

http://i291.photobucket.com/albums/ll305/Arctucas/Volume0_240GB_1GB-20111217-2146.png


Excellent application.

I only wonder why it is showing my motherboard as Socket 423, when it is Socket 1366 (x58)?

95blackz26
12-18-2011, 06:03 AM
It depends on the controller on the SSD.

If you've got a "normal" SSD controller w/o compression then you can run any compression ratio you like.
There might be a very small performance penalty as generating random data can have a tiny impact on the score but in general the scores should be very close if not identical.

If you are running an SSD that utilizes compression (SandForce based) then you should go by 46% or more for the most realistic/true-to-life scores.
(it really depends a lot on your data)

Ok i tried one of the tests at 46% compression.

both of the SSD's i got are Mushkin drives. one has a SF-2281 controller and the other has SF-1200 controller.

Anvil
12-18-2011, 12:27 PM
4x ADATA S599 RAID0

I only wonder why it is showing my motherboard as Socket 423, when it is Socket 1366 (x58)?

The information displayed is retrieved using WMI (Windows Management Instrumentation), which means it is queried from the hardware, and the hardware is obviously not behaving for some reason. Could it simply be that the MB is reporting the wrong info?

Did you upgrade the installation from a socket 423 MB or is this a clean install?

Arctucas
12-23-2011, 09:38 AM
The information displayed is retrieved using WMI (Windows Management Instrumentation), which means it is queried from the hardware, and the hardware is obviously not behaving for some reason. Could it simply be that the MB is reporting the wrong info?

Did you upgrade the installation from a socket 423 MB or is this a clean install?

New screenshot, fresh install done yesterday, after secure erase was run on each SSD:

http://i291.photobucket.com/albums/ll305/Arctucas/Volume0_240GB_1GB-20111223-1236.png

I am not sure about the hardware not behaving; screenshot from AIDA64:

http://i291.photobucket.com/albums/ll305/Arctucas/AIDA64moboinfo.jpg

Anvil
01-26-2012, 09:48 AM
Beta11 2012-January-12
- Expires May 2012
- Next beta will include more on full-span testing, expected within a few weeks

Download link for Beta11 (http://www.ssdaddict.com/apps/AnvilBenchmark_Beta11.zip)

minpayne
01-26-2012, 10:05 AM
Beta11 2012-January-12
- Expires May 2012
- Next beta will include more on full-span testing, expected within a few weeks

Download link for Beta11 (http://www.ssdaddict.com/apps/AnvilBenchmark_Beta11.zip)

Still only 10 minutes of enterprise endurance testing :shakes: Can't wait to be able to stress my SSD overnight! :D

Anvil
01-26-2012, 04:16 PM
I'm working on it, it just didn't make it and I needed to release a beta as the old one expires this month.

You should be able to go more than 10 minutes though, a few hours should be possible, I'll check tomorrow.

Christopher
01-29-2012, 08:32 PM
Every once in a while, during a benching run, I get a seemingly random error.

Today I got I/O Error 145 in a dialog box which disappeared shortly after. The toolbar at the top migrated to the right side of the ASU window, and after a few seconds the bench continued.

Anvil
01-30-2012, 03:16 AM
I've had that report earlier, will check...

Christopher
01-30-2012, 07:34 AM
https://www.box.com/shared/static/6tnr7fttjcece16cltma.png

I've had it happen several times, not sure why. The error dialog disappeared before I could get the capture.

Arctucas
01-30-2012, 09:57 AM
@Anvil,

Thank you for continuing development of this application.

I am still getting the wrong socket listed, however.

http://i291.photobucket.com/albums/ll305/Arctucas/Anvilbench1-29-122.jpg

It is not a big deal to me, but I thought you might like to know.

Anvil
01-30-2012, 01:52 PM
The socket etc is retrieved using Windows Management Instrumentation so there isn't much I can do about it.

I could disable displaying the socket as it's the one value that has been wrong a few times, not many though.

Arctucas
01-30-2012, 09:37 PM
The socket etc is retrieved using Windows Management Instrumentation so there isn't much I can do about it.

I could disable displaying the socket as it's the one value that has been wrong a few times, not many though.

I believe it must be some specific incompatibility unique to my system that the benchmark is having trouble with, since every other application I have used that identifies the motherboard does so correctly.

Anyway, it is not something to worry about. You still have the best SSD benchmark, in my opinion.

Thanks again.

B Gates
03-27-2012, 08:53 PM
Anvil, when are you going to finalize the software and sell it? I will be the first in line to buy it. What do you think of this result?
http://i876.photobucket.com/albums/ab323/obamaliar/bench5-7.jpg
It's 2 SanDisk Extreme 240GB drives in RAID 0.

B Gates
03-27-2012, 08:59 PM
here is my best with a single drive
http://i876.photobucket.com/albums/ab323/obamaliar/sandisk%20on%20asus%20board/bench5.png

johnw
03-28-2012, 07:05 AM
It would be more realistic for the default of ASU to use 100% incompressible data.

Christopher
03-28-2012, 10:20 AM
I think 0 fill should be illegal. You should get carted off to the gaol for using 0-fill.

I don't even run non-SF drives with 0 fill in the endurance test.

tived
04-03-2012, 11:15 PM
125023 125024 125025

Hi guys,

Hopefully I have done this correctly; there should be 3 images attached. First off, a big thank you to Anvil for this great tool :clap:, which I now have to learn how to use... back in the queue, mate!!! :cool:

Now, if someone could write an optimisation script so that you could get all your drives up to speed, that would be brilliant :up:

Can someone help me interpret these images, e.g. what the numbers represent, and tell me how I can improve on them, please?

thanks

Henrik

splmann
04-03-2012, 11:21 PM
Here's a result from my new workstation:

http://www.bilder-upload.eu/thumb/76ebf5-1333524042.jpg (http://www.bilder-upload.eu/show.php?file=76ebf5-1333524042.jpg)

tived
04-03-2012, 11:32 PM
nice cache!! :-)

Anvil
04-04-2012, 08:59 AM
It would be more realistic for the default of ASU to use 100% incompressible data.

0 Fill won't be default for the next release.

100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)

I'll probably make an option where one can disable the continuous re-generating of random data. (it will be less CPU intensive and won't matter for drives that don't do compression)
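
To illustrate the CPU-cost side of that option, here is a small Python sketch comparing regenerating a random buffer for every write against reusing one buffer (illustrative only, not ASU's internals):

import os
import time

SIZE = 4 * 1024      # 4 KiB per write
COUNT = 50_000

# Option A: regenerate random data for every write (costs host CPU).
t0 = time.perf_counter()
for _ in range(COUNT):
    buf = os.urandom(SIZE)
regen = time.perf_counter() - t0

# Option B: generate once and reuse (fine for drives that don't compress or de-dupe).
static = os.urandom(SIZE)
t0 = time.perf_counter()
for _ in range(COUNT):
    buf = static
reuse = time.perf_counter() - t0

print(f"regenerate every write: {regen:.2f}s  reuse one buffer: {reuse:.2f}s")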

Anvil
04-04-2012, 09:05 AM
@tived

I'll have a look at your tests, at first glance they do look normal based on your info.

johnw
04-04-2012, 09:26 AM
100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)


That is incorrect. Most users' day-to-day data will look close to 100% incompressible to Sandforce SSDs.

OS and program installs have data that can be significantly compressed by Sandforce, but most people only install those once, so it is not a good indication of day-to-day saved data, especially with the bigger SSDs.

Christopher
04-04-2012, 09:33 AM
I just think that when benchmarking, it's ridiculous to just show zero fill. It's a matter of principle more than anything.

That's why I like ASU -- I can just bench an SF with every compressibility level and then weight the results as I please. 47% to 67% is a far more realistic average than 0/100%, but 67% on SF is pretty much incompressible, I think.

I'd have to double-check, but I did break out an SF2281 the other day to upgrade some FW and do a SE. I was pleased with its incompressible performance.
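
For what it's worth, a tiny Python sketch of that kind of weighting (the scores and weights below are made up, purely to show the idea):

# Scores per compressibility level and the weights I'd give them
# (all numbers invented, only to show the arithmetic).
scores  = {"0-fill": 4900, "46%": 4300, "67%": 4150, "100%": 4050}
weights = {"0-fill": 0.05, "46%": 0.45, "67%": 0.35, "100%": 0.15}

weighted = sum(scores[k] * weights[k] for k in scores)
print(f"weighted score: {weighted:.0f}")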

Anvil
04-04-2012, 09:47 AM
johnw,

It's not black and white.

E.g. loading applications will result in reading compressible files; how much writes are affected depends on what type of files one is working with.
Databases take "compression" to the extreme, as most are highly compressible; I might end up endurance testing the Force 3 I've still got using the database setting, as it would make sense for my kind of usage.

I've still got the Vertex 3's running my VM's and one of these days I'll check how they have developed.
From what I've seen WA is well below 1.0. (based on the SMART data, how that translates to real WA is of course not known)
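
As a rough illustration of how that SMART-based estimate works (assuming, as is commonly done for SandForce drives, that one attribute reports lifetime host writes and another lifetime NAND writes, both in GiB; the figures below are invented):

# Apparent write amplification from SMART-style counters.
# Assumption: the counters give lifetime host writes and lifetime NAND writes
# in GiB; the numbers here are made up for illustration.
host_writes_gib = 10_500   # e.g. "lifetime writes from host"
nand_writes_gib = 8_900    # e.g. "lifetime writes to NAND"

wa = nand_writes_gib / host_writes_gib
print(f"apparent WA: {wa:.2f}")   # below 1.0 when compression is doing its job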

Christopher
04-04-2012, 10:53 AM
Mushkin Chronos Deluxe 120GB 5.02ABB0 Reference FW


Zero Fill
125040

46%
125039

Incompressible
125038


AS-SSD
125041

-----------------------------------
I think 46% should be the ASU default. Anything other than 0-fill, though. 67% is almost incompressible to SF -- surely whatever compression it can effect is offset by overhead from not-so-compressible data. While my personal experience is that many of the daily writes are compressible to a degree, it's generally not enough to offset the larger incompressible writes. My workload generates an average WA of ~1.2, but my workload is hardly universal.

Anvil
04-04-2012, 11:06 AM
How was the 67% on the Mushkin?

I know from earlier tests (also confirmed by Vapor's tests) that there is not much difference from 46% to 100% on 2281-based drives; earlier SF controllers will suffer more, and async NAND will still make a difference for all SF-based drives. (on current controllers)

johnw
04-04-2012, 11:12 AM
Databases take "compression" to the extreme, as most are highly compressible; I might end up endurance testing the Force 3 I've still got using the database setting, as it would make sense for my kind of usage.


Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.

For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.

Christopher
04-04-2012, 11:16 AM
I didn't include the 67%, but it's just about the same as the incompressible. I swear, I don't remember this drive ever pulling more than 70MB/s QD1 4K RW in CDM, but here it is hitting 111MB/s in AS-SSD. Not too shabby.

Without a radical redesign of the SF/SF FW, I really think the next gen SF should go back to 28% OP and should probably remove RAISE. Just having a proper OP seems to really even out the sequential writes. Newer SFs have a funky waveform-like write pattern, a problem I don't think my Vertex LE 100 has.

---------------
Here is the 67%
125042

Hmm. The last time I checked (last year) this drive was just about even at 100% and 67%, but I see now it looks closer to 47% than 100%. That could be the 5.xx series FW at work. The writes are a good bit higher than they were on 3.xx FW.

Anvil
04-04-2012, 12:14 PM
Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.

For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.

The default won't be Database, I've just not decided.

100% would be worst case for the SF-based drives, and I'm not sure that it's fair vs other non-compressing controllers; there is a portion of compressible data in any workload, and real-life tests show that SF drives are generally as fast as, and sometimes faster than, most drives. (up till now, that is)

So it will be in the range of 46-100%.

Where does SNIA say that random data means incompressible data?

johnw
04-04-2012, 12:16 PM
Where does SNIA say that random data means incompressible data?

Random data is incompressible. SNIA does not need to say it. It is a basic fact of information theory.
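
A quick way to see this for yourself (a small Python check with zlib; nothing to do with any particular benchmark):

import os
import zlib

random_block = os.urandom(1 << 20)  # 1 MiB of random bytes
zero_block = bytes(1 << 20)         # 1 MiB of zeros

print(len(zlib.compress(random_block)) / len(random_block))  # ~1.0, does not shrink
print(len(zlib.compress(zero_block)) / len(zero_block))      # ~0.001, shrinks to almost nothing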

tived
04-04-2012, 03:53 PM
Thanks, Anvil

Henrik

B Gates
04-05-2012, 05:41 PM
125023 125024 125025

Hi guys,

Hopefully I have done this correctly; there should be 3 images attached. First off, a big thank you to Anvil for this great tool :clap:, which I now have to learn how to use... back in the queue, mate!!! :cool:

Now, if someone could write an optimisation script so that you could get all your drives up to speed, that would be brilliant :up:

Can someone help me interpret these images, e.g. what the numbers represent, and tell me how I can improve on them, please?

thanks

Henrik, you don't have write caching turned on; that's why your score is so low. You should be in the 9000-point range.

tived
04-05-2012, 06:49 PM
Hmm, I am looking in MSM, but for some reason I can't find where to get into the properties of the controller and turn "write caching" on.

Henrik

tived
04-05-2012, 06:53 PM
Also, I am a bit disappointed with the performance of my boot disk with 4x Intel 520 in RAID-0, only giving me 590s; would that be because it's on the SATA-II controller?? This one has write cache turned on in Windows; on my two arrays on the M1015, I can't enable this.

Henrik

Christopher
04-06-2012, 07:32 PM
My personal belief is that the default should be either 46 or 67.

mobilenvidia
04-06-2012, 08:18 PM
I think some folk are getting write cache confused.

Disk cache policy on/off is for the disk drive's cache (SSD or HDD); I found it's best to turn this off in RAID0 arrays.
You can find this in MSM under 'Logical', then the array you want to change the cache policy on (right-click on it).
A reboot is needed for this to take effect.
This is the only cache policy available on the IBM M1015 !!
For any single drives on the M1015 you can change this in Device Manager (windoze)

With cached controllers you of course get:
Read Policy, Read ahead on/off
Write policy, Write through, Write back and Write back with BBU
I/O policy, direct I/O or cached I/O

nik58
04-06-2012, 09:22 PM
Plextor M3Pro 2x128 GB in raid0

http://img717.imageshack.us/img717/6430/anvilraid.jpg (http://imageshack.us/photo/my-images/717/anvilraid.jpg/)


http://img801.imageshack.us/img801/720/assdraid1.jpg (http://imageshack.us/photo/my-images/801/assdraid1.jpg/)

Anvil
04-10-2012, 09:42 AM
Random data is incompressible. SNIA does not need to say it. It is a basic fact of information theory.

There are lots of applications that generate random test data at the application level; this kind of data is normal data used during application testing.
Neat if one can't export/import current systems.

If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".

3.6 Data Patterns
All tests shall be run with a random data pattern. The Test Operator may execute additional
runs with non-random data patterns. If non-random data patterns are used, the Test Operator
must report the data pattern.
Note: Some SSS devices look for and optimize certain data patterns in the data payloads
written to the device. It is not feasible to test for all possible kinds of optimizations, which are
vendor specific and often market segment specific. The SSS TWG is still trying to characterize
“how random is random enough” with respect to data patterns.

johnw
04-10-2012, 10:38 AM
If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".


I have read the SNIA SSS documents, no need to quote them to me, unless you have a point you are trying to make. I'm not sure what your point is here.

I think it is clear what is being referred to in the passage you quoted. If the data stream consists of repeated blocks of the same "random" data, then how large a block size is necessary in order to fool all SSS devices into thinking it is a continuous stream of truly random data? The answer obviously depends on the compression and de-duplication algorithms used by various SSS devices, so it is difficult for SNIA to come up with a universally applicable definition of a sufficiently random data stream. Nevertheless, it is obvious that if the data stream can be compressed significantly by a specific SSS device, then the data stream is not "random enough" to be used for the mandatory random data stream.
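
A quick illustration of the repeated-block point (Python, with zlib standing in for whatever compression or de-duplication an SSS device might do):

import os
import zlib

BLOCK = 4096
COUNT = 256

one_block = os.urandom(BLOCK)
repeated = one_block * COUNT                                # the same "random" block over and over
fresh = b"".join(os.urandom(BLOCK) for _ in range(COUNT))   # a genuinely random stream

print(len(zlib.compress(repeated)) / len(repeated))  # compresses well despite the random content
print(len(zlib.compress(fresh)) / len(fresh))        # ~1.0, effectively incompressible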

Christopher
04-11-2012, 12:07 PM
I'm no shill for SF and SF's marketing, but I'd rather have a universally accepted SNIA spec than a fractured committee. If you're looking at writes by volume, most are small compressible writes anyway. If you look at it by size, the larger the write, the less likely it's compressible. On average, I think the 46 to 67 percent range encapsulates an average client workload. The host system is always writing small bits of data, while larger accesses are usually initiated by the user.

I'd like to see more transfer size and access patterns vs. data compressibility. The larger the transfer, the less random and less compressible it becomes.

If SNIA has any chance of getting the acceptance it deserves on client side storage, some concessions to SF will have to be made. But the more time I spend trying to understand SF, the more I come to terms with that. The truth is, the best SF drives are extremely competitive on speed. Latency will be a problem in some cases, but you do get some advantage even with 80% compressible data. After 46% SF really plateaus, but there is still an advantage there. Now, my time futzing with SF leads me to believe that there is more overhead than is user visible, perhaps enough to overcome the compression endurance advantage, but it could also be a case where more over provisioning would pay dividends as I've maintained for some time.

Now, this is entirely separate from steady state performance, but the more compressible the data, the longer the time to achieve steady state. I'll have to play around some more when I get home, but I also believe some of the housekeeping algorithms in 5.0 reference FW are different, but steady state performance will continue to be an area where improvement is needed. But some SF do have some generally desirable attributes above and beyond the obvious, like stellar 4k qd1 performance. That's not a SF exclusive trait, but it's one area of performance I prize.

johnw
04-11-2012, 12:37 PM
You can babble all you want, but the fact is that the only objective test possible is using random data, as the SNIA SSS documents specify for the mandatory test. SNIA allows testers to use non-random data streams (as long as the data stream is specified) in addition to the mandatory random data stream test, but the random data stream is mandatory.

As for your other vague claims, they are highly debatable (even evidence posted in these forums contradicts them), and they have no place in an objective test specification such as the SNIA SSS testing protocols.

Christopher
04-11-2012, 12:39 PM
Those are merely my own observations, but SNIA is an industry group which needs unity from its members.

Let me know when you get elected king of SNIA.

Computurd
04-21-2012, 10:49 AM
It is easier for some to sit back and criticize others' efforts than it is for them to actually do anything to contribute.

I guess that could be some sort of nice saying:

It is easier to criticize than it is to do.

flamenko
04-22-2012, 07:49 PM
I just think that when benchmarking, it's ridiculous to just show zero fill. It's a matter of principle more than anything.

That's why I like ASU -- I can just bench an SF with every compressibility level and then weight the results as I please. 47% to 67% is a far more realistic average than 0/100%, but 67% on SF is pretty much incompressible, I think.

I'd have to double-check, but I did break out an SF2281 the other day to upgrade some FW and do a SE. I was pleased with its incompressible performance.

Happened to come across this and thought I would throw in my two cents. I don't agree at all with this statement and think, quite honestly, that it negates a very important piece of the pie when we speak of benchmark testing and its relationship to computer use. In simple terms, the importance of testing in 0Fill, or highly compressible data, cannot be overstated for the consumer side of things, just as testing in incompressible data (or random data samples) holds a more specific value for the enterprise side of things.

I can go back to the beginning of testing with this same argument and, quite honestly, would have believed that the naysayers of testing in highly compressible data (0Fill) would have seen the light by now. We have gone head to head for years now, with many berating the idea that reviewers, myself included, would test in highly compressible data and show its meaning and value in a review.

Imagine if you would how confused the consumer would be if we had never shown that side of things and explained the difference between the two.

Moving on, PCMark Vantage is recognized by all reviewers as being the 'industry standard' of consumer SSD testing and, well, the simple facts show that the scoring realized through Vantage follows that of testing with highly compressible data (0Fill) much more closely than that of testing with incompressible data. Actually, the new Vertex 4 is pretty much the icing on the cake as an example of this.

I know I may be going against the views of two very good friends on this one, but the truth is that, for the typical consumer, 0Fill (or testing with highly compressible data) is just as important as testing with incompressible data, which serves more specific needs such as video and photography, even reaching right into the business and enterprise side of things.

To state that you believe testing with 0Fill data should be outlawed (colorful term) shows a very closed-minded attitude and really negates an entire side of the debate. Quite frankly, it goes so far as to put the credibility of the person making the statement into jeopardy.

Just my thoughts! It is kind of amusing, actually, because I can probably pull up threads on this forum from just over two years ago where I stood strong on this exact subject. To my advantage, the test results show a very clear picture.

As for SandForce and their marketing, as much as many might not like hearing it, it borders on brilliance. There have been very few in technology (much less the storage industry) to make the steps that they did over the course of just under three years. They have become a part of every consumer SSD manufacturer today EXCEPT for Samsung and Crucial. Yup, that includes Intel. They were then purchased by LSI to top things off. Just how big would the line have been if they had gone public before the LSI purchase?

canthearu
04-22-2012, 08:41 PM
I can go back to the beginning of testing with this same argument and, quite honestly, would have believed that the naysayers of testing in highly compressible data (0Fill) would have seen the light by now. We have gone head to head for years now, with many berating the idea that reviewers, myself included, would test in highly compressible data and show its meaning and value in a review.


While I don't think 0-fill testing is completely pointless, I have the following points to make:

a) Sandforce drives do far too well on 0-fill testing... almost no real-world load gives the same results. A Sandforce drive with 64GB async NAND does just as well as a Sandforce drive with 64GB Toggle NAND in 0-fill. Real-world situations start at 47% compressible and really just get less compressible from there.
b) Running a benchmark in 0-fill mode on a Sandforce controller really reveals very little, since you are really just speaking to the controller and barely touching the NAND behind it. It really isn't a very useful benchmark for exploring a Sandforce SSD. If you want to see any differences, you need to look at less compressible datasets.
c) In my real-world monitoring of Sandforce drives, the average write amplification tends to suggest loads of between 47% and 67% compression. (Of course, during long periods of idling, I see quite a lot of NAND activity, which increases NAND writes but doesn't increase host writes much.)

However, this makes those cheap Sandforce drives with async NAND viable. Most people can buy a dirt-cheap async NAND 120GB drive, and it will actually perform very well for real-world tasks in their system. Drives without compression using async NAND can have pretty ugly performance stats.

Ao1
04-22-2012, 11:59 PM
0 fill is a worthless benchmark statistic unless you happen to be in marketing and would like to completely misrepresent the performance of your product.

The only benefit of running a 0 fill benchmark is that it enables an end user to mimic how marketing people came up with misleading performance statistics.

SNIA is the only benchmark that properly tests an SSD’s performance.

Edit: Disagree? Name one application that uses 0 fill data and then identify how it benefits from 60,000 IOPS.

canthearu
04-23-2012, 12:21 AM
SNIA is the only benchmark that properly tests an SSD’s performance.

Do SNIA publish the details of this benchmark?

Ao1
04-23-2012, 12:26 AM
http://www.snia.org/forums/sssi/pts

http://www.snia.org/forums/sssi/knowledge/education

canthearu
04-23-2012, 12:59 AM
http://www.snia.org/forums/sssi/pts

http://www.snia.org/forums/sssi/knowledge/education

They have nice ideas there... but unfortunately there is no downloadable benchmark or results from any drives for comparison (the drives they have done are anonymous... I can take a stab in the dark at what they are, but cannot easily confirm).

XavierMaxx
04-23-2012, 04:57 AM
Results from my recently purchased 2x 120GB SanDisk Extreme SSDs (http://www.sandisk.com/products/solid-state-drives/sandisk-extreme-solid-state-drive) at 46%. I used that setting because I'm using them for OS/apps, and when I apply the lightest of compression to a full backup, it comes out to 44.9% (and I pre-archive a lot of my stuff), so it is "realistic" for at least my usage. When the drives have some wear and tear, they score closer to 5400. Sorry for leaving out the drive names on the image; I hadn't planned on posting this but thought what the heck. Still experimenting with these; the read scores are low on this particular run (they usually score 100 points higher on read). This is the OS drive after 500GB of writes in a few days, 64k stripe.

125998
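
If anyone wants to check their own data the same way, here is a rough Python equivalent of that "lightest of compression" pass (zlib level 1; just an estimate of compressibility, not what the drive itself does):

import sys
import zlib

# Usage: python compressibility.py <file> [more files...]
for path in sys.argv[1:]:
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        continue
    light = zlib.compress(data, 1)  # level 1 = the lightest compression
    print(f"{path}: {len(light) / len(data):.1%} of original size")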

johnw
04-23-2012, 07:13 AM
0 fill is a worthless benchmark statistic unless you happen to be in marketing and would like to completely misrepresent the performance of your product.

The only benefit of running a 0 fill benchmark is that it enables an end user to mimic how marketing people came up with misleading performance statistics.

SNIA is the only benchmark that properly tests an SSD’s performance.


With SNIA, a random data stream is a mandatory part of the test, which makes a lot of sense. SNIA also allows additional tests with non-random data streams (which must be reported by the tester), but the random stream is still mandatory.

Clearly, random data streams should be the default for any test. If someone is testing an SSD that has the ability to do compression, they may want to add some non-random data streams (and report in detail what they used), but they should ALWAYS include a random data stream.

SNIA got this exactly right with their SSS enterprise AND consumer ("client") tests -- a random data stream is mandatory for both enterprise AND consumer tests.

http://www.snia.org/tech_activities/standards/curr_standards/pts

flamenko
04-23-2012, 03:33 PM
0 fill is a worthless benchmark statistic unless you happen to be in marketing and would like to completely misrepresent the performance of your product.

And I think exactly this thought process would negate the purpose of testing, which is to explore performance in all typical environments. Of course enthusiasts such as yourself want the 'incompressible data' testing first and foremost, but the truth is that compressible data is utilized just as incompressible data is and, in fact, many would say more so in the typical user experience. To state on one side that 0 Fill (or testing with highly compressible data) is a worthless benchmark is tantamount to stating that testing with 100% incompressible data is just as useless.

Effective testing explores all the variables.

canthearu
04-23-2012, 03:54 PM
SNIA got this exactly right with their SSS enterprise AND consumer ("client") tests -- a random data stream is mandatory for both enterprise AND consumer tests.

http://www.snia.org/tech_activities/standards/curr_standards/pts

So how does one go about testing an SSD to SNIA standards? I mean, it is great to have a standard, and the SNIA tests look fairly comprehensive... but without a means of carrying out the tests, and with nobody doing these tests, it is all a bit academic at the moment.

But I do feel that for at least sandforce drives, all compression levels are worth looking at, because all will appear in typical workloads. Few workloads will persistently and consistently present an incompressible or 0-fill load, so it is good to see how performance scales between fully compressible and fully incompressible.

johnw
04-23-2012, 03:58 PM
but the truth is that compressible data is utilized just as incompressible data is and, in fact, many would say more so in the typical user experience.

If "many" would say that, then "many" would be wrong. You are arguing with Ao1 who looked in depth at that very question.

Data compressible by Sandforce controllers is relatively rare in most users' day-to-day SSD writes. About the only commonly compressible data is OS and program installs (not day-to-day things for most users, just very occasional) and database and VM applications; if those are in use, the users are usually power users and are well aware of the compressibility of their data. Most users do not run large databases or VMs.

johnw
04-23-2012, 04:09 PM
But I do feel that for at least sandforce drives,

And why should anyone care about your vague "feel"?

flamenko
04-23-2012, 04:31 PM
If "many" would say that, then "many" would be wrong. You are arguing with Ao1 who looked in depth at that very question.

Data compressible by Sandforce controllers is relatively rare in most users' day-to-day SSD writes. About the only commonly compressible data is OS and program installs (not day-to-day things for most users, just very occasional) and database and VM applications; if those are in use, the users are usually power users and are well aware of the compressibility of their data. Most users do not run large databases or VMs.

John W. my old friend... So what you are saying is that, according to Ao1, the typical user utilizes incompressible data more often in typical things such as ...ohhhh I don't know.... system starting, system software such as explorer and e-mail use and even MS Word file creation?

Ao1
04-23-2012, 11:56 PM
John W. my old friend... So what you are saying is that, according to Ao1, the typical user utilizes incompressible data more often in typical things such as ...ohhhh I don't know.... system starting, system software such as explorer and e-mail use and even MS Word file creation?

Whilst I believe compressibility for client applications is limited, my statement was based on 0 fill. I am always prepared to be enlightened if you can tell me how an end user benefits from 0 fill. Name one application in which it is relevant and in which the IOPS are utilised, and I will change my view.

Ao1
04-24-2012, 12:00 AM
First, a sincere apology to Anvil for derailing his thread and detracting from ASU, which is a great benchmark for end users, providing flexibility and ease of use.

The SNIA tests are beyond an end user's ability to undertake, but I believe this is the benchmark that vendors should use for their specifications. The benchmark is something that all major SSD vendors have contributed towards, and it provides granularity and comparative performance assessments that are beyond any other method of testing.

Here is a shot of 17 drives that were tested with the SNIA specification using 65% reads/ 35% writes. (17 SSD’s and one Enterprise HDD [edit: in yellow]) It is clear to see that there is a significant difference in performance between SSD’s.

126059
http://www.brighttalk.com/webcast/23848%20

Here is a shot of a Sandforce drive. Blue is incompressible, red is a database pattern, and the green line is 0 fill. Interestingly, 0 fill is close to the database load, but the max IOPS come out at ~35K. Sandforce specify 60,000 burst/20,000 sustained (@4K blocks) for the SF2x drives and 30,000 burst/10,000 sustained (@4K blocks) for SF1x drives.

126060

Sandforce don’t state how they arrived at their specification figures, but presumably they were obtained on a FOB drive using 0 fill. The SNIA test is based on steady state, which is the representative condition of a drive in use.

To prevent Anvil’s thread from being further derailed there should be a separate thread to discuss SNIA. There is already a thread to discuss SF compression.

Computurd
04-24-2012, 06:07 AM
Here are results with incompressible and compressible data in three different drive states (FOB, steady state, and over-provisioned) with SandForce enterprise-class drives:

http://thessdreview.com/our-reviews/smart-storage-systems-xceedstor-500s-240gb-mlc-enterprise-ssd-review/4/

http://thessdreview.com/our-reviews/smart-storage-systems-xceediops-2-200gb-emlc-6gbps-enterprise-ssd-review/4/

johnw
04-24-2012, 06:59 AM
John W. my old friend... So what you are saying is that, according to Ao1, the typical user utilizes incompressible data more often in typical things such as ...ohhhh I don't know.... system starting, system software such as explorer and e-mail use and even MS Word file creation?

System starting is almost entirely reads. MS Word files are already compressed. Email, without attachments, is a VERY small amount of writes, and the attachments are usually incompressible.

kaktus1907
04-25-2012, 08:04 AM
Corsair P256 Pro on Sata2

http://www.abload.de/img/mfauuiro.png


Anvil
04-25-2012, 09:04 AM
Corsair P256 Pro on Sata2


That looks very good for a socket 775 based system :)

Anvil
04-25-2012, 09:24 AM
First, a sincere apology to Anvil for derailing his thread and detracting from ASU, which is a great benchmark for end users, providing flexibility and ease of use.

The SNIA tests are beyond an end user's ability to undertake, but I believe this is the benchmark that vendors should use for their specifications. The benchmark is something that all major SSD vendors have contributed towards, and it provides granularity and comparative performance assessments that are beyond any other method of testing.

...
To prevent Anvil’s thread from being further derailed there should be a separate thread to discuss SNIA. There is already a thread to discuss SF compression.

It is not a problem at all, as long as it's conducted in a civilized manner :)

The next or a subsequent beta will include an option to use real-life data when testing. One_Hertz gave me the idea some time ago, and I've been testing a lot of configurations (drives/controllers) over the last few weeks; it's looking good so far.

Rubycon
04-29-2012, 06:51 AM
Here are my results with 8 Intel 520 180GB SSDs in RAID0 with a 64KB stripe size. Areca ARC 1880ix-24, 4GB Unigen DIMM, battery backup.

First is zero fill, second is 100% incompressible. The hit isn't too bad considering it's a Sandforce controller. This is a 32GB test as well since the 1GB tests are heavily influenced by the controller's cache. 1GB scores are over 14,000!

http://i157.photobucket.com/albums/t71/C6FT7/ArecaNORWEGIANDAWNSCSIDiskDevice_1440GB_32GB-20120429-1036.png

http://i157.photobucket.com/albums/t71/C6FT7/ArecaNORWEGIANDAWNSCSIDiskDevice_1440GB_32GB-20120429-1040.png