You should be faster in iometer. It scales with more drives in R0. Real apps don't.
real apps don't scale better because you're getting toward the limit of the throughput of the CPU, GPU, RAM, etc...and that's also not always entirely true for hardware RAID. on-board solutions doubly so, yes of course, the limitation of the bus. NOW PCIe is a faster demon. still strangled, you say, at x4??? well there is a solution for that: an x8 PCIe 2.0 specification RAID card. guess what the only one in the world is? the only PCIe 2.0 x8 RAID card?
lsi 9260 :clap:
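Rough numbers behind that bus argument, as a back-of-the-envelope sketch; the per-direction figures assume 8b/10b encoding and ignore packet/protocol overhead:

```python
# Approximate usable PCIe bandwidth per direction for gen 1.x and 2.0.
GT_PER_LANE = {"1.0": 2.5, "2.0": 5.0}  # gigatransfers/s per lane

def pcie_gbs(gen, lanes):
    # 8b/10b encoding: 10 line bits carry 8 data bits, so divide by 10
    return GT_PER_LANE[gen] * lanes / 10.0

print(pcie_gbs("1.0", 4))  # x4 gen 1 -> 1.0 GB/s
print(pcie_gbs("2.0", 8))  # x8 gen 2 -> 4.0 GB/s
```

So an x8 PCIe 2.0 slot has roughly 4x the headroom of the older x4 gen 1 link being complained about above.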
Why haven't you posted a screenie of your result? I'm curious for later.
I agree about the benchmark issue for SSDs. It does not give any indication of real-life performance for (non-xtreme) desktop users like me. I don't see the point in getting anything faster than what is currently available if it does not translate into faster real-time performance.
It seems to me that the problem is not the SSD, it's the controller interface, it's the controller drivers, it's the lack of optimised quad core processing, it's the OS, and it's the software. In other words, until these issues are resolved, is it justifiable to spend huge sums of money to get very little return when nothing else is optimised to take advantage of it? :shrug:
Areca 1231ML-2G/Acard 9010 8xR0 - anand's game profile, lines 19 and (reran at same settings) line 20 - if I ran it right -
http://img337.imageshack.us/img337/9...esults1231.jpg
I wish I had remembered to softraid the 9260 with the 1231ML while I still had it.
That might have yielded some great numbers with the acards!
Anyway below is not too bad - this is dynamic disk created from 4xR0 acard 9010s on ich10r softraided with 4xR0 acards on the 1231ML-2G.
Test config - i7 965 on a Gigabyte Extreme mobo, 3x1GB G. Skill 2000/C9, 2xGTX295 Quad SLI. Boot drive via Mtron Mobi on the Gigabyte (non-ich10) controller.
Clocks - cpu @ 4.48 (35x128) ht off, memory @ 1792 (9-9-9-24), pcie @101
Has anyone ever figured out how to boot from a dynamic disk?
http://img169.imageshack.us/img169/4...r1231ml2ga.jpg
^^^
Damn that's pretty nice. Post 105 proves how iometer doesn't translate. My setup gets 40% more iops while being slower in real world tasks.
Computurd - I like how you haven't posted your result.
Computurd- I'd like to see some of those "instantaneous" Crysis level loads and other games. I'd really like to see how your computer loads "everything" instantaneously.
I could not agree more. there is definitely a point of diminishing returns when it comes to this, as with any area of storage. some people just do it because they are storage junkies:cool:Quote:
@ audienceofone....It seems to me that the problem is not the SSD, it’s the controller interface, it’s the controller drivers, it’s the lack of optimised quad core processing, it’s the OS, and it’s the software. In other words until these issues are resolved is it justifiable to spend huge sums of money to get very little return because nothing else is optimised to take advantage of it?
not that i am aware of. the OS has to initialize in order to initialize the array, at least that is my understanding. those are very interesting results with that nested array; that dynamic disk is spectacular in its IOPS...very neat stuff, i may have to play with that soon for fun. interesting results!:up:Quote:
SteveRo...Has anyone ever figured out how to boot from a dynamic disk?
well i may be overstating a tad, i do not wish to leave you with the impression of instantaneous loads, that is impossible, there may be milliseconds involved there of course...:rofl: seriously though, it is very fast, and i may come off as brash about how fast it is. but it is not just the storage system that enables these types of things, you also have to have the CPU, GPU and RAM, an optimized OS, etc etc. it is much like napalm's system, a total approach that allows him to load 200 apps in 20 seconds, or maybe more now, i'm not sure, that dude only gets faster. but your tone towards me with that sounds reminiscent of people's tone with napalm.Quote:
@Griff805...Computurd- I'd like to see some of those "instantaneous" Crysis level loads and other games. I'd really like to see how you're computer loads "everything" instantaneously.
speaking of tones...
It pleases you that my wife's birthday was yesterday, and her party is today? that she and the children are demanding my time for this? thank you for the sentiment and your approval of my loving family.Quote:
Computurd - I like how you haven't posted your result.
~~~~~~your smart ass tone is not appreciated. i have no qualms showing my benchmarks and you know that. unfortunately i have a life as well~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stevero's posting with his Areca 1231ML-2G/Acard 9010 8xR0
1 manager 1 worker 1 disk queue depth 64.... 4kb random
his iops were 54017....mine:
http://i517.photobucket.com/albums/u...randomness.png
at the same setting my 4k sequential is (above i posted one with wrong queue i think)
http://i517.photobucket.com/albums/u...quential-1.png
seeing as how 69 percent of reads during gaming etc are sequential, i think that is very telling^^^^ at such a low block size. well, the lowest block size actually. it only gets faster from there.
as to the anandtech my results are this:
http://i517.photobucket.com/albums/u...ileexccedl.png
there are serious disparities with this benchmark, however i do reach higher iops than the acards on the 1231. not sure what that means, as one hertz, my dear friend, has stated that stevero is faster, even though stevero scored lower iops than he did. so does that mean lower iops is better? i don't get it. for some reason i cannot view the profile that was made by one hertz, so i am not sure exactly what is going on here. this is a benchmark that is attempting to match anandtech's, but i think we need to work on it. that is all i have time for. i am sure that we will have much to discuss after the party!!
New benches with a new sys. (Core i7 920 @ default - Win7 x64)
"Old" hardware 2x Acard 9010 + Areca ARC-1261ML 2GB ECC Ram (all IOmeter-Benches with 64 outstanding IOs)
4k - 100 % write - 100% random
http://www.abload.de/img/4k-100write-100random-k2iw.jpg
4k - 100 % read - 100% random
http://www.abload.de/img/4k-100read-100random-im2md.jpg
4k - 100% write - 100% seq.
http://www.abload.de/img/4k-100write-100seq-i70501.jpg
4k - 100% read - 100% seq.
http://www.abload.de/img/4k-100read-100seq-i7l3x5.jpg
workstation default
http://www.abload.de/img/workstation-8k-64thrw1d8.jpg
workstation 64k
http://www.abload.de/img/workstation-64k-64thr_e0wk.jpg
workstation 64k and 1024 outstanding IOs
http://www.abload.de/img/workstation-1024k-64thl4qd.jpg
database default
http://www.abload.de/img/database-64thra3in.jpg
You sat on XS for 1h+ on Friday night (as did I) and did not post it so don't give me that.
I've got another E back from RMA so now I am up to 3x X25-E in R0 on ICH10R. I get 103,000 in that test.Quote:
stevero's posting with his Areca 1231ML-2G/Acard 9010 8xR0
1 manager 1 worker 1 disk queue depth 64.... 4kb random
his iops were 54017....mine:
http://i517.photobucket.com/albums/u...randomness.png
I get 130k there. It is not telling at all because storage devices aren't stupid. If you only throw one access pattern at it, it will realize this and start predicting your requests (reading ahead), therefore being faster. Real world access can not be predicted like this.Quote:
at the same setting my 4k sequential is (above i posted one with wrong queue i think)
http://i517.photobucket.com/albums/u...quential-1.png
seeing as how 69 percent of reads during gaming etc are sequential i think that is very telling^^^^at such a low block size. well the lowest block size actually. it only gets faster from there.
This is why you took so long to post it. My new setup gets 10,100.Quote:
Mainly because the queue is 8. The more drives you have in R0, the faster your high-queue iops will be. Lower queue doesn't benefit as much, and this is why you are slow in this benchmark and in real world stuff. Another reason is because there are a few different access types, and random and sequential accesses are mixed. This is harder for your controller to predict and to serve using all of your drives at once.Quote:
there are serious disparitys with this benchmark
No, it means that iometer is usually not a predictor of real world performance, like you claim it is.Quote:
however i do reach higher iops than the acards on the 1231. not sure what that means, as one hertz, my dear friend, has stated that stevero is faster, even though he scored lower iops than he did. so does that mean lower iops is better? i dont get it.
You can make your own profile. I have posted all the settings I used. Everything is 100% correct as per anand.Quote:
for some reason i cannot view the profile of this profile that was made by one hertz, so i am not sure of exactly what is going on here. this is a benchmark that is atttempting to match anandtechs, but i think we need work on it. that is all i have time for,. i am sure that we will have much to discuss after the party!!
just because my computer is on at midnight doesn't mean i am on it...hmm wife's birthday.. a little b'day action maybe??:clap:Quote:
You sat on XS for 1h+ on Friday night (as did I) and did not post it so don't give me that.
at first i did not post, no. i ran it, and the results seemed messed up to me, out of whack. of course i didn't understand at the time that my results were better than a bunch of acards in raid 0 :) i tried to see the profile, it didn't work, for some reason i still can't view it. i even re-downloaded the damn thing, then in comes wifey with something better than random i/o's....
now, stevero scores fewer iops than i did, yet you tell him that his system is faster than yours. i come along and beat him on iops, but still (supposedly) under you, and you tell me how much slower mine is? against acards. so the same thing, the queue, that is supposedly hurting my system, is also being faced by the 1231 on other people's systems when running this profile. but i beat them. so what you are saying then is that the ich10r is faster than the 1231? that is exactly what you are saying. stevero ran the same damn thing. you have a serious issue with denial, man. i have posted every benchmark you have asked for, those above are at the very same settings as stevero's: 64 outstanding IOs, 1 manager, one worker, one disk, i mean to the letter. you continue to say that i am slower, even though i have posted all of these benchmarks, everything you have requested, and smoked them all. jesus christ man, look at the spec sheet of the lsi card, it is light years ahead of these 1231's. you are in full denial. if you believe that iometer is so inaccurate, how can you continue to proclaim ANYTHING about a card you have NEVER USED IN YOUR LIFE? if benchmarks are unrealistic and unreliable to the point that you claim they are, how can you make these claims? how can you claim anything? if there is no way of measuring anything without you yourself using it, how can you make any claims?Quote:
I've got another E back from RMA so now I am up to 3x X25-E in R0 on ICH10R. I get 103,000 in that test
I get 130k there. It is not telling at all because storage devices aren't stupid. If you only throw one access pattern at it, it will realize this and start predicting your requests (reading ahead), therefore being faster. Real world access can not be predicted like this.
Mainly because queue is 8. The more drives you have in R0 the faster your high queue iops will be. Lower queue doesn't benefit as much and this is why you are slow in this benchmark and in real world stuff. Another reason is because there are a few different access types + random and sequential accesses are mixed.
no, iometer is not perfect, especially when it comes to mixed profiles, that is definitely its weakness. i never said it was a perfect system, but i said specifically that it will give you a BLURRY view of real system performance. it is the ONLY way that you can negate the performance of other system components: cpu, motherboard, gpu, ram. there is no other way of comparing these things. if you are doing a straight-up analysis of the cards, with no mixed profile, say like the 4k reads above, it is goddamn accurate. if the queue is set correctly, and the workers, disks, managers, blah blah blah, it is accurate. it is going to tell you that you read at 4k at "xx.x" speed. now when you mix in a bunch of different sizes and random/sequential reads it will be wrong, because as you pointed out, they run concurrently. that is probably the only area where i do agree with you. we have few places of common ground, that is for damn sure.
post your benchmarks btw, if you are going to speak of them, take the time to show them. i am not calling you a liar, however, i could tell you i scored five million iops if i chose to.
you are running out of points to argue, man. i looked over the changelogs for the last two firmware updates for the lsi card the other night, because i was comparing old benches off this card with the new ones. the difference is amazing. in the changelog for the last revision, one of the fixes was "LSID100132598 (DFCT) DDR2 Latency is greater than DDR1 latency", and two revisions ago one of the fixes was specifically small-file transfer performance. this is a new specification, a new controller, this thing is still maturing. but where it is at now is far, far ahead of most anything out there. go to lsi's site and read the changelogs if you must, man, i am sick of putting stuff on photobucket for you to look at.
for you to sit here and make these claims about its performance, considering you have never used one, is ill-advised. your continued refusal to accept the very facts presented before you is bullheaded and ignorant. jealous maybe?:down:
let me set a nice example for you:
@FEAR-
nice benches man..here are some of mine in comparison. i have not run the workstation tests, and that one with 1024 outstanding i/os is particularly interesting, i will run one when i get some time..these are from the other night...
(all IOmeter-Benches with 64 outstanding IOs)
4k 100 percent random reads:
http://i517.photobucket.com/albums/u...1/4krandom.png
4k 100 percent sequential reads...
http://i517.photobucket.com/albums/u...sequential.png
i would love to compare the write benchmarks, however i don't wanna beat on my array that much...god, with as much as i have abused it in the past, i am afraid to hurt her much more :D
Hertz- Ignorance is bliss, just let him be.
@Griff805...seeing as how you haven't contributed anything at all other than questions, who are you to say that? what is your knowledge base, bro?
I'm no one- just like you. Only reason I am giving you crap is because you came on here with your all knowing attitude and aren't open to any kind of correction. You put so much weight on these benchmarks, when you have no way to even prove the benchmark is accurate or if you are even benchmarking the right area of performance. There's so many different things going on in the storage system and you're trying to prove that a drive/card is better because of 1 or 2 benchmark numbers. Hertz is trying to clue you in on where you're missing the mark, but you keep touting your benchmark numbers. There's more to it than just throughput-
You're the one coming off as "bullheaded and ignorant" - and no one here is jealous.
a few benchmarks? look back over the previous 115 posts. that is quite a bit more than a few benchmarks. it was one hertz who made the challenges, and who keeps insisting and using phrases like "lsi is crap". i have demonstrated more than just throughput as well, when it comes to large sequentials. there are also a lot of good random performance numbers there. it has to be at least a little impressive that the lsi is keeping up with, or passing, acards in raid0. i want to say that i give up trying to prove anything, but i won't. i have benchmarked every area of performance that i have been asked to, with very good results. and to say that i have been inflexible in my assertions is incorrect; i have agreed with him on a few issues along the course of the thread that i did not agree with at the beginning of it. i hate to say jealous, but it makes me wonder, what is the motivation here? pride maybe? i am not saying it is the best thing in the world ever, BUT i am saying that you can do super fast storage at a price point a HELL of a lot better than 11 dollars a GB. i did it for six, with an array that has a lot more flexibility and forward mobility. i have the chance for future upgrades; with the i/o extreme what you get is it, period. there is no upgrading. in a few months i will have the choice of expanding to 6gb/s ssd devices, while the supposedly "better" solutions of the extreme and the 1231 will not. there is also the bootable question of course. when you write to it its performance goes to :banana::banana::banana::banana:?? for 11 bucks a GB??
I would say that it is becoming increasingly hard to make such definitive statements as:
Quote:
The 9260-8i seems to have similar performance to the card on the Zdrive.
Quote:
That LSI is complete crap at everything but large files
Quote:
there is no argument. The LSI is indeed crap just like adaptec. Your whole array (6 or 8 vertexes?) is a lot slower than 2x x25-es.
these are definite statements by someone who has no means of proving them. if i have to be bullheaded and arrogant to prove my point, then i will be.Quote:
The 1680 also sucks for the same reason as the 9260 sucks.
i will say that someone who at least argues the points with me is more interesting than an observer who comes in touting his knowledge of...nothing?
How much is that thing?
http://www.abload.de/thumb/apict1iuxy.jpg http://www.abload.de/thumb/idf7-1mil_2l60t.jpg
:shocked::shocked::shocked: what is it?
Looks like a naked Super Talent RAIDDrive GS 1536GB
http://p.gzhls.at/463350.jpg
But the IOPS are too high for a consumer SSD.
Nice post FEAR!
Best way to break these guys up! :D
That looks very interesting. PCI-E SSDs are the future IMO. At least I hope they will evolve into something more widely accepted. :)
That RAIDDrive isn't sold in many places, and the only one listed on Froogle has it for... well, idiotic prices..
$5000 for 1TB.
Are you sure?
The 1.5TB Super Talent costs €10,000 or US $15,000 in Europe.
But isn't that the card in the pics?
http://www.google.com/products?q=RAIDDrive+GS&scoring=p
Actually costs more now :(
Well the only place that sold this cheaper version is sold out: Fusion IOXtreme 80Gb
Now lets see if some reviews from actual users show up here.
I wonder... :confused:
There is no 256GB- and 512GB-Version available in europe.
Holy sh_t, what prices (ES/WS 768GB) ~ $20,000 http://translate.google.com/translat...000%26sort%3Dp
jeez lowfat... i think i am more impatient for you to receive that than you are!!:hump:
does that even make sense?:rofl:
(i have no idea why i am humping your leg, it just happens):shrug:
Lol!
:d
He has been sitting on it for nearly two weeks. Actually I think the issue is w/ his wife. :p: I am about ready to call UPS myself (and take the HUGE hit for brokerage fees, likely a few hundred $$$) and get UPS to pick up the card directly from their house.
Ohhh nooooo, poor IOxtreme is in between a divorce battle! "I'll take the dog!"; "No you wont"; "O.K. fine I'll take the Fusion IO then!". ;)
You should consider USPS.com too; they have a pick-up service now, and you can always use one of those Priority Mail International Small Flat-Rate Boxes for US$10.
Way cheaper than UPS.
In my experience....do NOT use USPS to get stuff across the border to Canada. bad bad bad bad bad. Takes over a week for no apparent reason.
This is not true in most cases.
USPS has better prices than any other carrier, and the pick-up service for US$15 is a steal.
I've used them to send stuff to Australia, Brazil, Germany, Holland, Uruguay, England, Mexico, Canada and never had problems. The delay on delivery happens not because of USPS but because of the countries' customs policies and posts.
Big countries like Canada and Brazil usually have 3 or 4 customs entry posts to filter all the incoming mail, and USPS guarantees the airmail delivery from the USA to that station. After that it will take a ground route to the final destination, because they want to save money. So if you live next to or near a customs entry post you're in luck and that mail will arrive in 6 days max; if not it will take more time, depending on your own official post (Correios/Royal Mail/Canada Post/Deutsche Post etc).
All of that applies to Priority Mail; Express Mail is guaranteed to be airmail all the way to the final destination.
The Canada Post driver for my area is this incredibly obese woman that only comes every 2-3 days regardless of how much mail/stuff there is for us. Perhaps that is part of the reason. I have never, ever had anything delivered to me in under one week using USPS.
from performance whitepaper.pdf
http://i49.tinypic.com/2hh0j8m.png
the areca 1231 decimates that and eats it for breakfast/lunch/supper/and all the snacks in between :)
It's also possible that lowfat already has the card, but is holding out on us.
You know guys it is better to work on the benches in peace and without any pressure! :D:D
Fast, but expensive...
RamSan-20 PCIe http://www.ramsan.com/products/ramsan-20.htm
Looks like my brother is actually sending the card out today http://smiliesftw.com/x/boweek5.gif
Should have it by next weekend.
Snail-mail:rolleyes:
That's Canada Post for you - might get it at some point, might not :D
Hey, let us know where he lives so we can...errr make sure he mails it? :D
Can't believe nobody else got it and posted some new benches anywhere on the web... the sales must be dreadfully slow, since we see nobody bragging about it :)
Sorry about that ;)
In Dec. 2007 I bought 2x 16GB SLC for ~$1200 + $400 for an ARC-1210 :D
http://www.xtremesystems.org/forums/...d.php?t=172012
@ low
Tomorrow? :)
yes we are having benching smackdowns! we need the i/o extreme in there as well!
Nice. :up: Looking forward to seeing how this thing performs.
Damn now I have to check this thread every 15 minutes until you post results haha. And I have an exam in a couple of hours.
:bounces:
:eleph::eleph::eleph:
Guys, he will have to read the 142 pages of the user's manual first, calm down. I'm receiving mine on Monday :rehab:
He needs only the quickstart :D
He should have read the manual before he bought it! Come on lowfat it has been over an hour!:ROTF:
^ has your LSI turned up yet? (Good luck with your exam btw)
Quick run of the fulltest.icf from the OCZ Forums. 64 outstanding IOs.
http://i18.photobucket.com/albums/b1...Untitled-5.jpg
Damn near identical to 4 x X25-Ms on ICH10R. However, the above bench was w/ a stock CPU. The run I did w/ the X25-Ms was @ 4.2GHz. I can't run higher-clocked ones until I have time to throw in a new motherboard (sitting on my desk).
I don't have a whole lot of time to bench today so far. As I am just on my lunch break and likely will be working for the remainder of the day.
CPU speed actually doesn't affect IOMeter. Please do some real-world tests when you get a chance. 4KB random reads and writes would be nice to see as well.
How well does this thing handle interleaved IO operations? One of the recurring weaknesses I've found with the X25-M is the inability to handle IO workloads while a large seq. write is going on. I.e. read ops simply "stop" when you try to copy a 2GB iso to the drive. This strangely does not happen with RANDOM writes.
the fulltest.zip you linked is 4k write? 100 percent random? i thought it was something else lol. i am trying to stay away from write testing:( i am afraid of beating on my array too much.
Pcmark05 Please!
cmon lowfat, we are waiting here....:hump:
its fine, i am just impatient :worship:
ok, i'm going to forget the maths that i've been learning for 16 years.....
1 IO/s at 4kb is 4kb/s
so how is 20094,53 IO/s equal to 257,60 MB/s????
lol, ok, just do the equation: if 1 IO/s is 4kb/s, then how many MB/s is 20094,53 IO/s?
simple..... x = 20094,53 * 4 = 80378,12 kb/s. so how the heck did it get 257 MB/s?
LOL i think he is trying to say that the numbers don't add up...that was a 4k random write test? because that profile 'fulltest.icf' that was posted is a 4k random write test. i was wondering the same thing, is that the test you ran?:confused:
Ya, it was the test I ran. Downloaded it from this thread.
http://www.ocztechnologyforum.com/fo...hlight=iometer
wow, impressive results! can't wait to see a 4k random read, or just a sequential of anything lol...any feedback on access times, could ya tease us with that? for a pcie device it should be spectacular
The default test for that ICF is "All-in-one": all possible block sizes 4-64KB, 0-100% random, 0-100% read/write dispersion. Check it.. it is not a 4K random write test.
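That also reconciles the arithmetic a few posts up: with mixed block sizes, the reported MB/s implies an average IO well above 4 KB. A quick sketch using the figures from the screenshot (treated as given):

```python
# IOmeter figures quoted earlier in the thread:
iops = 20094.53
reported_mb_s = 257.60

# If every IO really were 4 KB, throughput would only be ~78 MB/s:
four_k_mb_s = iops * 4 / 1024
print(round(four_k_mb_s, 2))  # -> 78.49

# Working backwards from the reported MB/s gives the average block
# size the mixed "All-in-one" profile actually issued:
avg_block_kb = reported_mb_s * 1024 / iops
print(round(avg_block_kb, 1))  # -> 13.1 KB per IO on average
```

So the 257 MB/s and the 20k IOPS are consistent once the blocks average ~13 KB rather than 4 KB.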
@alfa - thanks! i had a feeling i was missing something, that is why i kept asking. i was running the 4k random write file in there!
http://i18.photobucket.com/albums/b1.../Capture-6.jpg
Got everything installed. Will run some gaming/photoshop tonight if work isn't too busy.
lowfat
Could you please run CDM 3.0, it includes 4K QD32.
(CDM 3 is still in beta but you should be able to find it)
Link to post on forum where you can safely download the 3.0 Technical Preview.
Hm, why is seq. write so bad? :confused:
Now IOmeter 4k 100% random 100% r/w 64 threads
and FC-Test (create, read, copy) with my big-pattern.
Please do at least 3 runs per create/read/copy (for exact results).
example: (the first 9 results are 2x MTRON Mobi with an ARC-1210 - the rest Solidata K5)
http://www.abload.de/thumb/hc_529x5fj.jpg
CrystalDiskMark 3.0 Beta 2 http://release.crystaldew.info/CrystalDiskMarkSetupBeta @ Low - please only 1000MB or 2000MB filesize ;)
How much did you pay?
And how much are X25-Ms in Can or USA?
Yes it uses ram. The amount used depends on the formatting block size.
http://i18.photobucket.com/albums/b1.../Capture-8.jpg
Ok, more to the point is read :)
Please read #190 once again ;)
I create a pattern with 5GB testfile for you ;)
Wait a moment...
edit:
2 patterns (4k - 100% random - 100% write/read - 10 minutes - 64 threads - 5GB filesize) http://filestore.to/?d=74677AED6
Please click your username at "Topology" and choose your favoured partition in "Targets". Don't change other parameters!!!
Here is the read test. Will do write tonight
http://i18.photobucket.com/albums/b1.../Capture-9.jpg
May I suggest an IOmeter setup to test this card?
100% read 100% random 4KB, 5 sec ramp-up to avoid latency spike at start, 1-2GB test area, 1 min runtime. 1 worker, and following queue depths:
1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 24, 28, 32, 64, 128, 256.
This will provide data for a depth analysis of how the card handles random reads at different queue depths. I have done similar setups to analyze other SSDs, and it gives good graphs.
If it's not too much work, the same setup with 100% write would also give great info.
If you still feel up to more benching after that, a run with 66% read will show how the card handles mixed read/write.
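For what it's worth, that sweep is cheap in wall-clock time. A quick timing sketch, assuming the 5 s ramp-up and 1 min runtime suggested above:

```python
# Queue depths and timings from the proposed IOmeter sweep above.
queue_depths = [1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 24, 28, 32, 64, 128, 256]
ramp_s, run_s = 5, 60
passes = 3  # 100% read, 100% write, 66% read mix

per_pass_min = len(queue_depths) * (ramp_s + run_s) / 60
print(round(per_pass_min, 1))        # -> 18.4 minutes per pass
print(round(passes * per_pass_min))  # -> 55 minutes for all three passes
```

Under an hour for the full read/write/mixed set, so it should be doable in one benching session.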
I quote myself from the thread Storage > LSI 9211-8i: (partial quote)
So like i said in the other thread, I think ioXtreme will do great at the IOPS + IOPS/[average accesstime] graph. The reason for making a graph with those parameters is that it weighs heavily on accesstime, which is the great strength of SSDs, and also on the level of parallel architecture and controller technology that allows scaling IOPS without gaining too much latency. But it also rewards high IOPS at high queues even with higher latency, which IOPS/[average accesstime] alone doesn't, and I think most of us agree that high IOPS throughput at high QD is valued despite higher accesstime, though lower accesstime is better.Quote:
As an example, here is Test 1 with almost the same parameters done on an X25-M connected via eSATA to a laptop. I know this is a bit different from what we will be testing, but the analysis method is the same and should bring useful data and graphs. Benching data provided by Anvil; I've crunched the numbers.
Link to benchmark screenshots. (click spoilers to see screenshots)
Links to graphs generated from data:
IOPS by QD
Average accesstime by QD
Max accesstime by QD (this one went bad because of eSATA and craptop)
Snapshot of the spreadsheet used for first 3 graphs
And then the complicated and even more interesting stuff:
IOPS/average accesstime by QD
IOPS vs IOPS/accesstime by QD (2 competing graphs)
IOPS + IOPS/accesstime by QD (2 stacked graphs)
And finally, a link to an Excel 2007 spreadsheet with all raw data typed in, calculations (a few notes) and graphs. (XS wouldn't allow me to upload .xlsx, so I put it on a fileshare service)
Personally I love the last graph here with IOPS + IOPS/accesstime, as it depends on both high IOPS and simultaneously low accesstime for high scores. I bet ioDrive and ioXtreme will own this particular graph, as they are designed for low latency and with a massively parallel design.
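As a rough illustration of that combined metric, here is a minimal sketch; the (QD, IOPS, avg access ms) rows are made-up numbers for shape only, not real ioDrive/ioXtreme data:

```python
# Hypothetical queue-depth sweep results: (queue depth, IOPS, avg access ms)
rows = [
    (1,  11000, 0.09),
    (4,  35000, 0.11),
    (16, 80000, 0.20),
    (64, 110000, 0.58),
]

def combined_score(iops, access_ms):
    # IOPS + IOPS/accesstime: rewards raw throughput while still
    # penalizing drives that only reach high IOPS via high latency
    return iops + iops / access_ms

for qd, iops, ms in rows:
    print(qd, round(combined_score(iops, ms)))
```

The IOPS/accesstime term dominates at low latency, which is why low-latency PCIe devices should score well even before their high-QD IOPS kick in.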
@gullars you should make a profile to do the testing that you need, then we could just show you the results in the spreadsheet iometer creates!
oh wait, i guess you can't put that all in one profile!