Quote Originally Posted by lowfat
What are you using for drives? Unless you are going over the limit of the ICH*R there is no reason to go with a RAID card, as it won't benefit sequential reads/writes.
Yep, I'm aware, thx for the reminder. I only have one X25-M G1, but I'm looking to get four G2s soon. I'm still on the fence about which card to get; hopefully Areca will bring some new ones to the table too.

Quote Originally Posted by alfaunits
I am talking about IOMeter database pattern I/O, which is random I/O.
I really don't get what you are trying to say...
I'll try to clarify:

The "Database" workload is 100% random, but 67% read, 33% write and 8k blocks only. Obviously you can't generalize that to other patterns e.g 4k transfersize, or 0.5k transfersizek etc... So for those specific workloads that fit this pattern, that test may be valid - but only for the queue depth. But you have to plot different queue depths, and measure IOPS. Yes , the equation Total IOPS = 1000ms / avg IO response time is true; but "IOPS" itself is useless without context or testing conditions. Similarly, "access time" is useless without clarifying what access time you actually are referring to. It's like saying I have 1000 pounds... Well 1000 pounds of what? coal or gold?

The "access times" read from such programs like hdtune , everest etc... only apply to a linear read or write sequential pattern. The equivalent pattern in IOMeter would be 0% random, 1 Outstanding I/O. This is not very useful, because in reality, different workloads may have very different patterns. For example, surfing web is supposed to be 20-40 queue depth or something like that. "access time" isn't a constant . It's a function of many different things including workload, firmware, CPU etc....So similarly "access time" is useless without context or clarification.

Moreover, "average" access times might not even be all that useful a measure - it's only 1 measure of many that you should look at (Important to look at the whole picture and context). Maximum response time or maximum latency might be more important - because this is perceptible to humans. For example, High maximum response times on 4kb small random writes at practically all queue depths was seen on the early SSD's with jmicron = stuttering.

Sorry for the long post...

Anxiously awaiting more benchmarks...