Sorry for being away, but I have to work to make my living.
Here are some new benches:
http://img101.imageshack.us/img101/3...ptureya.th.jpg
http://img408.imageshack.us/img408/8...ture1na.th.jpg
http://img44.imageshack.us/img44/1467/newweb.th.jpg
http://img202.imageshack.us/img202/885/new311.th.jpg
http://img140.imageshack.us/img140/6003/new31p.th.jpg
http://img594.imageshack.us/img594/8962/new21.th.jpg
http://img255.imageshack.us/img255/4913/new12.th.jpg
Welcome back.
Just a quick question: What drives, controller, and settings are you using to produce those numbers?
I see you get over 2000MB/s sequential read, and 1900MB/s+ at 64KB (random?).
I'll also note you get 100-120K IOPS both read and write in CDM, but around 180K IOPS in AS SSD. I'd speculate you are CPU-bound in CDM since it's single-threaded, but either controller-bound or drive-bound in AS SSD. 180K/8 = 22.5K read IOPS per drive if you are using 8R0, and QD 64/8 = QD 8 per drive; that would suggest Intel X25 drives. The random write IOPS also support the drives being X25-Ms, at about 10-15K each. But 500MB/s write from 8 drives seems a bit low, about 60MB/s from each.
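To spell out that arithmetic, here's a quick Python sketch; the array figures are the ones quoted above, and the per-drive numbers are just simple division:

# Rough per-drive breakdown of the array results quoted above.
drives = 8
array_read_iops = 180_000    # AS SSD random read result
array_qd = 64                # queue depth of that test
array_write_mbps = 500       # write bandwidth result

print(array_read_iops / drives)   # 22500.0 read IOPS per drive
print(array_qd / drives)          # 8.0 -> QD 8 per drive
print(array_write_mbps / drives)  # 62.5 -> ~60MB/s write per drive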
@ Gullars: sequential read test in IOmeter.
LSI 9211-8i with 8 Intels @ R0, 64K stripe size.
Welcome back Tiltevros
+1 with Anvil - would love to see a 9211 tweaker's guide from you!
I'm waiting on some graphs from you to start writing, but.........
Just to clarify, you are using integrated RAID, and still get 180K IOPS?
Would you mind running the PCMark Vantage HDD test on the array?
Also, could you run this IOmeter config? It's the 512B-64KB random read QD 1-128 file I've used earlier to make some nice graphs.
It will take about 20-30 minutes to run.
I'd also love it if you could run the corresponding multi-threaded test (4 workers) to see if it makes a difference on IOPS. I custom-made it for no cache, so it will take 1/12 of the time of the full 1-worker test.
Since it's 4 workers, each will cycle QD 1-32, for a total QD of 4-128 (see the sketch below).
Results should be fairly accurate for all controllers not using cache.
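For anyone curious what those configs actually step through, here's a rough Python sketch of the parameter matrix; the authoritative step list is in the .icf files, this is just an approximation:

# Approximate sweep matrix for the two configs described above.
block_sizes = [512 * 2**i for i in range(8)]   # 512B .. 64KB, doubling
per_worker_qds = [2**i for i in range(8)]      # QD 1 .. 128, doubling

def steps(workers):
    # Yields (block size, per-worker QD, total outstanding IOs),
    # capping total QD at 128 as the 4-worker config does.
    for bs in block_sizes:
        for qd in per_worker_qds:
            if qd * workers <= 128:
                yield bs, qd, qd * workers

print(sum(1 for _ in steps(1)))  # 64 steps with 1 worker
print(sum(1 for _ in steps(4)))  # 48 steps with 4 workers (QD 1-32 each)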
EDIT: If you run these 2 configs, I'll make you loads of nice graphs and compare them to other storage setups I already have the corresponding data for (the first config).
The second config will show whether you are CPU-limited on one core in the first config, and comparing the graphs will show by how much compared to 4 cores.
I posted graphs which can be seen as examples in the C300 thread.
EDIT2: direct link to the graphs:
http://www.xtremesystems.org/forums/...1&d=1272302732
Here we go, random read v4.
Great, and the second file with 4 workers?
And the 4 workers.
Tilt - 1.7GB/s 64KB random reads @ QD 64 - very nice!
@ QD 128 I have 1.84GB/s :P
I'm still waiting for the graphs :P
Welcome back Tiltevros!!! Let's see what you are showing off this time....:eek::eek::eek::eek:
wowowowow 2 days and still waiting lol
breaking the 2GB/s
I've kinda been flooded the last week, will get to it later today or tomorrow.
I'm under group pressure for some gaming tonight, so I'll see how long it goes on.
holy cow! awesome results tilt!
4 days and still waiting lol
Here's something you can look at while I finish up making more graphs. It's 3D diagrams mapping IOPS performance by block size and QD, bandwidth by block size and QD, and average access time by block size and QD.
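If anyone wants to build similar diagrams from their own IOmeter runs, here's a minimal Python/matplotlib sketch; the data points are placeholders, not Tilt's results:

# Minimal sketch: 3D surface of IOPS over block size and QD.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib

# (block size in KB, queue depth, IOPS) - placeholder values
data = [(4, 1, 25_000), (4, 8, 120_000), (4, 64, 180_000),
        (64, 1, 8_000), (64, 8, 22_000), (64, 64, 29_000)]
bs, qd, iops = map(np.array, zip(*data))

ax = plt.figure().add_subplot(projection="3d")
ax.plot_trisurf(np.log2(bs), np.log2(qd), iops, cmap="viridis")
ax.set_xlabel("block size (log2 KB)")
ax.set_ylabel("queue depth (log2)")
ax.set_zlabel("IOPS")
plt.show()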
EDIT: There is something wrong with the file you sent me for 0.5-64KB QD 1-128, 4 workers. The results file only had 1 worker, but the attached config file made 4 workers when I ran it.
New graph.
8R0 X25-M G2 80GB on LSI 9211-8i, 4KB random read (4K aligned), 1 vs 8 workers, QD 1-128 (you somehow dropped the QD 32 step in the 8-worker config, Tilt...)
Attachment 103724
EDIT: first upload was the wrong image :P
EDIT2: Theoretically possible scaling will be 240-320K IOPS (see the sketch below). It looks like you can pass 200K IOPS with higher queue depths; QD 256 would likely give around 210-220K, and QD 512 may give 230-240K (8 workers or more).
EDIT3: Perhaps a more powerful OC could help too?
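To spell out where that scaling ceiling comes from; the per-drive figure is my ballpark for an X25-M G2 at 4KB random read, not something measured in this thread:

# Ballpark array ceiling, assuming each X25-M G2 manages roughly
# 30-40K 4KB random read IOPS on its own (my assumption).
drives = 8
per_drive_low, per_drive_high = 30_000, 40_000
print(drives * per_drive_low, drives * per_drive_high)  # 240000 320000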
~160MB/s at QD 8 and ~80MB/s at QD 4 random read is about the same as the 9260 graphs show; maybe the 9211 is even a bit faster.
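For reference, converting those bandwidth figures back to IOPS at a fixed block size is just a division; a quick sketch, assuming these are the 4KB random read numbers:

# Bandwidth -> IOPS for a fixed block size (decimal MB assumed;
# binary units would shift the result by a couple of percent).
def iops_from_mbps(mbps, block_kb):
    return mbps * 1000 / block_kb

print(iops_from_mbps(160, 4))  # ~40000 IOPS at QD 8
print(iops_from_mbps(80, 4))   # ~20000 IOPS at QD 4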
Can I have the fixed XLS file?
Or, if you want, put all the results in the same 2D graphs.
Thanks.