Yay! Finally a mod saw fit to authenticate me 
Greetings to you all, I'm new as a participant here on XS, but I've been reading a lot of SSD-related threads and lurking on XS for almost 2 years now. I hang out with Nizzen and Anvil on a Norwegian forum where we work on two major SSD threads; I started the first and biggest one a bit over a year ago, and the other one is new this fall and focuses on benchmarks. The threads are in Norwegian, but the performance numbers, settings, and unit names should be understandable, and everything is summarized in the first post.
I saw you requested suggestions for benchmarking your 9260-8i vs. the 9211-8i and also comparing them to the Areca 1231, so I thought I'd share some suggestions.
I'd just like to apologize right away for the wall of text... :P
As I enter the thread, these are the currently available setups for testing that I'm aware of:
Computurd: LSI 9260-8i and 9211-8i with 8 OCZ Vertex
Nizzen: Areca 1680ix with 4GB RAM, other misc RAID controllers, a bunch of Vertex (12?), and some x25-M. And 10(+?) 15K SAS.
Anvil: LSI 9260-8i, PERC 6/i, 4x x25-M 160 G2, 4x x25-M 80 G1, (+1 x25-M 160GB G2 in laptop and a retired 250GB Vertex).
One-Herz: Areca with unnamed SSDs. x25-E?
Tiltevros: LSI 9211 and 6 (5?) x25-M.
Biker: LSI 92xx with unknown # OCZ Agility.
And possibly:
Stevero: 3 Acards in 6x4GB R0 on Areca 1231ML. (1231ML/8xR0acard ?)
Jor3lBR: Kingston ? and 4x Supertalent ? 32GB.
Sadly I won't be able to contribute bench numbers at this time. I've only got an Adaptec 5805, 2x Mtron Pro 32GB (bought Aug '08), and 2x OCZ Vertex 30GB. I haven't got an ICH10R either, as I'm on an AM2+ Phenom 9850 setup. Not an extreme system per se :P
SO, regarding the suggestions for benchmarking:
I suggest 3 IOmeter setups, each run across a range of queue depths (QD). For the controllers with cache, the test length should be at least 3-4 times the cache size to avoid measuring the cache speed instead of the RAID. Also, the controllers with cache should have a run time long enough that any burst to/from the cache is insignificant.
IOmeter setup 1: 100% read, 100% random, 4KB; 5 sec ramp-up to avoid the latency spike at the start.
Setup 2: 100% write, 100% random, 4KB; 5 sec ramp-up for controllers without cache, and [cache size]/[cache burst speed] sec for those with cache.
Setup 3: either 75% read or 66% read (what do you think?), 100% random, same ramp-up as setup 2.
Cache burst is determined with CDM 3.0 at 50 or 100MB length (whichever scores highest), using the 4KB QD32 numbers.
Test length: 2GB for controllers without cache, 4x the cache size for controllers with cache. Runtime: 30 sec for controllers without cache, 1 min for controllers with cache.
QD 1-256. Suggested QD stepping: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 24, 32, 64, 128, 256. I suggest starting with 1 worker; later we can compare what difference it makes to split the same total queue depth across more workers. On setups with a high number of devices in the RAID we could possibly change the stepping to reflect the total number of channels?
These 3 tests with 18 QD steps each will provide us with enough raw data to run a fairly complete analysis of controller and SSD strength for random IO, in terms of IOPS and access times. I know this will take some time, but I think it will definitely be worth it. Maybe we should just start with setup 1 to validate?
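To make it concrete, here is a small Python sketch of the proposed test matrix. The function name, the dictionary layout and the example burst speed are just my own for illustration (this is not an IOmeter config file); the actual parameters are the ones suggested above:
[CODE]
# The three IOmeter access patterns (all 4 KB, 100% random)
SETUPS = [
    {"name": "setup 1", "read_pct": 100, "write_pct": 0},
    {"name": "setup 2", "read_pct": 0,   "write_pct": 100},
    {"name": "setup 3", "read_pct": 75,  "write_pct": 25},  # or 66/34, still to be decided
]

# Suggested queue depth stepping, 18 points from 1 to 256
QD_STEPS = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 24, 32, 64, 128, 256]

def test_parameters(cache_mb=None, cache_burst_mb_s=None):
    """Return (test length in MB, runtime in s, ramp-up in s) for one controller.

    cache_mb is None for controllers without onboard cache."""
    if cache_mb is None:
        # No cache: 2 GB test file, 30 s runtime, 5 s ramp-up
        return 2048, 30, 5
    # With cache: at least 4x the cache size so we measure the array and not
    # the cache, 60 s runtime, and for the write setups a ramp-up long enough
    # to drain the cache burst (cache size / burst speed, minimum 5 s)
    ramp_up = 5
    if cache_burst_mb_s:
        ramp_up = max(5, round(cache_mb / cache_burst_mb_s))
    return 4 * cache_mb, 60, ramp_up

# Example: Areca 1680ix with 4 GB cache (the burst speed here is a placeholder,
# the real number comes from the CDM 4KB QD32 run) vs. a cache-less HBA
print(test_parameters(cache_mb=4096, cache_burst_mb_s=400))  # -> (16384, 60, 10)
print(test_parameters())                                     # -> (2048, 30, 5)
[/CODE]
That way everyone runs the same matrix regardless of how much cache their controller has.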
As an example, here is test 1 with almost the same parameters, done on an x25-M connected via eSATA to a laptop. I know this is a bit different from what we will be testing, but the analysis method is the same and should produce useful data and graphs. Benching data provided by Anvil; I've crunched the numbers.
Link to benchmark screenshots. (click spoilers to see screenshots)
Links to graphs generated from data:
IOPS by QD
Average accesstime by QD
Max accesstime by QD (this one went bad because of eSATA and craptop)
Snapshot of the spreadsheet used for first 3 graphs
And then the complicated and even more interesting stuff:
IOPS/average accesstime by QD
IOPS vs IOPS/accesstime by QD (2 competing graphs)
IOPS + IOPS/accesstime by QD (2 stacked graphs)
And finally, a link to the Excel 2007 spreadsheet with all raw data typed in, calculations (and a few notes), and graphs. (XS wouldn't allow me to upload .xlsx, so I put it on a file-sharing service.)
Personally I love the last graph here with IOPS + IOPS/accesstime, as it takes both high IOPS and simultaneously low access time to get a high score. I bet the ioDrive and ioXtreme will own this particular graph, as they are built for low latency with a massively parallel design.
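If anyone wants to reproduce the derived numbers from the spreadsheet, here is a minimal Python sketch, assuming you have already pulled QD, IOPS, and average access time out of the IOmeter results. The sample values are made up purely to show the calculation; they are not real x25-M numbers:
[CODE]
sample_run = [
    # (queue depth, IOPS, average access time in ms) -- made-up example data
    (1,   5000, 0.20),
    (4,  15000, 0.27),
    (32, 35000, 0.91),
]

for qd, iops, avg_ms in sample_run:
    iops_per_ms = iops / avg_ms      # the "IOPS / average access time" graph
    combined = iops + iops_per_ms    # total height of the stacked "IOPS + IOPS/accesstime" graph
    # Sanity check: with one worker, average access time should be close to
    # QD / IOPS * 1000 ms (Little's law), so the reported latency can be verified
    expected_ms = qd / iops * 1000
    print(f"QD {qd:>3}: IOPS/ms = {iops_per_ms:8.0f}, combined = {combined:8.0f}, "
          f"expected latency ~ {expected_ms:.2f} ms (reported {avg_ms:.2f} ms)")
[/CODE]
The "combined" column is how I read the stacked graph: the total height is IOPS plus IOPS per ms of average access time, which is why it rewards controllers that keep latency low while the IOPS climbs.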
I would love to hear what you think of this 4K random read analysis of the x25-M over eSATA to the craptop that I got done yesterday.