View Full Version : LSI 9271-8iCC Slow Read/Write

04-06-2013, 12:11 PM
We just finished setting up our new server with a fairly high-end array, and we were really put off by the read/write results we got. First off, here are the specs:

2 x E5-2620's w/ SM board
8 x 2TB in Raid-10 for the main storage
2 x 120GB SSD in Raid-1 for ssd caching (cachecade)
LSI 9271-8iCC (with LSI recommended settings)

Here is what we are getting for read speeds:

Timing cached reads: 16740 MB in 1.98 seconds = 8436.40 MB/sec
Timing buffered disk reads: 1338 MB in 3.00 seconds = 445.70 MB/sec

We are using the latest firmware and drivers, and it's the same on CentOS 6's 2.6.32 kernel or mainline 3.8.xx.

If I compare this to another one of our servers (Adaptec 5805Z, 6x WD RE4 1TB):

Timing cached reads: 25604 MB in 2.00 seconds = 12816.07 MB/sec
Timing buffered disk reads: 2136 MB in 3.00 seconds = 711.98 MB/sec

With more disks, faster memory, and twice as much of it, I would expect higher figures.

Any ideas why we are getting such slow speeds?
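For reference, those figures match the output of hdparm's built-in benchmark; a minimal sketch (the device name /dev/sda is a placeholder for the array's block device):

```shell
# Run hdparm's cached/buffered read benchmark a few times and compare runs
# (requires root; replace /dev/sda with your array's device):
# for i in 1 2 3; do hdparm -tT /dev/sda; done

# The MB/sec figure is simply MB read divided by elapsed seconds, e.g. the
# buffered run above (hdparm's own number is slightly lower because the
# actual elapsed time was a bit over 3.00 s):
awk 'BEGIN{printf "%.0f MB/sec\n", 1338/3.00}'   # prints 446 MB/sec
```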

04-06-2013, 10:43 PM
How are you attaching 10 drives to the 8-port LSI 9271?

Is the array in the middle of an initialisation?
That can take many hours, possibly over a day, and slow down reads/writes.

Did you run the benchmark more than once? The first time round, CacheCade will cache the data to the SSDs, so the 2nd run onwards should do better.

04-07-2013, 02:41 AM
What type of SSDs are you using? Do you have a BBU, and is it in a learn cycle? How large is your hot dataset, and how repetitive is the workload? It might need extra time for cache priming.
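If you want to rule out a BBU learn cycle, the LSI CLI can report it; a sketch assuming the MegaCli64 binary is installed (the binary name and path vary by distro, and it needs root):

```shell
# Dump battery backup unit status for all adapters; look for
# "Learn Cycle Active : Yes" and the current charge state.
MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll
```

During a learn cycle the controller typically drops to write-through mode, which can cut write performance noticeably.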

04-07-2013, 07:30 AM
Thank you for the replies!

1. They are attached to the LSI card by a SAS expander backplane.

2. No, it isn't. It's been online for many days now, too.

3. Yeah, I did the test 6 times in a row and got similar results.

4. For SSDs we are using MKNSSDCR120GB-DX7s for the CacheCade in RAID-1.

Again though, with more disks in our LSI array, more RAM, faster RAM, PCIe 3.0, FastPath, and CacheCade, we would expect at least better results than the older, lower-spec Adaptec 6-drive array:

Adaptec 6 disk:

10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 13.8036 s, 778 MB/s

LSI 8 disk:

10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 23.4447 s, 458 MB/s
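Those records look like the tail of a 10 GiB dd run (the exact flags used aren't shown; bs=1M count=10240 with a sync flag is an assumption). The MB/s figure dd prints is just bytes divided by elapsed seconds, in decimal megabytes:

```shell
# Likely shape of the benchmark (hypothetical flags; writes a 10 GiB file):
# dd if=/dev/zero of=/path/to/testfile bs=1M count=10240 conv=fdatasync

# dd's reported rate = bytes / seconds / 10^6:
awk 'BEGIN{printf "%.0f MB/s\n", 10737418240/13.8036/1000000}'  # Adaptec run
awk 'BEGIN{printf "%.0f MB/s\n", 10737418240/23.4447/1000000}'  # LSI run
```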

04-07-2013, 08:36 AM
Some comments:
I would not use any SSD with a SandForce controller as a caching SSD (write performance collapses when heavy write activity quickly fills the SSD, as happens in a cache).
Your LSI controller uses the LSI 2208, not the more modern LSI 2308 (transaction rate on high-throughput I/O is 2-3x higher).
The dual LGA-2011 motherboards are I/O marvels. My dual-socket system reads a terabyte in a minute (16 GB/sec average); the highest peak I achieved is an I/O rate of 20 GB/sec streaming, or 2.2 million IOPS with 4KB sectors (= 8.6 GB/sec with random I/O).

To identify the root cause, I'd turn off CacheCade and evaluate the HD array on its own (no expander), then move on from there.


04-07-2013, 10:36 AM
The SAS expander will add latency.

Is CacheCade in write-back mode?
I would have chosen RAID-0 for the 2 x CacheCade SSDs.

What RAID-10 settings are you using on the 9271?
Read ahead on/off, etc.?
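To answer those questions, the logical-drive cache settings can be dumped from the card itself; a sketch assuming the MegaCli64 binary (names and paths vary, and root is required):

```shell
# Show the current cache policy per logical drive: read-ahead,
# write-back vs write-through, direct vs cached I/O.
MegaCli64 -LDGetProp -Cache -LAll -aAll

# Example of enabling read-ahead on all logical drives
# (RA = read ahead on, NORA = off, ADRA = adaptive):
MegaCli64 -LDSetProp RA -LAll -aAll
```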