
Thread: LSI 9271-8iCC Slow Read/Write

  1. #1
    Registered User
    Join Date
    Apr 2013
    Posts
    2

    LSI 9271-8iCC Slow Read/Write

    We just finished setting up our new server with a fairly high-end array and were really put off by the read/write results we got. First off, here are the specs:

    2 x E5-2620s on a Supermicro board
    8 x 2TB in RAID-10 for the main storage
    2 x 120GB SSDs in RAID-1 for SSD caching (CacheCade)
    LSI 9271-8iCC (with LSI-recommended settings)

    Here is what we are getting for read speeds:

    /dev/sdb:
    Timing cached reads: 16740 MB in 1.98 seconds = 8436.40 MB/sec
    Timing buffered disk reads: 1338 MB in 3.00 seconds = 445.70 MB/sec
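    For reference, those numbers are hdparm output; reproducing them is just:

        hdparm -tT /dev/sdb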

    We are using the latest firmware and drivers, and it's the same on CentOS 6's 2.6.32 kernel and on mainline 3.8.x.
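    For anyone who wants to verify that, the firmware package and kernel driver version can be read with something like the lines below (assuming the MegaCLI utility is installed; the binary name and path vary between installs):

        MegaCli64 -AdpAllInfo -aALL | grep -i 'FW Package'
        modinfo megaraid_sas | grep -i version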

    If I compare this to another one of our servers (Adaptec 5805Z, 6x WD RE4 1TB):

    /dev/sda:
    Timing cached reads: 25604 MB in 2.00 seconds = 12816.07 MB/sec
    Timing buffered disk reads: 2136 MB in 3.00 seconds = 711.98 MB/sec

    With more disks, faster memory, and twice as much of it, I would expect higher figures.

    Any ideas why we are getting such slow speeds?

  2. #2
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    How are you attaching 10 drives to the 8-port LSI 9271?

    The array is not in the middle of an 'Initialisation'?
    This can take many, many hours, possibly over a day, and will slow down reads/writes.
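    If you want to check from the OS, something like this should show the logical drive state and any background initialisation progress (again assuming MegaCLI is installed; binary name/path varies):

        MegaCli64 -LDInfo -LALL -aALL
        MegaCli64 -LDBI -ShowProg -LALL -aALL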

    Did you run the benchmark more than once? The first time round CacheCade will still be caching the data to the SSDs, so the second run onwards should do better.
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333MHz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs

  3. #3
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    What type of SSDs are you using? Do you have a BBU, and is it in a learning cycle? How large is your hot dataset, and how repetitive is the workload? It might require extra time for cache priming.
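    If you're not sure about the BBU, its state (and whether a relearn cycle is running) can be checked with something along these lines, assuming MegaCLI is installed:

        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL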
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  4. #4
    Registered User
    Join Date
    Apr 2013
    Posts
    2
    Thank you for the replies!

    1. They are attached to the LSI card via a SAS expander backplane.

    2. No, it isn't. It's been online for many days now, too.

    3. Yeah, I did the test 6 times in a row and got similar results.

    4. For the SSDs we are using MKNSSDCR120GB-DX7 drives for CacheCade in RAID-1.

    Again though, with more disks in our LSI array, more RAM, faster RAM, PCIe 3.0, FastPath, and CacheCade, we would expect at least better results than the older, lower-spec six-drive Adaptec array:

    Adaptec 6 disk:

    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 13.8036 s, 778 MB/s

    LSI 8 disk:

    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 23.4447 s, 458 MB/s
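    Both of those are dd runs of the same size (bs=1M, count=10240 matches the 10737418240-byte total). A sequential write test of that size would look something like the line below; the source, target path and direct-I/O flag here are placeholders rather than exactly what we used:

        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=10240 oflag=direct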

  5. #5
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Some comments:
    I would not use any SSD with a SandForce controller as a caching SSD (write performance collapses when heavy write activity fills the SSD quickly, as it will in a cache).
    Your LSI controller uses the LSI 2208 and not the more modern LSI 2308 (the transaction rate on high-throughput I/O is 2-3x higher).
    The dual LGA-2011 motherboards are I/O marvels. My dual-socket system reads a terabyte in a minute (16 GB/s average); the highest peak I have achieved is an I/O rate of 20 GB/s streaming, or 2.2 million IOPS with 4KB sectors (= 8.6 GB/s with random I/O).

    To identify the root cause, I'd turn off CacheCade, evaluate the HD array on its own (no expander), and move on from there.
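    A minimal baseline read straight off the block device, bypassing the filesystem and page cache, would be something like this (device name assumed):

        dd if=/dev/sdb of=/dev/null bs=1M count=10240 iflag=direct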

    Andy

  6. #6
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    The SAS expander will add latency.

    Is CacheCade set to write-back?
    I would have chosen RAID-0 for the two CacheCade SSDs.

    What RAID-10 settings are you using on the 9271?
    Read Ahead on/off, etc.?
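    The current settings can be dumped with something like this (assuming MegaCLI; look for the strip size and the current cache policy lines):

        MegaCli64 -LDInfo -LALL -aALL | grep -i -E 'strip|ahead|cache'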
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333MHz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs
