
Thread: 8 x Samsung 830 256GB's + Areca 1882-ix-16 - 800MB/s+ only. What's wrong?

  1. #1
    Registered User
    Join Date
    Dec 2012
    Posts
    3

    8 x Samsung 830 256GB's + Areca 1882-ix-16 - 800MB/s+ only. What's wrong?

    Hi guys,

    Saw that you guys do xtreme builds, and I think my build is a little xtreme too, so it should probably qualify.

    Basically, I have a setup that seems to be grossly underperforming in read/write speed, even though I've chucked in quite a bit of decent kit.

    I'm using 8 x Samsung 830 256GBs (with 10 more new ones waiting to go into the chassis) and an Areca 1882-ix-16 (which supports 16 internal + 4 external ports) running in RAID0.

    The chassis is a barebone Supermicro SYS-2027R-N3RFT+ (http://www.supermicro.com/products/s...27R-N3RFT_.cfm) with 8 x 16GB Hynix DDR3 1066 ECC REG PC3-8500R (128GB total) and 2 x Intel Xeon E5-2620 CPUs.

    System is running VMware ESXi as host OS but I do not believe that it's the bottleneck - a simple dd test done within ESXi's Linux Console shows this:

    vmfs/volumes/500768db-efaf45e0-45a1-0025907a08fb # time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
    2000000+0 records in
    2000000+0 records out
    real 0m 19.66s
    user 0m 0.00s
    sys 0m 0.00s

    That's around 813.83 MB/s, which is horribly slow considering 8 x Samsung 830s in RAID0 should manage around 3 GB/s+.
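The dd arithmetic is worth double-checking, since `bs=8k` means 8192-byte blocks and MB vs. MiB gets mixed up easily. A small sketch of the math (numbers taken from the dd output above), plus a note on making the test itself fairer:

```shell
# Sanity-check of the dd result above (numbers from the post):
# bs=8k = 8192-byte blocks, so 2,000,000 records = 16,384,000,000 bytes.
awk 'BEGIN { printf "%.1f MiB/s\n", 2000000 * 8192 / 19.66 / 1048576 }'

# A fairer sequential-write test forces data to disk before the clock stops;
# dd from /dev/zero into the filesystem cache can flatter the result.
# (conv=fdatasync may be unavailable in ESXi's BusyBox dd - an assumption.)
# dd if=/dev/zero of=ddfile bs=1M count=16384 conv=fdatasync
```

Either way the order of magnitude is the same: roughly 800 MB/s, far below what eight 830s should deliver.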

    Here are more tests from within a Windows 2008 R2 VM running inside VMware ESXi:

    [benchmark screenshots not preserved in this archive]
    What could possibly have gone wrong to incur such low speeds?

  2. #2
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Hi,

    I am not sure if this is helpful, but I have the Areca 1882ix-24 with 4GB cache, and this is what I got in ATTO:

    [ATTO screenshot not preserved in this archive]

    It's on the EVGA SR-2 system listed below; I just haven't updated my storage listing yet.



    Henrik
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  3. #3
    Registered User
    Join Date
    Dec 2012
    Posts
    3
    Quote Originally Posted by tived View Post
    Hi,

    I am not sure if this is helpful, but I have the Areca 1882ix-24 with 4GB cache, and this is what I got in ATTO:


    It's on the EVGA SR-2 system listed below; I just haven't updated my storage listing yet.



    Henrik
    Hi Henrik,

    Thanks for the reply!

    Wow, the speeds you get seem almost double mine.

    Did you do any other settings, say, in the BIOS? Like forcing x16 for the PCIe slot?

    The server I'm having these drives on is a live, running server, hence I'm a little hesitant to mess with settings at this point.

    Cheers,
    Kelvin

  4. #4
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    ATTO really means nothing. Here is my IBM M5016 (LSI 9265) with 6x Hitachi 2TB spindle drives in RAID6:
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333Mhz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs

  5. #5
    Registered User
    Join Date
    Dec 2012
    Posts
    3
    Quote Originally Posted by mobilenvidia View Post
    ATTO really means nothing. Here is my IBM M5016 (LSI 9265) with 6x Hitachi 2TB spindle drives in RAID6:
    Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?

  6. #6
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by hot_wired13 View Post
    Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?
    ATTO is useless with controllers that have cache.
    Use Iometer or Anvil with a test file size bigger than 8GB.
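The point generalizes: the test file must dwarf the controller cache or you are benchmarking DRAM. A rule-of-thumb sketch (the 4GB cache figure is an assumption matching the 1882ix cards discussed in this thread, and the fio invocation is a hypothetical example for when fio is installed):

```shell
# Size the test file at several times the controller cache so cache hits
# cannot dominate the result. (4GB cache assumed, as on the 1882ix cards.)
cache_gb=4
test_gb=$((cache_gb * 4))   # 4x cache -> 16 GB test file
echo "test file should be at least ${test_gb} GB"

# Hypothetical fio run against that file size, bypassing the page cache:
# fio --name=seqread --rw=read --bs=1M --direct=1 \
#     --size=${test_gb}g --filename=/path/to/testfile
```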

  7. #7
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    Quote Originally Posted by hot_wired13 View Post
    Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?
    Yes it is. As shown below, ATTO is measuring cache speed, which was the point I was trying to make.

    Quote Originally Posted by Nizzen View Post
    ATTO is useless with controllers that have cache.
    Use Iometer or Anvil with a test file size bigger than 8GB.
    Here is the Anvil result for the same 6x HDDs in RAID6, showing how ATTO only measures controller-cache-to-system speed:

    [Anvil screenshot not preserved in this archive]

    BTW, I'm not using the M5016 for the array in real life; an LSI 9261 with FastPath and CacheCade does a much better job of it in the Anvil results (but worse in ATTO).
    Proving again ATTO's inability to test anything useful other than bragging rights.
    Last edited by mobilenvidia; 01-03-2013 at 09:41 AM.
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333Mhz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs

  8. #8
    Xtreme Member
    Join Date
    Jul 2008
    Location
    In the space between...
    Posts
    345
    Shouldn't be long until a resident RAID-head pops in... I don't know much at all about enterprise/server-level RAIDed storage setups myself.
    Last edited by Zaxx; 01-03-2013 at 01:40 PM.
    'Best Bang For The Buck' Build - CM Storm Sniper - CM V8 GTS HSF
    2500K @ 4.5GHz 24/7 - Asus P8Z68-V Pro Gen3 - GSkill 2x4GB DDR3-2400 C10
    Sapphire Vapor-X 7770 OC Edition - PC Power & Cooling Silencer MkIII 600W
    Boot: 2x 64GB SuperSSpeed S301 SLC Raid 0 Work: Intel 520 120GB
    Storage: Crucial M500 1TB - Ocz Vertex 4 128GB - 4x 50GB Ocz Vertex 2
    HDDs: 2 x 1TB WD RE4 Raid0 - Ext.Backup: 2 x 1.5TB WD Blacks Raid 1

  9. #9
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    I only added the Atto benchmark as it was the only one available to me at the time, and it was one that the OP had used as reference.

    I am not sure how much cache is on the OP's card, but we are comparing the same controller with similar drives, though mine are half the size which should have favored the OP.

    It isn't the actual speed that is the concern, it's the different results we are getting with similar gear.

    Henrik

  10. #10
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Try it outside of a virtual environment.

    Even the Linux console in ESXi is a virtual machine, so it has the same kinds of overheads as any other VM.

  11. #11
    Registered User
    Join Date
    Mar 2010
    Location
    Canada
    Posts
    76
    Hot_wired13, did you ever get this resolved and identify the bottleneck?

    One thing you could try is disabling Read Ahead caching on the controller. I had a poor performing RAID setup with a similar card, and my numbers instantly jumped up a bit when I started disabling various caches. If I recall correctly, it was the Volume ReadAhead cache that was slowing me down.
    Last edited by rkagerer; 02-03-2013 at 04:23 AM.

  12. #12
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    I built a system with 48 SSDs (Samsung 830 128GB).
    Best read performance was approx. 20 GB/s, write ca. 15 GB/s; these are sustainable numbers up to 1 terabyte. A 1-petabyte read takes 16 hrs, which equals 16 GB/s average (multiple passes over the original dataset).

    Some findings:
    The usual benchmark tools show suboptimal scaling behavior. Many of those tools don't handle bigger datasets (above 100 GB, for instance). Use Iometer instead for more predictability.
    Make sure that all SSDs in one RAID perform equally well; the lowest-performing SSD impacts the whole array. After 50 petabytes of writes, performance differences between individual SSDs reached 1:10 in my case. Some of the drives could not be recovered even with a secure erase.
    Driver quality and RAID chip performance matter. The Adaptec 7 series is great with HDDs but lacks the SSD performance of the new LSI controllers. I don't know how the Areca controllers' SSD scaling compares to their HDD performance; that needs to be checked.
    Many built-in OS tools don't have the same aggressive threading model and double-buffering management as the leading benchmark tools. Don't expect identical performance.
    Be careful about the data reliability of such an array.

    ... to name a few ...

    regards,
    Andy
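Andy's second finding, that one slow SSD drags down the whole stripe, is easy to screen for. A sketch: time a direct read of each member drive individually (device names are placeholders, and the readings below are invented for illustration), then compare the spread:

```shell
# Per-drive screening: time a direct read of each member, e.g.
#   dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
# (device names hypothetical; run offline, reads only).
# Example readings in MB/s - the 120 MB/s outlier would cap the whole array:
readings="480 495 502 488 120 499 491 485"
echo "$readings" | awk '{
    min = $1; max = $1
    for (i = 2; i <= NF; i++) { if ($i < min) min = $i; if ($i > max) max = $i }
    printf "slowest %d MB/s, fastest %d MB/s, spread %.1fx\n", min, max, max / min
}'
```

A spread much beyond 1:1 on supposedly identical drives suggests a worn or misbehaving member, exactly the 1:10 effect Andy describes after heavy writes.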

  13. #13
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by Andreas View Post
    I built a system with 48 SSDs (Samsung 830 128GB).
    Best read performance was approx. 20 GB/s, write ca. 15 GB/s... [snip]
    No picture or screenshot?
    Here in XS: picture or it didn't happen.

  14. #14
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by Nizzen View Post
    No picture or screenshot?
    Here in XS: picture or it didn't happen.
    http://www.xtremesystems.org/forums/...=1#post5153478

    rgds,
    Andy
    Last edited by Andreas; 02-10-2013 at 01:07 AM.
