
8 x Samsung 830 256GB's + Areca 1882-ix-16 - 800MB/s+ only. What's wrong?



hot_wired13
01-01-2013, 12:41 AM
Hi guys,

Saw that you guys do xtreme builds, and I think my build's a little xtreme too, so it probably qualifies.

Basically I have this setup which seems to be grossly underperforming in terms of read/write speed, even though I've chucked in quite a bit of decent stuff.

I'm using 8 x Samsung 830 256GB's (I have 10 more new ones waiting to go into the chassis, though) and an Areca 1882-ix-16 (which supports 16 internal + 4 external ports) running in RAID0.

The chassis is a barebone Supermicro SYS-2027R-N3RFT+ (http://www.supermicro.com/products/system/2U/2027/SYS-2027R-N3RFT_.cfm) and I have 8 x 16GB Hynix DDR3 1066 ECC REG PC3-8500R (128GB in total) and 2 x Intel Xeon E5-2620 CPUs.

The system is running VMware ESXi as the host OS, but I do not believe that it's the bottleneck - a simple dd test done within ESXi's Linux console shows this:

vmfs/volumes/500768db-efaf45e0-45a1-0025907a08fb # time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
real 0m 19.66s
user 0m 0.00s
sys 0m 0.00s

That's around 813.83 MB/s... which is horribly slow considering 8 x Samsung 830s should average around 3 GB/s+?
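For reference, that works out to 2,000,000 x 8 KB, roughly 16 GB, written in 19.66 s, i.e. a bit over 800 MB/s. One caveat: the run above streams zeroes through the buffer cache. Assuming the dd build in the ESXi console supports the direct I/O flags (it may not), a sketch like this should give a more representative number; the file path and sizes are just placeholders:

time sh -c "dd if=/dev/zero of=ddfile bs=1M count=16000 oflag=direct && sync"   # ~16GB sequential write, bypassing the buffer cache
time sh -c "dd if=ddfile of=/dev/null bs=1M iflag=direct"                       # read the same file back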

Here're more tests from within a Windows 2008 R2 VM running inside VMware ESXi:

http://yandao.com/temp/1.png
http://yandao.com/temp/2.png
http://yandao.com/temp/3.png

What could possibly have gone wrong to incur such low speeds?

tived
01-02-2013, 12:54 AM
Hi,

I am not sure if this is helpful, but I have the Areca 1882ix-24 with 4GB cache, and this is what I got in ATTO:
http://img233.imageshack.us/img233/5827/areca1882ix8xsamsung830.jpg (http://imageshack.us/photo/my-images/233/areca1882ix8xsamsung830.jpg/)

It's on the EVGA SR-2 system below; I just haven't updated it with my new storage yet.



Henrik

hot_wired13
01-02-2013, 01:01 AM
Hi,

I am not sure if this is helpful, but I have the Areca 1882ix-24 with 4GB cache, and this is what I got in ATTO:
http://img233.imageshack.us/img233/5827/areca1882ix8xsamsung830.jpg (http://imageshack.us/photo/my-images/233/areca1882ix8xsamsung830.jpg/)

It's on the EVGA SR-2 system below; I just haven't updated it with my new storage yet.



Henrik

Hi Henrik,

Thanks for the reply!

Wow, the speeds you get seem almost double mine :(

Did you change any other settings, say in the BIOS or something? Like perhaps forcing x16 for the PCIe slot?

The server these drives are in is a live, running server, hence I'm a little hesitant to mess with settings at this point.

Cheers,
Kelvin

mobilenvidia
01-03-2013, 01:08 AM
ATTO really means nothing. Here is my IBM M5016 (LSI 9265) with 6x Hitachi 2TB spindle drives in RAID6:
http://www.files.laptopvideo2go.com/webpictures/m5016_atto.png

hot_wired13
01-03-2013, 02:35 AM
ATTO really means nothing. Here is my IBM M5016 (LSI 9265) with 6x Hitachi 2TB spindle drives in RAID6:
http://www.files.laptopvideo2go.com/webpictures/m5016_atto.png

Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?

Nizzen
01-03-2013, 09:13 AM
Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?

ATTO is useless with controllers that have cache.
Use IOMeter or Anvil with a test file size bigger than 8GB.
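
If fio is available on the box, a large sequential run with direct I/O is another way to get well past the controller cache. This is only a sketch; the file path, size and ioengine are placeholders to adapt (libaio is the Linux engine, Windows builds use a different one):

fio --name=seqwrite --filename=testfile --size=64G --bs=1M --rw=write --direct=1 --ioengine=libaio
fio --name=seqread --filename=testfile --size=64G --bs=1M --rw=read --direct=1 --ioengine=libaio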

mobilenvidia
01-03-2013, 09:38 AM
Hmm. Could it be possible that your 6 x 2TB speeds are based on RAID cache?
Yes it is. As shown below, ATTO is measuring cache speed, which is the point I was trying to make.


ATTO is useless with controllers that have cache.
Use IOMeter or Anvil with a test file size bigger than 8GB.

Here is the Anvil result for the same 6x HDDs in RAID6, to show how ATTO only measures controller-cache-to-system speed:
http://www.files.laptopvideo2go.com/webpictures/m5016_anvil.png

BTW, I'm not using the M5016 for the array in real life; an LSI 9261 with FastPath and CacheCade does a much better job of it in Anvil results (but worse in ATTO), proving again ATTO's inability to test anything useful other than bragging rights.

Zaxx
01-03-2013, 01:34 PM
Shouldn't be long until a resident RAID-head pops in... I don't know much at all about enterprise/server-level RAIDed storage setups myself. :shrug:

tived
01-03-2013, 05:12 PM
I only added the ATTO benchmark as it was the only one available to me at the time, and it was one the OP had used as a reference.

I am not sure how much cache is on the OP's card, but we are comparing the same controller with similar drives, though mine are half the size, which should have favored the OP.

It isn't the actual speed that is the concern, it's the different results we are getting with similar gear.

Henrik

canthearu
01-06-2013, 07:24 PM
Try it outside of a virtual environment.

Even the Linux console in ESXi is a virtual machine, so it will have the same kinds of overheads as any other virtual machine.

rkagerer
02-03-2013, 01:32 AM
Hot_wired13, did you ever get this resolved and identify the bottleneck?

One thing you could try is disabling Read Ahead caching on the controller. I had a poor performing RAID setup with a similar card, and my numbers instantly jumped up a bit when I started disabling various caches. If I recall correctly, it was the Volume ReadAhead cache that was slowing me down.

Andreas
02-09-2013, 10:38 AM
I built a system with 48 SSDs (Samsung 830 128GB).
Best read performance was approx. 20 GB/s, write performance ca. 15 GB/s. These are sustainable numbers up to 1 terabyte; a 1-petabyte read takes 16 hrs, which works out to about 16 GB/s average (multiple passes over the original dataset).

Some findings:
The usual benchmark tools show suboptimal scaling behavior. Many of those tools don't consider bigger datasets (above 100 GB, for instance); use IOMeter instead for more predictability.
Make sure that all SSDs in one RAID perform equally well; the lowest-performing SSD impacts the whole array. After 50 petabytes of writes, performance differences between individual SSDs reached 1:10 in my case, and some of the drives could not be recovered with a secure erase.
Driver quality and RAID chip performance matter. The Adaptec 7 series is great with HDDs, but lacks the SSD performance of the new LSI controllers. I don't know how the Areca controller's SSD scaling compares to its HDD performance; that needs to be checked.
Many of the built-in OS tools don't have the same aggressive threading model and double-buffering management as the leading benchmark tools, so don't expect identical performance.
Be careful about the data reliability of such an array.

.... to name a few ...
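
One rough way to check the per-SSD point above, assuming the drives show up as individual block devices on a Linux box before (or after) they go into the array; the device names below are placeholders and the test is read-only:

for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    dd if=$d of=/dev/null bs=1M count=4096 iflag=direct 2>&1 | tail -n1   # dd prints its MB/s summary on stderr
done

Any drive whose number sits far below the rest is the one dragging the whole array down.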

regards,
Andy

Nizzen
02-09-2013, 03:11 PM
I built a system with 48 SSDs (Samsung 830 128GB).
Best read performance was approx. 20 GB/s, write performance ca. 15 GB/s. These are sustainable numbers up to 1 terabyte; a 1-petabyte read takes 16 hrs, which works out to about 16 GB/s average (multiple passes over the original dataset).

Some findings:
The usual benchmark tools show suboptimal scaling behavior. Many of those tools don't consider bigger datasets (above 100 GB, for instance); use IOMeter instead for more predictability.
Make sure that all SSDs in one RAID perform equally well; the lowest-performing SSD impacts the whole array. After 50 petabytes of writes, performance differences between individual SSDs reached 1:10 in my case, and some of the drives could not be recovered with a secure erase.
Driver quality and RAID chip performance matter. The Adaptec 7 series is great with HDDs, but lacks the SSD performance of the new LSI controllers. I don't know how the Areca controller's SSD scaling compares to its HDD performance; that needs to be checked.
Many of the built-in OS tools don't have the same aggressive threading model and double-buffering management as the leading benchmark tools, so don't expect identical performance.
Be careful about the data reliability of such an array.

.... to name a few ...

regards,
Andy
No picture or screenshot :(
Here on XS: picture or it didn't happen ;)

Andreas
02-09-2013, 09:05 PM
No picture or screenshot :(
Here on XS: picture or it didn't happen ;)
http://www.xtremesystems.org/forums/showthread.php?221773-New-Multi-Threaded-Pi-Program-Faster-than-SuperPi-and-PiFast&p=5153478&viewfull=1#post5153478
http://www.pbase.com/andrease/image/146536101/original.jpg
rgds,
Andy