Ages ago? He just got this RAID card. FYI, only hold your breath if you're under water.
Not for the 1882, but for the 1680 and 1880 :) Instead of showing a useful copy, he clipped a typical Explorer copy of an ISO file to boast 2 GB/s cache speeds :)
Me or Nizzen? I don't recall doing that one :)
Nizzen.
Well, my Areca contact asked me, kind of out of the blue, if I wanted to evaluate the new 1882 controller. More than anything I am interested in compatibility with the LSI SAS expander in the Supermicro chassis. I know the 1880 did not fare well with this expander (at least the 3G version). I'm definitely interested in testing the card to see if there is any difference compatibility-wise with the expander. We currently don't use them in those systems, but we could very well be doing so in the future.
I am having a problem atm. Will seek resolution :)
I should hopefully be getting an ARC-1882i pretty soon as well. I can let people know the results I see, as I will mainly be testing RAID 6 hooked up to 24x 3TB Hitachi Ultrastars via the Supermicro chassis SAS expander. I am most interested in how rebuild speed compares to the 1880i. I know the 1880i maxes out at around 1 GB/sec write speed in RAID 6, so I am definitely curious whether this one can do better.
Excellent, Sandon. Once I get up and running we can compare notes :)
I was told they were out of ARC-1882s, so that delayed things for me a bit. They finally shipped one today, though. I was surprised to see it came from Baldwin Park and not Taiwan. I should have it by tomorrow.
Finally got the ARC-1882i. Took some shots of the ARC-1882i, ARC-1880i, and ARC-1222 for comparison. Pics were taken with my cell phone's camera, so not very good quality. With this model the manual actually has a color front page and is a lot thinner than past ones. Probably not going to test things too much until next week, though:
http://box.houkouonchi.jp/arc1882/IMG_0133-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0134-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0135-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0136-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0137-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0138-small.JPG
http://box.houkouonchi.jp/arc1882/IMG_0139-small.JPG
I will be pleased to see your results! I am hindered a bit by the review angle; gotta wait until I am done there to disclose my stuff. What type of drives are you using?
Alrighty then! I did get a fix for the issue, which involves probably the worst motherboard on the planet... the one that I use. BIG thanks to Areca for addressing that in record time! It took another manufacturer something like 8 months to fix the Option ROM issue with the EVGA (and a few other) boards, the problem being an abnormally small amount of Option ROM space on the board. I swear X79 needs to DROP NOW NOW NOW, I need a new board!
Anywho, KUDOS to Areca for their awesomely quick response :)
My initial tests will be with the Supermicro 24-disk JBOD enclosure with the flaky 3G SAS expander and 1.5TB Seagate disks, which the 1880 didn't work with. After that I will probably test with 24x 3TB Hitachi Ultrastars and a 6G expander. I won't be doing any SSD testing, as we don't really have any machines using SSDs where I work except my own server, which only has four.
I should at least be able to see whether the ~1-1.1 GB/sec write bottleneck in RAID 6 still exists on this newer controller like it does on the ARC-1880, which I see max out at around 1.7-1.8 GB/sec read and 1.1 GB/sec write in RAID 6.
I am most interested in RAID 6 rebuild times and read/write performance.
That will be very beneficial for really seeing the differences in the ROC. A big suspicion of mine is that that is where you will see the differences :) especially rebuilds!
Well, I am impressed. The 1880 didn't work well at all with this 3G SAS expander in the Supermicro chassis, but so far I have seen none of the issues I saw before. I still need to try really heavy I/O while it is rebuilding, but it only took 8.5 hours to initialize a 33TB array (24x 1.5TB in RAID 6), which is not bad. Not 100% sure whether dual-linking is working; both cables are hooked up, but I'm not sure how to verify that in the Areca BIOS or CLI. I have a rebuild going right now with no load to see how long that takes, but I am guessing it's going to be under 14 hours, going by how fast the first volume rebuilt. Not bad considering these are pretty slow, old, cheap Seagate 1.5TB AS drives. Definitely curious to see how performance is with the 3TB 6G Ultrastars on the 6G expander.
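A rough, indirect way to check the dual-linking (my suggestion, not anything from Areca's docs): a single x4 wide port at 3G carries at most about 1.2 GB/s usable (4 lanes x ~300 MB/s), so if aggregate sequential reads clear that, both links must be passing traffic.
Code:
# Sequential read across the array with the page cache bypassed;
# a sustained rate well above ~1.2 GB/s implies more than one x4 3G link is active.
dd if=/dev/sdb of=/dev/null bs=1M count=20000 iflag=direct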
Areca 1882ix-24 picture :D
http://i413.photobucket.com/albums/p...-1882ix-24.jpg
Supports only 4GB cache :( ... :p
Datasheet and manual are available @ Areca.com now :up:
OK, so like the 1880, the 3G SAS expander was no good. Started getting timeouts about 90% into a rebuild with no I/O load, and the volume eventually failed under heavy I/O during a rebuild.
I finally got the controller hooked up to 24x 3TB Hitachi Ultrastar 6G SAS drives. So far so good. It initialized the 80GB volume in under 30 seconds. I believe the controller is dual-linking over the 8x 6G connections to the enclosure's SAS expander chip. It appears to be initializing at 1% every 4 minutes for the 24x 3TB set.
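If that pace held, back-of-the-envelope (my arithmetic) puts the full background init at around 6.7 hours; sequential rates drop toward the inner tracks, so the real total should come in somewhat longer:
Code:
# 1% every 4 minutes => 100 * 4 = 400 minutes at the outer-track rate
echo $((100 * 4 / 60)) hours $((100 * 4 % 60)) minutes   # 6 hours 40 minutes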
Some images of the ArcHttp web management interface:
http://box.houkouonchi.jp/archttp188...82_1-small.png
http://box.houkouonchi.jp/archttp188...82_2-small.png
http://box.houkouonchi.jp/archttp188...82_3-small.png
http://box.houkouonchi.jp/archttp188...82_4-small.png
http://box.houkouonchi.jp/archttp188...82_5-small.png
Will have some basic benchmarks tomorrow.
Hello Everyone,
Here are the results I got from the ARC-1882X card.
RAID 0 , Single MiniSAS Cable
Attachment 121481
RAID 5, Single MiniSAS Cable
Attachment 121482
RAID 6, Single MiniSAS Cable
Attachment 121483
System Used:
Areca ARC-1882X - PCB Ver.B / Default 1GB Cache / FW: 1.49 / Bios: 1.22
Supermicro X8DAH+-F / Intel Xeon E5506 2.13 GHz Processor (x1) / 4GB DDR3
Windows 2003 64-bit / Using 6.20.00.21 SCSIPORT Driver
8x Seagate 3TB 6Gb/s SAS Constellation ES Drives - Model: ST33000650SS
Areca ARC-8026-12 6G Expander
AJA System Test / Single Volume / Frame Size: 1920x1080 10bit RGB / File Size: 16.0GB
Sandon... You're going to run the tests using the new dual-core 6G card with a 3G SAS expander? Doesn't that defeat the purpose? :shrug: You should have told Ben to provide you with the 6G expander as well. :yepp:
Here's some pics of the ARC-1882X card.
With Full Height Bracket
Attachment 121510
With Low Profile Bracket
Attachment 121511
Full Contents
Attachment 121512
I would have had this yesterday, but a network maintenance went south and I ended up spending almost my entire day on that and some other stuff. I only tested with the 3G expander first because I was waiting on hardware. Anyway, here are the array initialization and rebuild times for 24x 3TB (6G expander):
Init:
Code:
2011-10-20 12:47:06 ARC-1882-VOL#001 Complete Init 008:19:44
2011-10-20 05:17:16 ARC-1882-VOL#001 Start Initialize
2011-10-20 05:17:16 ARC-1882-VOL#000 Complete Init 000:00:27
2011-10-20 05:16:59 ARC-1882-VOL#001 Create Volume
2011-10-20 05:16:49 ARC-1882-VOL#000 Start Initialize
2011-10-20 05:16:47 ARC-1882-VOL#000 Create Volume
2011-10-20 05:16:27 Raid Set # 000 Create RaidSet

Rebuild:
Code:
2011-10-21 02:52:00 ARC-1882-VOL#001 Complete Rebuild 013:30:25
2011-10-20 13:21:35 ARC-1882-VOL#001 Start Rebuilding
2011-10-20 13:21:35 ARC-1882-VOL#000 Complete Rebuild 000:00:57
2011-10-20 13:20:37 ARC-1882-VOL#000 Start Rebuilding
2011-10-20 13:20:35 Raid Set # 000 Rebuild RaidSet
2011-10-20 13:20:35 Enc#2 Slot 12 Device Inserted
2011-10-20 13:20:02 Enc#2 Slot 12 Device Removed
2011-10-20 13:20:02 Raid Set # 000 RaidSet Degraded
2011-10-20 13:20:02 ARC-1882-VOL#001 Volume Degraded
2011-10-20 13:20:02 ARC-1882-VOL#000 Volume Degraded
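Back-of-the-envelope on that rebuild time (my arithmetic, not from the log): a rebuild rewrites the single replacement disk end to end, so 3 TB in 13:30:25 works out to roughly 62 MB/s sustained on the rebuilt drive.
Code:
# 3 TB ~ 3,000,000 MB; 13h30m25s = 48625 s
echo "scale=1; 3000000 / 48625" | bc   # ~ 61.7 MB/s on the replacement disk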
Here are write tests (direct I/O vs. non-direct I/O). Going through the memory buffer (non-direct I/O) is usually significantly slower at these speeds, but I tested it because, oddly, the %util on the drives was not near 100%. CPU usage did hit 100% at times from dd, so that might be holding results down. The iostat numbers below are 5-second averages.
dd:

Direct I/O:
Code:
raid7131:~# dd bs=1M count=60000 oflag=direct if=/dev/zero of=/dev/sdb
60000+0 records in
60000+0 records out
62914560000 bytes (63 GB) copied, 41.9692 s, 1.5 GB/s

Non-direct I/O:
Code:
raid7131:~# dd bs=1M count=60000 if=/dev/zero of=/dev/sdb
60000+0 records in
60000+0 records out
62914560000 bytes (63 GB) copied, 40.5699 s, 1.6 GB/s
For direct I/O, iostat:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    4.92    7.62    0.00   87.46

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 2938.12 0.00 1469.06 1024.00 1.05 0.36 0.23 66.51

So 1469 MB/sec at 66.51 %util.
Without direct I/O:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.03    0.00   25.52    2.27    0.00   72.17

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 479820.40 0.00 6073.60 0.00 1898.22 640.08 131.72 21.68 0.16 100.00

So 1898 MB/sec @ 100 %util.
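For anyone reproducing these numbers: the extended view above looks like sysstat's iostat with -x (extended stats) and -m (MB/s) at a 5-second interval. The exact invocation wasn't posted, so this is an assumption:
Code:
# Extended per-device stats in MB/s, refreshed every 5 seconds
iostat -xm 5 /dev/sdb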
I want to say the card is probably topping out closer to 1.9 GB/sec write speed; it's just hard to find processes that will write that fast. My ARC-1880x seemed to top out at around 970 MB/sec (iostat), so this appears almost twice as fast.
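If single-threaded dd is CPU-bound before the card is, a multi-job fio run is one way to push harder. A sketch, assuming fio with the libaio engine is available; the job parameters are illustrative, and this writes raw to /dev/sdb, so scratch arrays only:
Code:
# Four sequential writers at 1M blocks, direct I/O, queued via libaio;
# offset_increment spaces the jobs out so their regions don't overlap.
fio --name=seqwrite --filename=/dev/sdb --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
    --size=50g --offset_increment=100g --group_reporting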
I also tried running two dd's (with direct I/O) in parallel, one writing a bit later in the drive, and was able to see write speeds over 1800 MB/sec:

(these ran in parallel)
Code:
raid7131:~# dd bs=1M count=40000 oflag=direct if=/dev/zero of=/dev/sdb
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 44.988 s, 932 MB/s
raid7131:~# dd bs=1M skip=80000 count=40000 oflag=direct if=/dev/zero of=/dev/sdb
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 44.8947 s, 934 MB/s

iostat:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.00    5.25    7.29    0.00   87.38

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 3662.08 0.00 1831.04 1024.00 2.83 0.77 0.27 99.88

Reads with dd:
Code:
raid7131:~# dd count=60000 bs=1M iflag=direct if=/dev/sdb of=/dev/null
60000+0 records in
60000+0 records out
62914560000 bytes (63 GB) copied, 32.8882 s, 1.9 GB/s

iostat:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00    2.47   10.03    0.00   87.45

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 3714.37 0.00 1857.19 0.00 1024.00 1.35 0.37 0.24 87.35

Doing the same thing with parallel reads, I saw slightly higher reads and %util at 100:

iostat:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00    3.22    9.29    0.00   87.44

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 3781.60 0.00 1890.80 0.00 1024.00 3.15 0.83 0.26 100.08

dd in parallel:
Code:
raid7131:~# dd bs=1M count=40000 iflag=direct if=/dev/sdb of=/dev/null
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 43.3944 s, 967 MB/s
raid7131:~# dd bs=1M skip=40000 count=40000 iflag=direct if=/dev/sdb of=/dev/null
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 43.4524 s, 965 MB/s

Random seeks per second, 128 threads:
Code:
raid7131:~# ./seeker_baryluk /dev/sdb 128
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb [128749975552 blocks, 65919987482624 bytes, 61392 GB, 62866199 MB, 65919 GiB, 65919987 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 30 seconds..............................
Results: 2521 seeks/second, 0.397 ms random access time (686739552 < offsets < 65919562706031)

A 20x 2TB array I have on an ARC-1280 with no expander gets around 2100 seeks/sec, so this is not bad. The expander should add a bit of latency.
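As a sanity check on that figure (my arithmetic): 2521 seeks/sec spread over 24 spindles is about 105 random reads per second per drive, which is right in line with a 7200 RPM disk's ~9-10 ms random access time.
Code:
echo "scale=1; 2521 / 24" | bc    # ~ 105 IOPS per spindle
echo "scale=1; 1000 / 105" | bc   # ~ 9.5 ms effective access time per drive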
So it looks like the card is now maxing out at around 1.9 GB/sec (both read and write). I am curious whether I am hitting some bus limitation that would prevent it from going faster. The card does report x8/5G during boot.
The motherboard I'm testing with is a Supermicro X8DT6.
Could you be on x8 Gen1 or x4 Gen2 PCI-e? Either of those would top out below 1.9 GB/s, though.
Could you test a copy from/to that array (a file copy or dd copy, not just a raw read/write)?
Like I said, during boot the card reports x8/5G (not x8/2.5G, which it would if it were running PCI-E 1.0), so it's definitely on x8 PCI-E 2.0 lanes. Here is the same thing from my home machine (1880x), which also shows the same speed:
http://box.houkouonchi.jp/areca_post/arc_post3.png
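For what it's worth, the raw numbers (standard PCIe 2.0 math, not specific to this card) say the slot shouldn't be the ~1.9 GB/s ceiling:
Code:
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding => 500 MB/s usable per lane per direction
echo $((8 * 500)) MB/s theoretical   # 4000 MB/s; ~80% protocol efficiency still leaves ~3.2 GB/s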
A dd copy (from sda, the 80GB boot slice, to sdb) ran at 880 MB/sec.
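The exact command wasn't posted; a minimal sketch of that kind of disk-to-disk copy, assuming direct I/O on both ends (this overwrites sdb, so scratch arrays only):
Code:
# Read the boot disk and stream it onto the test array, bypassing the page cache
dd if=/dev/sda of=/dev/sdb bs=1M iflag=direct oflag=direct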
Here is how the array is set up:
Code:
raid7131:~# cli64 vsf info
 # Name Raid Name Level Capacity Ch/Id/Lun State
===============================================================================
 1 ARC-1882-VOL#000 Raid Set # 000 Raid6 80.0GB 00/00/00 Degraded
 2 ARC-1882-VOL#001 Raid Set # 000 Raid6 65920.0GB 00/00/01 Rebuilding(0.2%)
===============================================================================
GuiErrMsg<0x00>: Success.

I did run into one problem. Slot 20 got a timeout (causing the card to be reset by the driver); the card reset itself twice and eventually failed the disk:
Code:
raid7131:~# dmesg | grep -i reset
[ 5744.818726] arcmsr1: executing eh bus reset .....num_resets = 0, num_aborts = 67
[ 5744.826370] arcmsr1: executing hw bus reset .....
[ 5859.967601] arcmsr: scsi bus reset eh returns with success
[ 6084.888519] arcmsr1: executing eh bus reset .....num_resets = 1, num_aborts = 135
[ 6084.896250] arcmsr1: executing hw bus reset .....
[ 6200.257875] arcmsr: scsi bus reset eh returns with success

Event history (the top entries show uptime since card boot, because the card was rebooted while the machine was running):
Code:
raid7131:~# cli64 event info
Date-Time Device Event Type Elapsed Time Errors
===============================================================================
3712 ARC-1882-VOL#001 Start Rebuilding
3712 ARC-1882-VOL#000 Complete Rebuild 000:59:51
121 ARC-1882-VOL#000 Start Rebuilding
118 000:000002F70000 Restart Rebuild LBA Point
117 H/W MONITOR Raid Powered On
59 E2 Fan3 Removed
59 E2 Fan2 Removed
59 E2 Fan1 Removed
4294967266 Enc#2 Slot 20 Device Failed
4294967265 Raid Set # 000 RaidSet Degraded
4294967265 ARC-1882-VOL#001 Volume Degraded
4294967265 ARC-1882-VOL#000 Volume Degraded
4294967245 Enc#2 Slot 20 Time Out Error
4294967020 ARC-1882-VOL#000 Start Rebuilding
4294967017 000:000000D77200 Restart Rebuild LBA Point
4294967016 H/W MONITOR Raid Powered On
4294966958 E2 Fan3 Removed
4294966958 E2 Fan2 Removed
4294966958 E2 Fan1 Removed
2011-10-22 04:31:49 Enc#2 Slot 20 Time Out Error
2011-10-22 04:22:29 ARC-1882-VOL#000 Start Rebuilding
2011-10-22 04:22:27 Raid Set # 000 Rebuild RaidSet
2011-10-22 04:22:26 Enc#2 Slot 01 Device Inserted
2011-10-22 04:21:53 Enc#2 Slot 01 Device Removed
2011-10-22 04:21:53 Raid Set # 000 RaidSet Degraded
2011-10-22 04:21:53 ARC-1882-VOL#001 Volume Degraded
2011-10-22 04:21:53 ARC-1882-VOL#000 Volume Degraded
2011-10-22 02:42:51 H/W MONITOR Raid Powered On

The problem I had with the 3G SAS expander was drives randomly getting timeouts and eventually failing, until the array reached a failed state. I am hoping that in this case it was just a bad drive, but if so it surprises me that it survived an entire rebuild and init; it didn't start having trouble until I ran a rebuild plus extremely heavy disk I/O. My disk I/O is so extreme that the 80GB boot slice took over 60 minutes (vs. 60 seconds before) to rebuild.
Hoping it's just a bad disk.
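To keep an eye on a rebuild while the array is under that kind of load, a simple loop over the same CLI shown above works (assuming cli64 is on the path):
Code:
# Refresh volume state and the most recent events every minute
watch -n 60 'cli64 vsf info; cli64 event info | head -15'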
Nice copy speeds there - those are with degraded arrays??? Even with regular RAID6 that is quite awesome.
If you have time to set up RAID0 arrays, I gather it might be able to do better. Thanks for doing the test that Nizzen avoided doing :)
LOL @ another Areca disk f*** up :)