Nope, pretty sure it's not. :shrug:
I'm not sure where the actual specs for it are, but when I Google it, half the e-tailers list it as an FB-DIMM. MemoryTen, for example, says FB-DIMM (will not work) and 256x4 (also no).
Good afternoon all,
OK - so my 1880 arrived yesterday. I fired it up on the UD7/980 last night and continued to tinker with it this morning. :) It is probably 3/4 inch longer than the 1231.
Results so far have been mixed. I could not format the array with the drives set to SATA 300, and with the drives set to SATA 150 the performance - particularly sequential reads - seems well below par. :confused:
I have tried all the different controller settings, as well as both Storport and SCSIport drivers, all using the latest F/W.
Below is AS SSD - 980 at 4500, PCIe 107, all RAID 0 Acard 9010s, 3x up to 11x. Any suggestions would be greatly appreciated! :( -
http://img241.imageshack.us/img241/8233/1880asssd.png
On a lighter note – how about a little “dynamic disk” fun! :)
So UD7/980 at stock, Areca 1231-4G, Areca 1880 1GB, each with 4xR0 Acard 9010s.
Soft-RAIDed with W7 dynamic disk striping - NTFS, allocation (cluster) size - 64K -
http://img337.imageshack.us/img337/1...aid1gbnooc.png
How about using this dynamic array as a PCMark05 target? Not bad considering no OC :) -
http://img408.imageshack.us/img408/8...skpcmark05.png
Happy snaps to follow –
http://img267.imageshack.us/img267/6456/dsc03762k.jpg
http://img835.imageshack.us/img835/1206/dsc03763n.jpg
http://img443.imageshack.us/img443/1769/dsc03764c.jpg
http://img217.imageshack.us/img217/884/dsc03770q.jpg
Also - forgot to mention - this is with 1880 using 1GB cache - 4GB should arrive next week. :)
This is the 1880 with 12xR0 Acards on the little Gigabyte H55N/655K (Clarkdale) at 4500 - again, bad sequential reads :shrug:
http://img685.imageshack.us/img685/3...2xr0acard4.png
@SteveRo - wow, something is definitely wrong there. I wonder why the SATA 300 isn't working?
To everyone- I've been running my 1880 for a few weeks now and it seems like my average controller temp is around 54 C. Is this acceptable for normal use? Or should I try and put some additional air on it?
Strange. Are you using the beta firmware? It seems to work best. I would fire off an email to Areca asking them about your formatting issue. They respond very quickly (they are in a different time zone, though).
Can you try setting the drives to 150 and testing the sequentials with that? Maybe we can get a baseline result from the 1231 to compare against.
Steve
I'm using the latest official FW/BIOS.
I did have bad numbers for sequential I/O way back when I had the 1GB stick installed :). I can't recall whether the 4GB stick resolved the issue or whether it was some setting.
Here are my current system config settings; 2xR0 X25-M = 500+ MB/s on reads (4KB stripe size).
Attachment 108312 Attachment 108313
@Spoiler,
Mine is hovering at ~45C; I've got a small 8cm low-speed fan blowing directly at it. (In a Corsair 800D case, no special cooling at all, almost silent.)
Without that little fan it would be close to your temp.
I'm looking for a comparison (real-world only) between the ARC-1231/1261/1280ML and the ARC-1880.
I'd prefer OS boot (incl. initialization) and app and game loading (or the 104-app load-up from NapalmV5).
And which SATA HDDs (non-RAID edition) run stably with the 1880?
I think the 1880 is very b-i-t-c-h-y with SATA drives.
In the past I heard that the ARC-16xx series only emulates the SATA protocol in software. And the 1880 is a SAS HBA, too.
I tested 16 Hitachi 2TB drives, and they worked great in RAID-5 and RAID-6 with the Areca 1880ix-24. I do not know about other SATA drives, but Hitachi is good enough for me :)
~1GB/s read and write in RAID-6.
Which Hitachi? http://geizhals.at/deutschland/?cat=...tachi~958_2000
1GB/s without cache?
1880ix-24 + 4GB cache
16x Hitachi Deskstar 7K2000 2TB
16x Hitachi 2TB RAID-0
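For anyone sanity-checking those numbers, the RAID-6 math from the posts above pencils out like this (a quick back-of-envelope sketch; drive count and sizes are taken from the posts, nothing else is):

```shell
# RAID-6 sanity check for 16x 2TB drives (numbers from the posts above).
# Two drives' worth of capacity go to parity; the rest is usable.
DRIVES=16
SIZE_TB=2
PARITY=2                                    # RAID-6 survives two drive failures
USABLE_TB=$(( (DRIVES - PARITY) * SIZE_TB ))
echo "usable capacity: ${USABLE_TB} TB"     # 28 TB

# ~1GB/s spread across 14 data spindles is roughly 73 MB/s per drive,
# comfortably within what a 7200rpm drive can sustain sequentially.
PER_DRIVE=$(( 1024 / (DRIVES - PARITY) ))
echo "per-drive load: ~${PER_DRIVE} MB/s"   # ~73 MB/s
```

So the ~1GB/s figure is not controller-limited per spindle; each drive is loafing along well under its sequential ceiling.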
Testing LSI-based SAS expanders in Supermicro cases with 48 1.5TB Seagates in RAID-60 is giving me:
Reads:
Code:
livecd ~ # nice -n -20 dd iflag=direct bs=1M count=50000 if=/dev/sda of=/dev/null
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 38.2311 s, 1.4 GB/s
Writes:
Code:
livecd ~ # nice -n -20 dd oflag=direct bs=1M count=50000 if=/dev/zero of=/dev/sda
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 63.796 s, 822 MB/s
Also, I verified that the standard Areca BBU works fine with it.
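A note for anyone reproducing the dd runs above: writing to the raw /dev/sda destroys whatever is on the array, so only do that on a scratch volume. A non-destructive, file-based variant of the same measurement looks roughly like this (the /tmp path and sizes are just example values):

```shell
# Non-destructive variant of the dd tests above, run against a scratch
# file instead of the raw device. The path is only an example.
TESTFILE=/tmp/dd_scratch

# Write test: conv=fdatasync makes dd flush to stable storage before
# reporting, so the page cache doesn't inflate the MB/s figure.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Read test: without iflag=direct this may be served from the page
# cache; for a real benchmark add iflag=direct as in the runs above.
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

The direct-I/O flags are the whole point of those benchmarks: they bypass the OS page cache so you measure the controller and spindles, not RAM.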
Not really liking the timeout errors I am seeing while stress testing, though... init completed OK:
Code:
livecd ~ # ./cli64 hw info
The Hardware Monitor Information
=====================================================
[Controller H/W Monitor]
CPU Temperature : 50 C
Controller Temp. : 46 C
12V : 12.160 V
5V : 5.160 V
3.3V : 3.408 V
DDR-II +1.8V : 1.872 V
CPU +1.8V : 1.872 V
CPU +1.2V : 1.280 V
CPU +1.0V : 1.072 V
DDR-II +0.9V : 0.928 V
Battery Status : 100%
[Enclosure#1 : ARECA SAS RAID AdapterV1.0]
[Enclosure#2 : LSILOGICSASX36 A.1 7015]
Fan 00 : 0 RPM
Fan 01 : 0 RPM
Fan 02 : 0 RPM
Enclosure Temp : 35 C
[Enclosure#3 : LSILOGICSASX36 A.1 7015]
Fan 00 : 5550 RPM
Fan 01 : 5280 RPM
Fan 02 : 5550 RPM
Enclosure Temp : 32 C
=====================================================
GuiErrMsg<0x00>: Success.
livecd ~ # ./cli64 vsf info
# Name Raid Name Level Capacity Ch/Id/Lun State
===============================================================================
1 ARC-1880-VOL#000 Raid60 80.0GB 00/00/00 Normal
2 ARC-1880-VOL#003 Raid60 65920.0GB 00/00/01 Normal
3 VOL#000R60Vol2-1 Raid Set # 000 Raid6 40.0GB 00/00/00 Normal
4 VOL#000R60Vol2-2 Raid Set # 001 Raid6 40.0GB 00/00/00 Normal
5 VOL#003R60Vol2-1 Raid Set # 000 Raid6 32960.0GB 00/00/01 Normal
6 VOL#003R60Vol2-2 Raid Set # 001 Raid6 32960.0GB 00/00/01 Normal
===============================================================================
GuiErrMsg<0x00>: Success.
livecd ~ # ./cli64 disk info
# Enc# Slot# ModelName Capacity Usage
===============================================================================
1 01 Slot#1 N.A. 0.0GB N.A.
2 01 Slot#2 N.A. 0.0GB N.A.
3 01 Slot#3 N.A. 0.0GB N.A.
4 01 Slot#4 N.A. 0.0GB N.A.
5 01 Slot#5 N.A. 0.0GB N.A.
6 01 Slot#6 N.A. 0.0GB N.A.
7 01 Slot#7 N.A. 0.0GB N.A.
8 01 Slot#8 N.A. 0.0GB N.A.
9 02 000 ST31500341AS 1500.3GB Raid Set # 000
10 02 001 ST31500341AS 1500.3GB Raid Set # 000
11 02 002 ST31500341AS 1500.3GB Raid Set # 000
12 02 003 ST31500341AS 1500.3GB Raid Set # 000
13 02 004 ST31500341AS 1500.3GB Raid Set # 000
14 02 005 ST31500341AS 1500.3GB Raid Set # 000
15 02 006 ST31500341AS 1500.3GB Raid Set # 000
16 02 007 ST31500341AS 1500.3GB Raid Set # 000
17 02 008 ST31500341AS 1500.3GB Raid Set # 000
18 02 009 ST31500341AS 1500.3GB Raid Set # 000
19 02 010 ST31500341AS 1500.3GB Raid Set # 000
20 02 011 ST31500341AS 1500.3GB Raid Set # 000
21 02 012 ST31500341AS 1500.3GB Raid Set # 000
22 02 013 ST31500341AS 1500.3GB Raid Set # 000
23 02 014 ST31500341AS 1500.3GB Raid Set # 000
24 02 015 ST31500341AS 1500.3GB Raid Set # 000
25 02 016 ST31500341AS 1500.3GB Raid Set # 000
26 02 017 ST31500341AS 1500.3GB Raid Set # 000
27 02 018 ST31500341AS 1500.3GB Raid Set # 000
28 02 019 ST31500341AS 1500.3GB Raid Set # 000
29 02 020 ST31500341AS 1500.3GB Raid Set # 000
30 02 021 ST31500341AS 1500.3GB Raid Set # 000
31 02 022 ST31500341AS 1500.3GB Raid Set # 000
32 02 023 ST31500341AS 1500.3GB Raid Set # 000
33 03 000 ST31500341AS 1500.3GB Raid Set # 001
34 03 001 ST31500341AS 1500.3GB Raid Set # 001
35 03 002 ST31500341AS 1500.3GB Raid Set # 001
36 03 003 ST31500341AS 1500.3GB Raid Set # 001
37 03 004 ST31500341AS 1500.3GB Raid Set # 001
38 03 005 ST31500341AS 1500.3GB Raid Set # 001
39 03 006 ST31500341AS 1500.3GB Raid Set # 001
40 03 007 ST31500341AS 1500.3GB Raid Set # 001
41 03 008 ST31500341AS 1500.3GB Raid Set # 001
42 03 009 ST31500341AS 1500.3GB Raid Set # 001
43 03 010 ST31500341AS 1500.3GB Raid Set # 001
44 03 011 ST31500341AS 1500.3GB Raid Set # 001
45 03 012 ST31500341AS 1500.3GB Raid Set # 001
46 03 013 ST31500341AS 1500.3GB Raid Set # 001
47 03 014 ST31500341AS 1500.3GB Raid Set # 001
48 03 015 ST31500341AS 1500.3GB Raid Set # 001
49 03 016 ST31500341AS 1500.3GB Raid Set # 001
50 03 017 ST31500341AS 1500.3GB Raid Set # 001
51 03 018 ST31500341AS 1500.3GB Raid Set # 001
52 03 019 ST31500341AS 1500.3GB Raid Set # 001
53 03 020 ST31500341AS 1500.3GB Raid Set # 001
54 03 021 ST31500341AS 1500.3GB Raid Set # 001
55 03 022 ST31500341AS 1500.3GB Raid Set # 001
56 03 023 ST31500341AS 1500.3GB Raid Set # 001
===============================================================================
GuiErrMsg<0x00>: Success.
Code:
livecd ~ # ./cli64 event info
Date-Time Device Event Type Elapsed Time Errors
===============================================================================
2010-10-10 12:07:17 Enc#2 017 Time Out Error
2010-10-10 12:00:47 Enc#2 SES2Device Time Out Error
2010-10-10 11:53:44 Enc#3 020 Time Out Error
2010-10-10 11:53:32 Enc#3 001 Time Out Error
2010-10-10 11:53:19 Enc#2 022 Time Out Error
2010-10-10 11:24:41 Enc#3 010 Time Out Error
2010-10-10 11:06:27 E2 Fan 02 Failed
2010-10-10 11:06:27 E2 Fan 01 Failed
2010-10-10 11:06:27 E2 Fan 00 Failed
2010-10-10 11:06:27 H/W MONITOR Raid Powered On
2010-10-10 10:33:37 E2 Fan 02 Failed
2010-10-10 10:33:37 E2 Fan 01 Failed
2010-10-10 10:33:37 E2 Fan 00 Failed
2010-10-10 10:33:37 H/W MONITOR Raid Powered On
2010-10-10 10:30:59 E2 Fan 02 Failed
2010-10-10 10:30:59 E2 Fan 01 Failed
2010-10-10 10:30:59 E2 Fan 00 Failed
2010-10-10 10:30:59 H/W MONITOR Raid Powered On
2010-10-10 10:27:53 E2 Fan 02 Failed
2010-10-10 10:27:53 E2 Fan 01 Failed
2010-10-10 10:27:53 E2 Fan 00 Failed
2010-10-10 10:27:53 H/W MONITOR Raid Powered On
2010-10-10 10:12:37 E2 Fan 02 Failed
2010-10-10 10:12:37 E2 Fan 01 Failed
2010-10-10 10:12:37 E2 Fan 00 Failed
2010-10-10 10:12:37 H/W MONITOR Raid Powered On
2010-10-10 09:44:32 VOL#003R60Vol2-1 Complete Init 029:28:15
2010-10-10 09:12:45 VOL#003R60Vol2-2 Complete Init 028:56:29
2010-10-09 04:16:16 VOL#003R60Vol2-2 Start Initialize
2010-10-09 04:16:16 VOL#003R60Vol2-1 Start Initialize
2010-10-09 04:11:07 010.003.068.087 HTTP Log In
2010-10-09 04:09:25 E2 Fan 02 Failed
2010-10-09 04:09:25 E2 Fan 01 Failed
2010-10-09 04:09:25 E2 Fan 00 Failed
2010-10-09 04:09:25 H/W MONITOR Raid Powered On
2010-10-09 04:08:10 VOL#003R60Vol2-2 Stop Initialization 001:48:04
2010-10-09 04:08:10 VOL#003R60Vol2-1 Stop Initialization 001:48:01
2010-10-09 02:20:09 VOL#003R60Vol2-1 Start Initialize
2010-10-09 02:20:08 VOL#000R60Vol2-1 Complete Init 000:02:13
2010-10-09 02:20:06 VOL#003R60Vol2-2 Start Initialize
2010-10-09 02:20:05 VOL#000R60Vol2-2 Complete Init 000:02:10
2010-10-09 02:18:07 ARC-1880-VOL#003 Create Volume
2010-10-09 02:17:55 VOL#000R60Vol2-2 Start Initialize
2010-10-09 02:17:54 VOL#000R60Vol2-1 Start Initialize
2010-10-09 02:17:53 ARC-1880-VOL#000 Create Volume
2010-10-09 02:17:31 Raid Set # 001 Create RaidSet
2010-10-09 02:17:14 Raid Set # 000 Create RaidSet
Only 48 drives, I wish someone would come and show some stats using a large array.
:rofl:
Good afternoon all,
OK, so this is the 1880 1GB on the left and the 1231 4GB on the right.
The arrays on both are 4xR0 Acards, 4K stripe, 64K cluster.
980 at 4500, PCIe 107, all drives set to SATA 150.
I am hoping that adding 4GB to the 1880 will make a nice difference :) -
http://img85.imageshack.us/img85/311...31compare1.png
http://img138.imageshack.us/img138/9...31compare2.png
http://img121.imageshack.us/img121/9...31compare3.png
Seems like it has higher access times...
But I don't find these speeds to be low in any way; with 4 ACards at SATA150 you would max out at ~450-480 MB/s for all 4.
The ridiculously high reads/writes (>1GB/s) are just cached reads/writes, so they're not comparable.
As for the 12 ACards not doing >1GB/s, that's another matter... try setting volume read-ahead to aggressive? It could be that the 1880 just doesn't handle SATA well.
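The ~450-480 MB/s ceiling mentioned above falls straight out of the link rate (a rough sketch; the ~20% overhead deduction is a rule of thumb, not a spec value):

```shell
# Rough throughput ceiling for 4 drives on SATA 150 links.
DRIVES=4
LINK_MBS=150            # SATA 1.5Gb/s payload rate, ~150 MB/s per link
RAW=$(( DRIVES * LINK_MBS ))
echo "raw link ceiling: ${RAW} MB/s"        # 600 MB/s

# Command overhead, inter-frame gaps, etc. knock off roughly 20%,
# which lands right in the ~450-480 MB/s range quoted above.
echo "practical ceiling: ~$(( RAW * 80 / 100 )) MB/s"
```

In other words, the 4xR0 results in those screenshots are link-limited, not controller-limited; forcing SATA 150 caps the array regardless of what the 1880 could otherwise do.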
FYI - go see the other thread if you're interested - the 4GB Mushkin (correction: Proline, bought from Mushkin) memory is giving me problems.
http://www.xtremesystems.org/forums/...d.php?t=260307
When I can get the 4GB Mushkin to work, I get nice numbers; the sequential reads are much improved.
For me the 1880 really seems to need the large cache to shine.
And yes, read ahead is set to aggressive.
The 1880 seems to work fine with many SSDs based on several posts in this thread.
The drives I have available to test with are not standard SSDs: Acards and Mtron Mobis.
Neither supports NCQ; the Acards are based on DDR2 DIMMs, and the Mobis are first-gen SSDs based on SLC flash.
I plan to order the Tekram 4GB memory this evening and give that a try.
So what is the cap limit of this controller?
Good morning Tilt!
Based on what I have seen from ATTO, it looks like cache-through-PCIe maxes out at over 3GB/s.
I would have to go back and look at what Paul and Anvil posted to get an idea of max array-through-PCIe.
Yes, OK for the cache, but what is the HDD cap limit? How many SSDs does it take to max out the controller?
Mr. Nizzen is getting sequential reads of almost 2GB/s and sequential writes of about 1.4GB/s in a 2GB ATTO run
- here - http://www.xtremesystems.org/forums/...4&postcount=19
This was with 8 c300s :eek:
I received a stick of the Mushkin yesterday and it doesn't work for me. The controller boots up and detects 4GB, but the log shows a memory error and I can't boot into Windows. I went ahead and ordered a 4GB module directly from Tekram and got an RMA to send the Mushkin back.
I'm having a problem getting ArcHttp to recognize my 1880 in Win 7 x64. Any suggestions on what I can do to get that working?