
Thread: RAID5 file server build advice

  1. #176
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
I noticed one strange thing in my tests.
    Using all 8 of my 1TB WD drives, my RAID5 reads are still damn slow compared to write speeds.
    I get like 90MB/s on reads while my writes are 3x that. ? ? ?
    Even just one of those drives has ~65MB/s reads, so this sounds a bit strange to me.





    Can the 3ware 9650SE controller suck that much on reads, while being pretty good on writes?
    P.S. I have a BBU and writeback caching is enabled.
    Last edited by XS Janus; 07-09-2008 at 10:54 AM.
    Quote Originally Posted by LexDiamonds View Post
    Anti-Virus software is for n00bs.

  2. #177
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    ^ wow 8x drives @ under 100MB/s ?

    what mobo/chipset/pcie ?


    Quote Originally Posted by jcool View Post
    Wow, your read speed's even better.. maybe new HDTach is faster?
    Any idea how your highpoint fares with writes?
    no difference between the two versions.. same fluctuations

    os: winxp 32bit installed @ array as the previous hdtach 3.0.4.0

    256K stripe

    and no difference between the 64K (@ 3.0.4.0) and 256K stripes.. same fluctuations.. except for the graphs


    edit: full bench @ winxp 256K stripe


    write benches coming up..
    Last edited by NapalmV5; 07-09-2008 at 04:39 PM.

  3. #178
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by jcool View Post
    Hey guys,

    almost forgot about this thread - almost
    Got my fileserver up and running now, using the Adaptec 5405 and 4x WD6400AAKS SATA drives (dirt cheap but they rock).

    Anyway, here's how it looks, full size raid 5, 256kb stripe, write cache enabled:





    Really impressed with the controller here, single drives do around 90MB/s read/write.

    If you want to see more benches, click the Adaptec 5405 link in my sig.. just scroll down, got raid 0, 10 and 6
    a more proper comparison..

    raw/no os

    rr3510: 4x wd6400aaks raid5 256K




  4. #179
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Ha, at least I beat you on the writes
    Even though your Highpoint seems more consistent. But the Adaptec is still very new and raw, I guess there are some firmware updates to come.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  5. #180
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Quote Originally Posted by NapalmV5 View Post
    ^ wow 8x drives @ under 100MB/s ?

    what mobo/chipset/pcie ?



    E8200, Giga: Intel G33, PCIE x4 slot, 3ware 9650SE card.
    Looks pretty weak to me too.
    What on earth could hold it back on the reads?

    I will try a larger block size test in HDTune later, because I noticed that improves the reads a lot.
    Last edited by XS Janus; 07-10-2008 at 08:04 AM.

  6. #181
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by XS Janus View Post

    E8200, Giga: Intel G33, PCIE x4 slot, 3ware 9650SE card.
    Looks pretty weak to me too.
    What on earth could hold it back on the reads?

    I will try a larger block size test in HDTune later, because I noticed that improves the reads a lot.
    there's your problem

    and I take it that's the x4 9650SE?

    if so, looks like it's the whole system: mobo/chipset/pcie/controller

    for 8x raid you really need an x8 controller + minimum x8 pcie 1.0

    max performance 8x raid: x8 beefy controller + x8 pcie 2.0

  7. #182
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I don't think that is true in this case.
    A single lane is capable of transmitting 2.5Gbps in each direction, simultaneously. Add two lanes together to form an x2 link and you've got 5 Gbps, and so on with each link width.
    http://arstechnica.com/articles/paed...are/pcie.ars/5
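
    A rough sanity check of those numbers (the ~65MB/s per-drive figure is the one quoted earlier in this thread, and the 8b/10b encoding overhead is standard for PCIe 1.x; treat these as estimates, not measurements):

    Code:
    # PCIe 1.x: 2.5 Gbps signalling per lane; 8b/10b encoding leaves 2.0 Gbps
    # of payload, i.e. ~250 MB/s per lane in each direction.
    raw_gbps_per_lane = 2.5
    payload_gbps = raw_gbps_per_lane * 8 / 10       # 2.0 Gbps usable
    mb_per_lane = payload_gbps * 1000 / 8           # ~250 MB/s per lane
    lanes = 4
    print("x4 link:", mb_per_lane * lanes, "MB/s")  # ~1000 MB/s
    print("8 drives:", 8 * 65, "MB/s")              # ~520 MB/s aggregate reads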

    I don't think they would make an 8-port card x4 if it would be bottlenecked by 8 of the slowest drives out there.
    I think it has more to do with the I/O capabilities of the WD drives. They are GP series, and all those tests are running 64kB test blocks at stock settings.
    Tried running 4 in RAID5 and the results were worse.

    Now when I re-did the tests in HDtune with 8 drives, but this time upped the test block size, I got MUCH better results. So the controller CAN push it through.

    Here
    HDtune 8xWDFYPS in Raid5 and 8MB block size test:



    What now?
    Is there any other explanation?

  8. #183
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Wow, been away from here for a while. Those #'s, Janus, are not that bad, just a little on the slow side for the reads (~42MiB/sec per disk, basically 1/2 the max sustained for that drive). Where is that ~2TB partition that you're testing (first, middle, or end of the 8 drives)? Also, is anything else hitting the system?

    As for writes: with parity RAID (3/4/5/6), writes will be faster than reads if you have write caching enabled and memory on your controller. Most controllers are able to buffer writes so that they coincide with writing full stripes of data to the drive(s), and will (if possible) re-order or batch requests to help performance.
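
    To make the full-stripe point concrete, here is a minimal sketch assuming simple XOR parity (the 256K chunk size and 8-drive layout are the ones from this thread; everything else is illustrative):

    Code:
    # A buffered full-stripe write computes parity straight from the new data
    # and needs zero extra reads; a small write forces a read-modify-write
    # (read the old data chunk + old parity before the two writes).
    from functools import reduce

    def xor(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data_disks = 7                 # 8-drive RAID5 = 7 data + 1 parity per stripe
    chunk = 256 * 1024             # 256K stripe unit, as used in this thread

    # Full-stripe write: all 7 chunks sit in controller cache, one XOR pass,
    # then 8 writes go out to the disks.
    stripe = [bytes([i]) * chunk for i in range(data_disks)]
    parity = xor(stripe)

    # A partial write of a single 256K chunk instead costs 2 reads + 2 writes,
    # which is why uncached small writes on parity RAID are so much slower.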

    Two other items of note: with parity RAIDs (3/4/5/6) your random (100%) writing will be equal to that of a single drive; you get gains the further you are away from a 100% random workload. The other item (just for clarification) is that PCIe (v1 spec) is actually 2.0Gbps (250MiB/s) bi-directional per lane. Not a big deal, just FYI.

    If you're able, can you try building a temporary array off that card as a RAID-0 with the 8 drives? That way you don't have the controller (parity) calculation overhead, only the pure block allocation scheme. With a 4-lane PCIe slot that would be 8Gbps or 1GiB/s electrical; with your 8 drives (max say 82-84MiB/s spec on outer tracks, dropping to something like 40MiB/s or so on the inner ones) that should show you a curve of ~670MiB/s outer down to ~320MiB/s inner, depending on your queue depth and ability to feed the drives. If that holds true we can start looking at parity calcs being part of the problem.
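
    For reference, a tiny sketch of the expected numbers from that suggestion (the per-drive and slot figures are the rough estimates above, not measurements):

    Code:
    # Expected ceiling for an 8-drive RAID-0 read test on an x4 PCIe 1.x slot.
    drives = 8
    outer_mib_s = 84          # ~82-84 MiB/s sustained on outer tracks (estimate)
    inner_mib_s = 40          # roughly, on inner tracks (estimate)
    pcie_x4_mib_s = 1024      # ~1 GiB/s electrical limit of a 4-lane slot

    expected_outer = min(drives * outer_mib_s, pcie_x4_mib_s)   # ~672 MiB/s
    expected_inner = min(drives * inner_mib_s, pcie_x4_mib_s)   # ~320 MiB/s
    print(expected_outer, "MiB/s outer ->", expected_inner, "MiB/s inner")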

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
    |.Supermicro X8DTH-6f...............|.Asus Z9PE-D8 WS.................................|.HP LP3065 30" LCD Monitor|
    |.(2) Xeon X5690....................|.2x E5-2643 v2...................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2x EVGA nVidia GTX670 4GB.......................|.Nikon Coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33GHz; 8GB RAM;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|

  9. #184
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by XS Janus View Post
    I don't think that is true in this case.
    I have to agree with what you say on this one. Each PCIe 1.x lane has a 250MBps theoretical maximum transfer rate in each direction.

    A 4x PCIe 1.x controller has a theoretical maximum transfer rate of 1GBps. Even with a 30% overhead (that's rather huge) you'll still have plenty of room, since not many HDDs are capable of 100MBps+ transfers, and RAID never scales linearly - meaning an 8-disk RAID array would probably be capped at 600~700MBps, provided the controller could keep up with the data transfer.

    Quote Originally Posted by XS Janus View Post
    What now?
    Is there any other explanation?
    Typically, bigger block sizes give better performance. There are fewer disk requests to be processed by the firmware, and usually fewer chances of not getting the data all at once (like skipping a block and having to wait a full rotation to be able to access it, for instance).
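
    As a rough illustration of the request-count side of that (assuming one host request per benchmark block; the numbers are purely illustrative):

    Code:
    # Request count for a 128 MiB sequential transfer at the two block sizes
    # used in the HDTune tests above (one host request per block assumed).
    transfer = 128 * 1024 * 1024
    for block in (64 * 1024, 8 * 1024 * 1024):
        print(block // 1024, "KiB blocks ->", transfer // block, "requests")
    # 64 KiB -> 2048 requests, 8 MiB -> 16 requests: far less per-command
    # firmware overhead with the larger blocks.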

    Firmware optimization may also have a huge impact in this case. The consumer GP drives are clearly optimized for data warehouse duties (movie/MP3/ISO repository), as opposed to system drive status. That means the firmware will clearly favour this kind of block size.



    That being said (all of it), do keep in mind that the 4x PCIe connection the controller sits on has two downsides:

    1) It has higher latency (it hangs off the southbridge, meaning data has to travel one more hop to reach memory), which may impact performance;

    2) The 1GBps data throughput is shared among ALL components connected to the southbridge (including ODDs and NICs), meaning other resource hogs can bog down the interface; also, anything sharing an IRQ with the PCIe slot (especially the NIC) WILL cause erratic performance if used at the same time (CPU kernel time shoots through the roof when this happens, and everything grinds to a halt on the data transfer points...)

    I don't know if it's possible, but I'd recommend putting anything that packs that big of a punch on bandwidth in the 16x PCIe slot, since that one is directly connected to the northbridge and usually sits alone on one IRQ.

    Oh! I almost forgot! Make sure cached reads and writes are active if you want the best performance possible. You've got the battery backup, so you're safe there...



    In other news, please check here. It appears multi-speed platters are actually possible. I can't wait to check the performance of these new drives, and to see if there will be 750GB variants.


    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power for a good use.

  10. #185
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    if you guys think x4 is more than enough.. i apologize for my suggestion

    other explanations? if the raid system were running @ high efficiency.. you'd see similar performance @ 64K vs 8MB

  11. #186
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Depends on what you mean by performance. You _may_ get better streaming speeds out of an 8MiB request than a 64KiB one, but that's dependent on the type of raid, its stripe size and stripe width, controller optimizations, hash memory (if using a parity raid), among other items. However, the differences that you see are really based on implementations, not on any underlying issues with the design itself. More than 4 PCIe lanes does have some benefits (mainly for burst traffic, which would help keep the controller's cache filled up).

    IRQ sharing is actually important as well (as is distribution of the interrupts between multiple CPUs/cores), but that's hard to control under Windows. Generally it's not something you'll really end up seeing with block device transfers at the block sizes mentioned here, or at these bandwidth rates with normal CPU systems, though this is on your lower-power system, right? So there may be something there holding it back. An easy way to find out, though, is to do a RAID-0 stripe and see what the cap is with the same slot/drives/controller. Most times it's a chipset design limit (i.e., ~800MiB/s on the IOP34x series) or the CPU processing power to do parity calcs, I've found.

    As for the 4x PCIe lanes on that Giga board, they should NOT be shared with anything on the SB. That chipset should have 2GiB/sec going to the SB in total, and from the diagram: http://www.intel.com/Assets/Image/di...ck_Diagram.jpg the PCIe lanes are on their own (and disregard the marketing #'s of 500MB there; it's 2.0Gbps/channel full duplex). But yes, if you fully saturate the SB you can oversubscribe the NB/SB connection, though you'd have to have everything plugged in and running at a decent clip to do that.


  12. #187
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by NapalmV5 View Post
    if you guys think x4 is more than enough.. i apologize for my suggestion
    No need to apologize, man. Your opinion is as valid as anyone else's.

    Quote Originally Posted by stevecs View Post
    More than 4 PCIe lanes does have some benefits (mainly for burst traffic, which would help keep the controller's cache filled up).
    Yes, I subscribe to that. Though burst traffic might not happen all that often. Still a valid point, nevertheless.

    Quote Originally Posted by stevecs View Post
    IRQ sharing is actually important as well (as is distribution of the interrupts between multiple CPUs/cores), but that's hard to control under Windows. Generally it's not something you'll really end up seeing with block device transfers at the block sizes mentioned here, or at these bandwidth rates with normal CPU systems, though this is on your lower-power system, right?
    IRQ sharing is a major pain to work around in Windows, indeed. Usually you can only manage IRQ assignment by physically moving hardware around or disabling the offending hardware...

    My brother once had lousy throughput on a 10/100 NIC (25~50%) on his P3-1000 router, with 100% CPU usage when transferring anything over the LAN. Many hours later he found out that somehow the AGP card and the PCI slot where the NIC was installed were sharing an IRQ. He had to swap the NIC to another PCI slot (not a problem, he had six of them available... lol) to get the thing working properly.


    Quote Originally Posted by stevecs View Post
    As for the 4x PCIe lanes on that Giga board, they should NOT be shared with anything on the SB. That chipset should have 2GiB/sec going to the SB in total, and from the diagram: http://www.intel.com/Assets/Image/di...ck_Diagram.jpg the PCIe lanes are on their own (and disregard the marketing #'s of 500MB there; it's 2.0Gbps/channel full duplex). But yes, if you fully saturate the SB you can oversubscribe the NB/SB connection, though you'd have to have everything plugged in and running at a decent clip to do that.
    Hehe, I was clearly in dire need of sleep when I posted earlier... I meant exactly that about the "bandwidth sharing" issue. PCIe lanes have dedicated bandwidth, but the global SB bandwidth is shared.

    Though correct me if I'm wrong, but isn't the DMI bandwidth 1GBps in each direction, as opposed to 2GBps? Since the PCIe ports appear at 500MBps each (and we all know that's 250MBps bidirectional), my guess is the same applies to the DMI interface, which I've always heard was basically a slightly altered 4x PCIe connection, btw.

    Any thoughts on this one?

    Oh! I just remembered something I wanted to say on my earlier post: poor RAID performance can sometimes be attributed to controller firmware and HDD incompatibilities. Not too frequent, but you might want to check that too.

    Cheers.

    Miguel

  13. #188
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    @Steve, that 2TB partition you are seeing is HDTune not reporting the array correctly. This was done on a just-formed 8-drive array, no partitioning.
    HDTach sees it correctly and reports the same results while doing its read test.

    I am concerned as to why my reads are so slow; as you can see, even in RAID0 they are not that good using the 64k block test. Any guesses why it is behaving this way? Only about 50% faster than a single drive.

    I've done some RAID0 tests, all with 256k stripe.
    Here are the shots and a summary table:


    HDtune 64k tests



    HDtune 8MB tests


    HDtach reads just for comparison:




    What are your thoughts now?

  14. #189
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Quote Originally Posted by __Miguel_ View Post
    Though correct me if I'm wrong, but isn't the DMI bandwidth 1GBps in each direction, as opposed to 2GBps? Since the PCIe ports appear at 500MBps each (and we all know that's 250MBps bidirectional), my guess is the same applies to the DMI interface, which I've always heard was basically a slightly altered 4x PCIe connection, btw.
    And this is where _I_ needed the sleep before I posted. Yes, you are correct, that is 1GiB/s bi-directional; good catch, and yes, that will be a bottleneck when all the traffic is merged.

    @janus, ah, OK, I didn't know that HDTune had a problem with >2TiB volumes. Can you create a 2TB or 2TiB volume (below the HDTune limit) for testing? HDTune may have an internal calculation error where it's dividing either size or time wrong because it can't understand the array (it tries to read the entire drive, and if the array is larger there may be other parts of the code that can't handle the calculations properly). That doesn't directly explain the 8MiB being faster than the 64KiB, but it will remove that question from the table.
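
    One guess as to where such a limit might come from (this is only an assumption about HDTune's internals, nothing documented): 32-bit sector arithmetic with 512-byte sectors wraps at exactly 2 TiB.

    Code:
    # If a tool keeps its sector count (or byte count / 512) in a 32-bit
    # integer, the largest volume it can represent is 2^32 * 512 bytes = 2 TiB;
    # anything bigger overflows and the size/time maths comes out wrong.
    sector_bytes = 512
    max_sectors_32bit = 2 ** 32
    print(max_sectors_32bit * sector_bytes / 2 ** 40, "TiB")   # 2.0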

    Also, is this a GPT partition? I'm assuming this is in Windows.

    Is there a way to create a pass-through disk on your controller (i.e., no raid, JBOD) of a single drive and run a test against that single drive to get a baseline?

    The 256KiB stripe size you have set is also contributing to your lower 64KiB #'s (4 requests to a single drive, basically). The controller should be smart enough to combine them if they are adjacent, but perhaps not; I haven't dug into 3ware for a while now, but if not, you have 4x the requests.
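
    To picture the "4 requests to a single drive" point, here is a toy mapping of sequential 64KiB requests onto a 256KiB stripe unit (rotating parity is ignored for simplicity; the drive count and sizes are the ones used in this thread):

    Code:
    # With a 256 KiB stripe unit, four consecutive 64 KiB requests land on the
    # same disk before the workload moves to the next one, so small sequential
    # requests don't fan out across the 8-drive array.
    stripe_unit = 256 * 1024
    request = 64 * 1024
    drives = 8
    for i in range(12):
        offset = i * request
        print("request", i, "-> drive", (offset // stripe_unit) % drives)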

    Also what firmware are the drives running? 02.01B01 or later? There are known issues w/ desktop type drives from WD in raid configs with lower than normal performance: http://www.3ware.com/KB/attachments/...b8476f86d2.pdf
    Last edited by stevecs; 07-11-2008 at 03:08 AM.


  15. #190
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    All of these tests I posted in post #108 were done without any partitioning. I just built an array and did the tests. Didn't even go to disk management.

    I use RE2 drives WD1000FYPS on this controller, not desktop drives.
    Tomorrow I will try making a smaller partition, and also just one drive, to see what happens.
    I will also post results with 64k stripe.

    Thanks for all your inputs.

  16. #191
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    Bump from the dead -- the 9650SE-8 looks like what I may be getting (fits in my price/port range). Any news on the slow read performance?
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  17. #192
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I'll have to do some catching up on the last few posts...
    First, vacation got in the way of making any tests, and then I managed to fry my mobo again by fiddling with fans... ridiculous.
    I'm waiting for a G43 or G45 Gigabyte mobo now, so I'll test those last few things then...

    The stupid 5-in-3 enclosure from Icy Dock has backplane molex connectors that are a little too wide and weak; because of that, you can plug the molex in either way if you don't examine it very carefully. Twice that has resulted in a dead mobo... S T U P i D...

  18. #193
    Registered User
    Join Date
    Apr 2007
    Location
    DFW, TX
    Posts
    87
    Quote Originally Posted by zoob View Post
    Bump from the dead -- the 9650SE-8 looks like what I may be getting (fits in my price/port range). Any news on the slow read performance?
    I just upgraded the firmware on my 9650SE-4LPML in my Windows box from the 9.4.2 codeset to 9.5.1 codeset. Check out the results:

    Before:


    After:


    I have a 9650SE-8LPML in my fileserver, but I haven't bothered upgrading the firmware as I do not have room to compile a new driver on a 512MB DOM. I suppose I could compile one in a vmware session though...

  19. #194
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Whoa!
    VERY interesting!
    What does it say in the change log?

  20. #195
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    Very nice! I had to look at the charts twice because the scales are so different

  21. #196
    Registered User
    Join Date
    Apr 2007
    Location
    DFW, TX
    Posts
    87
    Quote Originally Posted by XS Janus View Post
    Whoa!
    VERY interesting!
    What does it say in the change log?
    Something like:

    Code:
    9.5.1 Release Highlights
    · Significantly reduced rebuild time for NTFS, FAT and FAT32 with Rapid RAID Recovery
    · Significantly reduced verify or initialization time when recovering from unclean shutdown without BBU
    · Improved multi-stream write and read performance with Advanced Content Streaming
    · Improved read performance for data recently written
    · Reduced foreground array initialization time on RAID 5, 50 or 6
    · Password protection for 3BM
    · Support for autocarving LUN sizes greater than 2TB (up to 32TB)
    · Drive performance monitoring, to help diagnose drive performance issues
    · Improved auto-verify capability that combines a basic, weekly verify schedule with default auto-verify settings
    · Various bugs fixed and enhancements – See details below
    Code:
    Bugs Fixed and Enhancements
    · Improved rebuild completion times under light I/O load
    · Auto-rebuild and Auto-verify settings are now enabled by default
    · Faster write performance on Linux ext3 file system
    · Fixed Auto-rebuild issue where rebuild will not start if enabled after the unit is degraded
    · Increased performance for buffered I/O
    · Increased performance for systems sharing data via the CIFS protocol
    · Increased the timeout for ATA flush commands from 12 to 14 seconds
    · Fixed a controller reset and server hang issue for RAID5 256K striped units when there are large random writes issued to the controller
    · Added capability to support MSI in the Linux driver
    · Added Chassis Control Unit (CCU) support for 9690SA controllers
    · Allow staggered spin-up of drives when hot plugged
    Release notes pdf link

    Let me know what kind of results you guys get as well as what firmware you upgraded from.

  22. #197
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Thanks!
