
Thread: Adaptec vs Areca vs HighPoint

  Post #11
    Xtreme CCIE
    Join Date: Dec 2004 | Location: Atlanta, GA | Posts: 3,842
    Quote Originally Posted by Eternalightwith View Post
    Err, well, overclocking could cause hardware controllers to fail too, especially if the instability affects the PCIe channel. You're right though: an unstable CPU will affect software cards, since software RAID runs on the CPU.

    In that same line of thinking, I think what Serra meant was that the software controller will only work or scale up to the core clock speed of your CPU. A faster clock (MHz) equates to better performance/scaling of the software RAID system... theoretically.

    If he was referring to hardware-based cards, then I would guess it has more to do with a combination of CPU, PCIe bandwidth, and the RAID card's own CPU. I'm assuming this, but don't hardware RAID cards work like network cards? What I'm referring to is an article from some months ago showing that as CPU speed increased, network throughput increased. As we all know, no one ever gets the theoretical 100 Mbps. It would be interesting to test this (CPU scaling) with a hardware-based RAID card.

    I'm still turning that last paragraph over in my head. Just thinking aloud about what I've been wondering. Hope it made sense.

    ETlight

    P.S. Some questions for everyone.
    If a person dedicated two cores to just the software RAID, would the scaling improve, or is there some other factor involved? I would think that the software solution would continue to scale as you add more drives until you maxed the load on a core... or two.

    While it is certainly true that the faster your processor the more drives you could scale to... it is also my assertion that the overhead for RAID-0 and RAID-1 is trivially low. Scaling for me is more a result of the fact that software controlled add-on cards are generally crippled by their bus (either PCI or PCI-E x1), so it gets pointless after a few drives anyway.
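    The bus-ceiling argument above is easy to sanity-check with back-of-the-envelope numbers. This sketch uses the usual theoretical maxima (assumed figures, not measurements from the thread: classic 32-bit/33 MHz PCI at ~133 MB/s shared, PCIe 1.x at ~250 MB/s per lane per direction) and the ~70 MB/s sequential rate of a drive of that era:

    ```python
    # Back-of-the-envelope check of why software add-on cards "get
    # pointless after a few drives": the bus saturates long before the
    # port count runs out. Bandwidth figures are theoretical maxima
    # (assumptions, not benchmarks).

    PCI_MBPS = 133        # 32-bit/33 MHz PCI, shared bus
    PCIE_X1_MBPS = 250    # PCIe 1.x, one lane, one direction

    def drives_at_full_speed(bus_mbps, drive_mbps):
        """How many drives can stream sequentially before the bus saturates."""
        return bus_mbps // drive_mbps

    print(drives_at_full_speed(PCI_MBPS, 70))      # 1 drive on plain PCI
    print(drives_at_full_speed(PCIE_X1_MBPS, 70))  # 3 drives on PCIe x1
    ```

    So even with zero CPU overhead, a third or fourth drive on such a card buys little in sequential throughput.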

    Edit:
    In response to your question about dedicating processors to software RAID: If you were talking about RAID0 or RAID1, it's a moot point. The utilization just isn't there for it to make a difference. For RAID-5/6... maybe... frankly I've never done testing on it and I don't think I would, it's not a best practice by any stretch.
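    The RAID-0/1 vs RAID-5/6 distinction comes down to parity math: striping and mirroring just route writes, while RAID-5 XORs every data chunk in a stripe to produce parity, and in software RAID that work lands on the host CPU. A minimal sketch (illustrative only, not any real md/driver code):

    ```python
    # Why RAID-5 writes cost host CPU in software RAID: a full-stripe
    # write XORs all data chunks together to produce the parity chunk.
    # RAID-0/1 does no such computation, hence the trivially low overhead.

    def raid5_parity(chunks):
        """XOR all chunks byte-wise; returns the parity chunk."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    # Three data chunks (a 4-disk RAID-5 stripe):
    data = [b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"]
    parity = raid5_parity(data)
    print(parity.hex())  # 00ff

    # The same XOR rebuilds any lost chunk from the survivors plus parity:
    print(raid5_parity([parity, data[1], data[2]]) == data[0])  # True
    ```

    Every megabyte written to a RAID-5 set drags this XOR pass (and, on partial-stripe writes, extra reads) through the CPU, which is why dedicating cores could matter there but not for RAID-0/1.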

    As for hardware CPU bottlenecking, and responding to itznfb on this as well - I was, of course, referring to RAID-5/6 (as stated). And yes, in those cases the CPU can be the bottleneck. You could buy an $800 Areca with the very latest IOP available from Intel 5 months ago, buy a new one today with a faster dual-core IOP, and pull different speeds... and you say the CPU can't be the bottleneck? itznfb stated that the card couldn't have been properly high end if it couldn't deal with all the drives it was designed to handle at line speed all the time... but frankly, as hard drive speeds increase, so too must the CPU speed on the hardware card. You simply cannot get around that, itznfb. If your RAID card handled 8x drives with 70MB/s throughput one day, don't you think it might be possible it would bottleneck when you went to drives which can sustain 110+MB/s?
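    The drive-upgrade argument can be made concrete with hypothetical numbers (the per-drive rates come from the post; the IOP's parity budget is an assumed figure, not any card's spec):

    ```python
    # Hypothetical illustration: a card IOP that can generate parity at
    # ~600 MB/s (assumed figure) keeps up with 8 older drives but becomes
    # the bottleneck once the drives get faster.

    XOR_BUDGET_MBPS = 600  # assumed parity throughput of the card's IOP
    drives = 8

    for per_drive in (70, 110):
        aggregate = drives * per_drive
        limiter = "card CPU" if aggregate > XOR_BUDGET_MBPS else "drives"
        print(f"{per_drive} MB/s drives -> {aggregate} MB/s total, bottleneck: {limiter}")
    ```

    Same card, same drive count, different bottleneck - which is exactly why a newer, faster IOP on an otherwise similar card can pull different speeds.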

    Mind you, in that last paragraph I'm just mirroring (in a less eloquent way) what Alexio said... which you also didn't respond to...
    Last edited by Serra; 04-25-2008 at 10:40 AM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point
