
Thread: Adaptec vs Areca vs HighPoint

  1. #51
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by [XC] itznfb View Post
4, hardware cards scale with their only limit being the bus they're attached to
By claiming others don't know what they are talking about you are basically implying that you do, right? Well, apparently you don't, and you are also incredibly rude. Hardware RAID certainly doesn't scale until the bus's limit is reached; the controller chip often craps out before that, although when using a high-end controller like an Areca 1210 you only hit this limit with 4+ very fast drives like the new Raptors or SSDs.

I too see no reason not to use the onboard RAID controller when you are using RAID 0 with 3-4 drives max. The Intel ICH9 is a perfect controller for this application. It's okay for RAID 1 too for most users.
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  2. #52
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by swiftex View Post
    comforting was the word, slip of the finger - you know it does point up as well
    lol nice, that was actually a good one

    Quote Originally Posted by swiftex View Post
    nonetheless,

    QUOTE = ""yes, it has its place in the world""

    Thank you very much - point acknowledged

    nothing more - nothing less
    this wasn't the point. both you and mainly Serra have been defending the stance that software RAID is better than hardware RAID.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  3. #53
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    412
    Quote Originally Posted by [XC] itznfb View Post
    lol nice, that was actually a good one



    this wasn't the point. both you and mainly Serra have been defending the stance that software RAID is better than hardware RAID.

we are not saying software raid is better than hardware raid,

    we are saying software raid is more than adequate for its intended purposes

  4. #54
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Guys, guys - let's keep this civil

Again, keeping in mind that I am referring to software-driven add-on hardware when I say "software RAID" and cards like an Areca 1210 when I say "hardware RAID", unless otherwise stated...

My statement is not that software RAID is better than hardware RAID - my statement would be more along the lines that they are effectively comparable for RAID-0 and RAID-1 with a limited number of drives. I do also believe that Areca cards do not carry all the enhancements they could for RAID-1, and that by that virtue software RAID cards may in fact outperform them in that area (though I could be wrong about that - once we get some single drive results from Napalm, we might get to see something telling).

    Oh, and certainly hardware RAID has cache which is a bonus as well in any RAID array - BUT in terms of actual benefit, that will depend heavily on usage patterns. I'm not convinced desktop users would really see much of a benefit, though there's a chance I'm wrong about that. I don't think that desktop users do anything repetitive enough for there to really be a difference (though for some benches, for sure, it's a bonus).


    @itznfb:
    Quote Originally Posted by [XC] itznfb View Post
    <hardware versus software is better because:>
    1, hardware cards are faster
    2, hardware cards are more reliable
    3, hardware cards are more expandable
4, hardware cards scale with their only limit being the bus they're attached to
1. We will see. I have asked you repeatedly to look up some specs of your card and find any commands it could issue that mine couldn't, or a reason it could issue them in a better way - for our purposes - etc. I have offered proof to the contrary in the form of VirtualRain's high-quality review, brought up an algorithm which I know I use (which you didn't know existed), and brought up the command set offered by the SATA protocol. You have yet to respond. Please do so before asserting this again.
    2. I have asked you why this is the case as well, again for our purposes. Please respond with why you feel this is so before asserting it again. I'm still confused as to how this could be so, excepting in the case where your OS is *not* on the RAID array. I think that's a fair exemption, as most people here use RAID to see faster boot times etc.
    3. Agreed (but I never disagreed on that)
    4. Like Alexio said... no. The limit is the processor, sorry 'bout that (well... maybe we could say "operation dependent" and leave it at that, it's outside the scope of what we're worried about anyway).

    And again, I invite you to tell me what you thought of the review I posted by VirtualRain. That, as well as the reliability and "faster" issues are things you are going to need to respond to if we're to continue this conversation.
    Last edited by Serra; 04-25-2008 at 09:16 AM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  5. #55
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    I had edited my last post, but because I'm not sure whether Napalm read the post before I edited it or not...

    I missed noticing that Napalm put up his single drive tests as well. Napalm, if I could ask you one last favor, would you please run the tests on just your regular SATA ports as well? For comparison versus the hardware card. Sorry, should have asked for that earlier. Should be the last test I can think of
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  6. #56
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post
1. We will see. I have asked you repeatedly to look up some specs of your card and find any commands it could issue that mine couldn't, or a reason it could issue them in a better way - for our purposes - etc. I have offered proof to the contrary in the form of VirtualRain's high-quality review, brought up an algorithm which I know I use (which you didn't know existed), and brought up the command set offered by the SATA protocol. You have yet to respond. Please do so before asserting this again.
first of all, you haven't provided proof of anything. second, stop saying that i didn't know some algorithm existed, because nothing i've said suggests anything along those lines. i have only seen the term "elevator" used with promise cards and therefore couldn't care less about the term. you called this "elevator algorithm" an optimization... which is incorrect. it's just a disk scheduling algorithm, which the link YOU posted even states performs worse than shortest seek first.

    Quote Originally Posted by Serra View Post
    2. I have asked you why this is the case as well, again for our purposes. Please respond with why you feel this is so before asserting it again. I'm still confused as to how this could be so, excepting in the case where your OS is *not* on the RAID array. I think that's a fair exemption, as most people here use RAID to see faster boot times etc.
software cards are just that, and are prone to software failure due to an exponential number of variables. every component in the system affects the OS, which therefore directly or indirectly affects the performance of the software raid card. one example being an 'unstable' OC will directly and adversely affect a software raid card while not affecting a hardware raid card. there are no exemptions, it either performs the same or better as you're saying or it doesn't, period. a software card has nowhere near the capabilities of a hardware card, and a hardware card has nowhere near the limitations of a software card.

    Quote Originally Posted by Serra View Post
    3. Agreed (but I never disagreed on that)
    4. Like Alexio said... no. The limit is the processor, sorry 'bout that (well... maybe we could say "operation dependent" and leave it at that, it's outside the scope of what we're worried about anyway).

    And again, I invite you to tell me what you thought of the review I posted by VirtualRain. That, as well as the reliability and "faster" issues are things you are going to need to respond to if we're to continue this conversation.
    the limit is the processor? i hope that was a sarcastic statement.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  7. #57
    Xtreme Addict
    Join Date
    Oct 2004
    Location
    Boston, MA
    Posts
    1,448
    Quote Originally Posted by [XC] itznfb View Post
    the limit is the processor? i hope that was a sarcastic statement.
    http://www.nextlevelhardware.com/storage/battleship/

    "[...]these hardware Raid cards are inserted into PCIe 8X slots and they run with either full 4x or 8x lane compatibility, the cards do have plenty of theoretical bandwidth. However, the processor on each controller differs from the midrange ARC-1220 and the high end ARC-1231ML. The card we were using for the review initially was an Areca ARC-1220 with the 400mhz Intel IOP333 processor. Take a quick look at our next HDTach shot of 5 drives in Raid 0 and than we can do more explaining:

    [...]Five drives put out only 386 MB/s sustained read when we should be anywhere from 550 to 600 MB/s easily[...] The limitation happens to be right around 400 to 450 MB/s max on the 1220. I had one of my suppliers overnight me an Areca 1231ML and I junked the 1220 immediately[...] the #1 difference and reason for upgrading being the high end 800 MHz Intel IOP341 processor onboard. I simply plugged my existing array into the new controller and BOOM. Look what magically appeared:

    Right off the bat using the Areca 1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s."

    File Server:
    Super Micro X8DTi
    2x E5620 2.4Ghz Westmere
    12GB DDR3 ECC Registered
    50GB OCZ Vertex 2
    RocketRaid 3520
    6x 1.5TB RAID5
    Zotac GT 220
    Zippy 600W

    3DMark05: 12308
    3DMark03: 25820

  8. #58
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
first of all, you haven't provided proof of anything. second, stop saying that i didn't know some algorithm existed, because nothing i've said suggests anything along those lines. i have only seen the term "elevator" used with promise cards and therefore couldn't care less about the term. you called this "elevator algorithm" an optimization... which is incorrect. it's just a disk scheduling algorithm, which the link YOU posted even states performs worse than shortest seek first.
It is worse, or can be with different seek types, than "shortest seek first". However, it has yet to be established that hardware based cards use shortest seek... and I think I will be showing you that they don't in a moment (give me a few minutes to finish some benching here and we'll see). You didn't know about the elevator algorithm and you state you don't care about it because you saw it used with Promise cards?
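For readers following the elevator/SSTF back-and-forth, here is a minimal sketch of the two disk-scheduling policies being argued about, assuming nothing more than a queue of pending track positions. It is purely illustrative; neither card's actual firmware is public, so this is not any vendor's implementation:

Code:

# Toy comparison of the two disk-scheduling policies discussed above.
# Requests are abstract track positions; real firmware works on LBAs and has
# many more constraints (queue depth, command deadlines, cache hits).

def sstf_order(head, requests):
    """Shortest-seek-first: always service the closest pending request."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def elevator_order(head, requests, direction=1):
    """Elevator-style: sweep in one direction, then reverse, like an elevator."""
    pending, order = sorted(requests), []
    while pending:
        if direction > 0:
            batch = [r for r in pending if r >= head]
        else:
            batch = [r for r in reversed(pending) if r <= head]
        if not batch:                  # nothing left this way, reverse direction
            direction = -direction
            continue
        order.extend(batch)
        pending = [r for r in pending if r not in batch]
        head = batch[-1]
        direction = -direction
    return order

def total_seek(head, order):
    dist = 0
    for r in order:
        dist += abs(r - head)
        head = r
    return dist

if __name__ == "__main__":
    head, reqs = 50, [95, 180, 34, 119, 11, 123, 62, 64]
    for name, fn in (("SSTF", sstf_order), ("Elevator", elevator_order)):
        order = fn(head, reqs)
        print(name, "order:", order, "total seek:", total_seek(head, order))

With this particular request mix SSTF produces less total head travel, which is exactly the trade-off being debated: elevator-style sweeps give more predictable service, not the minimum seek distance.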

    No sir, I have given you examples of what my card can do. My card is fully compliant with all SATA protocols, as I assume yours is... so my question to you is: what about your card makes it faster? It's a very simple question which you continue to dance around.

    Oh, and I have provided you with the review by a member of these forums... which I see, again, you failed my challenge to address.


    Quote Originally Posted by [XC] itznfb View Post
software cards are just that, and are prone to software failure due to an exponential number of variables. every component in the system affects the OS, which therefore directly or indirectly affects the performance of the software raid card. one example being an 'unstable' OC will directly and adversely affect a software raid card while not affecting a hardware raid card. there are no exemptions, it either performs the same or better as you're saying or it doesn't, period. a software card has nowhere near the capabilities of a hardware card, and a hardware card has nowhere near the limitations of a software card.
Given that RAID does not protect against data corruption, your argument is moot. Data corruption randomly impacting the RAID array upon which an OS sits is reason to wipe an entire drive. The chances of it hitting only one driver and nothing else are ridiculously small.


    Quote Originally Posted by [XC] itznfb View Post
    the limit is the processor? i hope that was a sarcastic statement.
    You must be right. I'm so silly, as are manufacturers. It's so strange they would continually release new products which operate on the same busses with the same number of ports with the only modification being adding more processing power. It surely can't be because the processor becomes a bottleneck.

    Edit: Hey look, HiJon grabbed out some benches while I was typing this. Thanks, HiJon!


    Anyway, like I said, results coming soon.
    Last edited by Serra; 04-25-2008 at 10:00 AM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  9. #59
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by HiJon89 View Post
    http://www.nextlevelhardware.com/storage/battleship/

    "[...]these hardware Raid cards are inserted into PCIe 8X slots and they run with either full 4x or 8x lane compatibility, the cards do have plenty of theoretical bandwidth. However, the processor on each controller differs from the midrange ARC-1220 and the high end ARC-1231ML. The card we were using for the review initially was an Areca ARC-1220 with the 400mhz Intel IOP333 processor. Take a quick look at our next HDTach shot of 5 drives in Raid 0 and than we can do more explaining:

    [...]Five drives put out only 386 MB/s sustained read when we should be anywhere from 550 to 600 MB/s easily[...] The limitation happens to be right around 400 to 450 MB/s max on the 1220. I had one of my suppliers overnight me an Areca 1231ML and I junked the 1220 immediately[...] the #1 difference and reason for upgrading being the high end 800 MHz Intel IOP341 processor onboard. I simply plugged my existing array into the new controller and BOOM. Look what magically appeared:

    Right off the bat using the Areca 1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s."
yes, you can put a processor on a card that isn't powerful enough to handle the number of drives you're able to attach to it. you do this to save money, which pretty much kills the reason for making the hardware card in the first place. what does that prove? that it's possible to make a hardware card that sucks? yeah, you proved that.

it's obviously not even a 'decent' card. most mid-high end cards now use at least 800mhz procs, and many are now using 1.2ghz procs, which eliminates any type of cpu bottleneck, leaving the bus it's connected to as the only bottleneck.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  10. #60
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post
It is worse, or can be with different seek types, than "shortest seek first". However, it has yet to be established that hardware based cards use shortest seek... and I think I will be showing you that they don't in a moment (give me a few minutes to finish some benching here and we'll see). You didn't know about the elevator algorithm and you state you don't care about it because you saw it used with Promise cards?
how many times are you going to say i didn't know about the elevator algorithm before you realize you don't know what you're saying? it's a scan algorithm. that's all it is. the term elevator was made up by promise. it's a marketing term. you're talking about it like it's some great revelation that only your software card is capable of executing. i haven't seen the term used by any other manufacturer, yet they all have cards that execute the exact same algorithms.

    Quote Originally Posted by Serra View Post
    No sir, I have given you examples of what my card can do. My card is fully compliant with all SATA protocols, as I assume yours is... so my question to you is: what about your card makes it faster? It's a very simple question which you continue to dance around.
i continue to dance around? i'm dancing around the fact that your software card creates unnecessary overhead and has dependency limits that both factor into its lack of performance, which hardware cards do not have? interesting, i didn't realize i was dancing.


    Quote Originally Posted by Serra View Post
Given that RAID does not protect against data corruption, your argument is moot. Data corruption randomly impacting the RAID array upon which an OS sits is reason to wipe an entire drive. The chances of it hitting only one driver and nothing else are ridiculously small.
i've never heard of a hardware raid card taking out drives, or an entire set of drives, or having an array fail during every test that stresses I/O.

    Quote Originally Posted by Serra View Post
    You must be right. I'm so silly, as are manufacturers. It's so strange they would continually release new products which operate on the same busses with the same number of ports with the only modification being adding more processing power. It surely can't be because the processor becomes a bottleneck.
i didn't realize PCIe x1, x4, and x8 all had the same bandwidth. i'm glad i now know that PCIe x8 provides no more bandwidth than PCIe x1.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  11. #61
    Xtreme Member
    Join Date
    May 2007
    Posts
    206
    Quote Originally Posted by [XC] itznfb View Post
software cards are just that, and are prone to software failure due to an exponential number of variables. every component in the system affects the OS, which therefore directly or indirectly affects the performance of the software raid card. one example being an 'unstable' OC will directly and adversely affect a software raid card while not affecting a hardware raid card. there are no exemptions, it either performs the same or better as you're saying or it doesn't, period. a software card has nowhere near the capabilities of a hardware card, and a hardware card has nowhere near the limitations of a software card.

    the limit is the processor? i hope that was a sarcastic statement.
Err, well OCing could cause hardware controllers to fail too, esp. if instability is affecting the PCIe channel. You're right though, an unstable CPU will affect software cards since the software uses the CPU.

    In that same line of thinking, I think what Serra meant was that the software controller will only work or scale up to the Core speed of your CPU. Faster speed (MHz) equates to better performance/scaling of the software RAID system... theoretically.

If he was referring to hardware based cards, then I would take a guess and say that it would be more to do with a combination of CPU/PCIe bandwidth/RAID card's CPU. I'm assuming this, but don't hardware RAID cards work like network cards? What I'm referring to is an article that was put out some months ago that showed that as CPU speed increased, network throughput increased. As we all know, no one ever gets the theoretical output of 100Mbps. It would be interesting to test this (CPU scaling) with a hardware based raid card.

    I'm ruminating that last paragraph in my head. Just speaking aloud what I've been wondering. Hope it made sense.

    ETlight

    P.S. Some questions for everyone.
If a person dedicated two cores to just the software raid card, would the scaling improve or is there some other factor involved? I would think that the software solution would continue to scale as you add more drives till you maxed the load on a core... or two
    Last edited by Eternalightwith; 04-25-2008 at 10:16 AM. Reason: wanted to add some questions for peeps

  12. #62
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Eternalightwith View Post
    In that same line of thinking, I think what Serra meant was that the software controller will only work or scale up to the Core speed of your CPU. Faster speed (MHz) equates to better performance/scaling of the software RAID system... theoretically.

    ETlight
this is correct, but would only be the case when the manufacturer chooses to put a processor on a card that's not capable of supporting the card. therefore you can't consider this to be a mid-high end card regardless of its price.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  13. #63
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
Alright, I just finished a few quick benchmarks of my own. I have run each test a number of times and received the same results each time. I'll give a brief explanation of each attachment, then offer my conclusions below. When making comparisons between myself and Napalm, let's please try to stick to only HDTach results, as HD Tune does not work for me and the two are not directly comparable (at least not in my experience).

Oh, my results were achieved on an el-cheapo Promise TX2300 software driven add-on card. It is limited to the PCI bus it sits on, so burst rates aren't going to be anything fancy... but that's fine, it's not like ridiculously high burst rates do anything for you with 16MB of cache on a SATA 1 bus anyway.


    HDTach seems to work, but it's not without issues. For example, no matter how many times I uninstall/reinstall, I can't seem to save the benchmark files (wtf). Still, screenshots work, so.. yeah.

    HDTune on the other hand is just plain out to lunch with this card... you'll see why. Clearly my results with it have to be thrown out.

    The first attachment is an HDTach benchmark showing the results of one of my raptors (blue) versus a RAID-1 array with both of my raptors (red). The bench of my old raptor is quite old - early 2007 by the date - but I have no reason to believe the hard drive has deteriorated any since then (apparently I still had an HD Tach file on this old computer with a few benches in it... convenient considering I can't save them now!). The new RAID-1 benchmark, it should be noted, is actually the worst I saw. The read times remained constant to within .1MB/s, but the access time on this one is higher by .1-.2ms. I can't say whether anyone else's results were their best or worst though, so I'm assuming they were the worst for comparisons sake. If not, let me know.

    The second/third attachments are two runs of HD Tune. Things don't look too crazy on it until you notice the CPU utilizations... over 50%. I'm sure some of you want to jump on that and yell "SEE!"... but then I'll also point you to the access times, which showed 3.0ms in the first test and a staggering -0.5ms in the second. Clearly there's something else going on there, like caching to system memory. I've included the pictures just for fun.
Attached Thumbnails: RAID-1vsSingle_01.jpg, HDTune_RAID1_n01.jpg, TX2300_RAID1.png
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  14. #64
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    It turns out there was some confusion here. The RAID-1 test Napalm did was on an HP, but the single drive test on an Areca. So take these conclusions with a grain of salt and head on over to post ~91ish for a new review.

    Results we have seen thus far:

    Napalm:

Single Drive on HP Card
-----------------------
Avg Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
CPU: 0%
Random Access: 8.1ms
Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]

RAID-1 on HP Card
-----------------
Avg Read: 88.5MB/s [write cache on] / 78.2MB/s [write cache off]
CPU: 0%
Random Access: 7.6ms [write cache on] / 7.4ms [write cache off]
Burst: 1307.5MB/s [write cache on] / 135.7MB/s [write cache off]


    Myself

    Single drive on TX2300:
    Avg Read: 65MB/s
    CPU: 2%
    Random Access: 7.7ms
    Burst: 114.0MB/s

    RAID-1 on TX2300:
    Avg Read: 65MB/s
    CPU: 2%
    Random Access: 6.8ms
Burst: 126.5MB/s


    My analysis:

    Avg Read Time:
My average read time remained a steady 65-65.1MB/s in all my tests. Napalm's seems to fluctuate with write cache on/off, but 3/4 tests peg him at an average of 78.16MB/s (the last gives a 10MB/s improvement). Are they directly comparable? I'm not sure. Further testing and results are required here. It could just be that his drives are newer than mine, use different firmware, etc... or it could be that the controller adds on about 13MB/s average read. We'll see what Napalm is able to post about his drives on their motherboard controller, and I'll try the same on my P5K Deluxe.


    CPU Utilization:
    The battle here seems to be between mine at 2% and his being reported at 0%. I think we can agree that just running the program involved some kind of resources, so if you'll allow, I'd like to argue it's my 2% (HD Tach) versus his 0.4% (his lowest HD Tune). If not, fine - his program runs without resources. In either event, I'll add that his processor is at least a Q6700 - overclocked to 3.6GHz in one of his posts in another thread - where the one I'm using for this test bed is a dual-core Opty 170 at stock speeds that my wife has been using for the past year or so (and loaded it with garbage, but that's another story).

    Given the performance difference between a 3.6GHz Core 2 Quad and a 2.0GHz Opteron 170... I think we can agree that the utilization is next to nothing for both solutions.


    Seek time:
    In his testing, Napalm went from a steady 8.1ms average access time to an average of 7.5ms (between his two results). This is a decrease with RAID-1 of 0.6ms. In my results, I went from 7.7ms to 6.8ms, a decrease of 0.9ms.

It is important to note that my decrease was better not only in absolute terms but also in relative terms: his represented a 7.41% reduction in average seek time, while mine was an 11.69% reduction.
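For anyone who wants to check those percentages, a quick sketch of the arithmetic (a minimal calculation using only the HD Tach access times quoted above):

Code:

# Relative seek-time improvement: (single - raid1) / single, numbers taken
# from the HD Tach results quoted in this post.
def rel_improvement(single_ms, raid1_ms):
    return (single_ms - raid1_ms) / single_ms * 100

print("Napalm (HP card): %.2f%%" % rel_improvement(8.1, 7.5))   # ~7.41%
print("Serra (TX2300):   %.2f%%" % rel_improvement(7.7, 6.8))   # ~11.69%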

    The only conclusion to be drawn here is this: My software RAID implements algorithms or optimizations that are not seen on his hardware RAID card.


    Burst speeds:
In terms of Napalm's results, well, they're obviously a function of his card having actual RAM cache. 'nuff said there. Mine weren't great, but I was bottlenecked by the PCI bus as well, so I can't really make a fair comparison.



    Final thoughts:

    Well, I hope I've demonstrated the following:
    1. Sustained Read Speed: More results are needed to speak to whether the hardware based card is actually pulling 13MB/s more than I am, or whether his drives are just plain better. Those will be coming.
    2. CPU Utilization: With a comparable CPU, this should be negligible either way
    3. Seek times: My card was the clear winner, both absolutely and relatively.
    4. Burst speeds: RAM cache versus a PCI bus... no contest there
    Last edited by Serra; 04-25-2008 at 07:05 PM. Reason: Cleaned up a bit for ease of reading
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  15. #65
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by Eternalightwith View Post
Err, well OCing could cause hardware controllers to fail too, esp. if instability is affecting the PCIe channel. You're right though, an unstable CPU will affect software cards since the software uses the CPU.

    In that same line of thinking, I think what Serra meant was that the software controller will only work or scale up to the Core speed of your CPU. Faster speed (MHz) equates to better performance/scaling of the software RAID system... theoretically.

If he was referring to hardware based cards, then I would take a guess and say that it would be more to do with a combination of CPU/PCIe bandwidth/RAID card's CPU. I'm assuming this, but don't hardware RAID cards work like network cards? What I'm referring to is an article that was put out some months ago that showed that as CPU speed increased, network throughput increased. As we all know, no one ever gets the theoretical output of 100Mbps. It would be interesting to test this (CPU scaling) with a hardware based raid card.

    I'm ruminating that last paragraph in my head. Just speaking aloud what I've been wondering. Hope it made sense.

    ETlight

    P.S. Some questions for everyone.
If a person dedicated two cores to just the software raid card, would the scaling improve or is there some other factor involved? I would think that the software solution would continue to scale as you add more drives till you maxed the load on a core... or two

    While it is certainly true that the faster your processor the more drives you could scale to... it is also my assertion that the overhead for RAID-0 and RAID-1 is trivially low. Scaling for me is more a result of the fact that software controlled add-on cards are generally crippled by their bus (either PCI or PCI-E x1), so it gets pointless after a few drives anyway.

    Edit:
    In response to your question about dedicating processors to software RAID: If you were talking about RAID0 or RAID1, it's a moot point. The utilization just isn't there for it to make a difference. For RAID-5/6... maybe... frankly I've never done testing on it and I don't think I would, it's not a best practice by any stretch.

As for hardware CPU bottlenecking, and responding to itznfb on this as well - I was, of course, referring to RAID-5/6 (as stated). And yes, in those cases the CPU can be the bottleneck. You could buy an $800 Areca with the very latest IOPs available from Intel 5 months ago, buy a new one with dual-core, faster IOPs today, and pull different speeds... and you say the CPU can't be the bottleneck? itznfb stated that the card couldn't have been properly high end if it couldn't deal with all the drives it was designed to handle at line speed all the time... but frankly, as hard drive speeds increase, so too must the CPU speed on the hardware card. You simply cannot get around that, itznfb. If your RAID card handled 8x drives with 70MB/s throughput one day, don't you think it might be possible it would bottleneck when you went to drives which could sustain 110+MB/s?

    Mind you, in that last paragraph I'm just mirroring (in a less eloquent way) what Alexio said... which you also didn't respond to...
    Last edited by Serra; 04-25-2008 at 10:40 AM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  16. #66
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post
    The first attachment is an HDTach benchmark showing the results of one of my raptors (blue) versus a RAID-1 array with both of my raptors (red). The bench of my old raptor is quite old - early 2007 by the date - but I have no reason to believe the hard drive has deteriorated any since then (apparently I still had an HD Tach file on this old computer with a few benches in it... convenient considering I can't save them now!). The new RAID-1 benchmark, it should be noted, is actually the worst I saw. The read times remained constant to within .1MB/s, but the access time on this one is higher by .1-.2ms. I can't say whether anyone else's results were their best or worst though, so I'm assuming they were the worst for comparisons sake. If not, let me know.
i'll do you a favor and ignore the 2nd screen shot. in the first you said you've never even seen 1% CPU utilization? so why is it at 3%? and why is your average read and burst lower than if the drive were just connected to the motherboard?

    and why can your software raid card be bottlenecked by the bus it sits on but a hardware card can't?
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  17. #67
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
i'll do you a favor and ignore the 2nd screen shot. in the first you said you've never even seen 1% CPU utilization? so why is it at 3%? and why is your average read and burst lower than if the drive were just connected to the motherboard?

    and why can your software raid card be bottlenecked by the bus it sits on but a hardware card can't?
    You'll have to ignore the 2nd + 3rd screen shots [both HD Tune results] (3.0ms response and -0.5ms response time... I wish), but I assume that's what you mean.

    As for the utilization - I'll have to agree that's a typo. I'll put it in red for you on the first post and apologize if you'd like. And it's not at 3% - it's at 2%. I was a little worried you were going to try to attack me there, but when you made that typo of 3% yourself I was relieved to see we're just human.

    My average read is the same on both HD Tach results in the comparison - 65MB/s either way, so I don't know what you mean there. Burst is bottlenecked by the PCI bus on the array.

    And this is RAID5/6 only...
And I'm not saying that a hardware card *can't* be bottlenecked by its bus, but the bus is not always the first or primary bottleneck. I guarantee you can find a top of the line card from a year or two ago that you can put into at least a PCI-E x8 or even x16 slot... and you'll find you're bottlenecked versus a current (or slightly future) offering with newer IOPs in RAID6, say.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  18. #68
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Dorset, UK
    Posts
    439
    Quote Originally Posted by Eternalightwith View Post
If he was referring to hardware based cards, then I would take a guess and say that it would be more to do with a combination of CPU/PCIe bandwidth/RAID card's CPU. I'm assuming this, but don't hardware RAID cards work like network cards? What I'm referring to is an article that was put out some months ago that showed that as CPU speed increased, network throughput increased. As we all know, no one ever gets the theoretical output of 100Mbps. It would be interesting to test this (CPU scaling) with a hardware based raid card.

    I'm ruminating that last paragraph in my head. Just speaking aloud what I've been wondering. Hope it made sense.

    ETlight

    P.S. Some questions for everyone.
If a person dedicated two cores to just the software raid card, would the scaling improve or is there some other factor involved? I would think that the software solution would continue to scale as you add more drives till you maxed the load on a core... or two
    Hmmm, this MIGHT be possible. Since the hardware card has managed the parity calcs and combined the byte streams from the RAIDed disks into the correct order, all the CPU has to do is get an interrupt from the card and read the data that is arriving in the buffer, which it then transfers somewhere else where another thread can process it. This is all the card driver software is doing, pretty minimal stuff when the hardware on the card is taking care of the actual RAID algorithms.

    I'm trying to get some order-of-magnitude handle on this... so do correct me if I'm wrong... for these large disk arrays, data is arriving at a significant fraction of GBytes/sec, your CPU is running at GHz speed, so a single core should be just keeping up with the incoming data. Remember the practical limit on accessing this data once it's been received in a buffer (or to transfer it into one) is the actual read/write speed of the memory. There's also no way to parallelize the algorithm when you are dealing with a single input stream.

    Now IF you used software RAID to handle each individual disk, then up to the limit of the processor cores the throughput should be fully scalable as you could handle each disk with its own thread. But in parity RAID you then get the issue of combining the data to do parity calcs, which we know is the slow part. In that case, you've spent processor cycles switching between thread contexts, each thread has to respond to a system interrupt from the card for its relevant disk, then you are getting the data byte from buffer and storing it in memory, then you have to read it all over again in another master thread that also has to check whether all disks have returned the particular byte, do the XOR and then write the data byte out having confirmed the result against the previously calculated parity byte (and recalculate if it doesn't match, handling the error situation, serious overhead)... So yes a faster processor would help, but wouldn't change the results magically by any significant amount, there is so much processing going on in this regard which includes many "slow" memory writes. And the data throughput is (as I already said) reaching the processing limit of a single core, so this is why software parity RAID has such high processor overhead, affecting all your other apps.
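As a minimal illustration of the XOR parity work described above (byte-level, single stripe; a real RAID-5 engine works on whole blocks, rotates the parity disk, and handles the error paths mentioned):

Code:

# XOR parity across one stripe: parity = d0 ^ d1 ^ ... ^ dn.
# Any one lost data chunk can be rebuilt by XOR-ing the survivors with the
# parity chunk -- this per-byte work is what a RAID-5 engine (software or the
# card's IOP) does on writes and rebuilds.

def xor_blocks(*blocks):
    assert len({len(b) for b in blocks}) == 1, "stripe chunks must be equal size"
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data chunks on three disks
parity = xor_blocks(d0, d1, d2)          # chunk written to the parity disk

# Simulate losing disk 1 and rebuilding it from the survivors plus parity:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
print("rebuilt chunk:", rebuilt)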

    The situation with network cards is slightly different. Disk data is a raw data stream. Network data is not, it's like playing pass-the-parcel with many wrappers of information around the actual data sent. For a short course in why read up on the OSI model - Google offers this as a good page: http://www.tcpipguide.com/free/t_Und...lAnAnalogy.htm

    So to get at the actual raw data sent between two computers, you have to have software that strips away each successive layer of surrounding information. The name for this is the "TCP stack", a critical core part of any OS. This is all taking much processing overhead for each data byte you actually want to get at, much more compared to handling a raw input stream like a disk array, so processor power should have a more critical impact here, and the routing overhead is why the actual throughput is effectively limited and you can never reach the theoretical max.
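A rough sketch of the "stripping the wrappers" idea, assuming a simplified Ethernet/IPv4/TCP frame with minimum-size headers and no options (real stacks also handle checksums, options and reassembly, so this understates the work):

Code:

# Decapsulating a captured frame: each layer's header is overhead the CPU
# pays before the application ever sees a payload byte.

ETH_HDR, IP_HDR, TCP_HDR = 14, 20, 20    # minimum header sizes in bytes

def payload_of(frame):
    """Strip Ethernet, IPv4 and TCP headers (no options assumed)."""
    return frame[ETH_HDR + IP_HDR + TCP_HDR:]

frame = bytes(ETH_HDR + IP_HDR + TCP_HDR) + b"actual application data"
data = payload_of(frame)
print("payload:", data)
print("header overhead on this frame: %.0f%%" % (100 * (1 - len(data) / len(frame))))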
    Last edited by IanB; 04-25-2008 at 11:09 AM.

  19. #69
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
wow. two pages of this? "RAID" is not a religious thing. It is simply a tool. As Serra & others have alluded to, barring implementation screw-ups, there is no 'magic sauce' that lets a software solution perform differently than a hardware one or vice versa. What do you think is on a hardware card? It's software, called microcode, but still the same, and these days probably written in the same languages.

You just have to pick the right tool for the job at hand. (All sans implementation issues) you will not find any statistical difference between raid levels once you baseline against the relative performance of whatever CPU/bus/memory is handling the load (i.e. the operations and types of operations required to deliver a given RAID-X level are the same).

The deciding factors between software & hardware usually come down to system load and ease of management (and by ease of management I'm not talking about the people here, I'm talking about the $8/hour 1st level support tech at a company who couldn't care less about computers and is supposed to service them). The system load case is mainly for the point when you have an application that is already too much for a box (DBs, application servers, et al) and the system literally does not have enough CPU/memory/bus bandwidth to even do its primary job. Any off-load is a boon, as it saves the company from buying a larger system (a larger capital expense and in many cases licensing charges due to more CPUs/processing power for the primary apps).

    What one can get out of a particular set up is _HIGHLY_ dependent on the workload presented to the solution.

With striping (RAID-0) you will not find any benefit regardless of hardware/software: same # of cpu cycles, same # of interrupts and same amount of bus/memory bandwidth required, whether the raid is software or hardware.

RAID-1: to Serra's point, there is some logic needed to find out which drive is less utilized for requests. HOWEVER, I would label it an implementation bug if you ever find a card that does not do this automatically. Ie. Anyone who has _ever_ done a load balanced solution of _ANY_ type (raid, networking, bus, memory) finds out in a matter of hours of testing that a round robin request layout does not work except in only ONE case, where _every_ request is the exact same size and takes the _exact same_ number of cycles to complete. Good thought experiment, but it does not happen in the real world. To compensate, software & hardware solutions use metrics like service time (derived from markov chains/little's law and others) and watching the command flow. This is more involved, yes, however it is not any more than, say, what a normal disk driver does for you (actually less). Which is why for RAID-1 arrays there is no real distinction between hardware/software, again, for deployment.
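A toy model of the two RAID-1 read-balancing strategies contrasted above (blind round robin vs sending the read to the mirror with the least outstanding work, as a crude stand-in for real service-time metrics); the disks, costs and numbers are made up purely for illustration:

Code:

import itertools

class Mirror:
    def __init__(self, name):
        self.name, self.outstanding = name, 0
    def submit(self, cost):
        self.outstanding += cost

def round_robin(mirrors):
    """Alternate mirrors blindly, ignoring how busy each one is."""
    cycle = itertools.cycle(mirrors)
    return lambda cost: next(cycle)

def least_busy(mirrors):
    """Pick the mirror with the least outstanding work (crude service-time proxy)."""
    return lambda cost: min(mirrors, key=lambda m: m.outstanding)

def run(policy, costs):
    mirrors = [Mirror("disk0"), Mirror("disk1")]
    pick = policy(mirrors)
    for cost in costs:
        pick(cost).submit(cost)
    return max(m.outstanding for m in mirrors)   # busiest disk bounds array latency

# Mixed request sizes: round robin piles the large requests onto one disk.
costs = [8, 1, 8, 1, 8, 1, 8, 1]
print("round-robin worst queue:", run(round_robin, costs))   # 32
print("least-busy  worst queue:", run(least_busy, costs))    # 18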

    RAID-1/0 (0/1), 1E, 10E and variants I'll add here, these fall into utilizing the above two 'levels' in combination. Likewise here you will not find any appreciable difference at all between hardware/software solutions.

    Starting with the 'parity' raids. (skipping RAID-2 as it's more of a history lesson these days except for ECC memory which works similarly).

RAID 3, 4, 5 and most definitely raid-6 have substantial calculation overhead WHEN WRITING to the array. These are the types of arrays that kill performance on systems that are already close to the edge. IF your system is _NOT_ however (ie, desktop box or whatever, lightly loaded system, service times/utilization below the ~60% mark) you will not notice any difference between hardware/software (assuming you are properly comparing the respective calculation power of each system; ie. you can't compare, say, a CPU that does ~60,000 MIPS with one on a card that does, say, 6,000 MIPS and point to it and declare that 'software' is faster. No. It's that your system can just execute more software operations faster than the card; you are not comparing the algorithms, you are just comparing the implementations, which is different).

Another item that 'skews' results in testing is write cache. This has nothing really to do with the algorithms themselves but is a way to make them more efficient. This can be done both in hardware & software as well; generally it's 'easier' in hardware from the end user's point of view as they just have to add ram to a device. Once again this is an implementation item. What this does is hold the data long enough WITHOUT writing to the drives UNTIL there is a full stripe width to write at once. This only applies to parity raid setups (mainly; it can also have some other benefits, but not for 99% of people usually). This is due to the fact that if you do not write a full stripe with parity raids, you have to do 4 operations (raid 3, 4, 5) or 6 operations (raid 6) to the drive subsystem to actually do your write. These operations add up (taking more MIPS) to complete, and if done in software, more bandwidth on the host's side.
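The write-penalty arithmetic behind that point, as a small sketch assuming a plain N-data-disk + 1-parity layout (RAID-6 would be 6 operations per partial update instead of 4):

Code:

# Device operations needed to commit writes on a parity (RAID-5 style) array.

def ops_partial_stripe_update():
    # read old data + read old parity + write new data + write new parity
    # (new_parity = old_parity XOR old_data XOR new_data)
    return 4

def ops_full_stripe_write(data_disks):
    # write cache held the data until a whole stripe was available:
    # one write per data chunk plus one for the freshly computed parity chunk
    return data_disks + 1

for n in (3, 7):
    piecemeal = ops_partial_stripe_update() * n   # n separate small updates
    batched = ops_full_stripe_write(n)            # the same chunks, coalesced
    print("%d data disks: %d ops piecemeal vs %d ops as one full stripe"
          % (n, piecemeal, batched))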

Also, parity raids do not just take cpu overhead, they also take more memory bandwidth. Just like a network card or anything else, DMA helps, but you are still hitting main memory for several operations, sometimes up to 4x the bandwidth you 'see' being written out to your device. I find this more noticeable in high speed networks, as many people do not push disk subsystems up to the 1GiB/s mark yet. With home computers (and many servers) limiting memory bandwidth to say 3-4GiB/s (i5000V) or ~6GiB/s (975X), you do not have enough there to push the data AND run something else. (This is why the opteron systems, even though they are generally slower in MIPS ratings compared to the current intels, have a wider memory bus; the same goes for the Sun SPARC systems, IBM SP2, et al.)

A computer is not defined by a single item; it is a balanced system. Increasing any ONE part WILL NOT improve the overall system in equal proportion to the increase in performance of that single item (amdahl's law of parallelization, colloquially similar to the 'law of diminishing returns').

    So the question comes directly back to what is the environment you are trying to use the tool of 'raid' on? If it's parity then you have to consider what else is part of that environment that also needs to co-exist with it and what your goals are.

Not to ramble on (too late I guess), but to the OP: if you are just looking for 0/1/10/1E/10E then literally anything is good as long as it does not have a known implementation problem. I use LSI and adaptec cards here, but also do on-board and software raids for this on numerous platforms (intel, sun, linux et al).

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...... ............|
|.Supermicro X8DTi..................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
|.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|........ .................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|............... ..........|

  20. #70
    Xtreme Addict
    Join Date
    Oct 2004
    Location
    Boston, MA
    Posts
    1,448
    Quote Originally Posted by [XC] itznfb View Post
i didn't realize PCIe x1, x4, and x8 all had the same bandwidth. i'm glad i now know that PCIe x8 provides no more bandwidth than PCIe x1.
The benchmarks I posted show that a PCI-E x8 card couldn't go over 400MB/s sustained read because the bottleneck was the IOP333 CPU. In that case the CPU was the bottleneck, and soon enough the CPUs on today's RAID cards will bottleneck the array. The PCI-E x8 interface can theoretically handle 2GB/s, so Serra was quite right in saying that "The limit is the processor".
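Put as a one-line model: sustained throughput tops out at the slowest of the drives in aggregate, the controller's processor, and the host bus. A quick sketch using rough numbers in the spirit of the quoted review (5 drives at ~120MB/s each, an IOP333-class ceiling of roughly 420MB/s, a higher assumed ceiling for the IOP341, PCIe x8 at ~2000MB/s); all figures are approximations, not vendor specs:

Code:

def array_ceiling(per_drive_mbs, drives, controller_mbs, bus_mbs):
    """Sustained throughput is capped by whichever resource saturates first."""
    return min(per_drive_mbs * drives, controller_mbs, bus_mbs)

# ARC-1220-class setup (IOP333): the controller saturates long before the bus.
print(array_ceiling(120, 5, 420, 2000))    # -> 420, controller-bound

# ARC-1231ML-class setup (faster IOP341, same drives, same PCIe x8 bus):
print(array_ceiling(120, 5, 1000, 2000))   # -> 600, now drive-bound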

    File Server:
    Super Micro X8DTi
    2x E5620 2.4Ghz Westmere
    12GB DDR3 ECC Registered
    50GB OCZ Vertex 2
    RocketRaid 3520
    6x 1.5TB RAID5
    Zotac GT 220
    Zippy 600W

    3DMark05: 12308
    3DMark03: 25820

  21. #71
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by HiJon89 View Post
The benchmarks I posted show that a PCI-E x8 card couldn't go over 400MB/s sustained read because the bottleneck was the IOP333 CPU. In that case the CPU was the bottleneck, and soon enough the CPUs on today's RAID cards will bottleneck the array. The PCI-E x8 interface can theoretically handle 2GB/s, so Serra was quite right in saying that "The limit is the processor".
    current 1.2ghz processors don't even come close to bottlenecking. you're comparing an ancient 400mhz processor to today's 1.2ghz processor? seriously?
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  22. #72
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
@Stevecs: I was wondering how long it was going to take for you to get in on this. Hopefully they'll at least listen to you (your sig is much more impressive hard-drive-wise than mine is).

    Quote Originally Posted by [XC] itznfb View Post
    current 1.2ghz processors don't even come close to bottlenecking. you're comparing an ancient 400mhz processor to today's 1.2ghz processor? seriously?
    The IOP333's an 800MHz processor, not a 400MHz one. And yeah... they can also be bottlenecked before a PCI-E x16 lane will be.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  23. #73
    Xtreme Member
    Join Date
    May 2007
    Posts
    206
    Quote Originally Posted by Serra View Post
    While it is certainly true that the faster your processor the more drives you could scale to... it is also my assertion that the overhead for RAID-0 and RAID-1 is trivially low. Scaling for me is more a result of the fact that software controlled add-on cards are generally crippled by their bus (either PCI or PCI-E x1), so it gets pointless after a few drives anyway.
    Yes but PCIe x1 has a 250MB/s ceiling. If you got a PCIe 4x or 8x software card, wouldn't this raise the ceiling to 1GB/s or 2GB/s respectively?

    Edit: I just read Steve's post. I guess what I'm trying to say is that, 1. if you have a software RAID card that is say 8x PCIe in a slot that is full bandwidth 8x, shouldn't you get close to full transfer speed assuming you have enough drives to bring it there?
If not, then what IS the bottleneck?

I'm assuming that your implementation, Serra, of 3-4 drives at 69MB/s max on a software raid card gives 69 * 4 drives = 276MB/s. If you had a PCIe x1 card then yeah, you're over the limit.

    ETlight

    P.S. thank you Serra and IanB for responding
    Last edited by Eternalightwith; 04-25-2008 at 11:49 AM.

  24. #74
    Xtreme Addict
    Join Date
    Oct 2004
    Location
    Boston, MA
    Posts
    1,448
    Quote Originally Posted by [XC] itznfb View Post
    current 1.2ghz processors don't even come close to bottlenecking. you're comparing an ancient 400mhz processor to today's 1.2ghz processor? seriously?
I'm not saying that 1.2GHz processors are bottlenecking current RAID arrays, but the processor is "the limit" so to speak. If you were to hypothetically keep increasing the throughput until the card maxed out, you would be limited by the CPU before the PCI-E interface or anything else. So Serra was right in saying that "The limit is the processor".

    File Server:
    Super Micro X8DTi
    2x E5620 2.4Ghz Westmere
    12GB DDR3 ECC Registered
    50GB OCZ Vertex 2
    RocketRaid 3520
    6x 1.5TB RAID5
    Zotac GT 220
    Zippy 600W

    3DMark05: 12308
    3DMark03: 25820

  25. #75
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post
    The IOP333's an 800MHz processor, not a 400MHz one. And yeah... they can also be bottlenecked before a PCI-E x16 lane will be.
    actually i referenced hijons post which is apparently incorrect as he said in his first test he used an IOP333 400mhz and he "upgraded" to the 800mhz version which eliminated the bottleneck.

    i couldn't find a 400mhz IOP333 processor on intels site.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

