
Thread: Adaptec vs Areca vs HighPoint


  1. #1
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Alright, here's a new "Results Summary" based on Napalm's new tests. A lot of it looks the same, but the sentences and numbers have been updated.


    Results we have seen thus far:

    Napalm:

    Single Drive on Areca
    -----------------------
    Avg Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
    CPU: 0%
    Random Access: 8.1ms
    Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]


    Single Drive on Onboard
    ----------------------
    Avg Read: 77.3MB/s
    CPU: 2%
    Random Access: 8.1ms
    Burst: 137.1MB/s


    RAID-1 on Areca
    -----------------
    Avg Read: 77.8MB/s
    CPU: 0%
    Random Access: 8.7ms
    Burst: 745.2MB/s

    RAID-1 on HP Card
    -----------------
    Avg Read: 88.5MB/s [write cache on] / 78.2MB/s [write cache off]
    CPU: 0%
    Random Access: 7.6ms [write cache on] / 7.4ms [write cache off]
    Burst: 1307.5MB/s [write cache on] / 135.7MB/s [write cache off]


    Myself

    Single drive on TX2300:
    Avg Read: 65MB/s
    CPU: 2%
    Random Access: 7.7ms
    Burst: 114.0MB/s

    RAID-1 on TX2300:
    Avg Read: 65MB/s
    CPU: 2%
    Random Access: 6.8ms
    Burst: 126.5MB/s


    My analysis:

    Avg Read Speed:
    My average read speed remained a steady 65-65.1MB/s across all my tests. Napalm's fluctuated somewhat with write caching on/off, but that's to be expected. His drives, on the hardware controllers, pulled an average of 78.16MB/s, and his onboard test gave 77.3MB/s. Realistically, those two numbers are close enough that the difference could just be run-to-run variation.

    The bottom line here is that in RAID-1, his controller did not confer any benefits to him over my software-driven add-on card.


    CPU Utilization:
    The battle here seems to be between mine at 2% and his being reported at 0%. I think we can agree that just running the program uses some resources, so if you'll allow, I'd like to argue it's my 2% (HD Tach) versus his 0.4% (his lowest HD Tune reading). If not, fine - his program runs for free. In either event, I'll add that his processor is at least a Q6700 - overclocked to 3.6GHz in one of his posts in another thread - whereas the one I'm using for this test bed is a dual-core Opty 170 at stock speeds that my wife has been using for the past year or so (and has loaded with garbage, but that's another story).

    Given the performance difference between a 3.6GHz Core 2 Quad and a 2.0GHz Opteron 170... I think we can agree that the utilization is next to nothing for both solutions.


    Seek time:
    In his testing, Napalm's access times went UP with the Areca card and down with the HP. His single-drive access time is 8.1ms. With the Areca, his RAID-1 array returned 8.7ms, and with the HP it averaged 7.5ms - an increase of 0.6ms in the first case and a decrease of 0.6ms in the second. In my results, I went from 7.7ms to 6.8ms, a decrease of 0.9ms.

    It is important to note that my improvement was better not only in absolute terms but also in relative terms: his decrease on the HP worked out to 7.41% of his average seek time, while mine was an 11.69% decrease.
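
    If anyone wants to check where those percentages come from, here's a quick sketch (plain Python, with the numbers pulled straight from the results above):

    # Relative seek-time improvement: (before - after) / before
    def relative_drop(before_ms, after_ms):
        return (before_ms - after_ms) / before_ms * 100

    print(f"{relative_drop(8.1, 7.5):.2f}%")  # Napalm's HP card: ~7.41%
    print(f"{relative_drop(7.7, 6.8):.2f}%")  # my TX2300:        ~11.69%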

    Why the differences? RAID-1 with any optimizations at all should reduce seek times. His HP clearly introduces some form of load balancing between the two drives to cut seek times, but seems to stop there. Areca cards, however, have never been known for their RAID-1 performance... exactly why, I'm not sure. Clearly some sort of code fix is needed. As for me, my software solution gives me the elevator seek algorithm. While it's *only* the second-best seeking algorithm we've yet found, it's still apparently pretty good.
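
    For anyone who hasn't run into it, here's a minimal sketch of the elevator (SCAN) idea - just an illustration in plain Python, not the Promise driver's actual code:

    # Elevator (SCAN) ordering: sweep the head in one direction, service every
    # pending request on the way, then reverse - instead of first-come-first-served.
    def elevator_order(pending_lbas, head_pos, moving_up=True):
        above = sorted(lba for lba in pending_lbas if lba >= head_pos)
        below = sorted((lba for lba in pending_lbas if lba < head_pos), reverse=True)
        return above + below if moving_up else below + above

    # head parked at LBA 500, requests scattered on both sides:
    print(elevator_order([100, 900, 450, 600, 520], 500))
    # -> [520, 600, 900, 450, 100]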


    The only conclusion to be drawn here is this: My software RAID implements algorithms or optimizations that are not seen on his hardware RAID card.


    Burst speeds:
    As for Napalm's results, well, they're obviously a function of his cards having actual RAM cache. 'nuff said there. Mine weren't great, but I was also bottlenecked by the PCI bus, so I can't really make a fair comparison.
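
    To put a rough number on that bottleneck (back-of-the-envelope only, assuming the theoretical 133MB/s ceiling of a 32-bit/33MHz PCI slot):

    # How close my burst result sits to the plain PCI ceiling
    pci_ceiling_mb_s = 133       # theoretical 32-bit/33MHz PCI bandwidth
    measured_burst_mb_s = 126.5  # my TX2300 RAID-1 burst from above
    print(f"{measured_burst_mb_s / pci_ceiling_mb_s:.0%} of the bus ceiling")
    # -> 95% - the card is already scraping the bus limit, so a controller
    #    with onboard RAM cache will always win this particular test.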



    Final thoughts:
    Well, I hope I've demonstrated the following:
    1. Sustained read speed: Napalm's sustained reads were faster, but so were his drives. His hardware card did not improve speeds noticeably (0.86MB/s). Hardware cards offer no speed advantage here.
    2. CPU utilization: With comparable CPUs, this is negligible either way.
    3. Seek times: My card was the clear winner, both absolutely and relatively (a 0.9ms / 11.69% drop versus either a 0.6ms / 7.41% drop [HP] or an increase [Areca]).
    4. Burst speeds: RAM cache versus a PCI bus... no contest there.


    So I invite you, Napalm and itznfb - what do you say? For RAID-1/0, can we agree that - for a limited number of drives - software-based solutions* are the equal of, and can in fact be superior to, hardware-based solutions at a fraction of the price?

    *Edit: (at least those which use add-on cards, not necessarily on-board junk)

    With that said, hardware cards do certainly provide:
    - The ability to migrate to RAID-5/6
    - Additional cache, which can be a benefit in *some* desktop usage patterns
    - The ability to scale further (software-based cards are generally limited to the PCI and PCI-E x1 buses)
    Last edited by Serra; 04-25-2008 at 07:33 PM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  2. #2
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post

    So I invite you, Napalm and itznfb - what do you say? For RAID-1/0, can we agree that - for a limited number of drives - software-based solutions* are the equal of, and can in fact be superior to, hardware-based solutions at a fraction of the price?

    *Edit: (at least those which use add-on cards, not necessarily on-board junk)
    no you can't. you haven't factored in drive failure due to the cheap crappy card killing the drive. which happens often. why do you think software RAID cards aren't used in businesses? businesses find every way they can to save money. wouldn't they put software RAID cards in if they were just as good? well, they aren't, because of all the reasons i've already stated.

    and you said in another post that connecting 8 drives in raid 0 is asking for failure? why? what would make you think there would be a failure? maybe because you know software raid cards constantly fail? i have several systems and servers running between 8 and 16 drives in RAID0 and i've never had a failure on a 3ware card. i'm comfortable running in RAID0 because i know it isn't going to fail. a couple of the linux nas servers have been running for years. i have a few highpoint cards, and a couple promise cards that fail constantly with 2 drives. they don't run for more than a month without having to rebuild the array. they can't run RAID1 at all, everytime i reboot there's a failure and the array needs to be verified and rebuilt.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  3. #3
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
    no you can't. you haven't factored in drive failure due to the cheap crappy card killing the drive. which happens often. why do you think software RAID cards aren't used in businesses? businesses find every way they can to save money. wouldn't they put software RAID cards in if they were just as good? well, they aren't, because of all the reasons i've already stated.

    and you said in another post that connecting 8 drives in raid 0 is asking for failure? why? what would make you think there would be a failure? maybe because you know software raid cards constantly fail? i have several systems and servers running between 8 and 16 drives in RAID0 and i've never had a failure on a 3ware card. i'm comfortable running in RAID0 because i know it isn't going to fail. a couple of the linux nas servers have been running for years. i have a few highpoint cards, and a couple promise cards that fail constantly with 2 drives. they don't run for more than a month without having to rebuild the array. they can't run RAID1 at all, everytime i reboot there's a failure and the array needs to be verified and rebuilt.
    The drive controllers killing drives? *Killing* drives? What, like overvolting them? Sending them the xMurderDeathKill drive command? No, there's nothing special these things can do to kill drives. Sure, if your software is unstable you'll wreck the drives... but frankly, if your software is unstable it means your OS is unstable, which renders the point moot anyway.

    8 drives in RAID-0 is a hazard because any single drive failure takes out the array - to a first approximation, you multiply your failure rate by 8. If each drive's AFR were, say, 0.08, the array's chance of failing within a year climbs to roughly 50%, and if that's not planning for a failure, I don't know what is.
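
    If you want the exact math instead of the rule of thumb, here's a quick sketch (plain Python; the 0.08 per-drive AFR is just an illustrative number, not a measured one):

    # A RAID-0 array dies if ANY member drive dies, so the array's annual
    # failure rate is 1 minus the probability that every drive survives.
    per_drive_afr = 0.08   # illustrative annual failure rate per drive
    drives = 8
    array_afr = 1 - (1 - per_drive_afr) ** drives
    print(f"{array_afr:.2f}")  # -> 0.49, i.e. roughly a coin flip per year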

    As to businesses not using them for reliability reasons:
    1. If your business is smart, it'll use RAID-5 or RAID-6 with a decent hardware card and so won't take more than a small performance hit
    2. Hardware cards generally provide more ports

    The only time to run RAID-0 or RAID-1 in a business is when the business is sufficiently small or, perhaps, for RAID-0 in some specialized server that is backed up regularly.

    I don't know what's wrong with your cards - have you checked the slots they're in on your motherboard? My cards have always worked just fine for me (well, except for compatibility with this mobo... but that's an issue with any storage card, grrr).


    Edit: I take it you concede the performance point then (except as it applies to longevity), since you didn't mention it, dispute any of the numbers or the review, or find any commands your card can issue that mine can't, etc.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  4. #4
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post

    I don't know what's wrong with your cards - have you checked the slots they're in on your motherboard? My cards have always worked just fine for me (well, except for compatibility with this mobo... but that's an issue with any storage card, grrr).


    Edit: I take it you concede the performance point then (except as it applies to longevity), since you didn't mention it, dispute any of the numbers or the review, or find any commands your card can issue that mine can't, etc.
    when i say for all that includes that software cards do indeed have terrible performance compared to hardware. you can find hundreds of people on this forum that will give you examples of software raid cards killing their drives. its kind of hard to diagnose what happened when the drive is dead. but you won't find any of these stories with hardware raid cards.

    many business use RAID0 configs on high IO clustered boxes. RAID5/6/1/0 are pretty much being phased out by RAID10 however.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  5. #5
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
    when i say for all that includes that software cards do indeed have terrible performance compared to hardware. you can find hundreds of people on this forum that will give you examples of software raid cards killing their drives. its kind of hard to diagnose what happened when the drive is dead. but you won't find any of these stories with hardware raid cards.

    many business use RAID0 configs on high IO clustered boxes. RAID5/6/1/0 are pretty much being phased out by RAID10 however.
    I doubt your claim that you could find "hundreds" of people whose software controllers killed drives.

    Facts:
    1. Software cards aren't exactly overvolting drives (kind of hard, when they don't supply power)
    2. Your controller cannot kill a drive by issuing it malformed commands
    3. No commands exist which can themselves kill a drive
    4. Controller cards are extremely simple electrical devices. Their likelihood of failing is quite small (contrast this with hardware cards, which perform the same job but produce notably more heat).
    5. Hard drives fail regularly. They are in fact the most common source of failure. Saying it's the fault of the software card (but, of course, not a *hardware* card) is just superstition.

    You tell me where the fault is in my logic there. If there is an associated higher failure rate (which I doubt), I would say it could be because people who can afford hardware cards can generally afford things like higher-quality PSUs and drives. Saying it's the fault of an electrical component which doesn't supply power to the device and can only pass along a limited range of predefined commands sounds a lot like superstition to me.

    I'm not saying you couldn't find RAID-0 used in businesses, but it's not exactly a best practice. You can get the same (well, n-1) read performance out of a RAID-5 array on a good hardware controller. If they're using RAID-0, they're cutting corners. That said, yes - I have also seen RAID-0 in use... and I have seen it done with software RAID cards as well, because hardware wasn't warranted. Of course, any time you see a RAID-0 array, it should ideally either be used by a limited number of users who can afford to have it go down, or in a large number of cheap systems run in a load-balanced manner that can gracefully handle the death of one node.
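
    For the read-performance point above, here's the back-of-the-envelope I'm working from - ideal scaling only, real controllers fall short of these ceilings, and the 78MB/s single-drive figure is just Napalm's number reused for illustration:

    # Ideal sequential-read ceilings per RAID level (ignores controller, bus
    # and parity-rotation overhead - rules of thumb, not measurements).
    def ideal_read_mb_s(single_drive_mb_s, n_drives, level):
        if level == "raid0":
            return single_drive_mb_s * n_drives
        if level == "raid5":
            return single_drive_mb_s * (n_drives - 1)  # the "n-1" mentioned above
        if level == "raid1":
            return single_drive_mb_s                   # single sequential stream
        raise ValueError(level)

    print(ideal_read_mb_s(78, 4, "raid0"))  # 312
    print(ideal_read_mb_s(78, 4, "raid5"))  # 234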
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  6. #6
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    one last post on this..

    Quote Originally Posted by Serra View Post
    Edit: Yes, this was a typo. See post 67 for details. Should be about 3%. In fairness though, 4 drives in RAID-0 on add-on software-driven cards are bottlenecked by either a PCI or PCI-Ex1 bus anyway... so that's why you won't see more than that. Oh, and that was on an Opty 170 @ stock.
    so indeed youve edited your post.. its not 0% anymore now its about 3% ? lol


    Quote Originally Posted by Serra View Post
    It turns out there was some confusion here. The RAID-1 test Napalm did was on an HP, but the single drive test on an Areca. So take these conclusions with a grain of salt and head on over to post ~91ish for a new review.
    no confusion whatsoever, ive clearly stated what the benches are on and i posted what you asked for

    maybe you should take some salt to get rid of some of that bitterness


    not only youre not comparing the same hard drive.. different controllers.. different buses.. different mobos.. and LOL @ your 2007 hdtach result.. is that the best result hdtach/74gd raptor can muster?

    too bad i dont have a WD740GD raptor anymore..

    the only reason for the lower access time on the promise controller could be the lower latency of the pci bus

    same @ virtualrains results: lower latency

    just as hard raid controllers get faster/better so do onboard/soft raid controllers - 975x vs x38: 72MB/s vs 77MB/s

    you keep saying 600$.. both the areca1210 and hpt 3510 are 300$ controllers

    look, you got issues.. this has not much to do with controllers/hdd/raid/etc.. you got issues with others having better stuff than you.. you got problem with my qx6700 which i dont have anymore im all dual-core and you are the one with the quad-core.. funny.. you got problem with my raid cards.. you think im rich or smtg? i work my arse off to pay for all my pc hardware/software + everything ive got
    Last edited by NapalmV5; 04-26-2008 at 11:19 AM.

  7. #7
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by NapalmV5 View Post
    so indeed youve edited your post.. its not 0% anymore now its about 3% ? lol
    Good thing I put it in a clear red so that it's not like I tried to hide anything. Oh, and it wasn't 0% ever. Thanks for putting words in my mouth. And yeah 3% is about the worst you'd see on even an Opty 170 w/3-4 drives because you're going to get strangled by the PCI/PCI-Ex1 bus... there's no opportunity to see more than that.


    Quote Originally Posted by NapalmV5 View Post
    no confusion whatsoever, ive clearly stated what the benches are on and i posted what you asked for

    maybe you should take some salt to get rid of some of that bitterness
    I didn't say you were confused - but there was some confusion. I apologized for not noticing earlier that you used different cards. I made sure it was noted at the top of the first review so people wouldn't think otherwise. As for the later results, well, they didn't make it look like your hardware was any better than mine. Maybe you need to give it some more of that magic juice... you still haven't explained what property of your cards would make them faster.

    No bitterness, just trying to clear up a techno-myth.


    Quote Originally Posted by NapalmV5 View Post
    not only youre not comparing the same hard drive.. different controllers.. different buses.. different mobos.. and LOL @ your 2007 hdtach result.. is that the best result hdtach/74gd raptor can muster?
    Not a fair test? Let me review what I looked at:
    - Your results using the same two hard drives, singly and in RAID-1 on both an Areca and an HP card. You also gave me results from your hard drive using your onboard controller. Except for one test with write-back cache in RAID-1 on your HP card, all your single-drive results were extremely similar. Not "kind of" similar - nearly identical. Your tests on RAID cards also showed exactly what one would expect (except for the Areca RAID-1 access times, though had you been watching this section a little over a year ago, you would have seen that Arecas don't do as well with access times on RAID-1 as they should anyway). So really, I have no idea where the "bias" could come in here - they're *your* results, and they are consistent.
    - My results. I gave you results of one of my raptors (the other being identical) and both of my raptors in RAID-1. Access times dropped accordingly and read speeds remained rock steady - also as they should.

    The summary of it? Your hardware controller did not boost your hard drive read speeds in RAID-1 (as it would have had to in order to be faster drive-for-drive, since the read test effectively runs off a single drive), and neither did mine. Neither card hampered performance either. But in access times, mine decreased more than yours. I said it's because mine actually makes use of a better algorithm, and all you can say is "No"... not "No, mine makes use of the _____ algorithm, which should be similar" or any such argument.

    Face it - the results don't lie.

    Oh, and thanks for the jab at my hard drive. Last resort of someone who knows he can't win using logic.


    Quote Originally Posted by NapalmV5 View Post
    the only reason for the lower access time on the promise controller could be the lower latency of the pci bus

    same @ virtualrains results: lower latency
    Lower latency?

    You're saying that my PCI card has lower latency than your PCI-E based cards? My card, which runs at a maximum of 133MB/s, experiences lower latency than your card, which runs at a minimum of 250MB/s (PCI-E x1)? Excuse me, but how is my card, which moves half the data per second or less, experiencing less latency? Is your argument that your hardware-based cards increase latency by 0.5ms or more? I thought they were better because they had some magical property that made everything faster?

    With virtualrain you *maybe* had a bit of an argument, but I believe the NF4 SATA ports also sit on a PCI bus... which goes to the same PCI vs PCI-E question. Why would PCI-E devices be *slower*?

    Worse yet, you say that your access times are higher because my slower bus has less latency... ok, let's say that's true. Why does mine decrease in a way which is both relatively and absolutely better on RAID-1 than yours then? Maybe my bus runs even faster on RAID-1?

    The real nail in the coffin, though, is that I can post a benchmark of my raptors using my P5K Deluxe SATA ports with no difference in access time or read speed. From an NF4 chipset to an ICH9R, PCI bus to PCI-E bus... same results.


    Quote Originally Posted by NapalmV5 View Post
    just as hard raid controllers get faster/better so do onboard/soft raid controllers - 975x vs x38: 72MB/s vs 77MB/s
    You'll only see a difference if there's either something wrong with the controller or you're hitting some kind of bus limit. I can post consistent results across NF4, ICH9R, and the Promise TX2300 (on different buses, too). Your results were also consistent to within 0.2ms of access time and under 0.6MB/s of read speed (except for the one HP bench with write-back cache on).

    Sure, differences *can* be found... but not on anything we looked at. Results are fair.


    Quote Originally Posted by NapalmV5 View Post
    you keep saying 600$.. both the areca1210 and hpt 3510 are 300$ controllers
    I haven't looked at controller prices in a long time, and the last time I did I think $600 was about the price of the one I had decided I wanted. Frankly no, I didn't look at the cost of those exact units. If I overstated it, I am sorry. I honestly had just been thinking back to the last price I had in mind when looking at some controllers for RAID-5.

    However, there is a critical statement to make here: overstating price does not affect the results we've clearly seen.


    Quote Originally Posted by NapalmV5 View Post
    look, you got issues.. this has not much to do with controllers/hdd/raid/etc.. you got issues with others having better stuff than you.. you got problem with my qx6700 which i dont have anymore im all dual-core and you are the one with the quad-core.. funny.. you got problem with my raid cards.. you think im rich or smtg? i work my arse off to pay for all my pc hardware/software + everything ive got

    Wow, and there's another personal attack. Looks like you're finally scraping the bottom of the barrel. I don't have problems with people having better stuff than me, I have problems with people telling others they need expensive hardware that would not benefit them in any way.

    As for having a problem with your QX6700 (which you apparently don't have anymore) - *you challenged me to beat your scores in a CPU-intensive benchmark which you had run on high-end hardware*. When I declined to even try, you told me it's because my controllers suck... and never seemed to get that it's because you used hardware both qualitatively and quantitatively better than what most people here have. You're part of an overclocking community - you know better than to think that, regardless of setup, everyone should be able to get the same results if only they had better hard drive controllers.


    So face it: you said this was to be your last post because:
    1. You saw the results yourself and know yours showed no single-disk improvement versus your onboard (as seen in the average read)
    2. You say your HP was able to reduce your seek times, but you saw that mine dropped more, both relatively and absolutely (for the reasons I outlined)
    3. You have no arguments, relating to either operation or manufacture, as to why my controller shouldn't be able to do the jobs outlined equally well
    4. You don't want to admit the above
    Last edited by Serra; 04-26-2008 at 12:21 PM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point
