Results 101 to 114 of 114

Thread: Adaptec vs Areca vs HighPoint

  1. #101
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post

    So I invite you, Napalm and itznfb - what do you say? For RAID-1/0, can we agree that - for a limited number of drives - software-based solutions* are the equal of, and can in fact be superior to, hardware-based solutions, at a fraction of the price?

    *Edit: (at least those which use add-on cards, not necessarily on-board junk)
    no you can't. you haven't factored in drive failure due to the cheap crappy card killing the drive. which happens often. why do you think software RAID cards aren't used in businesses? businesses find every way they can to save money. wouldn't they put software RAID cards in if they were just as good? well, they aren't, because of all the reasons i've already stated.

    and you said in another post that connecting 8 drives in raid 0 is asking for failure? why? what would make you think there would be a failure? maybe because you know software raid cards constantly fail? i have several systems and servers running between 8 and 16 drives in RAID0 and i've never had a failure on a 3ware card. i'm comfortable running in RAID0 because i know it isn't going to fail. a couple of the linux nas servers have been running for years. i have a few highpoint cards, and a couple promise cards that fail constantly with 2 drives. they don't run for more than a month without having to rebuild the array. they can't run RAID1 at all, every time i reboot there's a failure and the array needs to be verified and rebuilt.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  2. #102
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
    no you can't. you haven't factored in drive failure due to the cheap crappy card killing the drive. which happens often. why do you think software RAID cards aren't used in businesses? businesses find every way they can to save money. wouldn't they put software RAID cards in if they were just as good? well, they aren't, because of all the reasons i've already stated.

    and you said in another post that connecting 8 drives in raid 0 is asking for failure? why? what would make you think there would be a failure? maybe because you know software raid cards constantly fail? i have several systems and servers running between 8 and 16 drives in RAID0 and i've never had a failure on a 3ware card. i'm comfortable running in RAID0 because i know it isn't going to fail. a couple of the linux nas servers have been running for years. i have a few highpoint cards, and a couple promise cards that fail constantly with 2 drives. they don't run for more than a month without having to rebuild the array. they can't run RAID1 at all, every time i reboot there's a failure and the array needs to be verified and rebuilt.
    The drive controllers killing drives? *Killing* drives? What, like overvolting them? Sending them the xMurderDeathKill drive command? No, there's nothing special these things can do to kill drives. Sure, if your software is unstable you'll wreck the drives... but frankly, if your software is unstable it means your OS is unstable, which again renders the point moot.

    8 drives in RAID-0 is a hazard because any one drive failing takes out the array, so you're roughly multiplying your failure rate by 8. If your per-drive AFR were, say, 0.08, the array's AFR climbs to about 0.49 (1 - 0.92^8; the straight multiply-by-8 shortcut would say 0.64), and if that's not planning for a failure, I don't know what is.
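    If you want to sanity-check that arithmetic, here's a minimal sketch (my own illustration, assuming independent drive failures and the 8% per-drive AFR above; the function name is just for the example):

    def raid0_afr(per_drive_afr: float, n_drives: int) -> float:
        # RAID-0 loses the array if *any* member drive fails, so take the
        # complement of "every drive survives the year".
        return 1.0 - (1.0 - per_drive_afr) ** n_drives

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            print(f"{n} drives @ 8% AFR each -> array AFR ~ {raid0_afr(0.08, n):.0%}")
        # 8 drives @ 8% AFR each -> array AFR ~ 49%
        # (the straight multiply-by-8 shortcut says 64%, which overshoots,
        #  but either way an 8-drive RAID-0 is planning for a failure)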

    As to businesses not using them for reliability reasons:
    1. If your business is smart, it'll use RAID-5 or RAID-6 with a decent hardware card and so won't take more than a small performance hit
    2. Hardware cards generally provide more ports

    The only time to run RAID-0 or RAID-1 in a business is when the business is sufficiently small or, perhaps, for RAID-0 in some specialized server which is backed up regularly.

    I don't know what's wrong with your cards - have you checked the slots they're in on your motherboard? My cards have always worked just fine for me (well, except for compatibility with this mobo... but that's an issue with any storage card, grrr).


    Edit: I take it you concede the performance point then (except as it applies to longevity), since you didn't dispute any of the numbers or the review, nor find any commands your card can issue that mine can't, etc.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  3. #103
    Xtreme Guru
    Join Date
    Mar 2004
    Location
    steelcity.pa.usa
    Posts
    3,522
    Quote Originally Posted by Serra View Post

    I don't know what's wrong with your cards - have you checked out the slots they're on on your MB? My cards always worked just fine for me (well, excepting compatibility with this mobo... but that's an issue with any storage card, grrr).


    Edit: I take it you concede the performance point then (excepting as it applies to longevity), as you didn't mention it or dispute any numbers yet nor the review, nor found any commands your card can issue mine can't etc.
    when i say "for all", that includes that software cards do indeed have terrible performance compared to hardware. you can find hundreds of people on this forum who will give you examples of software raid cards killing their drives. it's kind of hard to diagnose what happened when the drive is dead. but you won't find any of these stories with hardware raid cards.

    many businesses use RAID0 configs on high IO clustered boxes. RAID5/6/1/0 are pretty much being phased out by RAID10 however.
    STARSCREAM
    3570k @ 4.6 GHz | Asus P8Z77-V LK | G.Skill F3-12800CL8D-8GBXM | ASUS GeForce GTX550 Ti
    Corsair Neutron GTX 240GB | Corsair Force GT 120GB | SK hynix 128GB | Samsung 830 64GB
    WD Black 640GB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i
    Corsair AX750 | CoolerMaster Hyper 212 | Antec P280 | Dell Ultrasharp U2410 | BenQ XL2420T
    ROCCAT Savu | Filco Majestouch-2 TKL w/Cherry MX Reds
    MEGATRON
    3770k @ 4.5GHz | Asus Sabertooth Z77 | G.Skill F3-12800CL8D-8GBXM
    SK hynix 128GB | Mushkin Enhanced Chronos 60GB | WD Red 3TB (4) | Seagate 7200rpm 3TB (2)
    WD Green 2TB (3) | Seagate 7200rpm 1TB | Dell Perc H310 xflashed to LSI 9211-8i (2)
    Corsair AX650 | Corsair H80i

  4. #104
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by [XC] itznfb View Post
    when i say "for all", that includes that software cards do indeed have terrible performance compared to hardware. you can find hundreds of people on this forum who will give you examples of software raid cards killing their drives. it's kind of hard to diagnose what happened when the drive is dead. but you won't find any of these stories with hardware raid cards.

    many businesses use RAID0 configs on high IO clustered boxes. RAID5/6/1/0 are pretty much being phased out by RAID10 however.
    I doubt your claim that you could find "hundreds" of people whose software controllers killed drives.

    Facts:
    1. Software cards aren't exactly overvolting drives (kind of hard, when they don't supply power)
    2. Your controller cannot kill a drive by issuing it malformed commands
    3. No commands exist which can themselves kill a drive
    4. Controller cards are extremely simple electrical devices. Their likelihood of failure is quite small (contrast this with hardware cards, which perform the same job but produce notably more heat).
    5. Hard drives fail regularly. They are in fact the most common source of failure. Saying it's the fault of the software card (but, of course, not a *hardware* card) is just superstition.

    You tell me where the fault is in my logic there. If there is an associated higher failure rate (which I'd doubt), I would say it could be because people who can afford hardware cards can generally afford things like higher quality PSUs and drives. Saying it's the fault of an electrical component which doesn't supply power to the device and can only pass along a limited range of predefined commands sounds a lot like superstition to me.

    I'm not saying you couldn't find RAID-0 used in businesses, but it's not exactly a best practice. You can get the same (well, n-1) read performance out of a RAID-5 array on a good hardware controller. If they're using RAID-0, they're cutting corners. That said, yes - I have also seen RAID-0 in use... and I have seen it done with software RAID cards as well, because hardware RAID isn't warranted. Of course, any time you see a RAID-0 array it should ideally be used either by a limited number of users who can afford to have it go down or in a large group of cheap systems run in a load-balanced manner that can gracefully handle the death of one component.
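    To make the "(well, n-1)" bit concrete, here's a rough sketch of ideal sequential read throughput (my own illustration - the drive count and per-drive figure are made up, and real arrays lose some of this to controller and bus overhead):

    def raid0_read(n_drives: int, per_drive_mbps: float) -> float:
        # Ideal RAID-0 streaming read: every spindle carries user data.
        return n_drives * per_drive_mbps

    def raid5_read(n_drives: int, per_drive_mbps: float) -> float:
        # Ideal RAID-5 streaming read: one stripe unit per row is parity
        # and gets skipped, so only n-1 drives' worth of data comes back.
        return (n_drives - 1) * per_drive_mbps

    if __name__ == "__main__":
        n, per_drive = 4, 65.0  # e.g. four drives at ~65MB/s sustained
        print(f"RAID-0: ~{raid0_read(n, per_drive):.0f}MB/s, "
              f"RAID-5: ~{raid5_read(n, per_drive):.0f}MB/s")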
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  5. #105
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    one last post on this..

    Quote Originally Posted by Serra View Post
    Edit: Yes, this was a typo. See post 67 for details. Should be about 3%. In fairness though, 4 drives in RAID-0 on add-on software-driven cards are bottlenecked by either a PCI or PCI-Ex1 bus anyway... so that's why you won't see more than that. Oh, and that was on an Opty 170 @ stock.
    so indeed youve edited your post.. its not 0% anymore now its about 3% ? lol


    Quote Originally Posted by Serra View Post
    It turns out there was some confusion here. The RAID-1 test Napalm did was on an HP, but the single drive test on an Areca. So take these conclusions with a grain of salt and head on over to post ~91ish for a new review.
    no confusion whatsoever, ive clearly stated what the benches are on and i posted what you asked for

    maybe you should take some salt to get rid of some of that bitterness


    not only youre not comparing the same hard drive.. different controllers.. different buses.. different mobos.. and LOL @ your 2007 hdtach result.. is that the best result hdtach/74gd raptor can muster?

    too bad i dont have a WD740GD raptor anymore..

    the only reason for the lower access time on the promise controller could be the lower latency of the pci bus

    same @ virtualrains results: lower latency

    just as hard raid controllers get faster/better so do onboard/soft raid controllers - 975x vs x38: 72MB/s vs 77MB/s

    you keep saying 600$.. both the areca1210 and hpt 3510 are 300$ controllers

    look, you got issues.. this has not much to do with controllers/hdd/raid/etc.. you got issues with others having better stuff than you.. you got problem with my qx6700 which i dont have anymore im all dual-core and you are the one with the quad-core.. funny.. you got problem with my raid cards.. you think im rich or smtg? i work my arse off to pay for all my pc hardware/software + everything ive got
    Last edited by NapalmV5; 04-26-2008 at 11:19 AM.

  6. #106
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by NapalmV5 View Post
    so indeed youve edited your post.. its not 0% anymore now its about 3% ? lol
    Good thing I put it in a clear red so that it's not like I tried to hide anything. Oh, and it wasn't 0% ever. Thanks for putting words in my mouth. And yeah 3% is about the worst you'd see on even an Opty 170 w/3-4 drives because you're going to get strangled by the PCI/PCI-Ex1 bus... there's no opportunity to see more than that.


    Quote Originally Posted by NapalmV5 View Post
    no confusion whatsoever, ive clearly stated what the benches are on and i posted what you asked for

    maybe you should take some salt to get rid of some of that bitterness
    I didn't say you were confused - but there was some confusion. I apologized for not noticing earlier that you used different cards. I made sure it was noted at the top of the first review so people wouldn't think otherwise. As for the later results though, well, they didn't make it look like your hardware was any better than mine. Maybe you need to give it some more of that magic juice... you still haven't explained what property of your card would make it faster.

    No bitterness, just trying to clear up a techno-myth.


    Quote Originally Posted by NapalmV5 View Post
    not only youre not comparing the same hard drive.. different controllers.. different buses.. different mobos.. and LOL @ your 2007 hdtach result.. is that the best result hdtach/74gd raptor can muster?
    Not a fair test? Let me review what I looked at:
    - Your results, using the same two hard drives, singly and in RAID-1, on both an Areca and an HP card. You also gave me results from your hard drive on your onboard controller. Excepting one test with write-back cache in RAID-1 on your HP card, all your single-drive results were extremely similar. Not "kind of" similar, nearly identical. Your tests on the RAID cards also showed exactly what one would expect (excepting the Areca RAID-1 access times, though had you been watching this section a little over a year ago, you would have seen that Arecas don't do as well with access time in RAID-1 as they should anyway). So really, no idea where the "bias" could come in here - they're *your* results, and they are consistent.
    - My results. I gave you results of one of my raptors (the other being identical) and both of my raptors in RAID-1. Access times dropped accordingly and read speeds remained rock steady - also as they should.

    The summary of it? Your hardware controller did not boost your hard drive read speeds in RAID-1 (as it would have had to, drive for drive, for it to be faster, since the read test just runs off a single disk), and neither did mine. Neither card hampered performance either. But in access times, mine decreased more than yours. I said it's because mine actually makes use of a better algorithm, and all you can say is "No"... not "No, mine makes use of the _____ algorithm, which should be similar" or any such argument.
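    For the record, this is the kind of "better algorithm" I mean - a minimal sketch, illustrative only and not any particular card's or driver's actual implementation: serve each read from whichever mirror's head is closer to the target, which is what shaves average access time in RAID-1:

    from dataclasses import dataclass

    @dataclass
    class Mirror:
        name: str
        head_lba: int = 0  # LBA the head was last left at

    def pick_mirror(mirrors: list[Mirror], target_lba: int) -> Mirror:
        # Shortest-seek-first between the copies: the expected seek
        # distance drops versus always reading from the primary disk.
        best = min(mirrors, key=lambda m: abs(m.head_lba - target_lba))
        best.head_lba = target_lba
        return best

    if __name__ == "__main__":
        pair = [Mirror("raptor0"), Mirror("raptor1")]
        for lba in (1_000_000, 50_000, 900_000, 60_000):
            print(lba, "->", pick_mirror(pair, lba).name)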

    Face it - the results don't lie.

    Oh, and thanks for the jab at my hard drive. Last resort of someone who knows he can't win using logic.


    Quote Originally Posted by NapalmV5 View Post
    the only reason for the lower access time on the promise controller could be the lower latency of the pci bus

    same @ virtualrains results: lower latency
    Lower latency

    You're saying that my PCI card has lower latency than your PCI-E based cards? My card, which sits on a bus that tops out at 133MB/s, experiences lower latency than your card, which gets a minimum of 250MB/s (PCI-E x1)? Excuse me, but how is my card, which can move half the data per second or less, experiencing less latency? Is your argument that your hardware-based cards increase latency by 0.5ms or more? I thought they were better because they had some magical property that made everything faster?

    With virtualrain you *maybe* had a bit of an argument, but I believe the NF4 SATA ports also sit on a PCI bus... which goes to the same PCI vs PCI-E question. Why would PCI-E devices be *slower*?

    Worse yet, you say that your access times are higher because my slower bus has less latency... ok, let's say that's true. Why does mine decrease in a way which is both relatively and absolutely better on RAID-1 than yours then? Maybe my bus runs even faster on RAID-1?

    The real nail in the coffin though is that I can post a benchmark of my raptors using my P5K Deluxe SATA ports with no difference in access time or read speed. From using an NF4 chipset to an ICHR9, PCI bus to PCI-E bus... same results.
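    And just to put the bus-latency claim in scale, a back-of-envelope sketch (my own numbers - assuming an ideal bus and a small 4KiB random read, which is roughly the scale of transfer an access-time test issues):

    def transfer_ms(num_bytes: int, bus_mb_per_s: float) -> float:
        # Time the payload spends on the wire at the bus's peak rate.
        return num_bytes / (bus_mb_per_s * 1e6) * 1e3

    if __name__ == "__main__":
        payload = 4 * 1024
        for name, bw in (("PCI @ 133MB/s", 133.0), ("PCI-E x1 @ 250MB/s", 250.0)):
            print(f"{name}: ~{transfer_ms(payload, bw):.3f}ms on the bus")
        # ~0.031ms vs ~0.016ms - hundredths of a millisecond either way,
        # which is noise next to 7-9ms of mechanical seek time and can't
        # explain a 0.5ms+ gap in measured access times by itself.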


    Quote Originally Posted by NapalmV5 View Post
    just as hard raid controllers get faster/better so do onboard/soft raid controllers - 975x vs x38: 72MBs/ vs 77MB/s
    You'll only see a difference if there's either something wrong with the controller or you're hitting some kind of bus limit. I can post consistent results across NF4, ICH9R, and Promise TX2300 (on different buses, too). Your results were also consistent to within 0.2ms access time and under 0.6MB/s read speed (excepting the one HP bench with write-cache on).

    Sure, differences *can* be found... but not on anything we looked at. Results are fair.


    Quote Originally Posted by NapalmV5 View Post
    you keep saying 600$.. both the areca1210 and hpt 3510 are 300$ controllers
    I haven't looked at controller prices in a long time, and the last time I did I think $600 was about the price of the one I had decided I wanted. Frankly no, I didn't look at the cost of those exact units. If I overstated it, I am sorry. I honestly had just been thinking back to the last price I had in mind when looking at some controllers for RAID-5.

    However, there is a critical statement to make here: overstating price does not affect the results we've clearly seen.


    Quote Originally Posted by NapalmV5 View Post
    look, you got issues.. this has not much to do with controllers/hdd/raid/etc.. you got issues with others having better stuff than you.. you got problem with my qx6700 which i dont have anymore im all dual-core and you are the one with the quad-core.. funny.. you got problem with my raid cards.. you think im rich or smtg? i work my arse off to pay for all my pc hardware/software + everything ive got
    [/QUOTE]

    Wow, and there's another personal attack. Looks like you're finally scraping the bottom of the barrel. I don't have problems with people having better stuff than me, I have problems with people telling others they need expensive hardware that would not benefit them in any way.

    As for having a problem with your QX6700 (which you apparently don't have anymore) - *you challenged me to beat your scores in a CPU-intensive benchmark which you had run on high-end hardware*. When I declined to even try, you told me it's because my controllers suck... and never seemed to get that it's because you used hardware both qualitatively and quantitatively better than what most people here have. You're part of an overclocking community - stop thinking that, regardless of setup, everyone should be able to get the same results if only they had better hard drive controllers. You know better.


    So face it: you said this was to be your last post because:
    1. You saw the results yourself and know yours showed no single-disk improvement versus your on-board (as seen in the read average)
    2. You say your HP was able to reduce your seek times, but saw that mine did more both relatively and absolutely (for reasons I have outlined)
    3. You have no arguments relating to operation or manufacture as to why my controller wouldn't be able to perform the jobs outlined in an equal manner
    4. You don't want to admit the above
    Last edited by Serra; 04-26-2008 at 12:21 PM.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  7. #107
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    the more you post the more bs you make up..

    if you dont know about latency.. get learned

    why last post? cause i got 24hr/day 7day/week job.. my job never ends.. i dont have all the time in world and im not gonna waste my time and nerves to argue with you about stupid controller/hdd/raid.. lifes too precious..

    peace be upon you..

    out

  8. #108
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by NapalmV5 View Post
    the more you post the more bs you make up..

    if you dont know about latency.. get learned

    why last post? cause i got 24hr/day 7day/week job.. my job never ends.. i dont have all the time in world and im not gonna waste my time and nerves to argue with you about stupid controller/hdd/raid.. lifes too precious..

    peace be upon you..

    out
    Sure. I post results and explanations for everything I say; you just say "get learned" when I ask why my older bus running at 133MB/s experiences less latency than yours running at a multiple of 250MB/s. Great, kthnx.


    Still have issues about the results before? Through the magic of pulling parts and BIOS flashing I've been able to get limited use of my Promise card on my P5k Deluxe (though I can't use my DVD drives while doing so... sigh). So I ran more tests: some with a raptor configured in JBOD, some with the raptors in RAID-1. Funny thing, my results are effectively the same (maybe .1MB/s difference in speed or some such, but benches are never the exact same twice).

    So *again* I will compare my results with my raptors/card to your results with your raptors/card. I won't compare them number for number because our hard drives are different (yours pull 77.4-78.3MB/s on any controller, including onboard; all I get is 64.9-65.0MB/s on any controller, including onboard), but if your card possesses magic juice, yours should beat mine in all relative cases. Let's find out. Scroll down to see the new results.
    Attached Thumbnails: HDTach_SingleRaptor_01.jpg, HDTach_RAID1Raptor_02.jpg
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  9. #109
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    So let's do a review, for a third time. Having already written it, I can tell you the results come out the same. Like the other reviews, this does not compare my numbers to yours absolutely, only relatively, because that is all that would be fair.

    Results:

    Napalm's Areca:

    1x Raptor on Areca:
    ------------------
    Average Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
    CPU: 0% [write cache on] / 0% [write cache off]
    Random Access: 8.1ms [write cache on] / 8.1ms [write cache off]
    Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]

    1x Raptor on Asus Maximus SE: (for comparison versus his 1x on Areca only)
    ------------------
    Avg Read: 77.3MB/s
    CPU: 2%
    Random Access: 8.1ms

    2x Raptor (RAID-1) on Areca:
    ------------------
    Average Read: 77.8MB/s
    CPU: 0%
    Random Access: 8.7ms


    My TX2300:

    1x Raptor on TX2300:
    ------------------
    Avg Read: 65.0MB/s
    CPU: 2%
    Random Access: 7.7ms

    2x Raptors on TX2300 (RAID-1):
    ------------------------------
    Avg Read: 64.7MB/s
    CPU: 1%
    Random Access: 6.8ms


    Differences:

    In Napalm's single-disk result differences:
    Avg Read: 78.15MB/s [Areca] vs 77.30MB/s [Onboard] - ~1.1% increase
    CPU: 0% [Areca] vs 2% [Onboard] - up to 2% more CPU on the onboard (clearly, more than 0% of resources were actually used)
    Random Access: 8.1ms [Areca] vs 8.1ms [Onboard] - 0% change

    In Napalm's RAID-1 array vs. the same disk on the same controller:
    Avg Read: 77.8MB/s [RAID-1] vs 78.15MB/s [Single] - ~0.45% decrease vs single
    CPU: 0% vs 0% - no change
    Random Access: 8.7ms [RAID-1] vs 8.1ms [Single] - ~7.4% higher (worse) vs single

    In my RAID-1 array vs. the same disk on the same controller:
    Avg Read: 64.7MB/s [RAID-1] vs. 65.0MB/s [Single] - ~0.46% decrease vs single
    CPU: 1% [RAID-1] vs. 2% [Single] - 1% lower vs single (we'll call it even; it's clearly on the 1-2% threshold)
    Random Access: 6.8ms [RAID-1] vs 7.7ms [Single] - ~11.7% lower (better) vs single
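    (If anyone wants to rerun that arithmetic, this is all it amounts to - a quick sketch over the numbers posted above:)

    def pct_change(single: float, raid1: float) -> float:
        # Relative change going from single-disk to RAID-1 on the same card.
        return (raid1 - single) / single * 100.0

    if __name__ == "__main__":
        results = {
            "Napalm avg read (MB/s)":  (78.15, 77.8),
            "Napalm access time (ms)": (8.1, 8.7),
            "My avg read (MB/s)":      (65.0, 64.7),
            "My access time (ms)":     (7.7, 6.8),
        }
        for label, (single, raid1) in results.items():
            print(f"{label}: {pct_change(single, raid1):+.1f}%")
        # Avg reads move well under 1% for both of us; access time goes
        # +7.4% (worse) on the Areca and -11.7% (better) on the TX2300.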


    Analysis

    Well, for single-disk performance, both of us see the same results whether our drives are on our RAID controllers or our onboard, so I think it's fair to say that for single-disk usage there's no difference in read speed or response. CPU can spike up to about 2% with soft solutions, though. Key words being "spike" and "up to". Another thing to notice here is that across ICH9R, NF4, and Promise controllers my single-disk read speeds remain constant, as do Napalm's over his Areca and Maximus SE (which may very well use ICH9 itself). Napalm rejects my ability to simply state that his newer-revision hard drives have a higher sustained read because "chipsets change things"... but I didn't see any change over three very different setups and neither did he over two (one hardware, one software, no less). The conclusion there as well is that yes, his hard drives simply are faster than mine.

    For RAID-1 performance... I'm the clear winner. I saw an average read drop no worse than his (and smaller in absolute terms), and I saw a large drop in seek times (versus his increase). It should be noted, though, that that's just smugness coming out... the speed differences for read times were 1% or less either way and are irrelevant. That Arecas have RAID-1 issues is a known phenomenon. Although he doesn't like me to bring it up, his HPT card did much better than the Areca, also seeing a drop in access time (though again, less in absolute and relative terms than mine).


    Conclusions:

    It pretty well follows what I've been saying. For simple operations, software-driven add-on controllers are the equal of hardware-based controllers.

    Further, though they probably *could*, the hardware-based controllers I have seen *do not* implement all the optimizations possible for RAID-1. This is evident from the fact that neither the Areca nor the HPT saw a relatively (or absolutely) better decrease in access times than mine... and mine were lower to begin with. The result is an actual advantage in disk seek performance for a software-driven solution versus a hardware one. It should be noted, however, that these optimizations are rarely seen in onboard controllers either. For onboard, just shoot for close-to-single-disk performance.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  10. #110
    Xtreme Addict
    Join Date
    May 2003
    Location
    Peoples Republic of Kalifornia
    Posts
    1,541
    I've pretty much come to the conclusion that it's pointless to spend $1k+ on a RAID card if you're only doing RAID-1 or RAID-0... or a single drive. The only benefits are if you are running 3+ drives.

    In HD Tune my two 74GB raptors average 133MB/s on my "crappy" NF4 Striker Extreme, with a minimum of 88.9MB/s and a maximum of 173.0MB/s.

    That being the case, spending $1k+ on a card that barely gives me any increase in these numbers is just flat out pointless.

    "If the representatives of the people betray their constituents, there is then no resource left but in the exertion of that original right of self-defense which is paramount to all positive forms of government"
    -- Alexander Hamilton

  11. #111
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    412
    Quote Originally Posted by Andrew LB View Post
    I've pretty much come to the conclusion that it's pointless to spend $1k+ on a RAID card if you're only doing RAID-1 or RAID-0... or a single drive. The only benefits are if you are running 3+ drives.

    In HD Tune my two 74GB raptors average 133MB/s on my "crappy" NF4 Striker Extreme, with a minimum of 88.9MB/s and a maximum of 173.0MB/s.

    That being the case, spending $1k+ on a card that barely gives me any increase in these numbers is just flat out pointless.
    that was essentially the end result of this exercise - well said (it applies in the $300-500 range too). get a velociraptor instead
    Last edited by swiftex; 04-28-2008 at 03:14 PM.

  12. #112
    Xtreme Guru adamsleath's Avatar
    Join Date
    Nov 2006
    Location
    Brisbane, Australia
    Posts
    3,803
    how many raid slices are possible for 2 disk raid0?
    can you set different stripe sizes for the slices?
    with intel matrix raid?
    Last edited by adamsleath; 04-28-2008 at 03:26 PM.
    i7 3610QM 1.2-3.2GHz

  13. #113
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    412
    sabrent $30 pci-e x1 card esata connection
    raid0 2x wd-640gb 64k
    raid1 - same
    Attached Thumbnails: sil r0-1.JPG, sil r1-1.JPG

  14. #114
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,565
    Why would someone go and spend money on a controller card for raid 0 and still use SATA instead of SAS?
    Thought that was the entire point of getting most controller cards for raid 0.
