For RAID 0, which of these companies provides the fastest SATA II controller card under $500?
I've read some favorable reviews of the HP 3520 but want to know what the others offer.
I hear good things about Areca.
They seem to be the most recommended.
I could be wrong, but you won't see huge gains from any of those controllers if you're just running RAID0. Intel ICH9 would even be ok.
If only for RAID0=not worth the $$$
This isn't directed at the poster - it's more a rant directed at all the people on these forums who have promoted this bad idea, you know who you are - but....
Why do people actually buy hardware add-on cards for RAID-0? Why on earth do others advocate it?
Let me spell it out plainly:
There is *nothing* that a hardware add-on card can do to improve performance with RAID-0. It can provide on-board cache, which can be a benefit to *some* desktop users, but unless your motherboard-based RAID solution is garbage (i.e. NF4), you will not see any noticeable improvement. RAID-0 has no calculations to perform and no optimizations that can be made. Even the interrupts it does generate *must* be sent to the CPU and cannot be offloaded to the card.
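To put a finer point on "no calculations": here's a minimal sketch (illustrative only, not any particular controller's firmware) of the only decision a RAID-0 implementation has to make per request - mapping a logical block to a disk and an offset, which is nothing but integer arithmetic.
Code:
# Illustrative sketch of RAID-0 striping: a logical block maps to a
# (disk, offset) pair with plain integer arithmetic - nothing to offload.

def raid0_map(lba: int, num_disks: int, stripe_blocks: int) -> tuple[int, int]:
    """Return (disk_index, block_on_disk) for a logical block address."""
    stripe_index, block_in_stripe = divmod(lba, stripe_blocks)
    disk = stripe_index % num_disks                      # which member disk holds this stripe unit
    block_on_disk = (stripe_index // num_disks) * stripe_blocks + block_in_stripe
    return disk, block_on_disk

# Example: 4 disks, 64-block stripe units
print(raid0_map(lba=300, num_disks=4, stripe_blocks=64))  # -> (0, 108)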
Now, if you still want a RAID card, go get a Promise software-based card. It'll cost you <$80 and it'll do as much as the hardware-based card with no loss in performance. RAID-1 performance is actually even improved versus most hardware cards.
*waits for all those posters who bought $600 Areca cards for RAID-0 to start chiming in, knowing they won't address anything I said... they never do... *
Edit: If you have the money to spend though, then I'd look long and hard at a RAID-5 array using a $500 card. Performance will be roughly comparable to RAID-0 with one fewer disk. Considering the tradeoff is that your data is secured against a single disk failure, I think it's a good deal. It doesn't do anything against corruption (which you may very well see from a failing hard drive), but that's another matter.
I agree exactly with the reasons you have for the rant. :D But there are three reasons why someone (me for instance) would do exactly that...
The first is motherboard incompatibilities with certain disks. The particular bugbear I have in mind is the P5K Premium, a board that is still a killer except for the well-known problems with RAID and certain Samsung/WD disks, which are still not definitively fixed in later hardware/BIOS revisions for the board.
The second is the (related) ability to take the card + attached disk arrays and simply slot it into a brand new mainboard without having to worry about whether the Southbridge will read the disk arrays correctly. It likely will between Intel ICH versions, but what if you decide to switch to an SLI mainboard with an nForce chipset? Having a RAID card insulates you from that potential problem when upgrading, so it (and the working disks/installation) can carry through many builds, giving good value for the money spent.
The more expensive the card, the better your future options too: you may not want or need RAID5 today, and a mix of RAID0 and RAID1 may do nicely, but you have the headroom if you do wish to upgrade later, and you aren't limited by the onboard Southbridge's capabilities.
The third is the fact that if you are using SSDs or similar solutions, the Intel ICH has a bug that caps throughput and IIRC sometimes causes boot-up issues. Using an offboard hardware controller avoids the bandwidth cap even if for all other intents and purposes it operates no differently from the Southbridge RAID. Anyone with or intending to play with SSDs will therefore need an offboard controller even for RAID0 to make full use of the silly money they've spent. :p: (Yeah, speaking as a 2x i-RAM owner...)
I think you also have to throw in the question of reliability. The P5K Premium debacle shows that mainboards (perhaps from some manufacturers more than others) are not terribly reliable: they are prone to manufacturing problems and design flaws, and there is also a lack of commitment to fixing bugs in "non-essential" areas of the BIOS or hardware, such as onboard RAID, which is basically added value on top of the board's main function. I don't know if there are any comparative reviews of the RMA rate or user satisfaction between RAID cards and mobos, but anecdotally on this forum you generally hear people swearing BY their Arecas and AT their Asuses... :yepp:
The bottom line for this argument, it seems to me, is that RAID card manufacturers make a profit on their ability to make good hardware, and they compete intensively in a specialised market for this function only, so they have an incentive to get it right. Witness the recent Areca firmware fix for the X38 chipset problems with their cards: a little late, but certainly welcome for their users. Mainboard manufacturers are just bundling an Intel add-on to their product, so it is not the PRIME focus of their business and (as has been shown) their concern for its fitness has been dubious at best. Why are nVidia chipsets STILL destroying data on SATA drives? They can get away with that because people buy the chipset for SLI and know they can work around it. The specialised RAID hardware companies would die in months if they released a product like that.
So despite your unarguable technical objections, I respectfully submit there can still be very good reasons for preferring a RAID card over an onboard solution, in certain situations, however dumb the processing being done.
software based RAID = fail
onboard RAID = fail
3Ware > Areca > Adaptec > Highpoint > Promise
I have read your entire argument, and accept only one statement in it right now - that hardware controllers let you migrate to RAID-5/6. Please allow me to explain myself.
Concerning the first (P5K Premium compatibility issues) - issues with certain brands of hard drives? I'll want to see some proof of this; I've never heard of issues with different brands. I've never actually even heard of issues with its on-board RAID offerings.
Concerning the second - I can do that with my software RAID card, at a fraction of the cost. I paid a high price (for this item) of $69.00 and have better RAID-1 results and equal RAID-0 results compared to those who bought hardware cards (minus their cache, obviously), plus the ability to move my arrays as well.
Concerning the third - SSD's? C'mon, who owns one? Who can afford to? But OK, we'll look ahead into the future, by about 5 years. Frankly I'm not convinced any hardware cards out right now are really right for the speeds SSDs will have by the time we can afford them either. Sure they can decode RAID-5 off a regular disk just fine, but it has a lot of access time to do that in... with SSDs, that luxury is gone. If you want to do a lot of small file work on SSDs, you're getting a new card anyway.
Reliability? Nothing wrong with an add-on software RAID card. For that matter, I don't think there's a lot wrong with on-board RAID solutions, stability-wise anyway. But because we like to screw up our boards so much here, I'll concede those to you in this environment. I will not concede software RAID cards though.
So my bottom line is this:
My best practice for RAID1/0 would be to suggest an add-on SOFTWARE RAID card. It's still a piece of hardware you plug in, it just requires some software. Going to RAID-5? By all means, get the more expensive one. But if you're not, don't waste the money.
^ You'll note I did suggest the poster consider RAID-5 if he has the money for a card in my above post too...
Getting the same performance out of a $600 card that you could have gotten for free = Fail too.
Realizing that with my $70 software RAID I can take advantage of RAID-1 seek optimizations you can't with your $600 solution? Priceless.
I had a similar question as the OP.
What type of card would be required for 5+ hard drives in a RAID 0 array? Most on-board raid solutions can't support that many drives. (loss of data acceptable) :shrug:
I can do an elevator seek on RAID-1, can you? Or do you just seek per job randomly assigned to each disk, regardless of which has a head closer to the data?
$70 card FTW
Edit: I invite you to find any way in which your hardware card can perform an action my card cannot. Once you look into it and realize how it works, you'll see there really isn't anything RAID cards do for RAID 1 or 0.
i couldnt live without raid0
gimme raid0 or gimme death!
intel matrix: soft - never get 100% performance out of the hdd +
uses way more cpu%, especially the more drives used
iop/etc controllers: hard - less cpu% + takes any hdd to its 100% performance
I hate onboard raid as much as I hate onboard audio. AARRRRGHH! I will never give up my Areca!!! lol
What kind of CPU are you using? I've never even seen 1% utilization, even on arrays of 4 drives, and I have never seen my results fall below what is seen on Areca cards... NEVER.
Edit: Yes, this was a typo. See post 67 for details. Should be about 3%. In fairness though, 4 drives in RAID-0 on add-on software-driven cards are bottlenecked by either a PCI or PCI-Ex1 bus anyway... so that's why you won't see more than that. Oh, and that was on an Opty 170 @ stock.
Edit: Last time I could do testing on my card was with a s939 Opty 170 too... stupid add-on cards don't stupid work on my stupid P5K Deluxe... grrr..
I'm not being ultra pro-onboard here people, but full-on hardware? I want to see someone give me *results*, not superstition. Fact is hardware is only all that when it's, you know, doing something.
dual-cores
0% @ 4x raid0? you want to edit your post?
there's no superstition
i've posted many dvdshrink/nero results on areca 1210/1230
here you go,
http://www.xtremesystems.org/forums/...d.php?t=126594
try and match that with a soft raid or single drive
Hmm... tried:
Google (using keywords P5K Premium WD640 compatibility) - no results
Asus Forums - Just had to go back a few days, found nothing... couldn't find a search bar though
XS Forums - I found a few pages, all with answers that got people working
Maybe you can find something I can't? :shrug:
Still don't see how this would be an improvement versus a software-driven add-on card in that way though, unless you'll also state that software-driven add-on cards won't work either.
Nope, no reason to edit my post. Fact is, RAID-0/1 just isn't processor intensive. I hate to break it to you, but the interrupts I get from it are the same ones you get too, even with your hardware. Why? Because after the simple call, which goes out to the add-on card, all there is to return is something saying "I gots the datas", which has to go to the CPU anyway.
I would like to take up that challenge, but my P5K Deluxe is kind of teh crappy at working with add-on cards (and as a result I have gotten rid of all but 2 of my hard drives, some to external storage, some to other computers). I think it's very solvable without though - you find something your card does for RAID-0 or 1 that mine *can't*, or that mine can only do with greater stress in some limiting way, and I'll concede the point. Mind you, even if that were the case - and it is not - the difference would be AT BEST 1-2% (at the cost of $500-$600).
Edit: Plus, I notice you're using a 400x9 kenty in that post entitled "kenty power"... sadly, I'm hitting 3.0 - 3.2 *tops*... sigh.
Edit II: If someone wants to prove Areca RAID-1 performance is better, show me some results showing it taking seek times down >10% on random reads versus a single disk
well first of all, in RAID0 software/onboard RAID controllers don't scale. they'll increase in performance up to about 3 drives and then drop off sharply with each additional drive, whereas a $300-$600 8-port 3ware or areca will take all 8 drives and continue to spank the hell out of any software/onboard raid controller.
second, software/onboard raid controllers are worthless when it comes to reliability, which is kind of the point of raid.
third, any decent dedicated hardware raid card beats any software/onboard raid in performance drive for drive, and scaled upwards.
fourth, i could be wrong, but as far as i know the term elevator seek is just a data handling scheme coined by promise and used in their software raid products. every raid solution has its own data handling schemes. most mid-to-high end raid cards in raid 1 will write/read data from the first drive available in such a way that improves read/write speeds by 10-20%. every raid solution does this, and most mid-to-high end cards have a few different data handling options to choose from for raid 1. sounds like you're saying you're stuck with what your software card gave you.
it's not the fact that software/onboard can perform the same functions, it's the fact that hardware raid cards perform those functions better, faster, and more reliably.
edit: for the record, software/onboard raid is very CPU intensive
this is just a lie.
Quote:
Originally Posted by Serra
I'm not trying to argue with you about which card or type of raid is best. Everyone is entitled to their opinion, which makes XS a great place for advice and information.
I'm only telling you that the board has problems. The problem is that some boards work and some don't. The information is scattered, but it's there. I just would hate for others to have problems like me.
Not saying that newegg reviews are worth anything, but a quick google search turned this up. As you can clearly read, some have problems and some don't. I do.
Newegg
Link
XS
Here
Another
My post
Yet it works here
Asus
Link
In my responses, please keep in mind that I'm defending software-driven add-on cards, not onboard, unless specified.
Now that is true. Scaling can indeed be an issue, especially with onboard. Software add-on, less so... but still, yes. I never said it wasn't.
Why? Because they don't generate as much heat as your Areca? Perhaps you feel that if a drive fails in a RAID-0 array, your data is somehow less lost? What point of failure are we talking about? If my OS becomes corrupted I can't use mine, that's true... but if your OS is on your array, you're not exactly using yours either. RAID is not a solution to corruption. So... where is this reliability issue coming from?
Scaling, yes. Again, no question. Drive for drive? I don't think that's necessarily the case. Onboard can be flaky with performance for sure, but aside from something like using a PCI bus to limit yourself with an add-on card... single drive performance for me has always been the same as anyone's Areca results. What part of your magical hardware makes your same drive go faster?
Quote:
third, any decent dedicated hardware raid card beats any software/onboard raid in performance drive for drive, and scaled upwards.
You are, in fact, wrong. Check it out on Wikipedia.
Quote:
fourth, i could be wrong, but as far as i know the term elevator seek is just a data handling scheme coined by promise and used in their software raid products. every raid solution has its own data handling schemes. most mid-to-high end raid cards in raid 1 will write/read data from the first drive available in such a way that improves read/write speeds by 10-20%. every raid solution does this, and most mid-to-high end cards have a few different data handling options to choose from for raid 1. sounds like you're saying you're stuck with what your software card gave you.
http://en.wikipedia.org/wiki/Elevator_algorithm
I strongly suspect this will make you say "Oh" and concede that maybe - just maybe - by virtue of the fact that I have the flexibility of software I could handle tasks like seeks in RAID-1 with a little more finesse.
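For anyone following the link, this is roughly what the elevator (SCAN) idea boils down to in code - a toy sketch of ordering a queue of requests in the direction of head travel, assuming the driver tracks (or estimates) the head position. It's purely illustrative, not Promise's actual driver code.
Code:
# Toy sketch of elevator (SCAN) scheduling: service requests in the current
# direction of head movement, then sweep back the other way.

def elevator_order(pending: list[int], head: int, moving_up: bool = True) -> list[int]:
    """Order pending cylinder requests for an elevator-style sweep."""
    up = sorted(c for c in pending if c >= head)
    down = sorted((c for c in pending if c < head), reverse=True)
    return up + down if moving_up else down + up

# Example: head at cylinder 50, requests scattered across the disk
print(elevator_order([95, 10, 60, 20, 80], head=50))  # [60, 80, 95, 20, 10]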
Again - what makes you say "better", "faster", and "more reliably"? If you're using SAS or something, for sure. There are actually different commands you can give devices to get different results with SAS that would probably be lacking from most add-on software-driven cards.
Quote:
it's not the fact that software/onboard can perform the same functions, it's the fact that hardware raid cards perform those functions better, faster, and more reliably.
With SATA, there are only a handful of requests you can make of a hard drive, no more no less. There are no "hidden" commands that only your magical $600 rocket can pull out. And the fact is, my $70 card can issue those commands at the same rate yours can (again, RAID-0/1 only).
I've already questioned the "reliability" argument above.
I remember about 2 years ago these forums were full of people who commonly saw in benches that it took next to no CPU utilization to run RAID-0 on anything. Why has that suddenly changed? It's not like all the data that gets pulled from the hard drive has to be inspected by the CPU.
Quote:
edit: for the record, software/onboard raid is very CPU intensive
I didn't say your RAID-0 results weren't good enough. I said that I have no ability to compare with yours. It's like telling Brian Johnston "using those $400 shoelaces doesn't make your shoes faster" and him saying "Well you run faster than me and I'll believe you". That's not the kind of question you can ask of me - as clearly you have the superior hardware (both in quality and quantity). Sorry about that, I guess.
So I asked you to run a test which you *COULD* perform which would be objective - a simple seek time test on RAID-1 (only test I could think of). You responded mockingly. Gee, I'm sorry.
Before we go any further, I'd like to draw your attention to a review of the Areca 1210 by our own Virtual Rain. It's a comparative review against the NF4 chipset (widely known as one of the worst chipsets for RAID ever):
http://virtualrain.blogspot.com/2007...rformance.html
Please note how the differences with 2 drives are extremely slight, and keep in mind that any decent on-board or - especially - add-on software card should also beat an NF4. In real-world performance, the NF4 even wins sometimes.
i responded mockingly? look who's responding mockingly
i took it as excuses: you've only got 2 drives, cpu clock too high.. wtf!
superior hardware? qx6700 @ 3.6? u sure ur on the right forum? that's like garbage here now..
objective test? the raid0 results i've posted are objective, for crying out loud.. i certainly didn't make 'em up subjectively!!!
raid1 seek time you want? i got a couple of free raptors.. will post hdtach/hdtune on that
If a QX6700 @ 3.6 is garbage here, I might as well head over to Toms where I belong with my low clocking, high-heat Q6600 :p:
I wasn't trying to imply that *your* test wasn't objective - I was trying to say that I thought it was an objective test that you could perform, seeing as I lack the ability to test against your results.
I would appreciate you posting those. Would it be possible to test with cache enabled and disabled too, by any chance? Just to see to what degree it affects the results?
Edit: I'll tell you what. I'll check how much room I have left on my external backup drive (a mere 80GB thing) and see if it would be possible to put drive images of both my raptors on it. The Linux partition's small, but I'm not sure how compressible VM files are on my Windows one so we'll have to see. *If* I have the space I'll bring out an old TX2300 (the only software-driven add-on card I have left) and test them out again in a secondary PC. I believe it's a PCI card though, so we'll probably see some bottlenecking at ~120MB/s... but that's about the best I can do. Actually, I may be able to do some onboard testing with my P5K Deluxe too, just as a baseline reference.
if you are just farting around with a 2x raid0 I don't think most would bother with an add-on raid card.
farting around with 4-8 drives is something else.
so, a 3 disk raid0 will work ok with onboard? ie it will show some benefit over 2xraid0 onboard will it not?
also, will onboard raid0 crap out at 4 disks?
ICH8/9R are identical in terms of SATA RAID, and they're by far the best software RAID solution available. still... between 3 and 4 drives performance degrades significantly. max performance you'll see with software RAID is going to be with 3 drives. some people will say you can see gains with 4 drives, but in my experience i have not. 3x raptors in RAID0 on pretty much any raid controller are going to fly; with an areca or 3ware you'll see large performance gains though, and with 4+ drives they will leave any software controller in the dust.
@Serra
I was originally responding to your headline argument in bold about hardware !> onboard for RAID0 or RAID1, which you then qualified later as meaning hardware-only, not software-controlled RAID cards. So some of your rebuttals are misplaced by assuming I was talking about hardware-only RAID, which I wasn't. For my own purposes, because I only need a RAID1 and a small RAID0, I am indeed looking at the cheap 4-port 1430SA from Adaptec right now. But if I wanted to add a RAID card with more than 4 ports for future expandability, then such a simple software-controlled card wouldn't be an option, I think.
The implication here (for any n00b reading) is that the hardware is limited where the software isn't, somehow. Clearly that's nonsense, as the hardware is being controlled by firmware on-card containing the seek algorithms that in the software-controlled card are part of the Windows driver instead. So there is a small if negligible CPU offload, and card firmware is upgradeable in just the same way as the driver, allowing new and better algorithms to be added. I'd be extremely surprised if your elevator algorithm was not therefore implemented in a much more expensive card. :shrug:
At the same time, with 4+ drives, your odds of experiencing a failure get quite high, and that alone makes it worth getting a controller which can handle RAID-5/6 appropriately.
Yes, I had been talking about pure hardware solutions earlier, sorry for any confusion there. As far as it goes, I'm not saying that hardware cards *couldn't* do it - but, for unknown reasons, they don't seem to implement all the advantages that they have open to them (keeping in mind we're only talking about RAID-1; there is nothing you can do for RAID-0).
Quote:
The implication here (for any n00b reading) is that the hardware is limited where the software isn't, somehow. Clearly that's nonsense, as the hardware is being controlled by firmware on-card containing the seek algorithms that in the software-controlled card are part of the Windows driver instead. So there is a small if negligible CPU offload, and card firmware is upgradeable in just the same way as the driver, allowing new and better algorithms to be added. I'd be extremely surprised if your elevator algorithm was not therefore implemented in a much more expensive card. :shrug:
This topic died out a long time ago here, but iirc we had decided that it came down to usage models. For example, while one model may be best for single users with only a few tasks going at the same time, it may not be best for environments with hundreds of users making different requests. While some optimizations could surely be added... they just aren't. Software-based controllers, on the other hand, pretty much have to be tuned for just a few requests at a time by virtue of the market they're in, and so can operate differently.
well... high end cards don't use this algorithm because they use better, more efficient ones. saying that a crappy $70 software raid card is better in any way, shape or form than high end hardware raid cards is just ridiculous. it's hard to debate with someone who says something like this when they're obviously under some delusions.
I JUST recently went through that same ordeal between my Raptors (raid 0) and an abit Quad GT P965 mobo and my asus maximus formula. On the Quad GT, I was NOT able to overclock while I had a raid going. It would lead to all sorts of issues and blue screens and freezes. After multiple RMAs of the board AND the hard drives, I gave in and bought the Maximus. A few months go by and then WHAM I get the dreaded iastor errors in my system log. I try every trick on the internet to get them to play right. Could NEVER figure it out after months of reformats and installs.
I finally had enough and I purchased 2 seagate 250gb drives. No problems whatsoever since. So I have TONS of proof, and if you just do a google search for IASTOR errors LOTS of issues will come up. Intel even came out and acknowledged there was a problem, but everyone played the blame game and no one owned up and fixed the issues. So other than the COUNTLESS postings on sites such as this and asus' own forums, what proof do you need? I'm happy to provide whatever I can. I'll try to get a screen shot the next time my system freezes with my raptors :p:
EDIT: I have a link for you straight from intel about the problem that their solution DID NOT fix.
http://www.intel.com/support/chipset.../cs-025783.htm
And another from XS http://www.xtremesystems.org/forums/...d.php?t=163183
i believe the point was that if you are using raid0/1 for 2-3 disks, a software raid card from promise (or highpoint) would provide comparable performance to a hardware raid card ($300-500),
and i have found that true with a $30 Bytec (sil3125r) esata pcix1 card for most intents and purposes.
you want a valid debate? then explain what algorithms are being used by hardware raid0/1.
2x 74GB raptors @ raid1 on the hpt rocket 3510
write back
http://img293.imageshack.us/img293/6...itebackal5.jpg
none
http://img293.imageshack.us/img293/2...id1noneka2.jpg
write back
http://img293.imageshack.us/img293/6...raid1wbke6.jpg
none
http://img131.imageshack.us/img131/4...id1nonefn0.jpg
To the world in general: As to the people who don't believe my statements, are there any responses yet to VirtualRain's review? If you see any flaws in it which I may have missed, I would like to know. It's one of the primary reviews that I do base my opinion on (as well as my own experiences), and I think this is a good place to discuss it.
@Napalm: Can you post results of just the single drives on the Areca as well, for comparison please? With luck I should be able to do a couple quick software tests tomorrow.
Oh come on! I can guarantee you it is much easier to write a software algorithm than it is to make one for a firmware release. "high end cards use better, more efficient ones" indeed! Do you think that any of these companies hold patents on the algorithms or something? In fact, I would be willing to bet money that none of the companies that produce the high-end cards have ever made any of the algorithms themselves, rather taking them from research institutions and whatnot. They may have tweaked them for their own hardware compatibility, but don't believe that just because they sell the hardware they were clever enough to figure out the best ways to make it work.
You didn't believe in the elevator algorithm when I told you about it and how it's included as standard in even the cheapest software cards, yet *I'm* the one who is just blindly following marketing? You better start naming some algorithms or commands you can pull off with your chip that my software either:
- Can't do,
- Can't do in a timely manner, or;
- Requires too many CPU resources to pull off
Keeping in mind we are talking about RAID-0 (for which there are no optimizations) and RAID-1 (for which there are, the only two of note being elevator seek - which I don't believe an Areca can do - and shortest seek first - which I also don't believe they do [though I'm not 100% sure]).
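To make the distinction concrete, here's a toy comparison of the two RAID-1 read policies in question - dispatching reads without regard to head position versus sending each read to whichever mirror's head is closer (shortest seek first). The function names and numbers are made up for illustration; I'm not claiming this is how any particular card's firmware is written.
Code:
# Toy sketch: two ways a RAID-1 implementation could pick which mirror
# services a read. Head positions are assumed to be tracked by the driver.
import random

def read_any_mirror(heads: list[int], target: int) -> int:
    """Naive policy: pick a mirror at random, ignoring head position."""
    return random.randrange(len(heads))

def read_shortest_seek(heads: list[int], target: int) -> int:
    """Shortest-seek-first: use whichever mirror's head is closest to the target."""
    return min(range(len(heads)), key=lambda i: abs(heads[i] - target))

heads = [100, 9000]                                 # current head cylinders of the two mirrors
print(read_shortest_seek(heads, target=8500))       # -> 1 (second mirror is much closer)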
ok, first of all, these companies aren't writing RAID algorithms. i'm losing faith that you have any idea what you're talking about at all. i never said i didn't believe in the elevator algorithm. i said i believed it was a promise tech term that referred to a proprietary data handling scheme. these companies don't write algorithms, they write software/firmware that executes them. hardware raid cards execute a raid algorithm on their own processor; software/onboard raid cards execute them via the CPU, hence the heavy resource utilization on the system. saying that a hardware card can't execute these algorithms just exemplifies the fact that you don't know what you're talking about. if they weren't capable of executing a raid algorithm then they wouldn't be capable of raid, period. most hardware raid cards use shortest seek first, which is a simpler, higher performance algorithm.
your arguments are getting a little ridiculous :shakes:
i am using onboard raid right now and have no problems. been over a year and never had my raid fail. i'd say there are a lot of myths and legends and BS associated with raid. look at the latest news section and how it leaves out various controllers. it doesn't even compare onboard or software raid or anything. i'd really like to see a legitimate test. until then it seems to me it's a lot of speculating and not much proof.
you are adamant that Raid0 is using some form of algorithm?
I prefer reasonable to ridiculous, but that's a personal preference,
As well, I would like to develop a better understanding and therefore resort to reading and writing within these forums,
If I or anyone else has offended you by our ignorance, I sincerely hope that you can accept my humble apologies,
Please take time and explain to me what is required to do Raid0 whether it be software or hardware Raid?
As a reference 2 to 3 disks only to be fair, because we do not want the issue of scaling to be a determining factor,
Thanks in advance
well taken
a $300 controller is performance bang for my buck - not a bad investment vs the $1200 i paid for the qx6700, which i'm giving away
since about two months ago..
800mhz iop341/256MB ddr2 cache and, most importantly, no compatibility issues on the mobos/hdds i've tested it on
1x raptor on the areca 1210
enabled
http://img293.imageshack.us/img293/1...enabledfk9.jpg
disabled
http://img87.imageshack.us/img87/585...isabledqr9.jpg
enabled
http://img258.imageshack.us/img258/3...enabledxu9.jpg
disabled
http://img176.imageshack.us/img176/2...isabledhb2.jpg
well guys i gotta go get passionate about/for/to my woman.. :D
After losing so many Raptors in a row, I'm just not an expensive storage guy. By expensive I mean anything more than necessary for general use.
But I can definitely vouch for onboards killing HDDs. I've just had 2 die in a row, despite testing perfect with a diagnostic utility 8-9 hours before: no overheating, perfect PSU outputs, no dropping or movement - the only thing I did to trigger it was attach a PATA drive along with the SATA, P35 chipset. Straight on boot and ever since, the drive is not picked up in the BIOS on 5 systems and just keeps making recycling clunk sounds with ROM unknown.
I lost a full drive of vital info 20 minutes before a scheduled backup; really not happy. So I connect another new one just to check my prediction that it's SATA and PATA together which causes it.... Yeah, after 6 more hours, first try with both of them and the new HDD is again dead! :mad:
Very strange why or how its happening, I'm not even running overclocked.
The only way I can justify those add-ons is professional use, and for that we use RAID level-5EE rather than RAID level-0/1 and other setups, and only with 4-16 drives.
stay with us.... he's arguing with me about RAID1 "algorithms"; that's what i responded to. if you're not going to pay attention then stfu. i knew one of you without an argument in this matter was going to try and pull this out as me referring to raid 0. i could have quoted you before you said it.
how sincere. people that are here to learn don't come in here and spout off factless drivel and then argue with everyone that knows they are wrong.
you too, are ridiculous
lol. you're kidding right? oh my leet software raid is so much more hax! but don't compare it to anything that is better! cuz then it might look like what it really is.. a pile of worthless crap
lol, what is your purpose of trying to defend software RAID? seriously. yes, it has its place in the world. is it better than any decent hardware based RAID card... no. absolutely not. so why say that it is?
1, hardware cards are faster
2, hardware cards are more reliable
3, hardware cards are more expandable
4, hardware cards scale, with their only limit being the bus they're attached to
software RAID solutions have 1 benefit and 1 benefit only. cost.
conforting? is that a typo? or did you just start making up words to go along with your made up argument?
By claiming others don't know what they are talking about you are basically implying that you do, right? Well, apparently you don't, and you are also incredibly rude. Hardware RAID certainly doesn't scale until the bus limit is reached; the controller chip often craps out before that, although when using a high-end controller like an Areca 1210 you only hit this limit with 4+ very fast drives like the new Raptors or SSDs.
I too see no reason to not use the onboard RAID controller when you are using RAID 0 with 3-4 drives max. The Intel ICH9 is a perfect controller for this application. It's okay for RAID 1 too for most users.
Guys, guys - let's keep this civil :)
Again, keeping in mind that I am referring to software-driven add-on hardware when I say "software RAID" and cards like an Areca 1210 when I say "hardware RAID", unless otherwise stated...
My statement is not that software RAID is better than hardware RAID - my statement would be more along the line that they are effectively comparable for RAID-0 and RAID-1, with a limited number of drives. I do also believe that Areca cards do not carry all the enhancements they could for RAID-1 and that by that virtue software RAID cards may in fact outperform in that area (though I could be wrong about that - once we get some single drive results from Napalm, we might get to see something telling).
Oh, and certainly hardware RAID has cache which is a bonus as well in any RAID array - BUT in terms of actual benefit, that will depend heavily on usage patterns. I'm not convinced desktop users would really see much of a benefit, though there's a chance I'm wrong about that. I don't think that desktop users do anything repetitive enough for there to really be a difference (though for some benches, for sure, it's a bonus).
@itznfb:
1. We will see ;) I have asked you repeatedly to look up some specs of your card and find any commands it could issue that mine couldn't, or any reason it could issue them in a better way - for our purposes - etc. I have offered proof to the contrary in the form of VirtualRain's high-quality review, brought up an algorithm which I know I use (which you didn't know existed), and brought up the command set offered by the SATA protocol. You have yet to respond. Please do so before asserting this again.
2. I have asked you why this is the case as well, again for our purposes. Please respond with why you feel this is so before asserting it again. I'm still confused as to how this could be so, excepting in the case where your OS is *not* on the RAID array. I think that's a fair exemption, as most people here use RAID to see faster boot times etc.
3. Agreed (but I never disagreed on that)
4. Like Alexio said... no. The limit is the processor, sorry 'bout that (well... maybe we could say "operation dependent" and leave it at that, it's outside the scope of what we're worried about anyway).
And again, I invite you to tell me what you thought of the review I posted by VirtualRain. That, as well as the reliability and "faster" issues are things you are going to need to respond to if we're to continue this conversation.
I had edited my last post, but because I'm not sure whether Napalm read the post before I edited it or not...
I missed noticing that Napalm put up his single drive tests as well. Napalm, if I could ask you one last favor, would you please run the tests on just your regular SATA ports as well? For comparison versus the hardware card. Sorry, should have asked for that earlier. Should be the last test I can think of :)
first of all, you haven't provided proof of anything. second, stop saying that i didn't know some algorithm existed, because nothing has been said that suggests anything along those lines. i have only seen the term "elevator" used with promise cards and therefore couldn't care less about the term. you called this "elevator algorithm" an optimization... which is incorrect. it's just a disk scheduling algorithm, which the link YOU posted even states performs worse than shortest seek first.
software cards are just that, and are prone to software failure due to the far greater number of variables. every component in the system affects the OS, which therefore directly or indirectly affects the performance of the software raid card. one example: an 'unstable' OC will directly and adversely affect a software raid card while not affecting a hardware raid card. there are no exemptions; it either performs the same or better, as you're saying, or it doesn't, period. a software card has nowhere near the capabilities of a hardware card, and a hardware card has nowhere near the limitations of a software card.
the limit is the processor? i hope that was a sarcastic statement.
http://www.nextlevelhardware.com/storage/battleship/
"[...]these hardware Raid cards are inserted into PCIe 8X slots and they run with either full 4x or 8x lane compatibility, the cards do have plenty of theoretical bandwidth. However, the processor on each controller differs from the midrange ARC-1220 and the high end ARC-1231ML. The card we were using for the review initially was an Areca ARC-1220 with the 400mhz Intel IOP333 processor. Take a quick look at our next HDTach shot of 5 drives in Raid 0 and than we can do more explaining:
http://www.nextlevelhardware.com/sto...p/preARECA.jpg
[...]Five drives put out only 386 MB/s sustained read when we should be anywhere from 550 to 600 MB/s easily[...] The limitation happens to be right around 400 to 450 MB/s max on the 1220. I had one of my suppliers overnight me an Areca 1231ML and I junked the 1220 immediately[...] the #1 difference and reason for upgrading being the high end 800 MHz Intel IOP341 processor onboard. I simply plugged my existing array into the new controller and BOOM. Look what magically appeared:
http://www.nextlevelhardware.com/sto...tARECA1220.jpg
Right off the bat using the Areca 1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s."
It is worse, or can be with certain seek patterns, than "shortest seek first". However, it has yet to be established that hardware based cards use shortest seek... and I think I will be showing you that they don't in a moment (give me a few minutes to finish some benching here and we'll see). You didn't know about the elevator algorithm, and you state you don't care about it because you saw it used with Promise cards?
No sir, I have given you examples of what my card can do. My card is fully compliant with all SATA protocols, as I assume yours is... so my question to you is: what about your card makes it faster? It's a very simple question which you continue to dance around.
Oh, and I have provided you with the review by a member of these forums... which I see, again, you failed my challenge to address.
Given that RAID does not protect against data corruption, your argument is moot. Data corruption randomly impacting the RAID array upon which an OS sits is reason to wipe the entire array. The chances of it hitting only one driver and nothing else are ridiculously small.
You must be right. I'm so silly, as are manufacturers. It's so strange they would continually release new products which operate on the same busses with the same number of ports with the only modification being adding more processing power. It surely can't be because the processor becomes a bottleneck.
Edit: Hey look, HiJon grabbed out some benches while I was typing this. Thanks, HiJon!
Anyway, like I said, results coming soon.
yes, you can put a processor on a card that isn't powerful enough to handle the number of drives you're able to attach to it. you do this to save money, which pretty much kills the reason for making the hardware card in the first place. what does that prove? that it's possible to make a hardware card that sucks? yea, you proved that.
it's obviously not even a 'decent' card. most mid-to-high end cards now use at least 800mhz procs, and many are now using 1.2ghz procs, which eliminates any type of cpu bottleneck, leaving the bus it's connected to as its only bottleneck.
how many times are you going to say i didn't know about the elevator algorithm before you realize you don't know what you're saying? it's a scan algorithm. that's all it is. the term elevator was made up by promise. it's a marketing term. you're talking about it like it's some great revelation that only your software card is capable of executing. i haven't seen the term used by any other manufacturer, yet they all have cards that execute the exact same algorithms.
i continue to dance around? i'm dancing around the fact that your software card creates unnecessary overhead and has dependency limits, both of which factor into its lack of performance and which hardware cards do not have? interesting, i didn't realize i was dancing.
i've never heard of a hardware raid card taking out drives, or an entire set of drives, or having an array fail during every test that stresses I/O.
i didn't realize PCIe x1, x4, and x8 all had the same bandwidth. i'm glad i now know that PCIe x8 provides no more bandwidth than PCIe x1.
Err, well, OCing could cause hardware controllers to fail too, esp. if the instability is affecting the PCIe channel. You're right though, an unstable CPU will affect software cards, since the software uses the CPU.
In that same line of thinking, I think what Serra meant was that the software controller will only work or scale up to the Core speed of your CPU. Faster speed (MHz) equates to better performance/scaling of the software RAID system... theoretically. ;)
If he was referring to hardware based cards, then I would take a guess and say that it would have more to do with a combination of CPU/PCIe bandwidth/the RAID card's CPU. I'm assuming this, but don't hardware RAID cards work like network cards? What I'm referring to is an article that was put out some months ago showing that as CPU speed increased, network throughput increased. As we all know, no one ever gets the theoretical output of 100Mbps ;) It would be interesting to test this (CPU scaling) with a hardware based raid card.
I'm still ruminating on that last paragraph. Just speaking aloud what I've been wondering. Hope it made sense.
ETlight
P.S. Some questions for everyone.
If a person dedicated two cores to just the software raid card, would the scaling improve, or is there some other factor involved? I would think that the software solution would continue to scale as you add more drives till you maxed the load on a core... or two.
Alright, I just finished a few quick benchmarks of my own. I have run each test a number of times and received the same results each time. I'll give a brief explanation of each attachment, then offer my conclusions below. When making comparisons between myself and Napalm, let's please try to stick to only HDTach results, as HD Tune does not work for me and the two are not directly comparable (at least not in my experience).
Oh, my results were achieved on an el-cheapo Promise TX2300 software-driven add-on card. It is limited to the PCI bus it sits on, so burst rates aren't going to be anything fancy... but that's fine, it's not like ridiculously high burst rates do anything for you with 16MB of cache on a SATA 1 bus anyway.
HDTach seems to work, but it's not without issues. For example, no matter how many times I uninstall/reinstall, I can't seem to save the benchmark files (wtf). Still, screenshots work, so.. yeah.
HDTune on the other hand is just plain out to lunch with this card... you'll see why. Clearly my results with it have to be thrown out.
The first attachment is an HDTach benchmark showing the results of one of my raptors (blue) versus a RAID-1 array with both of my raptors (red). The bench of my old raptor is quite old - early 2007 by the date - but I have no reason to believe the hard drive has deteriorated any since then (apparently I still had an HD Tach file on this old computer with a few benches in it... convenient considering I can't save them now!). The new RAID-1 benchmark, it should be noted, is actually the worst I saw. The read times remained constant to within .1MB/s, but the access time on this one is higher by .1-.2ms. I can't say whether anyone else's results were their best or worst though, so I'm assuming they were the worst for comparisons sake. If not, let me know.
The second/third attachments are two runs of HD Tune. Things don't look too crazy on it until you notice the CPU utilizations... over 50%. I'm sure some of you want to jump on that and yell "SEE!"... but then I'll also point you to the access times, which showed 3.0ms in the first test and a staggering -0.5ms in the second. Clearly there's something else going on there, like caching to system memory. I've included the pictures just for fun.
It turns out there was some confusion here. The RAID-1 test Napalm did was on an HP, but the single drive test on an Areca. So take these conclusions with a grain of salt and head on over to post ~91ish for a new review.
Results we have seen thus far:
Napalm:
Single Drive on HP Card
-----------------------
Avg Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
CPU: 0%
Random Access: 8.1ms
Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]
RAID-1 on HP Card
-----------------
Avg Read: 88.5MB/s [write cache on] / 78.2MB/s [write cache off]
CPU: 0%
Random Access: 7.6ms [write cache on] / 7.4ms [write cache off]
Burst: 1307.5MB/s [write cache on] / 135.7MB/s [write cache off]
Myself
Single drive on TX2300:
Avg Read: 65MB/s
CPU: 2%
Random Access: 7.7ms
Burst: 114.0MB/s
RAID-1 on TX2300:
Avg Read: 65MB/s
CPU: 2%
Random Access: 6.8ms
Burst: 126.5MB/s
My analysis:
Avg Read Speed:
My average read speed remained a steady 65-65.1MB/s in all my tests. Napalm's seems to fluctuate with write cache on/off, but 3/4 tests peg him at an average of 78.16MB/s (the last gives a 10MB/s improvement). Are they directly comparable? I'm not sure. Further testing and results are required here. It could just be that his drives are newer than mine, use different firmware, etc... or it could be that the controller adds about 13MB/s of average read. We'll see what Napalm is able to post about his drives on their motherboard controller, and I'll try the same on my P5K Deluxe.
CPU Utilization:
The battle here seems to be between mine at 2% and his being reported at 0%. I think we can agree that just running the program involved some kind of resources, so if you'll allow, I'd like to argue it's my 2% (HD Tach) versus his 0.4% (his lowest HD Tune). If not, fine - his program runs without resources. In either event, I'll add that his processor is at least a Q6700 - overclocked to 3.6GHz in one of his posts in another thread - where the one I'm using for this test bed is a dual-core Opty 170 at stock speeds that my wife has been using for the past year or so (and loaded it with garbage, but that's another story).
Given the performance difference between a 3.6GHz Core 2 Quad and a 2.0GHz Opteron 170... I think we can agree that the utilization is next to nothing for both solutions.
Seek time:
In his testing, Napalm went from a steady 8.1ms average access time to an average of 7.5ms (between his two results). This is a decrease with RAID-1 of 0.6ms. In my results, I went from 7.7ms to 6.8ms, a decrease of 0.9ms.
It is important to note, however, that although my decrease was better in absolute terms, it was also better in relative terms. His decrease represented a decrease of 7.41% average seek times, while mine was an 11.69% decrease.
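For transparency, the relative figures follow directly from the absolute numbers above; a quick sketch of the arithmetic:
Code:
# Relative seek-time improvement, computed from the averages quoted above.
napalm_single, napalm_raid1 = 8.1, 7.5   # ms
serra_single,  serra_raid1  = 7.7, 6.8   # ms

napalm_gain = (napalm_single - napalm_raid1) / napalm_single * 100
serra_gain  = (serra_single  - serra_raid1)  / serra_single  * 100
print(f"Napalm: {napalm_gain:.2f}%  Serra: {serra_gain:.2f}%")  # Napalm: 7.41%  Serra: 11.69%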
The only conclusion to be drawn here is this: My software RAID implements algorithms or optimizations that are not seen on his hardware RAID card.
Burst speeds:
In terms of Napalm's results, well, they're obviously a function of his card having actual RAM cache. 'nuff said there. Mine weren't great, but I was bottlenecked by the PCI bus as well, so I can't really make a fair comparison.
Final thoughts:
Well, I hope I've demonstrated the following:
1. Sustained Read Speed: More results are needed to speak to whether the hardware based card is actually pulling 13MB/s more than I am, or whether his drives are just plain better. Those will be coming.
2. CPU Utilization: With a comparable CPU, this should be negligible either way
3. Seek times: My card was the clear winner, both absolutely and relatively.
4. Burst speeds: RAM cache versus a PCI bus... no contest there
While it is certainly true that the faster your processor the more drives you could scale to... it is also my assertion that the overhead for RAID-0 and RAID-1 is trivially low. Scaling for me is more a result of the fact that software controlled add-on cards are generally crippled by their bus (either PCI or PCI-E x1), so it gets pointless after a few drives anyway.
Edit:
In response to your question about dedicating processors to software RAID: If you were talking about RAID0 or RAID1, it's a moot point. The utilization just isn't there for it to make a difference. For RAID-5/6... maybe... frankly I've never done testing on it and I don't think I would, it's not a best practice by any stretch.
As for hardware CPU bottlenecking, and responding to itznfb on this as well - I was, of course, referring to RAID-5/6 (as stated). And yes, in those cases the CPU can be the bottleneck. You could buy an $800 Areca with the very latest IOPs available from Intel 5 months ago and buy a new one with dual-core, faster IOPs today and pull different speeds... and you say the CPU can't be the bottleneck? itznfb stated that the card couldn't have been properly high end if it couldn't deal with all the drives it was designed to handle at line speed all the time... but frankly, as hard drive speeds increase, so too must the CPU speed on the hardware card. You simply cannot get around that, itznfb. If your RAID card handled 8 drives at 70MB/s throughput one day, don't you think it might be possible it would bottleneck when you went to drives which could sustain 110+MB/s?
Mind you, in that last paragraph I'm just mirroring (in a less eloquent way) what Alexio said... which you also didn't respond to...
i'll do you a favor and ignore the 2nd screen shot. in the first you said you've never even seen 1% CPU utilization? so why is it at 3%? and why are your average read and burst lower than if the drive were just connected to the motherboard?
and why can your software raid card be bottlenecked by the bus it sits on but a hardware card can't?
You'll have to ignore the 2nd + 3rd screen shots [both HD Tune results] (3.0ms response and -0.5ms response time... I wish), but I assume that's what you mean.
As for the utilization - I'll have to agree that's a typo. I'll put it in red for you on the first post and apologize if you'd like. And it's not at 3% - it's at 2%. I was a little worried you were going to try to attack me there, but when you made that typo of 3% yourself I was relieved to see we're just human.
My average read is the same on both HD Tach results in the comparison - 65MB/s either way, so I don't know what you mean there. Burst is bottlenecked by the PCI bus on the array.
And this is RAID5/6 only...
And I'm not saying that a hardware card *can't* be bottlenecked by its bus, but the bus is not always the first or primary bottleneck. I guarantee you can find a top of the line card from a year or two ago that you can put into at least a PCI-E x8 or even x16 slot... and you'll find you're bottlenecked versus a current (or slightly future) offering with newer IOPs in RAID6, say.
Hmmm, this MIGHT be possible. Since the hardware card has managed the parity calcs and combined the byte streams from the RAIDed disks into the correct order, all the CPU has to do is get an interrupt from the card and read the data that is arriving in the buffer, which it then transfers somewhere else where another thread can process it. This is all the card driver software is doing, pretty minimal stuff when the hardware on the card is taking care of the actual RAID algorithms.
I'm trying to get some order-of-magnitude handle on this... so do correct me if I'm wrong... for these large disk arrays, data is arriving at a significant fraction of GBytes/sec, your CPU is running at GHz speed, so a single core should be just keeping up with the incoming data. Remember the practical limit on accessing this data once it's been received in a buffer (or to transfer it into one) is the actual read/write speed of the memory. There's also no way to parallelize the algorithm when you are dealing with a single input stream.
Now IF you used software RAID to handle each individual disk, then up to the limit of the processor cores the throughput should be fully scalable, as you could handle each disk with its own thread. But in parity RAID you then get the issue of combining the data to do the parity calcs, which we know is the slow part. In that case, you've spent processor cycles switching between thread contexts, each thread has to respond to a system interrupt from the card for its relevant disk, then you are getting the data byte from the buffer and storing it in memory, then you have to read it all over again in another master thread that also has to check whether all disks have returned the particular byte, do the XOR, and then write the data byte out having confirmed the result against the previously calculated parity byte (and recalculate if it doesn't match, handling the error situation - serious overhead). So yes, a faster processor would help, but it wouldn't magically change the results by any significant amount; there is so much processing going on here, including many "slow" memory writes. And the data throughput is (as I already said) reaching the processing limit of a single core, which is why software parity RAID has such high processor overhead, affecting all your other apps.
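For a sense of what the parity math itself looks like (as opposed to the buffering and interrupt handling around it), here's a toy sketch of a full-stripe RAID-5 parity calculation and a rebuild - just XOR, byte for byte. The chunk sizes and single-threaded structure are simplifications for illustration only.
Code:
# Toy sketch of RAID-5 parity: the parity chunk is the XOR of the data chunks.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together (the parity calculation)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_chunks = [b"\x11" * 8, b"\x22" * 8, b"\x33" * 8]   # one stripe across 3 data disks
parity = xor_blocks(data_chunks)                        # written to the parity disk

# Rebuilding a lost chunk from the survivors plus parity:
rebuilt = xor_blocks([data_chunks[0], data_chunks[2], parity])
assert rebuilt == data_chunks[1]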
The situation with network cards is slightly different. Disk data is a raw data stream. Network data is not; it's like playing pass-the-parcel with many wrappers of information around the actual data sent. For a short course in why, read up on the OSI model - Google offers this as a good page: http://www.tcpipguide.com/free/t_Und...lAnAnalogy.htm
So to get at the actual raw data sent between two computers, you have to have software that strips away each successive layer of surrounding information. The name for this is the "TCP stack", a critical core part of any OS. This is all taking much processing overhead for each data byte you actually want to get at, much more compared to handling a raw input stream like a disk array, so processor power should have a more critical impact here, and the routing overhead is why the actual throughput is effectively limited and you can never reach the theoretical max.
wow. two pages of this? "RAID" is not a religious thing. It is simply a tool, as Serra and others have alluded to. Barring implementation screw-ups, there is no 'magic sauce' that lets a hardware solution perform differently from a software one, or vice versa. What do you think is on a hardware card? It's software - called microcode, but still the same, and these days probably written in the same languages.
You just have to pick the right tool for the job at hand. Setting implementation issues aside, you will not find any statistical difference at any RAID level once you baseline against the relative performance of whatever CPU/bus/memory is handling the load (i.e. the operations, and the types of operations, required to deliver a given RAID level are the same).
The deciding factors between software & hardware usually come down to system load and ease of management (and by ease of management I'm not talking about the people here, I'm talking about the $8/hour 1st-level support tech at a company who couldn't care less about computers and is supposed to service them). The system load factor mainly matters when you have an application that is already generally too much for a box (DBs, application servers, et al) and the system literally does not have enough CPU/memory/bus bandwidth to even do its primary job. Any offload is a boon, as it saves the company from buying a larger system (a larger capital expense and, in many cases, licensing charges due to more CPUs/processing power for the primary apps).
What one can get out of a particular set up is _HIGHLY_ dependent on the workload presented to the solution.
With striping (RAID-0) you will not find any benefit either way, hardware or software: the same number of CPU cycles, the same number of interrupts, and the same amount of bus/memory bandwidth are required for the access in both cases.
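For what it's worth, here's a small Python sketch of essentially everything a RAID-0 layer has to do per request - map a logical block to a disk and an offset - which is why there's nothing for a hardware card to accelerate. Chunk size and disk count are arbitrary example values.
Code:
# Toy illustration: RAID-0 addressing is just integer arithmetic.
CHUNK_SECTORS = 128     # 64KiB chunks with 512-byte sectors (example)
DISKS = 4

def raid0_map(lba):
    """Return (disk index, lba on that disk) for a logical block address."""
    chunk_no   = lba // CHUNK_SECTORS        # which chunk of the array
    offset     = lba %  CHUNK_SECTORS        # position inside that chunk
    disk       = chunk_no % DISKS            # chunks rotate across the disks
    disk_chunk = chunk_no // DISKS           # how far down that disk we are
    return disk, disk_chunk * CHUNK_SECTORS + offset

print(raid0_map(0))      # (0, 0)
print(raid0_map(128))    # (1, 0)   - next chunk lands on the next disk
print(raid0_map(515))    # (0, 131) - wraps back around to disk 0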
RAID-1: to Serra's point, there is some logic needed to work out which drive is less utilized for read requests. HOWEVER, I would label it an implementation bug if you ever find a card that does not do this automatically. Anyone who has _ever_ built a load-balanced solution of _ANY_ type (RAID, networking, bus, memory) finds out within hours of testing that a round-robin request layout only works in the ONE case where _every_ request is the exact same size and takes the _exact same_ number of cycles to complete. It's a nice thought experiment, but it does not happen in the real world. To compensate, software and hardware solutions alike use metrics like service time (derived from Markov chains, Little's law and others) and watch the command flow. This is more involved, yes, but it is no more than what a normal disk driver already does for you (actually less). Which is why, for RAID-1 arrays, there is again no real distinction between hardware and software in deployment.
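A toy Python sketch of the difference described above, using a simple "least queued work" metric as a stand-in for real service-time tracking; the request mix and timings are made up.
Code:
# Toy illustration: round-robin vs sending each read to the mirror with the
# least queued work. Real drivers/firmware track measured service times
# rather than this crude estimate.
import random

def simulate(policy, requests, mirrors=2):
    queued = [0.0] * mirrors                  # ms of work queued per mirror
    for i, req_ms in enumerate(requests):
        if policy == "round_robin":
            target = i % mirrors              # blind alternation
        else:                                 # "least_loaded"
            target = min(range(mirrors), key=lambda m: queued[m])
        queued[target] += req_ms
    return queued

random.seed(0)
reqs = [random.choice([0.5, 0.5, 0.5, 12.0]) for _ in range(40)]  # mixed sizes
print("round robin :", simulate("round_robin", reqs))
print("least loaded:", simulate("least_loaded", reqs))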
RAID-1/0 (0/1), 1E, 10E and variants I'll add here: these just combine the two 'levels' above, and likewise you will not find any appreciable difference between hardware and software solutions.
Now for the 'parity' RAIDs (skipping RAID-2, as it's more of a history lesson these days, except for ECC memory, which works similarly).
RAID 3, 4, 5 and most definitely RAID-6 have substantial calculation overhead WHEN WRITING to the array. These are the types of arrays that kill performance on systems that are already close to the edge. IF your system is _NOT_ close to the edge (i.e. a desktop box or other lightly loaded system, with service times/utilization below the ~60% mark), you will not notice any difference between hardware and software - assuming you are properly comparing the respective calculation power of each. You can't compare a CPU that does ~60,000 MIPS with one on a card that does ~6,000 MIPS and declare that 'software' is faster. No - it's that your system can simply execute more software operations per second than the card. You are not comparing the algorithms, you are comparing the implementations, which is a different thing.
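To put numbers on that last point, a quick Python sketch; the MIPS and MB/s figures below are purely illustrative, not measurements from anyone's system.
Code:
# Normalise throughput by the processing power actually doing the work
# before declaring one implementation "faster" than the other.
host_mips, card_mips = 60_000, 6_000
host_raid5_mb_s, card_raid5_mb_s = 900, 300     # hypothetical benchmark results

print("host:", host_raid5_mb_s / host_mips, "MB/s per MIPS")   # 0.015
print("card:", card_raid5_mb_s / card_mips, "MB/s per MIPS")   # 0.05
# Per unit of compute, the card's implementation is doing more work here;
# the host simply has an order of magnitude more compute to throw at it.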
Another item that skews results in testing is write cache. This has nothing to do with the algorithms themselves; it is a way to make them more efficient, and it can be done both in hardware and in software. It is generally 'easier' in hardware from the end user's point of view, since they just have to add RAM to a device - once again, an implementation item. What write caching does is hold the data WITHOUT writing to the drives UNTIL there is a full stripe width to write at once. This mainly applies to parity RAID setups (it can have some other benefits too, but not for 99% of people). The reason is that if you do not write a full stripe on a parity RAID, you have to do 4 operations (RAID 3, 4, 5) or 6 operations (RAID 6) against the drive subsystem to complete your write. Those operations add up (taking more MIPS) and, if done in software, more bandwidth on the host side.
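A minimal Python sketch of that operation counting, assuming a hypothetical 5-disk RAID-5 (4 data + 1 parity):
Code:
# Toy illustration of why write cache matters on parity RAID: a small write
# costs a read-modify-write cycle, a cached full-stripe write does not.
DATA_DISKS = 4      # hypothetical 4 data disks + 1 parity disk

def raid5_device_ops(full_stripe):
    if full_stripe:
        # Whole stripe held in cache: write every data chunk plus the new parity.
        return DATA_DISKS + 1
    # Partial write: read old data, read old parity, write new data, write new
    # parity - the classic 4-operation penalty (6 for RAID-6).
    return 4

print("single chunk, no cache :", raid5_device_ops(False), "device ops")
print("full stripe from cache :", raid5_device_ops(True), "device ops")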
Parity RAIDs don't just take CPU overhead, they also take more memory bandwidth. Just like a network card or anything else, DMA helps, but you are still hitting main memory for several operations - sometimes up to 4x the bandwidth you 'see' being written out to your device. I find this more noticeable with high-speed networks, as few people push disk subsystems up to the 1GiB/s mark yet. With home computers (and many servers) limited to roughly 3-4GiB/s (i5000V) or ~6GiB/s (975X) of memory bandwidth, you do not have enough there to push the data AND run something else. (This is why Opteron systems, even though they are generally slower in MIPS ratings than current Intels, have a wider memory bus - likewise the Sun SPARC systems, IBM SP2, et al.)
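Back-of-the-envelope in Python, using the up-to-4x figure and the bus numbers above; the array throughput is just an example value.
Code:
# Rough arithmetic only: how much of the memory bus a software parity array
# can eat before the rest of the system gets a look in.
array_write_mb_s  = 400            # what you 'see' going to the drives (example)
memory_multiplier = 4              # worst case, per the post above
memory_bus_mb_s   = 6 * 1024       # ~6GiB/s, 975X-class system

traffic = array_write_mb_s * memory_multiplier
print(f"{traffic} MB/s of memory traffic out of ~{memory_bus_mb_s} MB/s "
      f"({traffic / memory_bus_mb_s:.0%}) before anything else runs")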
A computer is not defined by a single item; it is a balanced system, and increasing any ONE part WILL NOT improve the overall system in proportion to the increase in performance of that single item (Amdahl's law of parallelization, colloquially similar to the 'law of diminishing returns').
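In Python, for anyone who hasn't met it, Amdahl's law looks like this; the 30% and 10x figures are just a made-up example.
Code:
# Speedup of the whole system when only a fraction of the work gets faster.
def amdahl_speedup(fraction_improved, component_speedup):
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / component_speedup)

# e.g. a workload spending 30% of its time in the storage layer: making that
# layer 10x faster only buys about a 1.37x overall improvement.
print(f"{amdahl_speedup(0.30, 10):.2f}x overall speedup")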
So the question comes directly back to: what is the environment you are trying to use the tool of 'RAID' in? If it's parity, then you have to consider what else is part of that environment that needs to co-exist with it, and what your goals are.
Not to ramble on (:shrug: too late I guess), but to the OP: if you are just looking for 0/1/10/1E/10E, then literally anything is good as long as it does not have a known implementation problem. I use LSI and Adaptec cards here, but also onboard and software RAID, on numerous platforms (Intel, Sun, Linux et al).
The benchmarks I posted show that a PCI-E x8 card couldn't go over 400MB/s sustained read because the bottleneck was the IOP333 CPU. In that case the CPU was the bottleneck, and soon enough the CPUs on today's RAID cards will bottleneck the array. The PCI-E x8 interface can theoretically handle 2GB/s, so Serra was quite right in saying "the limit is the processor".
@Stevecs: I was wondering how long it was going to take for you to get in on this :p: Hopefully they'll at least listen to you (your sig is much more impressive hard drive wise than mine is).
The IOP333's an 800MHz processor, not a 400MHz one. And yeah... they can also be bottlenecked before a PCI-E x16 lane will be.
Yes, but PCIe x1 has a 250MB/s ceiling. If you got a PCIe x4 or x8 software card, wouldn't this raise the ceiling to 1GB/s or 2GB/s respectively?
Edit: I just read Steve's post. I guess what I'm trying to say is: if you have a software RAID card that is, say, PCIe x8 in a slot running at full x8 bandwidth, shouldn't you get close to full transfer speed, assuming you have enough drives to get there?
If not, then what IS the bottleneck?
I'm going from your implementation, Serra, of 3-4 drives at 69MB/s max on a software RAID card: 69MB/s * 4 drives = 276MB/s. If you had a PCIe x1 card then yeah, you're over the limit. ;)
ETlight
P.S. thank you Serra and IanB for responding :cool:
I'm not saying that 1.2GHz processors are bottlenecking current RAID arrays, but the processor is "the limit", so to speak. If you were to hypothetically keep increasing the throughput until the card maxed out, you would be limited by the CPU before the PCI-E interface or anything else. So Serra was right in saying "the limit is the processor".
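For the bandwidth side of that argument, here's a quick Python sketch using the ~250MB/s-per-lane figure for first-generation PCIe; the per-drive throughput is an example, not a benchmark from this thread.
Code:
# Rough check of when the slot stops being the bottleneck and the RAID
# processing (host CPU or the card's IOP) takes over as the limit.
PCIE1_PER_LANE_MB_S = 250

drive_mb_s, drives = 70, 8
aggregate = drive_mb_s * drives

for lanes in (1, 4, 8):
    ceiling = lanes * PCIE1_PER_LANE_MB_S
    verdict = "slot saturated" if aggregate > ceiling else "slot has headroom"
    print(f"x{lanes}: {ceiling} MB/s slot vs {aggregate} MB/s from drives -> {verdict}")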
hold up..Quote:
Napalm:
Single Drive on HP Card
-----------------------
Write back cache on:
Avg Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
CPU: 0%
Random Access: 8.1ms
Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]
RAID-1 on HP Card
-----------------
Write back cache on:
Avg Read: 88.5MB/s [write cache on] / 78.2MB/s [write cache off]
CPU: 0%
Random Access: 7.6ms [write cache on] / 7.4ms [write cache off]
Burst: 1307.5MB/s [write cache on] / 135.7MB/s [write cache off]
1x raptor on the areca 1210
2x raptor raid1 on the hpt 3510
two different controllers/specs
now if you want to compare 2x RAID1 vs 1x... fine, I'll bench 2x RAID1 on the Areca 1210. I can't create a 1x on the HPT. Will also post onboard 1x Raptor.
But unfortunately you don't get software-only cards with x8 interfaces. :( Or ones with x4 interfaces that have more than 4 ports. :(
Look at the Adaptec 1430SA - a good little software card for RAID 1/0. PCIe x4 interface, but restricted to 4 ports. In terms of raw throughput the interface could really handle up to 8 100MB/s SATA drives, but you're not going to get the chance to test your scaling theory, as no-one is releasing an 8-port card in that form factor that allows software-only RAID.
I'm going to assume that releasing such a card would kill sales of their overpriced and therefore profitable hardware solutions. Consider that a motherboard - with seriously more real estate, many more components, and as a result a much more complex manufacturing process - costs around £75 to £200 max, or $150 to $400. How on earth can the price of a hardware RAID card be justified when its starting point is the same as a motherboard carrying the latest Intel chipset? FYI the "simple" Adaptec I mentioned above is around £70 in the UK, the same price as a mainboard. :down:
EDIT: but since I've never written a low-level hardware driver, I'm not entirely sure what the implications are here. As I suggested in a post above, software RAID has to handle each port/drive individually, so there may be a practical limit on the number of ports: if every drive needs to talk to the driver, there will be one system interrupt per drive on the card to say "my data is now ready, come and get it". Hardware RAID reduces this overall system overhead significantly, because all of that is handled on-card and the card only needs to present the system with a single interrupt for each data block requested. That has to be a valuable trade-off, but I can't really quantify the interrupt effect on the system overall - a rough count is sketched below.
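As a very rough Python sketch of that trade-off, assuming the worst case where every request touches every member drive; all of the figures are hypothetical.
Code:
# Crude interrupt count: per-drive completions for software RAID vs a single
# coalesced completion per request from a hardware card.
drives           = 8
requests_per_sec = 10_000     # array-level requests completed per second (example)

software_irqs = requests_per_sec * drives   # each member drive signals on its own
hardware_irqs = requests_per_sec            # the card raises one per finished request

print(f"software RAID: ~{software_irqs:,} interrupts/s")
print(f"hardware RAID: ~{hardware_irqs:,} interrupts/s")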
ok, for comparison sake:
3x150 Raptor
raid0 64k write back enabled
adaptec 5805 8port
another one:
1x wd500aaks
tx4302 pci 4port 2ext 2int
connected esata port eagle-m ext case
i don't know what $%&**(*& happened here,
but i thought i'd post anyway because of "cpu" utilization
redid hd tach and nothing else:
1x wd500aaks
tx4302 pci 4port 2ext 2int
connected esata port eagle-m ext case
ok, for comparison sake:
5x320 wd320aaks single platter - 1 might be 2 platter (rma)
raid6 64k write back enabled
adaptec 5805 8port
Oh, I see where the confusion came in! Sorry, I thought you only had an Areca card, and after you posted something about the HP I decided I must have been wrong and it was an HP card... I didn't expect you to have both. Still, it's a great opportunity to get more comparative data out there.
ok, granted only 1 disk -
tx4302 pci
eagle-m ext case = i believe this case interface may limit to sata 1
2x raid1 @ areca 1210
http://img88.imageshack.us/img88/9382/74855365em0.jpg
1x raptor @ onboard asus maximus se
http://img212.imageshack.us/img212/9725/54016337be5.jpg
Check this out ;) http://www.newegg.com/Product/Produc...82E16816115026
The only one I could find. So I'll forgive you. :D
Looks yummy, huh Serra?
ETlight
Alright, new "Results Summary" based on Napalm's new tests. A lot of it looks the same, but some sentences/numbers have been changed.
Results we have seen thus far:
Napalm:
Single Drive on Areca
-----------------------
Avg Read: 78.2MB/s [write cache on] / 78.1MB/s [write cache off]
CPU: 0%
Random Access: 8.1ms
Burst: 750.9MB/s [write cache on] / 756.7MB/s [write cache off]
Single Drive on Onboard
----------------------
Avg Read: 77.3MB/s
CPU: 2%
Random Access: 8.1ms
Burst: 137.1MB/s
RAID-1 on Areca
-----------------
Avg Read: 77.8MB/s
CPU: 0%
Random Access: 8.7ms
Burst: 745.2MB/s
RAID-1 on HP Card
-----------------
Write back cache on:
Avg Read: 88.5MB/s [write cache on] / 78.2MB/s [write cache off]
CPU: 0%
Random Access: 7.6ms [write cache on] / 7.4ms [write cache off]
Burst: 1307.5MB/s [write cache on] / 135.7MB/s [write cache off]
Myself
Single drive on TX2300:
Avg Read: 65MB/s
CPU: 2%
Random Access: 7.7ms
Burst: 114.0MB/s
RAID-1 on TX2300:
Avg Read: 65MB/s
CPU: 2%
Random Access: 6.8ms
Burst: 126.5MB/s
My analysis:
Avg Read Speed:
My average read speed remained a steady 65-65.1MB/s in all my tests. Napalm's fluctuated somewhat with write caching on/off, but that's to be expected. His drives, on hardware controllers, pulled an average of 78.16MB/s, and his onboard test gave 77.3MB/s. Realistically, those two numbers are so close that the difference could just be run-to-run variation.
The bottom line here is that in RAID-1, his controller did not confer any benefits to him over my software-driven add-on card.
CPU Utilization:
The battle here seems to be between mine at 2% and his being reported at 0%. I think we can agree that just running the program uses some resources, so if you'll allow, I'd like to argue it's my 2% (HD Tach) versus his 0.4% (his lowest HD Tune result). If not, fine - his program runs with no resources. In either event, I'll add that his processor is at least a Q6700 - overclocked to 3.6GHz in one of his posts in another thread - whereas the one I'm using for this test bed is a dual-core Opteron 170 at stock speeds that my wife has been using for the past year or so (and loaded with garbage, but that's another story).
Given the performance difference between a 3.6GHz Core 2 Quad and a 2.0GHz Opteron 170... I think we can agree that the utilization is next to nothing for both solutions.
Seek time:
In his testing, Napalm went UP in access time with the Areca card and down with the HP. His single-drive access time is 8.1ms. With the Areca he got 8.7ms on his RAID-1 array, and with the HP he averaged 7.5ms. The first is an increase of 0.6ms, the second a decrease of 0.6ms. In my results, I went from 7.7ms to 6.8ms, a decrease of 0.9ms.
It is important to note that my decrease was better not just in absolute terms but in relative terms as well. His decrease on the HP represented a 7.41% reduction in average seek time, while mine was an 11.69% reduction.
Why the differences? RAID-1 with any optimizations at all should reduce seek times. His HP clearly introduces some form of load balancing between the two drives to bring seek times down, but seems to stop there. Areca cards, however, have never been renowned for RAID-1 - exactly why, I'm not sure; clearly some sort of firmware fix is needed. For myself, my software solution gives me the elevator seek algorithm (sketched below). While *only* the second-best seek-scheduling algorithm we've yet found, it's still apparently pretty good.
The only conclusion to be drawn here is this: My software RAID implements algorithms or optimizations that are not seen on his hardware RAID card.
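For anyone curious, here's a minimal Python sketch of the elevator (SCAN) idea mentioned above - service requests in the direction the head is already travelling, then sweep back - with arbitrary cylinder numbers as example values.
Code:
# Toy illustration: order pending requests as one sweep up, then one sweep
# back down, instead of seeking in arrival (FIFO) order.
def elevator_order(pending, head, moving_up=True):
    above = sorted(c for c in pending if c >= head)
    below = sorted((c for c in pending if c < head), reverse=True)
    return above + below if moving_up else below + above

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(queue, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14] - one sweep up, then back down,
# rather than seesawing across the platter.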
Burst speeds:
Napalm's results are obviously a function of his card having actual RAM cache. 'Nuff said there. Mine weren't great, but I was bottlenecked by the PCI bus, so I can't really make a fair comparison.
Final thoughts:
Well, I hope I've demonstrated the following:
1. Sustained read speed: Napalm's sustained reads were faster, but so were his drives. His hardware card did not improve speeds noticeably (0.86MB/s). There is no speed advantage to be had from hardware cards here.
2. CPU utilization: with a comparable CPU, this should be negligible either way.
3. Seek times: my card was the clear winner, both absolutely and relatively (a 0.9ms / 11.69% drop versus either a 0.6ms / 7.41% drop [HP] OR an increase [Areca]).
4. Burst speeds: RAM cache versus a PCI bus... no contest there
So I invite you, Napalm and itznfb - what do you say? For RAID-1/0, can we agree that - for a limited number of drives - software-based solutions* are the equal of, and can in fact be superior to, hardware-based solutions, at a fraction of the price?
*Edit: (at least those which use add-on cards, not necessarily on-board junk)
With that said, hardware cards do certainly provide:
- The ability to migrate to RAID-5/6
- Additional cache, which can be a benefit in *some* desktop usage patterns
- The ability to scale further (software-based cards are generally limited to the PCI and PCI-E x1 buses)
Heh, downright smexy!
I can't believe there's actually a market for $240 software-based add-on cards though! I guess if you're doing RAID-0 you can save yourself some money this way... but 8x drives in RAID-0 :eek: That's a failure waiting to happen.
Maybe for a RAID-01 (or -10) solution though... I could see that, sort of.
1x vs 2x raid1 vs 1x onboard? Is that what you based your analysis and conclusions on - non-RAID/RAID 0/1?
you have got to be kidding!
FYI - the 975X controller is slower than the X38: 72MB/s vs 77MB/s with a single Raptor
all you're doing is comparing "questionable results" that others post
are you even aware of the differences between the Areca 1210 and the HPT 3510? Quote:
A lot of it looks the same, but sentences/numbers have been changed.
you guys don't even have the hardware to base your nonsense on
this is so silly it isn't funny
- i was using an x48 board with q6600@3.52ghz
- also used adaptec 5805, 3xraid0 raptor 150's and raid6 results posted
- promise pci tx4302 esata ext 1x500gb results
thank you Napalm for your data,
thank you Serra, I appreciated your efforts compiling the data and structuring it into a straightforward and easy to read report.
--next is a 1x500gb esata same as before this time using a Sabrent $30cdn pci-e x1 card
please note****** transferring 300gb of data from nas to raid array while doing tests
p.s.
brb, ongoing amoeba testing being done..............................oh Ollie!!!, yes Lou, i thought I told you his head was way too big to fit through that door?
The question is, is it completely hardware or software controlled? It's got a VERY big heatsinked chip on it, which certainly suggests some kind of onboard processing... :shrug:
Actually I was seriously looking at this model for a while; it's a very good price for an 8-port compared to the high-end cards, and in Tom's comparative review of RAID cards a while back http://www.tomshardware.co.uk/pci-ex...view-1927.html the external-socket-only version of this (RR2322) distinctly came out on top against Areca et al for RAID 1/0 as far as I could tell, even if they decided to ignore that and recommend others because it appeared to need a proprietary external drive cage (avoidable with a different cable, easily acquired). Yeah - checking that link, the article has now been removed, and Tom's has blocked the Wayback Machine from archiving it too. :mad:
The glitch for me on this card and the RR23xx series has been the NewEgg review on that page you linked saying:
Not sure what to make of that, but it put me off more than a little, as the guy writing it seemed fairly competent. :( Quote:
Cons: terrible software. The review titled "Worked well for a while" is true in that the RAID app eats up massive CPU cycles. I've built more than a dozen rigs with the RR2320, RR2310, and RR2300 - only the RR2300 doesn't have this problem (but it's only 4-port, and PCI-E x1)
Other Thoughts: I've already contacted Highpoint about the CPU cycle issue, and no one seems to care. Bad Customer Support, slow/no response to support inquiries. Go for a 3ware if you have the $$.
swiftex - sure, no problem. I posted all that for him; it's not like I didn't know what results I was getting on the Areca/HPT/onboard/Raptor.
I'm all for sharing comparative data, and there's lots of it here just at XS.
serra - this is for you,
http://www.xtremesystems.org/forums/...13#post2946313
keep an eye on whose softraid/intelmatrix is gonna beat my results
http://www.newegg.com/Product/Produc...82E16815121009
I know it's PCI-X but that's one hell of a price :D
Umm... the only results I looked at were yours and mine, no-one else's. I even tossed the results from the HPT, aside from using them as an "aside" note - and be glad I did: they SPANKED the Areca and were the only ones that didn't leave you utterly embarrassed by your seek times. Effectively, it was your Areca card vs. my TX2300... and you know what? It was extremely comparable, except for the seek times, where mine won (it also won against the HPT).
Edit: I'll also note that the reason not a lot of it changed is that, besides your numbers, which changed a bit... the end result was the same. I only had to change about three numbers, and not all for the better. Sorry Napalm, but the numbers speak for themselves.
What magical elixir do you think is in your hardware card that isn't in software solutions? What commands does it issue differently from mine? What logic on yours differs from mine? Seriously, what?
That's right - there is no difference in the commands or how they're issued. There is no difference in CPU interrupts generated. There is no difference anywhere; there can't be, by the very nature of the SATA protocols. For RAID-5/6, sure - all the difference in the world, what with parity calculations - but for RAID-1/0... nope!
Read what I wrote. All it is is a comparison of your results and mine.
Not enough for you? Kindly respond to VirtualRain's review I posted as well.
I see three columns of evidence against you (your results vs. mine, VirtualRain's results, and the logic that the commands work the same both ways)... and all I get from you is "No, it can't work that way. My card cost $600, it must be the best at everything".
So if you're all for sharing comparative data - why won't you accept our results, as well as VirtualRain's? Just give me some reasons. I did rework the post based on your different controller and, if you'll notice, changed the first one to state that there had been a mistake (in red print).
... and that's kind of childish. Maybe the top 1 percent here can afford hardware like you have... QX6700's @ 4.2GHz + your other goodies?
Shame about the PCI-X, yes. The reviews suggest there's no RAID driver in the bundle - it's just 8 SATA ports, so a true software-only solution - but the killer for the scaling idea might be this comment in one of the reviews:
Quote:
This card seems to max out at 285 MB/s in Windows.