Quote Originally Posted by __Miguel_
Btw, one thing I've been thinking (run for the hills! lol): RAID controllers are basically dedicated CPUs and memory for the parity calculations (which aren't even needed when reading, unless the array is crippled). Also, it's pretty clear that parity calculation takes only a fraction of today's CPUs' computing power (see the links I posted a while back).

So, my point is: since there's usually no backup battery on the host side like there is on hardware RAID controllers, to keep data from being lost after a power outage, software RAID most likely calculates parity for every block written and sends it immediately to the drives, to minimize data loss, without even considering storing it in RAM first. This, of course, creates abysmal performance for software RAID, when it could actually be the fastest configuration available...

So, am I too far off on this? I don't think so...

Also, would it be possible to rewrite Intel or Microsoft's RAID driver to actually use system RAM as cache, before sending the data to the array? This would open up insane performance boosts on software RAID...
Well, besides quoting myself (which is always weird to begin with), I have a little update on the whole "software RAID" vs. "hardware RAID" performance question, and why we see such different performance between them.

Granted, a dedicated I/O processor will always be better at this job than a general-purpose CPU, even one several times faster. However, I've found an interesting controller which almost answers the questions I raised in my last post.

For those who don't know what the hell I'm talking about, check here and here. The thing is, apparently there is one company trying to do exactly what I've been talking about: getting a general-purpose CPU to do the work of the I/O processors on the dedicated cards. The company is RAIDCore, with its 4000 (apparently not that good) and 5000 (just released) series controllers, and (most importantly) also the Intel (AMD in the works) driver replacement.

And guess what: its performance is not bad at all, because instead of pulling only the absolute minimum of CPU time they can, they go fairly heavy on it. If my reading is correct, the 4000 series cards can pack quite a punch, even when competing with cards that have dedicated I/O processors...
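Just to make the point about the XOR work concrete, here's a minimal sketch of my own (nothing to do with RAIDCore's actual code): RAID-5 parity is simply a byte-wise XOR across the data chunks of a stripe, and rebuilding a lost chunk on a crippled array is the same XOR run the other way. The 64 KiB stripe unit and the 4-drive layout are just assumed numbers for the example.

[code]
import os

STRIPE_UNIT = 64 * 1024          # assumed stripe unit size per drive
DATA_DRIVES = 3                  # 3 data drives + 1 parity drive (4-drive RAID-5)

def xor_chunks(chunks):
    """XOR equal-sized chunks byte-wise; this is all the 'math' RAID-5 needs."""
    acc = 0
    for chunk in chunks:
        acc ^= int.from_bytes(chunk, "little")
    return acc.to_bytes(STRIPE_UNIT, "little")

# Writing a stripe: compute parity over the data chunks.
data = [os.urandom(STRIPE_UNIT) for _ in range(DATA_DRIVES)]
parity = xor_chunks(data)

# Normal reads never touch the parity. If one data drive dies (crippled array),
# its chunk is rebuilt by XOR-ing the surviving chunks with the parity.
lost = data[1]
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == lost
[/code]

On a modern CPU that XOR loop (done properly in C, not interpreted like above) is a tiny fraction of the available throughput, which is exactly why letting the host CPU do it isn't crazy.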

The only thing missing seems to be the cache. Cached write numbers on XOR-enabled, cache-enabled cards are through the roof, and that matters a lot with parity RAID levels. I've read that the RAIDCore cards could actually support cache, only there aren't any models available with that option (probably because of price concerns: a 4-port RAIDCore + software costs around €85, and cache would at least double that). I just wish there were an option to use system RAM as cache, like IGPs do... it shouldn't be too hard, and the performance would be impressive...
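On the "system RAM as cache" idea, what I mean is basically write-back caching. Here's a toy sketch (my own illustration, not Intel's, Microsoft's or RAIDCore's actual driver code) of why it helps: acknowledged writes sit in RAM, rewrites of hot blocks get coalesced, and parity only has to be computed when a block is finally flushed to the array. The trade-off is exactly the battery question from my earlier post: whatever is still sitting in the buffer is gone if the power dies.

[code]
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back cache: writes land in RAM and are acknowledged at once;
    dirty blocks are flushed to the (slow) array later. A hardware controller
    does the same thing in its onboard cache, backed by a battery; in plain
    system RAM the same buffer is simply lost on a power cut."""

    def __init__(self, backend_write, capacity_blocks=1024):
        self.backend_write = backend_write        # function(block_no, data)
        self.capacity = capacity_blocks
        self.dirty = OrderedDict()                # block_no -> data, oldest first

    def write(self, block_no, data):
        # Coalesce rewrites of the same block before they ever hit the disks.
        self.dirty.pop(block_no, None)
        self.dirty[block_no] = data
        if len(self.dirty) > self.capacity:
            self._flush_one()

    def _flush_one(self):
        block_no, data = self.dirty.popitem(last=False)
        self.backend_write(block_no, data)        # parity gets computed here, once

    def flush_all(self):
        while self.dirty:
            self._flush_one()

# Usage sketch: pretend the "array" is just a dict.
array = {}
cache = WriteBackCache(lambda n, d: array.__setitem__(n, d), capacity_blocks=4)
for i in range(10):
    cache.write(i % 3, b"x" * 512)    # hot blocks get coalesced in RAM
cache.flush_all()
[/code]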

So, what are your thoughts about this one? I might consider one of these, since a 4-port card with the special software is impressively cheap here (€85 is very nice, and the software itself is said to retail for $50, so...), and, combined with an ICHxR (presumably an ICH8R, ICH9R or ICH10R, because of the apparent 6-port requirement), I'd be able to go to a 10-drive array in no time, with support for a lot of interesting combos (like a kind of Matrix RAID, but with 10 drives). The only thing really missing for added safety is RAID-6, but I'm guessing it will be added to the software sooner or later.

Also, another question: since I don't really need SATA for the system drive, since I won't be using an optical drive (except for the OS install, that is...), and since the IDE controller would otherwise just sit there unused, would it be a good idea to go IDE for the system drive? Something like a WD1600AABS (the IDE variant of the AAJS, I think)?

Cheers.

Miguel