Yes, but with a caveat: RAID-6 is CAPABLE of this, but it's up to the firmware writers to take advantage of it, and I highly doubt low-end cards do (mainly because I see most of them have a hard time just getting multiple outstanding requests working properly on RAID-1/10, which is much simpler). Unfortunately, independently testing which cards (or firmware versions) do this properly is very hard, and it's not generally published, so you have to escalate it to the company that makes the card. You could test it yourself: create a small array, power it off, pull the drives, read in each stripe for an entire width, then inject a bit error in each area one at a time (the data, the P parity, the Q parity). Between each manual injection, power the array back on and have it run a check; it should find and correct the error. Once you see that, inject a dual error (say, corrupt both P and Q) and, if the card is coded with that logic, it should still recover correctly.
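To make the P/Q part concrete, here is a minimal sketch of the RAID-6 math a controller *could* use to locate and fix a single corrupted data block during a check. It uses the common GF(2^8) field with the 0x11d polynomial and generator 2; all function names and the toy stripe are my own invention, not any card's firmware.

```python
# Hypothetical RAID-6 P/Q sketch over GF(2**8): P is a plain XOR, Q weights
# each data block by g**i. Two syndromes locate AND correct one bad block.

def gf_mul(a, b, poly=0x11D):
    """Multiply in GF(2**8) with the usual 0x11d reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

# log/exp tables for generator g = 2
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x = gf_mul(x, 2)
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def parities(stripe):
    """Compute P (XOR) and Q (g**i-weighted sum) for a list of byte blocks."""
    p, q = [0] * len(stripe[0]), [0] * len(stripe[0])
    for i, block in enumerate(stripe):
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(EXP[i], byte)
    return p, q

def locate_and_fix(stripe, p, q):
    """Find the single bad data block via the two syndromes and repair it."""
    cp, cq = parities(stripe)
    sp = [a ^ b for a, b in zip(p, cp)]   # per-byte error value e
    sq = [a ^ b for a, b in zip(q, cq)]   # per-byte g**z * e
    for e, ge in zip(sp, sq):
        if e:
            z = (LOG[ge] - LOG[e]) % 255  # solve g**z = ge/e for position z
            stripe[z] = [b ^ s for b, s in zip(stripe[z], sp)]
            return z
    return None

stripe = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
p, q = parities(stripe)
stripe[2][1] ^= 0x5A                      # inject a bit error in block 2
print(locate_and_fix(stripe, p, q))       # -> 2 (bad block found)
print(stripe[2])                          # -> [7, 8, 9] restored
```

This is exactly the manual check described above, just in software: with clean P and Q on hand, a single corrupted block is both detectable and repairable without knowing in advance which block went bad.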
(And just to point out, though I've never seen this done except in lab tests: you CAN do this with mirrors, since you are technically NOT limited to just 2 disks. You can have a 3-way mirror, or, like my desk play-box here, a 12-way mirror (don't ask). As long as you have something to break the 'tie' it works, and it can get more complex depending on how many errors you want to recover from (e.g., 3 out of 5 copies match, or whatever).)
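The tie-break idea above is just majority voting across mirror copies. A toy sketch, assuming you can read every copy of a block independently (the function name and data are hypothetical):

```python
# Toy N-way mirror tie-break: read all copies, take the majority answer.
from collections import Counter

def majority_read(copies):
    """Return the block the most copies agree on, plus the vote count."""
    tally = Counter(bytes(c) for c in copies)
    block, votes = tally.most_common(1)[0]
    return block, votes

# 3-way mirror with one silently flipped byte in the third copy
copies = [b"good data", b"good data", b"gxod data"]
block, votes = majority_read(copies)
print(block, votes)   # -> b'good data' 2  (2 out of 3 match, tie broken)
```

With a 2-way mirror a mismatch only tells you *something* is wrong; with 3 or more copies you can also tell *which* copy is wrong, which is the whole point of the extra disks.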
Also remember that this is subsystem checking, NOT file-data checking, and (with RAID cards) it only comes into play when you run a RAID-set check. On the read path there are NO cards or software (besides ZFS, if you turn it on, which is off by default) that check any type of integrity, file or block. This is what can cause problems where data integrity is important: you read a block (and since it's not parity- or hash-checked on reads, it could be wrong/corrupted), you act on that data (maybe changing some other part of the block or data set), and then you write it back. At that point you are writing back the BAD data you read. The card then calculates a NEW parity for the bad data, thereby 'vetting' that data at the subsystem level. It can be very insidious.
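What checking on the read path would look like, greatly simplified (ZFS-style checksum-on-read): store a checksum with each block at write time and verify it on every read, so corruption is caught BEFORE the application acts on the data and re-vets it. The store layout and class name here are invented for illustration.

```python
# Sketch of per-block read verification: corruption surfaces at read time
# instead of being silently re-written with fresh (valid) parity.
import zlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}              # block_id -> (crc, data)

    def write(self, block_id, data):
        self.blocks[block_id] = (zlib.crc32(data), data)

    def read(self, block_id):
        crc, data = self.blocks[block_id]
        if zlib.crc32(data) != crc:   # caught BEFORE the app acts on it
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

store = ChecksummedStore()
store.write(7, b"important record")
crc, data = store.blocks[7]
store.blocks[7] = (crc, b"importent record")   # simulate silent bit rot
try:
    store.read(7)
except IOError as e:
    print(e)   # corruption detected on read, not propagated
```

The insidious read-modify-write cycle described above simply can't start here: the bad read fails loudly instead of succeeding quietly.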
I've been unsuccessful over the past two years in getting any RAID companies to add an option to do this (it's not hard; they're already doing it during a manual check, they'd just do it on every block read instead). It WILL slow operations down to roughly the cost of a partial-stripe write, which is a hit, but doing it at this level makes it file-system and OS independent. Then you can add file- and file-system-level integrity checks on top of that (even your suggestion of par archives, which is a good thought but not universal to deploy across the different OSes/file systems).
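For the file-level layer, one portable approach roughly in the spirit of the par-archive suggestion is a hash manifest: record a SHA-256 per file and re-verify before trusting the contents. A self-contained sketch (the demo file and names are placeholders, not a real tool):

```python
# File-level integrity sketch: build a SHA-256 manifest, then re-verify.
import hashlib, os, tempfile

def file_hash(path):
    """SHA-256 of a file, read in chunks so large files don't need RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# demo: hash one good file, then silently corrupt it
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payload")
    path = f.name
manifest = {path: file_hash(path)}        # record known-good digests

with open(path, "wb") as f:
    f.write(b"payl0ad")                   # simulate bit rot on disk

bad = [p for p, d in manifest.items() if file_hash(p) != d]
print(bad == [path])                      # -> True: corruption caught
os.unlink(path)
```

Unlike par, this only detects damage rather than repairing it, but it runs anywhere Python does, which is the "universal to deploy" part the subsystem-level check also buys you.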


