It's the same thing. Better RAID cards generally operate (or should) in write-through mode. I.e., when a write request goes to the card, the card passes it to the drives, waits for the drive to report that it's written OK, and then passes that OK back to the OS. That's to ensure the data is actually written properly.
With a write-back cache (what gets enabled with a battery backup) the card, on behalf of the drive, issues an 'ok' back to the OS WITHOUT writing it to disk right then. It usually waits until it can write an entire stripe at a time to avoid the write penalties. But in this situation you have a point of failure: without the battery you can lose up to a cache's worth of transactions if they haven't been committed.
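To make the difference concrete, here's a rough sketch of the two ack paths in Python. This is purely illustrative pseudocode of mine; the card/drives/cache objects and method names are made up, not any real controller's firmware:

    # Illustrative sketch only; 'card', 'drives' and 'cache' are made-up
    # stand-ins for controller firmware, not a real API.

    def write_through(card, block):
        card.drives.write(block)      # send the data to the platters first
        card.drives.wait_for_ack()    # wait for the drive to say it's on disk
        return "ok"                   # only then ack the OS

    def write_back(card, block):
        card.cache.append(block)      # land the data in the card's RAM
        # The OS gets its 'ok' NOW, but the data is not on disk yet.
        # Without a battery, a power cut here loses everything in the cache.
        if card.cache.has_full_stripe():
            card.drives.write_stripe(card.cache.pop_stripe())
        return "ok"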
A write-back cache will benefit any RAID 3/4/5/6 operation (well, RAID 3 not much, actually, as that's a full-stripe RAID already), because by caching the data you have a much better chance of writing an entire stripe width at the same time. When you do that, you don't pay the 4-operation (RAID 4/5) or 6-operation (RAID 6) read-modify-write penalty and can do it all in one pass (see the toy example below). It will come into play with nearly _any_ write operation on a RAID, except in the situation where you have 0% full-stripe writes (i.e., every write updates less than a stripe width). I haven't seen anything like that, but it is possible.
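To see where those operation counts come from, here's a toy RAID-5 stripe in Python. The 4-chunk stripe and the values are made-up examples, but the XOR parity math is the real mechanism:

    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    stripe = [bytes([i] * 4) for i in range(4)]   # 4 data chunks
    parity = reduce(xor, stripe)                  # P = D0 ^ D1 ^ D2 ^ D3

    # Partial write (read-modify-write). On disk this costs 4 I/Os:
    #   read old data, read old parity, write new data, write new parity.
    new = bytes([9] * 4)
    parity = xor(xor(parity, stripe[1]), new)     # new P = old P ^ old D ^ new D
    stripe[1] = new

    # Full-stripe write: the write-back cache has collected all 4 chunks,
    # so parity is computed in RAM and data + parity go out in one pass,
    # with no reads from disk at all.
    stripe = [bytes([i + 10] * 4) for i in range(4)]
    parity = reduce(xor, stripe)

The whole point of the battery-backed cache is buying the time to collect those full stripes safely.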
Sensitivity is going to be the same if you're talking about external factors like power, crashes, et al. That really has nothing to do with RAID; it's system- and filesystem-related, not RAID-related. Remember, RAID's purpose is to provide availability across a hard drive failure. It does nothing at all for system stability (its only concern is replicating block X of data, not what's in that data block).
As for data errors, there are numerous things that can cause them, and RAID does nothing to solve/mitigate them (not its function):
- Unconditioned power.
- The UBE/BER (uncorrectable bit error rate) of the drives, somewhere around one error per 10^15 bits read (there's some back-of-envelope math after this list).
- The controller (HBA) itself, at roughly a 300,000-hour MTBF.
- Cable transmission errors (an unconfirmed figure of one error per 10^12 bits for 1-meter 3.0Gbps SATA/SAS cables).
- Mainboard errors.
- CPU errors.
- Memory errors, at a similar one per 10^12 bits (IBM did a study that found about 1 bit error per GB of RAM per month; again, see the math after the list).
- Then you have background radiation (dependent on your environment; mostly Cerenkov radiation, from what I've been reading).
- Then you have filesystem implementation errors, OS errors, and application errors, which may write bad data to the system.
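For scale, here's the back-of-envelope math on two of those rates. The array and RAM sizes below are made-up example values, not anyone's actual hardware:

    # Back-of-envelope using the rates quoted above; sizes are examples.
    array_tb = 12                        # hypothetical array size
    bits_read = array_tb * 1e12 * 8      # bits in one full read of the array
    ube = 1e-15                          # ~1 uncorrectable error / 10^15 bits
    print(bits_read * ube)               # ~0.1 expected UBEs per full read

    ram_gb = 64                          # hypothetical amount of RAM
    print(ram_gb * 1)                    # ~64 bit flips/month at 1/GB/month

So a full read (or rebuild) of a 12TB array has very roughly a 1-in-10 chance of tripping over an unreadable sector, which is why the BER number matters for very large arrays.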
I'm probably missing something, but those are the main ones that I deal with/try to mitigate around here.
As for being afraid of it, I don't know about that. There are no real solutions to it. Sort of like life: you'll end up dead one way or another; is that a reason to worry about it? :)
Seriously, the biggest items, barring filesystem, OS, and applications (as I have no way to calculate those), would be power issues (UPSes help mitigate those) and HD failure, far out in front (which RAID mitigates). Next would be BER (for very large arrays) or memory errors (especially if you use a lot of RAM).

