Generally, unless both drives are on the same controller and you have an intelligent controller card that can do a 1:1 block map (and a program designed to use it), the copy goes through your main memory.

First, the application, OS, and driver are all involved (assuming you're doing this from an OS, not from the BIOS on the controller card), and those functions live in main memory. Every transfer is initiated by the app/OS/{filesystem, if not block level}/driver as 'read X blocks at location Y' to the source disk; that data is then (usually) DMAed into main memory, and the OS then tells drive B 'write X blocks at location Y' from that DMAed memory space. So every block (every bit, actually) is read into and written back out of main memory as a copy, and passes through several layers of code along the way.
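Stripped down to userspace, that read-into-memory-then-write-out loop looks roughly like this (a minimal sketch; the function name, paths, and block size are made up for illustration, and a real driver stack does this with DMA rather than explicit reads):

```python
import os


BLOCK_SIZE = 1024 * 1024  # 1 MiB per transfer; a hypothetical choice


def copy_blocks(src_path, dst_path):
    """Copy src to dst block by block. Every byte lands in the userspace
    buffer `buf` first -- the same way DMAed data lands in main memory
    before being written to the destination drive."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            buf = src.read(BLOCK_SIZE)   # 'read X blocks at location Y'
            if not buf:
                break                    # end of source
            dst.write(buf)               # 'write X blocks at location Y'
```

The point is simply that there is no path in this model where bits go drive-to-drive directly; the buffer in the middle is unavoidable.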

Technically, even if you do it from a controller's BIOS you get a mini version of the above, except you're running mainly on the firmware and the small amount of cache memory on the controller. You don't have a 'filesystem' at that point since it's block level, but the same bit-error rates come into play.

To give you an idea: I copy/read/write about 1000TiB/month on the array here at home. That covers backups/restores, block-level verifications, file-level verifications, md5 hash compares, et al., so I can find any single bit error in the data. I find a single bit error every couple of months somewhere in the chain. There's no real way today to correct a bit error like this besides restoring from a known-good backup copy. (One reason why I'm trying to get RAID vendors to do parity checks/corrections on READS, which would really help with this, but so far I'm a lone voice.)
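The md5-compare part of that verification regime can be sketched in a few lines (hypothetical function names and paths; md5 is plenty to catch a single flipped bit, even though it's not suitable for detecting deliberate tampering):

```python
import hashlib


def md5_of(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so arbitrarily large files never
    need to fit in RAM at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()


def copies_match(original_path, backup_path):
    """True if both copies hash identically; a single flipped bit
    anywhere in either file makes this False."""
    return md5_of(original_path) == md5_of(backup_path)
```

When `copies_match` comes back False you still don't know which copy is the bad one, which is why you need more than two copies (or per-block checksums) to decide which side to restore from.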