Originally Posted by Particle
Simple answer: Yes*
I'm a programmer, so I appreciate some aspects of multithreading that a lot of people don't. So, why the asterisk? Because the positive answer depends a lot on the code being run. Workloads that don't have to be synchronized can be parallelized rather easily and can use any number of dissimilar cores to get the job done. Unfortunately, most real-world work has to be either synchronized or in-order to be useful. Sure, you can encrypt 10GB of data with multiple threads and write each thread's output stream to disk as soon as it's done, but good luck decrypting it. If you want to burn tons of RAM, you could buffer the output in memory and piece it together during or after processing, but in practice that's a bad methodology.

Instead, most highly-threaded processing is done on "frames" of data: small chunks that aren't difficult to keep in memory. Implemented well, this lets cores running at vastly different speeds work together on a common task that requires serial output, without using a terrible amount of memory in the process (see the sketch below). Implemented poorly, you might be stuck waiting on the slowest core to finish at the end of each frame.

Graphics setups like SLI and CrossFire are different in that the output is highly time-critical. To deliver consistently spaced frames, your faster card is frequently going to be waiting on your slower card before it can output. Otherwise, even at a modestly high framerate, you'll end up with a jerky sense of action because each card is just told "puke out frames as fast as you can and we'll write both of your outputs to the monitor."
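For what it's worth, here's a minimal sketch of that frame-based approach in Python. Everything here is hypothetical: process_chunk() stands in for whatever per-chunk transform you're actually running (encryption, compression, encoding), and CHUNK_SIZE / MAX_IN_FLIGHT are just placeholder numbers. The point is only to show chunks being handed to a pool of workers while the output is still written strictly in order, with a bound on how many chunks are buffered at once.

[CODE]
from concurrent.futures import ThreadPoolExecutor
from collections import deque

CHUNK_SIZE = 1 << 20   # 1 MiB "frames"
MAX_IN_FLIGHT = 8      # cap on chunks buffered at any moment

def process_chunk(data: bytes) -> bytes:
    # Placeholder for the real per-chunk work (encrypt/compress/encode).
    return data[::-1]

def process_file(src_path: str, dst_path: str, workers: int = 4) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst, \
         ThreadPoolExecutor(max_workers=workers) as pool:
        pending = deque()  # futures in submission order
        while True:
            # Keep the pipeline full without buffering the whole file.
            while len(pending) < MAX_IN_FLIGHT:
                chunk = src.read(CHUNK_SIZE)
                if not chunk:
                    break
                pending.append(pool.submit(process_chunk, chunk))
            if not pending:
                break
            # Always write the oldest chunk next, so output stays serial
            # even if a slow core is still chewing on a later chunk.
            dst.write(pending.popleft().result())
[/CODE]

Because the writer always waits on the oldest outstanding chunk, fast and slow workers can run at whatever pace they like and the output still comes out in order; memory use is capped at MAX_IN_FLIGHT chunks instead of the whole file. (For pure-Python CPU-bound work you'd swap in a ProcessPoolExecutor to get around the GIL; the ordering logic stays the same.)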
But that's just the sugar-coated technical view. In more consumer-oriented terms, it is safe to assume that yes, it will be faster/better than keeping them all at a similar, lower frequency. No, you won't always notice it. No, not everything will benefit.
Max them out.