To do it scientifically we need to list the different types of shared resources (memory bandwidth, cache, disk, interconnect, and so on). Then we need to identify applications that would stress each one of those resources.
Once we know those things, we can systematically run each of those stress loads one at a time ALONGSIDE some particular benchmark.
EXAMPLES:
1. Run benchmark.
2. Run benchmark with 4xPrime95.
3. Run benchmark with Virus scan or defrag.
4. Run benchmark while compressing.
5. Run benchmark while encoding.
<you get the idea... add more to the list as needed>
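The routine above is easy to sketch in code. Here is a minimal, hypothetical Python harness: it times a placeholder benchmark alone and then with N synthetic busy-loop workers standing in for a real background load (Prime95, a virus scan, encoding, etc.). The `benchmark()` body and the worker count are assumptions; swap in the real benchmark and real loads when actually testing.

```python
# Hypothetical harness: time a benchmark alone, then under background CPU load.
import multiprocessing
import time

def cpu_stressor(stop_event):
    # Busy loop standing in for a real load (Prime95, compression, encoding...)
    x = 0
    while not stop_event.is_set():
        x += 1

def benchmark():
    # Placeholder workload; substitute the real benchmark here.
    return sum(i * i for i in range(200_000))

def timed_run(n_background):
    # Start n_background stress workers, time the benchmark, then stop them.
    stop = multiprocessing.Event()
    workers = [multiprocessing.Process(target=cpu_stressor, args=(stop,))
               for _ in range(n_background)]
    for w in workers:
        w.start()
    try:
        t0 = time.perf_counter()
        benchmark()
        return time.perf_counter() - t0
    finally:
        stop.set()
        for w in workers:
            w.join()

if __name__ == "__main__":
    baseline = timed_run(0)
    loaded = timed_run(4)   # step 2 in the list: "with 4x Prime95"
    print(f"alone: {baseline:.3f}s  with 4 workers: {loaded:.3f}s")
```

Each list item above is then just `timed_run` with a different background load attached.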
The basic test routine listed above is simple. But what if you do all of the above and nothing makes a measurable difference? Does that mean the effort was wasted?
NO. Because then you start running more than one background load at a time. Using this process one could eventually find a combination that works. Or maybe not.
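Running "more than one thing at a time" just means sweeping the combinations of loads. A quick sketch (the load names here are stand-ins from the list above):

```python
# Enumerate every mix of background loads, from one at a time up to all at once.
from itertools import combinations

loads = ["prime95", "virus_scan", "compress", "encode"]
mixes = [combo for r in range(1, len(loads) + 1)
         for combo in combinations(loads, r)]
print(len(mixes))  # 2^4 - 1 = 15 mixes to test against the benchmark
```

With 4 candidate loads that is only 15 runs per benchmark, so the full sweep is very practical.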
The best part is that we no longer have only Intel FSB vs AMD IMC to compare. Now we also have the i7 to add to the testing. There has not been much motivation to complete this type of testing in the past, but I predict that this type of test WILL become popular with Intel benchmarkers who have both Intel FSB and Intel i7 chips. (A side effect will be that AMD will also be shown to be better.)
I have been in a discussion about this very issue where a poster basically argued: "That is worthless because you are only looking for something that AMD does well." But does that really matter? If it ends up being something that can be tested and measured, then it is REAL. If it doesn't show up in the end results, then it wouldn't matter anyway. (But I have to question why so many posters on various forums seem to not want to see the results of this kind of test. Are they worried about the results?)