Originally Posted by
FM_Jarnis
I don't know where to even start, so I'll just state that FM stands behind its products, and that they produce usable results within the margin of error that is reasonable to expect from a single benchmark.
Will you get more accurate results if you mix multiple benchmarks and games when compared to a single benchmark (even one that has multiple workloads)? Sure. That's simple math.
Does that invalidate any individual benchmark? Nope - a single benchmark has a higher error margin than a combined score from multiple benchmarks. Duh. That doesn't mean the result is invalid.
Are the differences between 3DMark and a pool of games material? No - they are well within the error margin, especially when you consider that many games run better on cards from a specific vendor (accidentally or on purpose), creating bias. Spotty multi-GPU support also creates bias (while 3DMark always offers full multi-GPU support). You also have to consider that benchmarking a pile of games and averaging the results is obviously (slightly) more accurate than any single result (from a benchmark or a game).
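The "simple math" behind averaging can be sketched as follows. This is a hypothetical illustration, not Futuremark data: the numbers (a true score of 100 and a per-run noise of 5) are made-up assumptions, used only to show that the standard error of the mean of n independent measurements shrinks by a factor of sqrt(n).

```python
def stderr_of_mean(stddev: float, n: int) -> float:
    """Standard error of the mean of n independent measurements,
    each with the given standard deviation."""
    return stddev / n ** 0.5

# Assumed values for illustration only:
single_run_stddev = 5.0  # hypothetical noise of any one benchmark or game

one_benchmark = stderr_of_mean(single_run_stddev, 1)   # 5.0
ten_game_pool = stderr_of_mean(single_run_stddev, 10)  # ~1.58

# A ten-game average is noisier-per-game but tighter overall,
# yet the single benchmark's result is still perfectly usable;
# its error bar is just wider.
print(one_benchmark, round(ten_game_pool, 2))
```

Note this only holds when the measurements are independent; the vendor and multi-GPU biases mentioned above are systematic errors that averaging does not remove.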
3DMark Vantage might not be the best or the prettiest product we've ever done, but one thing it does do is produce a valid, unbiased score - and it seems to be doing that even with hardware that was years away from release when it shipped. With 3DMark 11, we have the exact same goal.
As for the business model side - I'm not the person to discuss it. If you want to continue that discussion, I suggest you contact us at bdp [at] futuremark.com - I'm sure our guys are happy to do an interview or provide more details.