As flawed as real-world gaming might be from a scientific standpoint...
I have a problem with the people who keep calling real-world testing unscientific. This usually comes in contrast to the claim that canned benchmarking is more scientific because it's easily repeatable by anyone.
Repeatability alone does not make a scientific testing method.
The fact is, the data you're testing is subject to change. Games are not some constant of nature that behaves the same way no matter what's happening.
Every time a company tunes its hardware to score better in a certain benchmark, it's laughable that anyone tries to draw a conclusion from that contaminated testing. Is it repeatable? Certainly. You're repeating a flawed test that yields no meaningful data.
This article on [H] has shown just how flawed that data can be. These canned benchmark numbers are meaningless to a real user. Sure, some people like big numbers, but for those of us looking for the best gaming hardware, they're completely useless.
Realistically, canned benchmarks are just AMD's and Nvidia's way of getting tech sites to tell the public whatever they want it to hear. Nobody believes performance numbers straight from the company, so why blindly believe them when a tech site runs the same rigged test?
Is [H]'s method perfect? Of course not; perfection is an impossible task. People aren't machines, after all.
But when it comes down to a decision, who do you trust: people who just run whatever test company X hands them, or someone who's actually played the game? I'll take [H]'s gameplay experience over a blind test any day.
Calling canned benchmarking scientific is BS. No 'scientist' would put any value in a blind test run on tainted data.