Benchmarking the Benchmarks @ [H]

I like HardOCP's testing methodology, but I've said all along that I, or anyone, would be just as much of a dumb fool for blindly following HardOCP's testing results as for blindly following Tom's, or Anandtech's, or whomever's.

Seriously, people need to read the reviews and make their own judgments. Everybody seems to be a lemming these days. I read all the reviews because they all provide me with some sort of information I can use to decide.

Plus if I did any engineering using subjective analysis, I would be in big trouble.
 
Rendering speed (which I think is primarily what we're concerned with) is not a subjective issue at all.

Rendering quality, software compatibility, etc. are probably subjective qualities, but certainly not rendering speed.

edit: the negativity towards the scientific method here is astounding.

I would hope that most people got away from "pure rendering speed" as the definitive benchmark long ago. If you want to test rendering speed, then drop to the lowest resolution possible with no filtering or effects applied. There is your rendering speed.

With the capabilities video cards have now, image quality is actually part of rendering speed. AA, AF, the different types of them, and everything else are all a part of "rendering speed". The cards have to process all of this to give you the picture on your screen. These are the things which generally bring a video card to its knees.

I don't care if the video card can play the game smoothly at 1600x1200 if I have to have all visual effects turned off, as the game is going to look like shit. I would much rather drop the res and turn on some visual effects. Depending on how some things look, I'll drop the res to a point where I can turn everything else up. 1280x1024 isn't a bad res, even though I prefer 1600x1200. I currently run ET: Quake Wars at 1280x1024 with just about everything turned all the way up with my 7600GT. I figure my 8800GT should allow me to do the same at 1600x1200 once I get it later in the week.

Because IQ is part of rendering speed, things will always be subjective, since it's visual. Things look different to different people. I've seen shader effects in games where [H] said it looked better on one video card than another, which I disagreed with. In some cases I think something looks better than someone else does. However, these are usually small things that don't really affect gameplay and can only be seen when "standing still" or something like that.
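
To make the "IQ is part of rendering speed" point concrete, here is a minimal sketch of how one might compare two logged runs of the same level, with and without AA. It assumes a FRAPS-style list of per-frame times in milliseconds; the numbers are invented for illustration and this is not any site's actual tooling:

```python
def fps_stats(frame_times_ms):
    """Average fps, minimum fps, and the '1% low' fps for one run."""
    fps = sorted(1000.0 / t for t in frame_times_ms)
    worst = fps[:max(1, len(fps) // 100)]          # slowest 1% of frames
    return {"avg": sum(fps) / len(fps), "min": fps[0],
            "low1": sum(worst) / len(worst)}

# Invented per-frame times (ms) for the same firefight, AA off vs 4xAA on.
no_aa   = [14.2, 15.1, 13.8, 16.0, 14.5, 22.3, 14.9, 15.4]
four_aa = [19.8, 21.5, 20.2, 24.1, 20.9, 35.6, 21.3, 22.0]

for label, run in (("no AA", no_aa), ("4xAA", four_aa)):
    s = fps_stats(run)
    print(f"{label}: avg {s['avg']:.0f} fps, min {s['min']:.0f}, "
          f"1% low {s['low1']:.0f}")
```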

 
I don't have the time to read through all the posts, but none of the benchmarks, canned or uncanned, apply to the vast majority of us.

Why? Because we don't have the exact same hardware they use for benchmarks. Unless we go out and buy the same hardware, the benchmarks can be a moot point. This is the nature of computers, though, which gives us flexibility of hardware. Who's to say the X2 will run faster on an Athlon rather than a C2D? How about an 8800GTS vs the X2 on a P4 3.0GHz? For those of us who upgrade or build a system without the "uberest" mobo/cpu, this can be a daunting decision.

Running tests with beta drivers doesn't appeal much to me either. If they aren't official then I'm not interested. The improvements they make in beta drivers may or may not make it to later official versions...

I rely on both canned and uncanned benchmarks to hopefully make a sound decision on hardware.
 
I'm on [H]'s side as far as real world testing and its merits, however...

The issue of Quad core vs dual core is pretty serious. Many tests are showing improved performance with a quad core, does the 8800 GTX/Ultra see that kind of performance boost?

If not, that could be the source of the different numbers.
 
I'm on [H]'s side as far as real world testing and its merits, however...

The issue of Quad core vs dual core is pretty serious. Many tests are showing improved performance with a quad core, does the 8800 GTX/Ultra see that kind of performance boost?

If not, that could be the source of the different numbers.

Not at all. Let's say that getting a quad core involved will net you an extra 10 fps. It shouldn't matter what cards you have stuck in there; the extra 10 fps should apply with an Nvidia or ATi card.
Why would you expect to get more fps out of a video card based on processor type?
 
Thanks [H] for posting this and showing the differences between real world and standard benchmark results. I think the best part of [H]'s testing method is that when I buy a card and start playing games, I am not surprised by the performance or lack thereof (what a novel concept, evaluations that actually tell you what you can expect!).

There is still one thing bothering me, though (I believe several others have also pointed this out), and that is, even in this evaluation with canned benchmarks, the ATI card shows as slower than the GTX. This means the contradiction between [H]'s results and the results of some of the other sites has not been resolved.

Are there any good explanations of why, even with canned benches, the [H] results show the X2 to be slower than the GTX? This suggests that more than real world vs canned benchmarks is at play.
 
Great article, guys.
You really are taking a stand for what you believe in. I love having other sites for pure numbers, but for applicability to gaming, [H] all the way.
 
Why would you expect to get more fps out of a video card based on processor type?
Drivers themselves are now multithreaded, or at least NVIDIA's are. The addition of more processors may yield improved scaling in rendering performance alone.
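
One rough way to poke at that claim at home, sketched under obvious assumptions: psutil is installed, you read fps from your own overlay or log, and the game path is a placeholder.

```python
import subprocess
import psutil  # third-party: pip install psutil

def run_pinned(game_exe, cores):
    """Launch the game pinned to the first `cores` logical CPUs."""
    proc = subprocess.Popen([game_exe])
    # Driver worker threads live inside the game's process, so this
    # restricts them too.
    psutil.Process(proc.pid).cpu_affinity(list(range(cores)))
    proc.wait()

# Repeat the same benchmark run at 1, 2, and 4 cores, then compare the
# fps you logged (via your own overlay/log) across the runs.
for cores in (1, 2, 4):
    run_pinned(r"C:\Games\Crysis\crysis.exe", cores)  # placeholder path
```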
 
I've done enough testing to know that in the modern day, an Athlon X2 and a C2D/C2Q will not affect your gaming performance when you're at 1920x1200. The CPU only really starts to matter at low resolutions. Any difference is going to be a fraction of a percent. If that fraction of a percent matters to you, you might want to try your own tests to see how exact you can be...
 
Ugh, there's a comment in the digg submission:

"Real world" testing, my ass. [H]ardOCP is the only site I've seen that tries to compare video card framerates using different settings. Sure, maybe if you own an ATI card in "real life", you'd be running Crysis on Medium, whereas if you buy the nVidia card, you'd be running it on High... but who cares! That doesn't help me make a purchasing decision, and it just makes you look like an idiot.

Funny. I thought being able to play on High with one card vs. medium on another card would be a purchasing decision. I'll take High please.
 
Ugh, there's a comment in the digg submission:



Funny. I thought being able to play on High with one card vs. medium on another card would be a purchasing decision. I'll take High please.

Yeah it's pretty ridiculous to compare one card that can do high and one that can do medium... Although I do understand that people think they need to see both cards doing medium and then the FPS difference... it doesn't matter much if one card can do high and one medium.
 
Wow that was a long read. A few thoughts.

Why is it so important to have a reproducible scientific benchmark for numbers that don't matter? The graphs that [H] includes show there's no significant variation in gameplay.

Why are people blaming the test system? It's a video card review. It's not a whole system review.

Who cares if you game at a lower resolution than those reviewed? You know that either card should be more than adequate. If money is the issue, then read reviews for the lesser cards; that's certainly what I did.

Why review cards on games that don't stress the cards?

Ridiculous really.
 
Yeah it's pretty ridiculous to compare one card that can do high and one that can do medium... Although I do understand that people think they need to see both cards doing medium and then the FPS difference... it doesn't matter much if one card can do high and one medium.

There is a reason for that. If a lesser card has to turn down some settings below those of the faster, more powerful card, then if you decide to purchase it, don't be surprised when you can't match the more powerful card. The point is to show the highest playable graphics settings and compare the benefits of owning X card versus owning Y card.

The Anandtech forum post is riddled with fan-boys. This post is just an example of their ignorance (Page 2 http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2153074&FTVAR_STKEYWORDFRM=&STARTPAGE=2&FTVAR_FORUMVIEWTMP=Linear)

KristopherKubicki said:
As an ex-AnandTech employee, I sure remember having my share of poor reviews. That doesn't mean the methodology is incorrect.

Kyle's benchmarking method can justify whatever outcome he'd like. "Feel good" benchmarks are the easiest in the world since they can never be wrong. KillerNIC anyone?

The correct move here is to apologize and move on, but I wouldn't dignify any of this with a response. Tom's Hardware is already doing "gameplay" reviews. If AnandTech starts doing benchmarks like the rest of these guys, then you know the demographic of this whole industry has shifted away from the college educated techie to the high school gamer.

Derek: Unreplicable benchmarks are the first horseman of the apocalypse for your industry -- whether it's Kyle, Tom, or you doing them. If any of us were still in (or ever went to) college, we'd all fail trying to pass that sort of testing methodology off in a physics class.

I think both methods do indeed have their merits. However, there is something wrong when your "testing" shows the 3870 X2 as being faster than an 8800GTX, yet when you play the game, it is completely the opposite. This is the reason we find real world, player-playing-the-game testing to be useful.

Also, I think that canned testing has its reasons as well. It does indeed show raw horsepower and will show you which card is faster. However, what happens when AMD optimizes its drivers for specific timedemos that are popular? I think we all know what happens. It's called a waste of money when you think you are buying the superior product.
 
The thing is, try to run your 3870x2 at 1680x1050 at all HIGH settings... it runs like ass! But according to the other site's "review" those settings should be playable. Only they aren't, because the canned demo sucks ass and is NOT representative of REAL gameplay.

That about sums it up. Keep it up Kyle and crew, this is the way to test video cards. I don't care if a card gets 50,000 in 3dmark/canned timedemo/etc. if it chugs at 20fps in real gameplay, it's useless to me.
 
I began doubting "reviews" and "comparisons" on sites like Tom's and Anandtech many years ago. The funny thing is, they seem to put in the hours to test "real world" scenarios where it matters, like when the product is by a company from which they derive no direct ad revenue:

"Apple's 5 hour claim is laughable but not as much as I expected. If I wanted to I suspect I could hit 5 hours by making the web browsing test less stressful, but my focus was on real world usage scenarios, not proving Apple correct." [my emphasis]​

Sure, there's plenty of ads for Apple-related products and services, but those are Google Dumbbot™ ads, which don't count.

Of course, their "real world usage scenarios" in this case might have been automated as well (which doesn't necessarily invalidate them in this specific case). Is Anandtech phoning it in out of sheer laziness, and do hardware companies know this and take advantage by designing their drivers to perform well in the "standardized" test scenarios?

I'm not sure which is worse, an Anandtech that is complicit in the phony results or an Anandtech that simply doesn't give a frak.
 
There is a reason for that. If a lesser card has to turn down some settings below those of the faster, more powerful card, then if you decide to purchase it, don't be surprised when you can't match the more powerful card. The point is to show the highest playable graphics settings and compare the benefits of owning X card versus owning Y card.

Oops, I didn't really say that well. I was (poorly) defending the real world gameplay results: if one video card can use higher settings than another at the same FPS, then really that's the card to buy; we don't need to see one video card getting higher FPS than the other. We also want to see which card handles AA better, shaders better, etc., etc... oh well.
 
Quote:
Originally Posted by KristopherKubicki
If any of us were still in (or ever went to) college, we'd all fail trying to pass that sort of testing methodology off in a physics class.


The difference between physics and video cards is that you usually don't have to worry about someone [insert your higher power here] rigging fundamental forces in their favor when you're testing. A video card is a commercial product, with human forces (i.e. greed) determining what your "canned" timedemo consists of. It's not gravity, and last time I checked, gravity wasn't influenced by market share.

;)
 
Yeah it's pretty ridiculous to compare one card that can do high and one that can do medium... Although I do understand that people think they need to see both cards doing medium and then the FPS difference... it doesn't matter much if one card can do high and one medium.

The problem is that people can't make the educated decision and realize that, for example, they should buy Card A that runs a game on high settings over Card B on medium. So who cares if they have different settings/resolutions? If Card A's FPS > Card B's FPS and Card A's quality > Card B's quality, then I don't see the logic of any more comparison but to buy Card A, knowing that it will perform well at high quality and definitely much better at medium quality. (Simplified situation, of course.)

People are just being fanboys (of almost every tech site they like) and I think they should either take what [H] presents, or simply leave and follow whatever benchmark scores they want. There are many sites that do the 'canned benchmarks'... Kyle and crew here want to be different - let them be; if you don't like it, then leave.

/end rant?
 
The thing is, try to run your 3870x2 at 1680x1050 at all HIGH settings... it runs like ass! But according to the other site's "review" those settings should be playable. Only they aren't, because the canned demo sucks ass and is NOT representative of REAL gameplay.

That about sums it up. Keep it up Kyle and crew, this is the way to test video cards. I don't care if a card gets 50,000 in 3dmark/canned timedemo/etc. if it chugs at 20fps in real gameplay, it's useless to me.

Do you actually have a 3870x2? I have the 3870x2 and the GTS512, and the 3870x2 can very easily play 1680x1050 all high. I actually play 1920x1200 all high with the 3870x2, where before I had to play mostly medium with a couple settings on high with the 8800GTS 512. The 3870x2 is actually quite playable at 1680x1050 with many settings at Very High, averaging 30fps.

The only glitch I have noticed, and where Nvidia wins this comparison, is in one particular portion of the game when you go into that church or schoolhouse (not sure what it is) to rescue the hostage; for some reason the 3870x2 drops to 5-6fps randomly. It is quite annoying and was a bit difficult to get through that portion. This lasted for about 30 seconds, on and off, while trying to kill the tanks that show up. Aside from that, throughout the rest of the game so far, in overall performance the 3870x2 smokes the 8800GTS 512, with much higher settings playable.


After doing testing on my own and comparing reviews from other sites, I can confirm that if you are running a dual core at less than 3GHz, the GTX might be the better card for you. [H] is correct in this respect, so I don't think they were being biased in their review. However, with regards to a high end system with a quad core at 3.6GHz or better, the 3870x2 smokes everything else. If you're on a budget, the 8800GTS 512 is the card to get for bang for buck because its performance is almost identical to the GTX and the price is much lower, but for best performance it's the 3870x2 all the way on a high end system. Unfortunately, [H]'s testing methodology does not show this.

Edit: Just wanted to add that I had a similar performance drop with the 8800GTS when you get to the part with the frozen ship and the snow on the ground; at this point the GTS dropped to 8-9FPS while the 3870x2 ran through this at 40FPS, so both cards actually seem to have their quirks.

Cheers
 
It seems to me that other sites compare to the [H] about much as a High School research paper compares to a Graduate Thesis.

Actually, you've got that backwards. Other websites' results would have a chance during a defense, whereas [H]'s wouldn't, based purely on the fact that the results are only applicable to Kyle's experience, and nobody else's.

Not at all. Let's say that getting a quad core involved will net you an extra 10 fps. It shouldn't matter what cards you have stuck in there; the extra 10 fps should apply with an Nvidia or ATi card.
Why would you expect to get more fps out of a video card based on processor type?

It all depends on the structure of the driver. If one vendor has a lot more driver overhead than another, more CPU power would net said vendor more relative performance than the other.

Wow that was a long read. A few thoughts.

Why is it so important to have a reproducible scientific benchmark for numbers that don't matter? The graphs that [H] includes show there's no significant variation in gameplay.

Why are people blaming the test system? It's a video card review. It's not a whole system review.

Who cares if you game at a lower resolution than those reviewed? You know that either card should be more than adequate. If money is the issue, then read reviews for the lesser cards; that's certainly what I did.

Why review cards on games that don't stress the cards?

Ridiculous really.

How can you say they show no significant variation when you only get to see the results from a single run?

Anyone claiming this is a video card review (like you) is an idiot. This is an article comparing benchmarking methodologies. I, personally, cited differences in system specifications only in the situation where numbers were being directly compared between two reviews with differing setups.

Different video cards scale differently with resolution. You, being an advocate of real world testing, should realize the value of seeing how each card performs at different settings.

Who is suggesting that reviewers not use demanding games?
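
On the single-run point: with a handful of runs per card you can actually say whether a gap is real or just run-to-run noise. A minimal sketch with invented fps numbers, assuming scipy is available:

```python
from statistics import mean, stdev
from scipy.stats import ttest_ind  # third-party: pip install scipy

# Invented average-fps results from five run-throughs per card.
gtx_runs = [48.1, 47.3, 49.0, 46.8, 48.4]
x2_runs  = [44.9, 47.8, 43.2, 46.1, 44.0]

for name, runs in (("8800 GTX", gtx_runs), ("3870 X2", x2_runs)):
    print(f"{name}: {mean(runs):.1f} avg fps, run-to-run stdev {stdev(runs):.1f}")

# Welch's t-test: a small p-value means the gap is unlikely to be
# explained by run-to-run variation alone.
t_stat, p_value = ttest_ind(gtx_runs, x2_runs, equal_var=False)
print(f"p-value for 'the cards differ': {p_value:.3f}")
```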
 
I think both methods have their place and, combined together, give a good picture of what to expect when making a decision about a video card purchase.

That said, I don't think one is superior to the other, for all the obvious reasons. Timedemos typically are not indicative of real-world gameplay. However, I've played games with what this site considers "playable settings" on similar hardware and ended up with results that were not so playable. With the 'real world' method, there is too much room for error. There's a reason that science follows a very specific method. There are controls. With 'real world' testing, there really aren't.

Really, justifying your reasons for your methods is fine. It's not necessary to call out another major website or their staff in the process though. It reeks of sensationalism.
 
bluehaze013: you are effectively saying that the 3870x2 is unleashed only when using a 3GHz C2D? At a regular resolution of 1600x1200, or even 1280x1024, I'm not so sure there would be the huge differences we've seen between other websites and [H]. It would be interesting to see if you could run some tests with your CPU clocked around 2.9GHz and then again at your regular 3.6GHz.
 
It all depends on the structure of the driver. If one vendor has a lot more driver overhead than another, more CPU power would net said vendor more relative performance than the other.
I see what you mean, indeed. I guess then at the same time, you would have to call it not a video card review, but a card/driver eval.
To be accurate, of course. :D
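
A toy model of that overhead argument, treating frame time as GPU time plus CPU/driver time (a simplification, since the two overlap in practice; all numbers are invented):

```python
def fps(gpu_ms, driver_ms, cpu_speedup):
    """Frames per second when the CPU/driver share shrinks by `cpu_speedup`."""
    return 1000.0 / (gpu_ms + driver_ms / cpu_speedup)

# Same GPU workload (14 ms/frame), different per-frame driver overhead.
for label, driver_ms in (("lean driver", 2.0), ("heavy driver", 6.0)):
    before = fps(14.0, driver_ms, 1.0)   # baseline CPU
    after = fps(14.0, driver_ms, 1.5)    # 50% faster CPU
    print(f"{label}: {before:.0f} -> {after:.0f} fps (+{after / before - 1:.0%})")
```

Under these made-up numbers the heavy-overhead driver gains roughly three times as much relative performance from the faster CPU, which is the effect being described.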
 
Spectacular article, thanks for giving some insight to your testing methods. I've always been a firm believer in real-world testing and this just solidifies that more.

Some canned benchmarks do reflect real-world performance, but not all of them. Why waste your time with the gamble?

[H] is the only place for video card evaluations.
 
bluehaze013: you are effectively saying that the 3870x2 is unleashed only when using a 3GHz C2D? At a regular resolution of 1600x1200, or even 1280x1024, I'm not so sure there would be the huge differences we've seen between other websites and [H]. It would be interesting to see if you could run some tests with your CPU clocked around 2.9GHz and then again at your regular 3.6GHz.


Nope, I'm saying that with a quad core at 3.6GHz I get vastly better performance than [H] does with a dual core; that is my only experience. I do not have a dual core to test with, but if you look at the graph I posted in an earlier post (which Kyle removed), it shows the differences in scaling between the 8800 Ultra and the 3870x2. The 3870x2 does much better than the 8800 Ultra with a better CPU. I don't have the technical understanding to know why this is, but it is obviously the case when comparing all reviews with quads versus dual cores, my own personal experience, and even this review right here, testing the Ultra and 3870x2 with various processors.

Here is the graph of the CPU scaling:

http://www.pcgameshardware.de/?menu...ty_id=-1&image_id=769656&page=1&show=original
 
There is a lot of talk about "scientific" evaluation of the game. It sure sounds impressive, but in essence they're trying to apply the scientific method to something that's inherantly subjective. Do people play through games with the framerate counter going to see how they can maximize it? Or do they play through games to enjoy the graphics, storyline and experience?

When I play a game, I just want it to look good and run smoothly; I don't want to be killing baddies and have the game lock up and lag on me. Someone sitting down and finding the best balance of in-game settings and resolutions that allows the game to look good and run smoothly is a huge undertaking, especially if they complete the entire game as they claim.

Minimizing variables scientifically is all well and good, but with so many settings in these newer games, the best way to find the best balance is to sit down and mess with them until you get something that works well. It's just like overclocking: you keep tweaking and tweaking until you hit a wall. Educated guesses until you get to the right balance. I'd venture to guess that someone who sits around all day doing just this is bound to have better educated guesses than I do.

When you ask your friend how he likes his new flatscreen TV, he doesn't spit out test results for chrominance or black levels; he gives you his impressions based on his experience with the TV. I don't know why video card reviews should be any different. Just because you can quantify everything doesn't mean you should, especially if you can glean more pertinent information from an informative description from a source very familiar with the topic.
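
For what it's worth, the tweak-until-you-hit-a-wall process described above can be written down as a simple greedy search. A sketch under stated assumptions: the setting names are made up, and measure_min_fps is a hypothetical hook for however you sample fps in-game:

```python
SETTINGS = ["textures", "shadows", "AA", "AF"]   # order: cheapest first
LEVELS = ["low", "medium", "high"]
TARGET_MIN_FPS = 30.0

def find_playable(measure_min_fps):
    """Greedily raise each setting until minimum fps dips below target."""
    config = {s: "low" for s in SETTINGS}
    for setting in SETTINGS:
        for level in LEVELS[1:]:                 # try medium, then high
            trial = {**config, setting: level}
            if measure_min_fps(trial) >= TARGET_MIN_FPS:
                config = trial                   # keep the upgrade
            else:
                break                            # hit the wall on this one
    return config

# Usage: pass in your own measurement hook, e.g.
# best = find_playable(my_fps_probe)   # my_fps_probe: hypothetical hook
```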
 
I think both methods have their place and, combined together, give a good picture of what to expect when making a decision about a video card purchase.

That said, I don't think one is superior to the other, for all the obvious reasons. Timedemos typically are not indicative of real-world gameplay. However, I've played games with what this site considers "playable settings" on similar hardware and ended up with results that were not so playable. With the 'real world' method, there is too much room for error. There's a reason that science follows a very specific method. There are controls. With 'real world' testing, there really aren't.

Really, justifying your reasons for your methods is fine. It's not necessary to call out another major website or their staff in the process though. It reeks of sensationalism.

What are these 'obvious reasons' you speak of? You didn't even cite one; all you did was make a poor attempt at discrediting [H]. Nothing that you said even hints at a reason why you would justify making your purchase on timedemo benchmarks AT ALL.

Who cares if this reeks of sensationalism? It is necessary, I believe. Most of the benchmarks that are held up as acceptable do not correlate to 'real world', or however you want to put it, ME PLAYING THE ACTUAL GAME, at all. Therefore, I feel it is important we speak up and start getting some testing that may actually apply to the GAMER, not simply a graphics enthusiast seeking the highest 3DMark score.

So, no, if I were buying a graphics card with the intent of getting good performance from a game (the 'real world' gameplay), timedemos would have nothing to do with it!

Please elaborate if I misunderstood you.
 
There is a lot of talk about "scientific" evaluation of the game. It sure sounds impressive, but in essence they're trying to apply the scientific method to something that's inherantly subjective. Do people play through games with the framerate counter going to see how they can maximize it? Or do they play through games to enjoy the graphics, storyline and experience?
Graphics rendering is about as "inherantly" (sic) subjective as any other field in science.

The weather is inherently subjective (i.e. it's warm; it's wet; it's windy), yet we still try to quantify this information and apply the scientific method, because the information is much more useful to us that way.
 
There is a lot of talk about "scientific" evaluation of the game. It sure sounds impressive, but in essence they're trying to apply the scientific method to something that's inherantly subjective. Do people play through games with the framerate counter going to see how they can maximize it? Or do they play through games to enjoy the graphics, storyline and experience?

When I play a game, I just want it to look good and run smoothly; I don't want to be killing baddies and have the game lock up and lag on me. Someone sitting down and finding the best balance of in-game settings and resolutions that allows the game to look good and run smoothly is a huge undertaking, especially if they complete the entire game as they claim.

Minimizing variables scientifically is all well and good, but with so many settings in these newer games, the best way to find the best balance is to sit down and mess with them until you get something that works well. It's just like overclocking: you keep tweaking and tweaking until you hit a wall. Educated guesses until you get to the right balance. I'd venture to guess that someone who sits around all day doing just this is bound to have better educated guesses than I do.

When you ask your friend how he likes his new flatscreen TV, he doesn't spit out test results for chrominance or black levels; he gives you his impressions based on his experience with the TV. I don't know why video card reviews should be any different. Just because you can quantify everything doesn't mean you should, especially if you can glean more pertinent information from an informative description from a source very familiar with the topic.

Good post - people are trying to play the numbers game when it's not even something they can count.
 
I've been reading [H] for a little over 8 years now, and computer modding/hacking/enthusiasting for over 10 years. Benchmarking has been, and remains, a purely subjective topic. This applies to [H], even after this article that everyone is praising so highly. In fact, as a concerned reader I have openly complained about the diminishing quality of [H]'s articles on the forums, directly to the authors in question, and directly to Kyle himself (to which Kyle was nice enough to afford me some time to answer questions).

For starters, [H] rips on timedemos, yet timedemos provide an even basis on which new video cards can be compared to older cards. Not to mention that when you do not use a pre-recorded and set method for reproducing test results, you end up introducing things like human error. Everyone here has seen how turning your viewing angle the slightest bit can affect the framerate you receive. Talk about skewing benchmark results. In fact, I would compare the whole 'let's run through and take notes' approach to 'let's make a less accurate timedemo.' Run through this area, then that one, and that one again - just a timedemo you do by hand and, as stated above, introduce human error into.
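
That repeatability point can be put in numbers: run the same test several times and compare the spread (stdev over mean). A minimal sketch with invented fps figures, nothing more:

```python
from statistics import mean, stdev

# Invented average-fps results: five scripted timedemo runs versus
# five hand-played runs of the same area.
timedemo_runs = [62.4, 62.1, 62.6, 62.3, 62.5]
manual_runs   = [57.8, 63.5, 55.1, 66.2, 60.4]

for label, runs in (("timedemo", timedemo_runs), ("hand-played", manual_runs)):
    cv = stdev(runs) / mean(runs) * 100          # coefficient of variation
    print(f"{label}: {mean(runs):.1f} avg fps, spread (CV) {cv:.1f}%")
```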

In 2004 [H] stopped being my #1 source for reviews and enthusiast news. 9 times out of 10 when I read an [H] review I thought to myself, 'this is in no way, shape, or form anywhere remotely near how I use my computer or play games on it.' I watercool, I develop, I play games, I utilize my computers HEAVILY, and I've been doing it for a LONG time. Doesn't that make me the target audience? Why then do I feel like the site is no longer catering to me and my like-minded fellow enthusiasts?

All this crying about benchmarks and how certain devices do better in certain benchmarks is just that - uninformed, illogical crybabying. Of course different devices are going to do better in certain benchmarks. New technology, new algorithms, and new hardware techniques are developed on a daily basis. People are constantly looking for ways to do things better. Naturally, with these differing technologies and techniques, certain devices do better at certain things.

Attempting to force results to be even is the WRONG way to approach scientifically testing a hypothesis. Proper scientific method is: Keep a control group, have identical test circumstances, and see how the results DIFFER. Not tweak the environment and circumstances until results are almost identical. Any real scientist after actual information will fight to the death over this process - which [H] is very actively tossing out the window.

I couldn't care any less about what [H] thinks the 'best playable settings' are for a game. In fact, this is one of the most useless 'points' I have seen a review site attempt to make. Not everyone puts the same weight on what makes something 'playable,' which makes the results from [H]'s 'real world' tests nearly impossible to interpret and apply. What if someone doesn't care about jaggies, or doesn't care about the clarity of textures with transparency? What if the only thing someone cares about is getting a framerate equal to the vertical refresh of their monitor? What if the only thing someone cares about is maximum draw distance? Let's also not forget the games that get a direct physics advantage when they run at a higher framerate. Frequently the most 'playable' settings in those games are 'disable every single bit of eye candy and get the fastest possible framerate you can.' [H] doesn't account for any of these things.

I would like to say that if the goal of [H] is to inform a buyer as to which product is the best for the game they want to play under conditions that are 'real world,' you have failed miserably. I've made two video card purchases at the suggestion of [H] since these 'new' reviews came out, and they both turned out to be completely inadequate. I did not get results even close to what was described, on vanilla systems. I have now been told on two occasions by different [H] staff that 'Most people don't play games like you do, Zoson.' Need I remind you that this site - your own site - is not [A]verage|OCP, it's [H]ard|OCP? A site that supposedly relishes the fact that it caters to 'hardware enthusiasts' and 'hardcore gamers.' Since when were 'most people' enthusiasts OR hardcore gamers?

Defending [H] was something I used to do frequently. But nowadays it seems like whenever someone brings up an [H] review, I find myself picking it to pieces and pointing out how badly [H] is duping us with [A]verage|OCP reviews. The topic of this article wasn't even 'how we can do better'; it was flat out 'how we are better than they are.' Since when has [H] had to resort to attacking other websites and their reviews?

I'm flat out DISGUSTED.

This reminds me of how Tom Pabst behaved just before his website became the joke of the enthusiast community. Attacking someone else's reviews doesn't prove anything but a childish 'I'm better than you are' mentality.
 
Graphics rendering is about as "inherantly" (sic) subjective as any other field in science.

The weather is inherently subjective (i.e. it's warm; it's wet; it's windy), yet we still try to quantify this information and apply the scientific method, because the information is much more useful to us that way.

The problem here is we have folks who just want the numbers (scientific method), and then there are the others who look at it like art (how it's played). Yeah, Timmy drew that picture of his puppy in 5 seconds, but that doesn't mean I want to frame it and put it up on the wall. Useful? Depends on your perspective.

Me, I'm playing a game, not watching the FPS counter. ;)
 
Graphics rendering is about as "inherantly" (sic) subjective as any other field in science.

The weather is inherently subjective (i.e. it's warm; it's wet; it's windy), yet we still try to quantify this information and apply the scientific method, because the information is much more useful to us that way.

Um, I don't know what internet you're on, but the one I'm on allows me access to weather.com, where I can view exact temperature and barometer readings (with slight variation, of course) for different regions of the world. There is one earth, and a million calculations going into providing the most accurate weather reports for my area. There are millions of computers, and only one calculation (from a fucking timedemo) providing me with accurate benchmarks for the game I want to play with my card? I think not.

Furthermore, everyone has a different opinion regarding what is a nice day (I love rain, for instance), but I think everyone can agree they don't like bad performance.

This isn't about science, everyone. It's about results.
 
Do you actually have a 3870x2? I have the 3870x2 and the GTS512, and the 3870x2 can very easily play 1680x1050 all high. I actually play 1920x1200 all high with the 3870x2, where before I had to play mostly medium with a couple settings on high with the 8800GTS 512. The 3870x2 is actually quite playable at 1680x1050 with many settings at Very High, averaging 30fps.

The only glitch I have noticed, and where Nvidia wins this comparison, is in one particular portion of the game when you go into that church or schoolhouse (not sure what it is) to rescue the hostage; for some reason the 3870x2 drops to 5-6fps randomly. It is quite annoying and was a bit difficult to get through that portion. This lasted for about 30 seconds, on and off, while trying to kill the tanks that show up. Aside from that, throughout the rest of the game so far, in overall performance the 3870x2 smokes the 8800GTS 512, with much higher settings playable.


After doing testing on my own and comparing reviews from other sites, I can confirm that if you are running a dual core at less than 3GHz, the GTX might be the better card for you. [H] is correct in this respect, so I don't think they were being biased in their review. However, with regards to a high end system with a quad core at 3.6GHz or better, the 3870x2 smokes everything else. If you're on a budget, the 8800GTS 512 is the card to get for bang for buck because its performance is almost identical to the GTX and the price is much lower, but for best performance it's the 3870x2 all the way on a high end system. Unfortunately, [H]'s testing methodology does not show this.

Edit: Just wanted to add that I had a similar performance drop with the 8800GTS when you get to the part with the frozen ship and the snow on the ground; at this point the GTS dropped to 8-9FPS while the 3870x2 ran through this at 40FPS, so both cards actually seem to have their quirks.

Cheers

That's a shame; my GTS only slowed down on the assault map, everything else was great.
 
I'd like to stop hearing the scientific argument over benchmark methods. Honestly, if the scientific method were accurate for our purposes, we would still be using it. I don't buy a $500 video card to play benchmarks. I spend this money to be able to enjoy all the eye candy in video games today.

Timedemos with their scores only show us if a GPU is theoretically faster, not if it really is. The fact is, drivers are updated, new revisions are spun, etc. It does not help if a GPU is the fastest processor in the world when the overhead from the driver causes it to perform slower than 14-month-old technology. Or even if the driver is not optimized... or maybe the GAME isn't optimized for the video card. Either way, we measure REAL performance, not theoretical crap.

This is a matter of looking at what is true and what could potentially be true.
 
I've been a HardOCP and Anandtech reader for many years and I think both sites are good. When it comes to a buying decision for a new video card, I always trust HardOCP the most. I want to know what FPS I can expect from actually playing the game. I was quite disappointed when my 8800GTS 512MB couldn't play Crysis at 1280x1024 with everything on high at stable frame rates (sure, it's fine at first, but later in the game it's not, unless you think 15 FPS is fine).

I think it's always important to get as many opinions as possible. That is why I don't blindly trust my favorite site (even HardOCP) but check many sites and talk to people who have the video card or whatever.

At least we have left the days when running 3dmark was enough for a video card review (although some sites still use only 3dmark!). I remember when I bought my Radeon 9600pro and then a week or so later Tom's Hardware came to the conclusion that the FX5600 was a much better video card, and I was pissed that I had bought the wrong card. Of course, we all know now that the FX5600 sucked at real world gaming, so I was glad that I had bought the 9600pro after all.
 