misrepresentation of the Fermi offerings

McCartney (Gawd, joined Mar 6, 2006, 866 messages)
To wrap things up, let’s start with the obvious: NVIDIA has reclaimed their crown – they have the fastest single-GPU card. The GTX 480 is between 10 and 15% faster than the Radeon 5870 depending on the resolution, giving it a comfortable lead over AMD’s best single-GPU card.

With that said, we have to take pause for a wildcard: AMD’s 2GB Radeon 5870, which will be launching soon. We know the 1GB 5870 is RAM-limited at times, and while it’s unlikely more RAM on its own will be enough to make up the performance difference, we can’t fully rule that out until we have the benchmarks we need. If the GTX 480 doesn’t continue to end up being the fastest single-GPU card out there, we’ll be surprised.

The best news in this respect is that you’ll have time to soak in the information. With a retail date of April 12th, if AMD launches their card within the next couple of weeks you’ll have a chance to look at the performance of both cards and decide which to get without getting blindsided.

On a longer term note, we’re left wondering just how long NVIDIA can maintain this lead. If a 2GB Radeon isn’t enough to break the GTX 480, how about a higher clocked 5800 series part? AMD has had 6 months to refine and respin as necessary; with their partners already producing factory overclocked cards up to 900MHz, it’s too early to count AMD out if they really want to do some binning in order to come up with a faster Radeon 5800.

Meanwhile let’s talk about the other factors: price, power, and noise. At $500 the GTX 480 is the world’s fastest single-GPU card, but it’s not a value proposition. The price gap between it and the Radeon 5870 is well above the current performance gap, but this has always been true of the high-end. Bigger than price though is the tradeoff for going with the GTX 480 and its much bigger GPU – it’s hotter, it’s noisier, and it’s more power hungry, all for 10-15% more performance. If you need the fastest thing you can get then the choice is clear, otherwise you’ll have some thinking to do about what you want and what you’re willing to live with in return.

Moving on, we have the GTX 470 to discuss. It’s not NVIDIA’s headliner so it’s easy to get lost in the shuffle. With a price right between the 5850 and 5870, it delivers performance right where you’d expect it to be. At 5-10% slower than the 5870 on average, it’s actually a straightforward value proposition: you get 90-95% of the performance for around 87% of the price. It’s not a huge bargain, but it’s competitively priced against the 5870. Against the 5850 this is less true where it’s a mere 2-8% faster, but this isn’t unusual for cards above $300 – the best values are rarely found there. The 5850 is the bargain hunter’s card, otherwise if you can spend more pick a price and you’ll find your card. Just keep in mind that the GTX 470 is still going to be louder/hotter than any 5800 series card, so there are tradeoffs to make, and we imagine most people would err towards the side of the cooler Radeon cards.
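The value math in that paragraph is easy to sanity-check. Here's a rough sketch using the article's relative-performance figures; the street prices are my own approximations for illustration, not official MSRPs:

```python
# Rough perf-per-dollar comparison. Prices are assumed launch street prices;
# rel_perf is normalized to the HD 5870 per the article's 10-15% / 5-10% figures.
cards = {
    "GTX 480": {"price": 499, "rel_perf": 1.125},  # ~10-15% over the 5870
    "GTX 470": {"price": 349, "rel_perf": 0.95},   # ~5-10% under the 5870
    "HD 5870": {"price": 399, "rel_perf": 1.00},   # baseline
    "HD 5850": {"price": 299, "rel_perf": 0.90},
}

for name, c in cards.items():
    ppd = c["rel_perf"] / c["price"] * 1000  # relative performance per $1000
    print(f"{name}: {ppd:.2f} perf/$k")
```

On those numbers the 5850 comes out with the best performance per dollar and the GTX 480 the worst, which is exactly the "best values are rarely found above $300" point.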

With that out of the way, let’s take a moment to discuss Fermi’s future prospects. Fermi’s compute-heavy and tessellation-heavy design continues to interest us but home users won’t find an advantage to that design today. This is a card that bets on the future and we don’t have our crystal ball. With some good consumer-oriented GPGPU programs and developers taking up variable tessellation NVIDIA could get a lot out of this card, or if that fails to happen they could get less than they hoped for. All we can do is sit and watch – it’s much too early to place our bets.

As for NVIDIA’s ecosystem, the situation hasn’t changed much from 2009. NVIDIA continues to offer interesting technologies like PhysX, 3D Vision, and CUDA’s wider GPGPU application library. But none of these are compelling enough on their own, they’re merely the icing on the cake. But if you’re already in NVIDIA’s ecosystem then the choice seems clear: NVIDIA has a DX11 card ready to go that lets you have your cake and eat it too.

Finally, as we asked in the title, was it worth the wait? No, probably not. A 15% faster single-GPU card is appreciated and we’re excited to see both AMD and NVIDIA once again on competitive footing with each other, but otherwise with much of Fermi’s enhanced abilities still untapped, we’re going to be waiting far longer for a proper resolution anyhow. For now we’re just happy to finally have Fermi, so that we can move on to the next step.

Straight from Anandtech. I am not going to let alg7_munif and his fanATIcs pollute our forum with lies and misinformation. In munif's poll he has people claiming the GTX 470 is worse than the 5850, which is definitely not the case.

I am wondering who else here is sick of the misrepresentation done by brand loyalists so that they have someone to make fun of?

Obviously the reviews have LARGELY stated the "480 wins" and the "470 wins".

Can people tell me, on what basis, the 5850 is better than the 470? Munif sure thinks so, but reviews have said otherwise.
 

Straight from Anandtech. I am not going to let alg7_munif and his fanATIcs pollute our forum with lies and misinformation. In munif's poll he has people claiming the GTX 470 is worse than the 5850, which is definitely not the case.

I am wondering who else here is sick of the misrepresentation done by brand loyalists so that they have someone to make fun of?

Obviously the reviews have LARGELY stated the "480 wins" and the "470 wins".

Can people tell me, on what basis, the 5850 is better than the 470? Munif sure thinks so, but reviews have said otherwise.

:rolleyes: jesus man, can you read at all?
 
Just to put it out there, canned benchmarks do not always reflect real-world gaming performance. If you buy video cards just to post benchmark scores, then the 470 is OK. For actual gaming, it's in a hard spot. The price gap between it and the 5850 is fairly substantial.
 
Depends on what you consider better. It seems like for the money the 5850 is the better choice. The 470 edges it out mostly but I'm having a hard time justifying the extra cost, heat and power as much as it saddens me to say heh. I'm a fan of nvidia but it doesn't seem the 470 makes a whole lot of sense.
 
OK so you are going to pick and choose which site you believe?

::Yea this sounds better and proves my point, lemme go on [H] and post this::
There's a review right here, on this forum if you can believe it, that says the 470 is not a good buy.

Do your own review, buy the 5850 and 470 so you can review them for yourself.
Either way you are forming conclusions based on someone else's tests and opinion, just like the rest of us.
 
Just to put it out there, canned benchmarks do not always reflect real-world gaming performance. If you buy video cards just to post benchmark scores, then the 470 is OK. For actual gaming, it's in a hard spot. The price gap between it and the 5850 is fairly substantial.

I don't know what the difference is between "canned" benchmarks and "gameplay" benchmarks, except that the reviewer has more control in the latter case.

If you're trying to say that these benchmarks, even with min & average figures, probably don't give you the best idea of how much benefit you receive when not looking at framerate figures, you're probably right. It's often hard to notice a real performance difference; it seems like anything below a 30% improvement in min framerate isn't noticeable, and only if you're somewhere in the 18-30 fps (min) range. Above or below that you need a more significant performance increase to care.

I'd say Fermi does not give that based on what I've seen.
 
A thread for me? So sweet :eek:

Btw the difference between [H] method and others can be explained with this picture, it is not accurate but I hope this can help:
 

Straight from Anandtech. I am not going to let alg7_munif and his fanATIcs pollute our forum with lies and misinformation. In munif's poll he has people claiming the GTX 470 is worse than the 5850, which is definitely not the case.

I am wondering who else here is sick of the misrepresentation done by brand loyalists so that they have someone to make fun of?

Obviously the reviews have LARGELY stated the "480 wins" and the "470 wins".

Can people tell me, on what basis, the 5850 is better than the 470? Munif sure thinks so, but reviews have said otherwise.

Doesn't [H] review essentially show the same thing:

We see no reason to purchase a GeForce GTX 470. It provides no gameplay advantages compared to the competition, and will actually end up costing you more power and dollars for the exact same performance you can get with the Radeon HD 5850. Factor in the power consumption, and it doesn’t seem worth it. If you have an HD 5850, stick with it, the GTX 470 is not an upgrade. If you are contemplating a great performing graphics card, for a decent price, the HD 5850 is still the best choice.

Are you going to search around for a review that agrees with what you want to think?
 

I am wondering who else here is sick of the misrepresentation done by brand loyalists so that they have someone to make fun of?

It's been that way since at least the 3DFX days.

Just like AMD vs Intel, Xbox vs Playstation, Coke vs Pepsi, boob guys vs leg guys.

These forums would be dead silent without some drama I suppose.
 
A thread for me? So sweet :eek:

Btw the difference between [H] method and others can be explained with this picture, it is not accurate but I hope this can help:

I lol'd

It's been that way since at least the 3DFX days.

Just like AMD vs Intel, Xbox vs Playstation, Coke vs Pepsi, boob guys vs ass guys.

These forums would be dead silent without some drama I suppose.

fixed.
 
It wouldn't be dead silent, it would be full of people trying to dump their 5870 in FS section if Nvidia showed up. Screaming I told you so and I should have waited comments. Nvidia brought a knife to a knife fight, which means there are injuries on both sides of the battle. We all wanted gun shots from Nvidia, but it's dead silent!
 
The OP can try to justify the, IMO, underwhelming performance, but the fact still stands that Nvidia dropped the ball.

No point in splitting hairs in this situation.
 
Leg guy, ass guy whatever. They are all wrong. Who doesn't like boobs? Or coke? ;)

I'm a legs, ass, boobs, abs, and face guy. There is nothing wrong with having a preference between traits, but I feel that it is an insult not to adore the whole package. I also like Pepsi and Coke (I'm a fitness nut though and don't drink sugared pop, in which case I drink Diet Pepsi and Coke Zero).
 
I don't know what the difference is between "canned" benchmarks and "gameplay" benchmarks, except that the reviewer has more control in the latter case.

If you're trying to say that these benchmarks, even with min & average figures, probably don't give you the best idea of how much benefit you receive when not looking at framerate figures, you're probably right. It's often hard to notice a real performance difference; it seems like anything below a 30% improvement in min framerate isn't noticeable, and only if you're somewhere in the 18-30 fps (min) range. Above or below that you need a more significant performance increase to care.

I'd say Fermi does not give that based on what I've seen.

Canned benchmarks are things like the Crysis flyby benchmark or timedemos. They do NOT reflect the same load as actually playing the game, and they can (and have) had specific optimizations done by ATI and Nvidia for the specific path the camera takes. For example, in the Crysis flyby benchmark the camera flies high above the ground with no shooting or explosions. Actually playing the game involves being close to the ground, moving through the jungle, and shooting stuff - things the canned benchmark didn't even do, much less accurately test. And since the camera movements are prerecorded, the drivers can cheat to give a better score.

Gameplay benchmarks mean the reviewer sat down and played the damn game, just like you or I would, and recorded the FPS between runs using the same general run-through of the level (taking the same paths, etc.).
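For what it's worth, the "recorded the FPS" part is just frame-time bookkeeping. A toy sketch of how the min/avg numbers in those reviews come about (the frame times here are invented for illustration):

```python
# Minimal sketch of what a gameplay benchmark boils down to: log frame
# times (in milliseconds) while actually playing a level, then summarize.
frame_times_ms = [16.7, 18.2, 33.5, 17.0, 45.1, 16.9, 20.3, 17.5]

fps_per_frame = [1000.0 / t for t in frame_times_ms]
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)  # true average
min_fps = min(fps_per_frame)                                    # worst single frame

print(f"avg: {avg_fps:.1f} fps, min: {min_fps:.1f} fps")
```

Note the min is driven by the single worst hitch (the 45 ms frame here), which is exactly the spike a flyby timedemo never produces.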
 
I'm a legs, ass, boobs, abs, and face guy. There is nothing wrong with having a preference between traits, but I feel that it is an insult not to adore the whole package. I also like Pepsi and Coke (I'm a fitness nut though and don't drink sugared pop, in which case I drink Diet Pepsi and Coke Zero).

Good, thank god you're avoiding all of those calories. Too bad all that aspartame causes excitotoxicity and accelerates neuron death. So you can look forward to early retirement and dementia with clean arteries!

Yeah, and before you reply I'd look up aspartame's metabolism and the role of NMDA.
 
I'm a legs, ass, boobs, abs, and face guy. There is nothing wrong with having a preference between traits, but I feel that it is an insult not to adore the whole package. I also like Pepsi and Coke (I'm a fitness nut though and don't drink sugared pop, in which case I drink Diet Pepsi and Coke Zero).


So which is it for you? Definitively Pepsi (ATi) or Coke (nV)?

Honestly, everything I've bought from nVidia hasn't ever broken, or disappointed me (excluding those ATROCIOUS chipsets for Intel platforms, which really pissed me off and made me ambivalent on my x58c purchase) in any way.

I understand there is price to performance concerns, but I just look at the product. So when people slag the best product for not being 100 percent faster in some gaming benchmarks, it irritates me.

This video card is a HUGE deal in high performance computing. I am trying to say our games will now scale on GPUs instead of CPUs. Imagine multi core GAMING on your GPU independently of your CPU?

This is the first step, nVidia has established its shader as a HIGHLY scalable GPU architecture. Don't you guys see it? nVidia games are going to scale much better due to the UDA guys... :

http://www.nvidia.com/object/feature_uda.html

Just wait. It's getting sweeter.
Everyone just gets bored because nVidia is doing what any intelligent company would do and established a standard that makes it easier for developers to exploit it. There is no surprise with anything for people in the industry to understand it.

Believe me when I tell you that the work many of my friends entering the workforce as 3rd-year undergrad interns are doing is impressive. Everyone has now fully adapted to a standard and we want to work on it because it's easy and fun. Learning to scale computations on other devices instead of the CPU is a big deal.

As work continues on the DRIVERS to scale the games on the CORES, you will see near linear scaling between nvidia product releases.
 
So which is it for you? Definitively Pepsi (ATi) or Coke (nV)?

Honestly, everything I've bought from nVidia hasn't ever broken, or disappointed me (excluding those ATROCIOUS chipsets for Intel platforms, which really pissed me off and made me ambivalent on my x58c purchase) in any way.

I understand there is price to performance concerns, but I just look at the product. So when people slag the best product for not being 100 percent faster in some gaming benchmarks, it irritates me.

This video card is a HUGE deal in high performance computing. I am trying to say our games will now scale on GPUs instead of CPUs. Imagine multi core GAMING on your GPU independently of your CPU?

This is the first step, nVidia has established its shader as a HIGHLY scalable GPU architecture. Don't you guys see it? nVidia games are going to scale much better due to the UDA guys... :

http://www.nvidia.com/object/feature_uda.html

Just wait. It's getting sweeter.
Everyone just gets bored because nVidia is doing what any intelligent company would do and established a standard that makes it easier for developers to exploit it. There is no surprise with anything for people in the industry to understand it.

Believe me when I tell you that the work many of my friends entering the workforce as 3rd-year undergrad interns are doing is impressive. Everyone has now fully adapted to a standard and we want to work on it because it's easy and fun. Learning to scale computations on other devices instead of the CPU is a big deal.

As work continues on the DRIVERS to scale the games on the CORES, you will see near linear scaling between nvidia product releases.

You clearly don't have any clue how video cards have been engineered for the past, oh, since they were invented.

Basically nothing you said makes any sense at all. Oh, and ATI uses a unified driver as well - and has for years.
 
Can people tell me, on what basis, the 5850 is better than the 470? Munif sure thinks so, but reviews have said otherwise.
On the basis of Brent's review right on this site. If you think that the [H] reviews are wrong, state your grievance with Brent and Kyle.
 
A thread for me? So sweet :eek:

Btw the difference between [H] method and others can be explained with this picture, it is not accurate but I hope this can help:

You're right, it isn't accurate. In fact, it gives no information at all. But it's a graph so it looks like it is significant!

The difference between the two can be graphed like this:

(attached image: HardReview-1.png)


This is assuming you value scientific, objective comparisons. Plenty of people enjoy reading the largely editorial review. Hard is trying for something positive, but imo they haven't reached it for the most part. I think that's why many consider their reviews less useful, but that's just an assumption. I did like the 480/470 GTX review, but I disagree with their choice of settings, and that claustrophobic "you better like the settings we choose because that's all you get" approach doesn't hold a candle to the methodology of a site like Anandtech. The idea that instead of showing relative performance you need to show maximum featureset at some reviewer-predefined performance goal, especially considering that performance figure changes for every single test (there isn't a set cutoff performance they target, but a general swath they call "playability"), is simply ridiculous to me.

As a caveat, before they decided to use their current review methodology this was my #1 go-to site for reviews. Even with the modifications they've made to this strategy since its inception, they lag far behind imo. This might just be a man-hour thing, because I believe if they did a more exhaustive settings analysis (so doing what they call "apples to apples" testing for the exhaustive variety of settings and commenting on playability), did this analysis by spending the time to play good chunks of the game like they currently do, and analyzed a frame/time graph, it would be the best review out there. I assume that it takes too long to have 1 or 2 people do this and still remain competitive in terms of release schedule.
 
This is assuming you value scientific, objective comparisons. Plenty of people enjoy reading the largely editorial review. Hard is trying for something positive, but imo they haven't reached it for the most part. I think that's why many consider their reviews less useful, but that's just an assumption. I did like the 480/470 GTX review, but I disagree with their choice of settings, and that claustrophobic "you better like the settings we choose because that's all you get" approach doesn't hold a candle to the methodology of a site like Anandtech.

Except it really isn't scientific and objective when you tell the companies making the two products the exact test you will be doing months before you review the product, which is essentially what canned benchmarks do. Hell, anandtech even benchmarked a damn cutscene in a review. How does a cutscene reflect the performance of playing the game?

The problem isn't whether or not it is subjective or objective, the problem is whether or not it is *accurate*. And canned benchmarks simply aren't accurate in what they test. Crysis's canned benchmarks, for example, have almost no relevance to playing the actual game. In the game you don't lazily fly around in the air, so why does it benchmark that? If a card gets a good score flying through the air in Crysis, does that mean it can handle explosions, shooting, up close textures/bump mapping, etc...? No, it doesn't, it just means it can fly through the air - which tells me nothing since that isn't what I do in the game.

So really your graph would be more accurate if the "typical top class review" was well below the usefulness line - because they don't tell you how the cards will play games at all. They only tell you how well a card benchmarks, which isn't useful at all. Remember, many of those "typical top class reviews" showed the 2900XT as being a clear winner - the card benchmarked faster than the 8800GTX after all. [H] even got bashed for saying that the 2900XT wasn't any good, and people ragged on [H]'s testing methodology, saying that it was flawed and wrong - just like you are now. Guess who was right? [H] was. We don't even think about it anymore; everyone knows the 2900XT was a flop.
 
I think the problem nowadays is that cards are getting so incredibly fast that [H] is forced to use some completely off the wall AA settings to stress the card to the max.

What drew me to [H] reviews initially was the actual real world impact of "Hmm, if I spent $100 more, what do I actually get?"... and sometimes it really is 1 level of AA (shocking!). It's nice to see a bunch of graphs with numbers, but [H] is really the only place that does this kind of review and I hope they keep doing it... considering if I wanted a bunch of graphs of repeated tests, there's about 20 other sites that can give me that.

Although, I think the playing field has also changed since the introduction of eyefinity and nvidia surround. What I look forward to the most in future reviews (on any site...) are comparisons of triplehead gaming. For some reason, I think [H] won't disappoint me on that in the future.
 
You're right, it isn't accurate. In fact, it gives no information at all. But it's a graph so it looks like it is significant!

Should I explain my graph to you? The cards scale differently with load, AMD cards take less hit with a higher setting but at a lower setting, AMD cards won't get insanely high frame rate either.

What other websites do is they choose a specific setting and compare the cards' frame rate.
What [H] do is they choose a specific frame rate window and compare the maximum settings that can be used to reach that frame rate window.

Check out this review and you will see that my graph is not accurate but it is not something pulled out of nowhere either:

http://www.guru3d.com/article/geforce-gtx-470-480-review/16
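To make that distinction concrete, here's a toy sketch of the two approaches. The card names, settings tiers, fps numbers, and the 40 fps playability floor are all made up for illustration:

```python
# Each entry: settings level -> fps a hypothetical card achieves.
card_a = {"medium": 90, "high": 62, "ultra": 38}
card_b = {"medium": 110, "high": 36, "ultra": 24}

# Fixed-setting method (most sites): pick one setting, compare raw fps.
print("at high:", card_a["high"], "fps vs", card_b["high"], "fps")

# [H]-style method: fix a playability floor, report the best settings
# each card can hold above it.
PLAYABLE_FPS = 40

def highest_playable(card):
    for setting in ["ultra", "high", "medium"]:  # heaviest settings first
        if card[setting] >= PLAYABLE_FPS:
            return setting
    return "unplayable"

print(highest_playable(card_a))  # "high"
print(highest_playable(card_b))  # "medium"
```

Notice how the two methods can tell different stories from the same data: card_b "wins" the medium-setting bar chart with 110 fps, but card_a holds a higher playable setting.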
 
Should I explain my graph to you? The cards scale differently with load, AMD cards take less hit with a higher setting but at a lower setting, AMD cards won't get insanely high frame rate either.

What other websites do is they choose a specific setting and compare the cards' frame rate.
What [H] do is they choose a specific frame rate window and compare the maximum settings that can be used to reach that frame rate window.

Check out this review and you will see that my graph is not accurate but it is not something pulled out of nowhere either:

http://www.guru3d.com/article/geforce-gtx-470-480-review/16

No I understood the graph. It's useless because "playable framerate" is a largely subjective value, and it is not fixed in [H] reviews like you imply (nor is an actual cutoff ever stated, just what feels playable to Brent or Kyle). Also your graph ignores the fact that [H] reviews actually do not measure the relationship between "load" and "playable" framerate before they introduce a third confounding factor: "settings". They measure relative performance at unequal settings, hence why you have to rely on the reviewer's commentary (the numbers are almost completely useless).

That's why I said your graph conveys no information.
 
the nVidia offering is clearly superior for the newest DirectX games, how can you say that it isn't the best product? It's going to get better with time...
 
the nVidia offering is clearly superior for the newest DirectX games, how can you say that it isn't the best product? It's going to get better with time...

*cough, Dirt 2 is DX11 as well and Fermi gets its ass handed to it in that game, cough*

Pretty sure we had/are having that conversation in another thread, though.
 
*cough, Dirt 2 is DX11 as well and Fermi gets its ass handed to it in that game, cough*

Pretty sure we had/are having that conversation in another thread, though.

Does it really, now?

It seems the facts disagree with you:

http://images.hardwarecanucks.com/image//skymtl/GPU/GTX480123/GTX480-46.jpg

http://www.firingsquad.com/hardware/nvidia_geforce_gtx_480_470_performance/images/l4d1920.gif

the nVidia offering is clearly superior for the newest DirectX games, how can you say that it isn't the best product? It's going to get better with time...

These creatures live under bridges in the woods in fairy tales... they live in forums on the net, too, maybe... maybe we should stop feeding them and they might go away :).
 
You're right, it isn't accurate. In fact, it gives no information at all. But it's a graph so it looks like it is significant!

The difference between the two can be graphed like this:

This is assuming you value scientific, objective comparisons. Plenty of people enjoy reading the largely editorial review. Hard is trying for something positive, but imo they haven't reached it for the most part. I think that's why many consider their reviews less useful, but that's just an assumption. I did like the 480/470 GTX review, but I disagree with their choice of settings, and that claustrophobic "you better like the settings we choose because that's all you get" approach doesn't hold a candle to the methodology of a site like Anandtech. The idea that instead of showing relative performance you need to show maximum featureset at some reviewer-predefined performance goal, especially considering that performance figure changes for every single test (there isn't a set cutoff performance they target, but a general swath they call "playability"), is simply ridiculous.

As a caveat, before they decided to use their current review methodology this was my #1 go-to site for reviews. Even with the modifications they've made to this strategy since its inception, they lag far behind imo. This might just be a man-hour thing, because I believe if they did a more exhaustive settings analysis (so doing what they call "apples to apples" testing for the exhaustive variety of settings and commenting on playability), did this analysis by spending the time to play good chunks of the game like they currently do, and analyzed a frame/time graph, it would be the best review out there. I assume that it takes too long to have 1 or 2 people do this and still remain competitive in terms of release schedule.

This sums up most of my gripes with the [H] reviews. They're still useful, just not as thorough or data-heavy as some other sites are, and they pretty much take away any opportunity for the reader to come up with his own decisions based on larger data sets.
 
No I understood the graph. It's useless because "playable framerate" is a largely subjective value, and it is not fixed in [H] reviews like you imply (nor is an actual cutoff ever stated, just what feels playable to Brent or Kyle). Also, it is not at all true that the only performance difference between Fermi & 5800 is due to difference in settings, as your graph implies (a correspondence between framerate and load). There is actually a 3rd confounding factor which is relative efficiency. The graph is junk :)

That's why I said your graph conveys no information. It isn't clever.

I don't understand your graph; it is not useful when [H] is being subjective and it will be more useful when they are being objective, but other typical top class reviews are constantly being objective even though they are not useful?
 
This sums up most of my gripes with the [H] reviews. They're still useful, just not as thorough or data-heavy as some other sites are, and they pretty much take away any opportunity for the reader to come up with his own decisions based on larger data sets.

B-I-N-G-O and bingo was his name-o... I agree completely. I like to take the data, make my own comparisons with it, then decide. Even assuming their "real-world" run-throughs are accurate enough (which is debatable, compared to custom timedemos (not built-in benches, but custom timedemos)), they just kinda interpret their runs for you and give you pre-digested info, rather than raw data for you to analyze and decide from.
 
I don't understand your graph; it is not useful when [H] is being subjective and it will be more useful when they are being objective, but other typical top class reviews are constantly being objective even though they are not useful?

Yeah I would have to represent other review sites as a single point for that graph to accurately reflect their usefulness. You get the idea. It's fixed now. edit: It actually wasn't all that bad to begin with, because some other review sites have objective AND useless results :D, but that wasn't what I was getting at.

By the same token your graph would actually be more accurate if that vertical line represented [H] reviews, and the horizontal "playable framerate" line represented every other review. Because they actually test variable loads, so you can see where the playability cutoff is for yourself. Cool right? Your graph not only means nothing, it actually has the labeling backwards.

To answer the call to corruption: Sites like Anandtech often use their own in-house timedemos, so it would be impossible for AMD & Nvidia to optimize their drivers for Anandtech's results. "Canned benchmarks" pretty much mean "Heaven" and 3dMark at this point.
 

Yes, it does

http://hardocp.com/article/2010/03/26/nvidia_fermi_gtx_470_480_sli_review/6

Of course, I've already linked you to that before, so I have a feeling you are just going to continue to ignore it, but whatever.

Oh, and hardwarecanucks used "High" post processing, not "Ultra".
 
Even if the performance numbers are close in some games, the obscene noise and power consumption make it a loser.
 
I am not a fan of HardOCP's subjective reviews either. Techpowerup has an amazing amount of data in their review of Fermi. It's good that we have both because while a larger dataset allows you to extrapolate where your system would fit in (along with other useful charts like performance per watt, etc.), I read [H] for the editorial content really because I can't take anything useful from the "maximum playability" graphs. It's probably a manpower thing. If they had the time to apply those to every resolution, it would be a much more useful review.

Again, it's good to have comprehensive sites with regular reviewing methods and also good to have [H] because I enjoy the editorials.
 
Again, it's good to have comprehensive sites with regular reviewing methods and also good to have [H] because I enjoy the editorials.

I read the H reviews for the editorial, not the performance aspects, myself as well.

Yes, it does

http://hardocp.com/article/2010/03/26/nvidia_fermi_gtx_470_480_sli_review/6

Of course, I've already linked you to that before, so I have a feeling you are just going to continue to ignore it, but whatever.

Oh, and hardwarecanucks used "High" post processing, not "Ultra".

Yes, at a resolution (2560x1600) that nVidia says has current outstanding pre-release driver issues... ;) you complain about some sites using 9.12's for the ATI cards, but magically forget about it when it's a loss for nVidia due to drivers, eh? Nice double standards! In any case, the reviews I linked are apples-to-apples comparisons, not various settings that drastically affect performance pitted against each other at a resolution nVidia confirms is bugged since the cards aren't out yet. If you don't like the HWC one, why not simply read the FiringSquad one, which does max the settings? Oh, right, because it doesn't match your desire to twist the facts.
 
No I understood the graph. It's useless because "playable framerate" is a largely subjective value, and it is not fixed in [H] reviews like you imply (nor is an actual cutoff ever stated, just what feels playable to Brent or Kyle). Also your graph ignores the fact that [H] reviews actually do not measure the relationship between "load" and "playable" framerate before they introduce a third confounding factor: "settings". They measure relative performance at unequal settings, hence why you have to rely on the reviewer's commentary (the numbers are almost completely useless).

That's why I said your graph conveys no information.

When you read an [H] review, you look at the highest settings, not at the numbers. Even though playable is subjective, the reviewer is the same, so the standard is the same for both cards.

You only look at the numbers when the settings are maxed out and both cards are using the same settings. They push the cards to the maximum until they feel that it is the limit for the cards to still be playable. Even though the avg., min. and max. frame rates are not the same, the playability is considered the same relative to each other. If both cards can already max out a game at a given resolution, there is no point running the benchmark again at a lower resolution. Maybe a card can do 10 fps more than the other, but when you already reach 100 fps, another 10 fps won't change much in terms of gaming experience.
 
I am not a fan of HardOCP's subjective reviews either. Techpowerup has an amazing amount of data in their review of Fermi. It's good that we have both because while a larger dataset allows you to extrapolate where your system would fit in (along with other useful charts like performance per watt, etc.), I read [H] for the editorial content really because I can't take anything useful from the "maximum playability" graphs. It's probably a manpower thing. If they had the time to apply those to every resolution, it would be a much more useful review.

Again, it's good to have comprehensive sites with regular reviewing methods and also good to have [H] because I enjoy the editorials.

The problem with sites that simply bunch a lot of data into one review is that they don't actually rebench all those cards. As was mentioned in a lot of reviews, they were using outdated drivers for the 5870 (9.12... some even using 8.66 wtf??). Now why would you ever do that... unless of course you didn't actually rebench the 5870 with newer drivers and simply used old numbers. There was a major revision going from 9 -> 10.

Also, is it really useful to see the 200 fps graphs in techpowerup? It's amusing, sure... but ultimately not useful at all.
 
yeah see I don't mind that sort of mixing in reviews.

I am going to get a 480 because now I know that its utility isn't undermined by underwhelming game performance. The cards are going to be used for more and more things, and people are wanting to learn.

The nVidia stuff is quite accessible for people enthusiastic enough. Yes I understand the heat issue and everything, that's why I'm going to watercool it. I thought watercooling 2 GTX 280s on a PA120.3 was going to be overkill but that 40 degrees at load was an absolute life saver.

I am happy I bought a big radiator as overkill, since it will be a good reason to stay with WC for Fermi. I was gonna actually go back to air, but once I realized that it's just new blocks I figured the heat/noise being as-minimum would be best of all worlds. Agree?
 