NVIDIA Fermi - GTX 470 - GTX 480 - GTX 480 SLI Review @ [H]

Hopefully they can fix this for the AMD folks without breaking the 3D. This game is a 3D Vision showcase.
 
Metro 2033 looks pretty cool. I may have to get that. Not that I've had any time for playing games lately.
 
Apparently there is a bug in Metro 2033's DX11 Very High mode with Catalyst 10.3b (and earlier) if VSync is off.

That may be the reason why, with no PhysX at 4xAA/16xAF, the HD 5870 gets 3 fps and the GTX 480 gets 21 fps, since they all seem to have VSync off for those benchmarks.

With AAA, performance at the same 1920x1200 resolution is more commensurate with other game benchmarks, where it usually almost ties the GTX 480; HardOCP's review states that 4xAA, however, was not playable, and I am sure they had VSync off for benchmarking!

At that same resolution with 4xAA but on the DX10 codepath (missing tessellation and DoF, I guess) it gets 26 fps.

Is that really the defining benchmark, where you literally CANNOT play at 4xAA without a GeForce, or is it the VSync bug?

"I know, it sounds bonkers - but apparently the 'logic' runs thus: The game, at the moment, is running as if you have 3D glasses enabled, so it is actually drawing two versions of the main image all the time and then rejecting one when it sees you haven't actually got 3D enabled. Forcing Vsync on stops it doing this so - while technically you do get a performance hit from Vsync being on, you get a far greater bonus because of the non-drawing of the phantom second image."

If this is truly the case, then it sounds like a conspiracy, since this is the only game where all the reviews have basically trashed the HD 5870. Even in Heaven 2.0 it at least seems competitive, with the GTX 480 only 1.6x faster, but 7x is ridiculous!!

Scheme: add a "3D Vision enhancement" gone wrong. When the code was given to nVidia to "optimize" with PhysX, they slipped an intentional bug into the physics renderer that enables 3D Vision quad-buffering(?) when VSync is off (it may be specific to DX11 with 4xAA), because they knew reviewers would use that setting and it would cripple all but the GTX 480, maybe only because of the better 3D Vision frame-rejection optimizations in the 197.17 Forceware driver.

Sorry but totally bogus. The games that support 3d Vision don't do any double rendering, so there isn't even a chance for them to double render on ATI. The double rendering (one for each eye) is all done in the NVIDIA driver. No NVIDIA driver, no double rendering.
 
Wow late to the party.

After seeing so much on the rumor mill, this is more or less what I was expecting: faster in a few cases, slower in others, but at more cost (money, heat, power). I also think we will see more of Fermi as NV gets some tweaks into their respins and/or the next version of Fermi. Give AMD/ATI some credit: they came out with a really good product months ago. And let's hope both AMD/ATI and NV keep one-upping each other to give us consumers the real win.

Nice review guys!
 
It is a good race right now. My opinion is that if you have a late model GPU, wait. If you are due or overdue for an upgrade there are good choices at all points in the spectrum.
I really think that ATi will have a new offering in the next few months and then the present ones will drop in price. I also believe that NV has plans beyond the present 400 series. Their new releases keep them in the game, but they are no longer the clear leader. When the improved 400 series will come out is anyone's guess, but you can be sure they are well along.
 
Nvidia have already moved on to the 4000 series? I thought they only just released the 400's?

Sorry, had to.
 
Nice review, but none of those videos worked for me. They were all set to private, and I tried setting up an account, but to no avail. :eek:

I was planning on getting a 470 and water cooling it; however, this review has turned me away a little. So the only thing I would be missing out on is PhysX? I've never had an ATI card, so I'm not sure what it would look like without PhysX.
 
The only game where I've ever seen PhysX effects that amounted to anything noticeable is Batman: Arkham Asylum. Aside from that, nothing. Most of the time you only really see those effects in the Scarecrow dream/hallucination sequences. You can still use PhysX with an ATI card. Just get a lower-end NVIDIA card and use the workaround to keep PhysX support.
 
You can say that the Metro 2033 VSync bug on ATi is bogus, but if you google it, it is all over the web how people had to enable VSync on ATi to get playable framerates.

Check the Metro 2033-related forums and you'll see.
 
Apparently the mini-HDMI output on the GTX 480 STILL doesn't support audio pass-through!?

"AMD's 5000 series are still the only graphics cards capable of supporting protected audio, for true Blu-ray audio playback via Dolby TrueHD or DTS-HD Master Audio, as well as onboard 7.1 HD Audio and multi-channel LPCM, all via HDMI or DisplayPort."

So basically you would now need another HDMI port on your sound card to get the 192 kHz/24-bit signal that Blu-rays have, and it still wouldn't work unless you had an A/V receiver to accept HDMI input for your speakers, since TVs don't support remuxing from two separate HDMI signals!

Maybe ~$150 sound cards don't have to settle for analog with PowerDVD software decoding, but a $100 X-Fi Titanium Pro PCIe, even being pretty top of the line, still only supports DVD-level audio encoding, regular DTS and Dolby Digital, and even that might be done in software (Connect/Neo/Interactive/Live).

It seems like audiophiles are crippled even after spending more than $500 for the GTX 480.
 
I don't know why NVIDIA can't do something about this. Their HDMI pass through cable is lame. AMD has a much more elegant solution in this regard.

The GTX 480 is not enough to cook an egg:
http://www.youtube.com/watch?v=ASu3Xw6JM1w

But the noise, the noise...

They probably should have loaded up Furmark. It seems to cause the highest temperature spikes on GPUs thus far. And yeah, the noise is pretty much unbearable.
 
The only game where I've ever seen PhysX effects that amounted to anything noticeable is Batman: Arkham Asylum. Aside from that, nothing. Most of the time you only really see those effects in the Scarecrow dream/hallucination sequences. You can still use PhysX with an ATI card. Just get a lower-end NVIDIA card and use the workaround to keep PhysX support.



I played through the entire game using this PhysX workaround + a G80 8800 GTS 640. Mirror's Edge didn't work, however... so I pulled the card immediately after finishing the game.

G80/G92 is a great card for < 1680x1050, fwiw
 
@dook43, for Mirror's Edge to work, you should have deleted 2 PhysX DLL files from the game folder, thus forcing the game not to use the included PhysX runtime, but the "hacked" system runtime instead.
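For anyone else trying it, here is a rough sketch of what that amounts to (the install path and DLL names below are guesses, check your own game folder; renaming instead of deleting keeps a backup):

```python
# Hypothetical helper for the workaround above: sideline the PhysX DLLs bundled
# with the game so it falls back to the system-wide (patched) PhysX runtime.
# The folder and DLL names are assumptions -- adjust them for your own install.
import os

GAME_BIN = r"C:\Program Files (x86)\EA\Mirror's Edge\Binaries"  # assumed path
BUNDLED_DLLS = ["PhysXLoader.dll", "PhysXCore.dll"]             # assumed file names

for name in BUNDLED_DLLS:
    path = os.path.join(GAME_BIN, name)
    if os.path.exists(path):
        os.rename(path, path + ".bak")  # rename rather than delete, so it's reversible
        print(f"Renamed {name} -> {name}.bak")
    else:
        print(f"{name} not found; nothing to do")
```

If the game complains or crashes afterwards, just rename the .bak files back.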
 
Even without an nVidia card, you can use PhysX on a powerful CPU in software emulation mode.
 
Actually, one thing I wanna know that I haven't seen mentioned anywhere... how are the BC2 load times in DX11?
 
Probably slightly reduced, since the GTX 480's VRAM bandwidth is about 10 GB/s higher than an OC'd HD 5870's.

When OC'd to 1.022 GHz core / 5.2 GHz memory with a 1.287 V (+0.112 V) voltage tweak, the HD 5870 runs just as cool and quiet, and it now basically matches the GTX 480 in pixel fillrate at 32.7 GP/s compared to the GTX's 33.6 GP/s.

But it almost DOUBLES THE TEXTURE FILLRATE at 82 GT/s compared to the GTX 480's measly 42 GT/s!
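Just to show where those numbers come from, a quick back-of-the-envelope check (a rough sketch; the ROP/TMU counts and bus widths are the published specs, the clocks are the ones quoted above):

```python
# Fillrate = units * clock; bandwidth = bus width in bytes * effective data rate.
def fillrate(units, clock_ghz):
    return units * clock_ghz            # Gpixels/s (ROPs) or Gtexels/s (TMUs)

def bandwidth(bus_bits, eff_gbps):
    return bus_bits / 8 * eff_gbps      # GB/s

# HD 5870 OC'd: 32 ROPs, 80 TMUs @ ~1.0225 GHz; 256-bit GDDR5 @ 5.2 Gbps effective
print(fillrate(32, 1.0225), fillrate(80, 1.0225), bandwidth(256, 5.2))
# -> ~32.7 GP/s, ~81.8 GT/s, 166.4 GB/s

# GTX 480: 48 ROPs, 60 TMUs @ 0.700 GHz; 384-bit GDDR5 @ 3.696 Gbps effective
print(fillrate(48, 0.700), fillrate(60, 0.700), bandwidth(384, 3.696))
# -> 33.6 GP/s, 42.0 GT/s, ~177.4 GB/s
```

The ~11 GB/s bandwidth gap is also where the "about 10 GB/s faster" figure a couple of posts up comes from.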
 
Even without an nVidia card, you can use PhysX on a powerful CPU in software emulation mode.
You cannot use the hardware-accelerated level of PhysX on your CPU unless you want 10-20 fps. It doesn't matter how powerful the CPU is.
 
I'm going to throw up the BS flag here with a little hesitation. Just to double check, I went back and checked some recent reviews of CPUs and motherboards, to make sure my memory wasn't wrong.

I'm not sure what in the quote you responded to you are referring to as "BS"...? I mean, if Kyle sees fit to describe [H] as primarily a "gaming site," then I think he of all people is probably best qualified to do so, don't you?....;)

That said, I know there are barely any applications out there that utilize CUDA or OpenCL, but they do exist and they ARE pertinent, as you pointed out in your short post with videos about AMD and OpenCL back in January. To say you only concentrate on gaming is just outright wrong.

Come on...;) I have to say that I think Kyle, above all people, knows what [H] "concentrates" on! Honestly, I think it's probably safe for you to believe him on that score...;) It is his site, after all, isn't it? I think he knows exactly what he's writing when he puts his John Henry on the author's byline.

Kyle, I've been reading your site for a very long time, but it is overly obvious at times where bias exists. It was almost difficult for me to read this review due to the obvious bias. At the end of the day, the GTX 480 is mostly faster, especially in SLI configuration. I understand and appreciate the drawbacks being brought forth (power, cost, etc.), but I much prefer reading an article that isn't LOOKING for problems.

Well, see, that's where you and I have to disagree. To me, the worth of any good review is that the reviewer displays a critical view of the product he's reviewing. That doesn't mean an "unfair, biased negative view"--it just means the reviewer is doing his job for the benefit of his reading public, and I think that is the best service a hardware reviewer can provide his readership. I can't tell you how many times in the distant past I read glowing reviews on expensive products only to buy them and be thoroughly pissed at the fact that the review I read which prompted me to buy the product mentioned left out so many negatives that it might have been described as fiction as opposed to a solid hardware review. I'd much rather a reviewer be a bit "too hard" on a product than to come off sounding like a shill for the manufacturer. Wouldn't you?

I'm also disappointed in the lack of CUDA reviews across the entire web (there are a few), as I've been waiting for Fermi just for CUDA (I've wandered away from gaming in the last few years).

There is likely a reason why you see so few examinations of CUDA. Besides the fact that it is proprietary, so using it to contrast GPGPU performance with the competition is pretty useless and will paint a false picture, I think it's plain to see that likely >99% of everybody who buys a high-end 3d-card from either ATi or nVidia does so because of his interest in 3d-gaming. The GPGPU aspect for most is entirely secondary and for many completely inconsequential.

End of the day, I think the OP that you responded to had a good point, but that's just my $0.02. I agree with your assessment that the best choice for gaming right now lies with AMD, but give credit where credit is due. The Fermi is a beast of technology with more to offer than just gaming.

The problem is, though, that >99% of the people who will consider buying Fermi at present will be concerned with its 3d-gaming characteristics above all else. That has been nVidia's core market from the time when nVidia used to spell its name as I still write it--with the little "n" and the capital "V"...;) (I still prefer "nVidia" to "NVIDIA," which is why I still write it that way, but that's just me.)

A lot gets said about "bias" that really isn't accurate or fair. Bias can be a very good thing when the reviewer explains his bias in terms that are easy to understand--I mean, if a bias makes sense, then it is logical to have such a bias. For instance, if it was possible to fry an egg on top of a GPU heatsink, then I don't think stating that in a review is an improper bias. If it is true, then far better for the sake of your readership to state that than it is for a reviewer not to state it because of a fear of being called "biased." (Having said that, I don't believe the GF100 products reviewed here are hot enough to literally fry eggs...;) Just wanted to make the point.)
 
There is likely a reason why you see so few examinations of CUDA. Besides the fact that it is proprietary, so using it to contrast GPGPU performance with the competition is pretty useless and will paint a false picture, I think it's plain to see that likely >99% of everybody who buys a high-end 3d-card from either ATi or nVidia does so because of his interest in 3d-gaming. The GPGPU aspect for most is entirely secondary and for many completely inconsequential.

The problem is, though, that >99% of the people who will consider buying Fermi at present will be concerned with its 3d-gaming characteristics above all else.

Well, one of the main reasons you see so few examinations of CUDA is probably that most of the guys who review these cards aren't programmers. Even if they have knowledge on that front, they may not have CUDA-specific knowledge. And from a content standpoint, review sites know that 99% of the people who buy these cards are going to do so for gaming. It doesn't make much sense to spend a considerable amount of time reviewing a feature that few enthusiasts care about.

I don't give two squirts of piss about CUDA myself. Yeah, it's cool and it can do a bunch of neat things, but I buy cards with gaming performance in mind. Everything else is secondary, as you said.
 
I'm ready, and want to get one/two of the 480s, but my hesitation is getting them right out of the gate. I'd hate to spend that much dough only for Nvidia to do a few planned fixes/tweaks after a month or two. I can see this because they were under the gun to get this to market. Even worse would be a version that unlocks all the shader cores.
Ya, I understand you can never stay ahead of the game, but this version of Fermi doesn't seem to have a long lifespan.
 
Well, one of the main reasons you see so few examinations of CUDA is probably that most of the guys who review these cards aren't programmers. Even if they have knowledge on that front, they may not have CUDA-specific knowledge. And from a content standpoint, review sites know that 99% of the people who buy these cards are going to do so for gaming. It doesn't make much sense to spend a considerable amount of time reviewing a feature that few enthusiasts care about.

I don't give two squirts of piss about CUDA myself. Yeah, it's cool and it can do a bunch of neat things, but I buy cards with gaming performance in mind. Everything else is secondary, as you said.

Yea, and you know--trying to be brief here, which is sometimes difficult for me--I cannot help but seriously and truthfully think that nVidia overreacted in a big way to Intel's advance Larrabee publicity. To be fair to Intel, the company never really emphasized that Larrabee was going to amount to a bunch of x86 cores in a chip served by a radically different breed of compiler, one that was going to usurp rasterization with "real-time ray tracing" capabilities such that Larrabee would obsolete the rasterizing GPU technology of the last decade.

Sure, in a few tech demos, Intel showed some possibilities that functioned at very slow (unplayable) frame rates in fairly simple scenes (simple at least from the standpoint of a current 3d rasterizing GPU). Not only that, but as I recall, all these demonstrations were run not on actual Larrabee silicon but on undisclosed Intel CPU silicon in the form of a Larrabee software emulator. As far as I know, that's as far as Intel ever got with Larrabee until it pulled the plug on the project a few months ago. Some folks are waiting on Larrabee II, but I'm really not holding my breath...;) Just my two cents...

I think that nVidia was trying to meld the concept of a computationally intensive "Larrabee" with competitive state-of-the-art GPU rasterization (DX11, etc.) technology. Quite an ambitious project, and not an entirely focused project, it seems to me--jack of all trades, master of none, etc. And the result is what it is: a large, hot, and power-hungry GPU. I honestly think the advance Larrabee publicity and conjecture worried nVidia far more than it worried ATi/AMD, and thus we can see the divergence in the two architectures in terms of functionality and approach.

I think nVidia would be well advised to recall the KISS principle and get back to the basics of serving its traditional core markets--nV4x et al came out of such an approach and its success was obvious. OK, I've blathered enough, but I do think this is an interesting topic...;)
 
That's an interesting take on the subject and it may have more truth to it than most of us realize. Certainly there may be more truth to that statement than NVIDIA would ever admit to. Intel has deep pockets and NVIDIA knows it. GF100 may in fact be the result of reacting to and trying to get a leg up on Intel. It may be that NVIDIA tried to beat Intel at their own game and, ironically enough, Intel basically dropped out of the race before it was ever run. One thing NVIDIA has always done in the past was to be very aggressive with their development cycles/strategies and stay one step ahead of the competition. For the most part they've succeeded in doing so more often than not. ATI bought out ArtX and thus gained the basis for the R300 GPU, which of course humbled NVIDIA, but I think it was a lesson they needed to learn. Of course, culturally as a company they haven't learned that lesson. Not the way they run their mouths all the time like schoolyard bullies in the press.

That said I think GF100 is an interesting product and once refined I think it will be a truly great architecture. Right now it has teething problems in the form of insane power consumption, heat generation and finally, software. I don't think we are truly seeing what it can do. The Heaven Benchmark is of course not a real game but it does show us what GF100 could do in certain circumstances. AMD on the other hand got down to brass tacks and asked the question "How do we improve the gaming experience today?" That resulted in Eyefinity. Personally I think that's the biggest thing to hit the gaming world in a long time. Yeah we've seen it from Matrox and through software before, but not like this. We've never seen it done this well. Their own GPU is a computational power house in itself. But first and foremost AMD seemed to have concentrated on the gaming experience in the here and now while NVIDIA seems to be playing two or three moves ahead.

Though truly, only time will tell as to who actually made the right call. My guess is that when we find out, we'll have another GeForce FX-like fiasco, or an ATI Rage 128 as compared to the Voodoo cards of the day. One thing I am certain of is that, for better or for worse, we will be seeing Fermi-derived GPUs for a very LONG time. Possibly as long as we've been seeing G80 variants.
 
I don't think it will be quite as bad as the FX. The 480 did achieve one of (what I believe to be) its primary goals, which was to steal the crown away from the 5870 as the fastest single-GPU card, though it does so at a significant cost and not by a significant margin in most cases.
 
What I meant by my last statement is that whichever company made the right gamble will find themselves ahead of the other by a significant margin at some point, and possibly dominate for a couple of generations the way ATI dominated the GeForce FX series with R300. We won't know for a while which company made the right call, architecturally speaking. Hell, none of that may happen. For all we know, ATI's next architecture will have nothing to do with their current architecture and will blow Fermi and its descendants away.

You can never tell with this industry. Still it may be that Fermi as a stepping stone may lead to an architecture that AMD will have a hard time competing against. I suspect it has longer legs than AMD's current GPU technology does.
 
It looks like AMD will have to release a 1 GHz 5870, etc. for the time being, since their next process shrink won't come online until next year.

TSMC cancelled 32nm and gate-first (the first time in their history), so that brings unforeseen delays to "Northern Islands".

They might release a hybrid chip called "Southern Islands" with only the uncore (I guess that means the 384-bit IMC?) from the new architecture, but on the same 40nm instead of gate-last 28nm.

Or they could find a new foundry, but those haven't ramped up yet.

Gate-first saves 10% of transistor space, so that would be preferable.
 
What I meant by my last statement is that whichever company made the right gamble will find themselves ahead of the other by a significant margin at some point, and possibly dominate for a couple of generations the way ATI dominated the GeForce FX series with R300. We won't know for a while which company made the right call, architecturally speaking. Hell, none of that may happen. For all we know, ATI's next architecture will have nothing to do with their current architecture and will blow Fermi and its descendants away.

You can never tell with this industry. Still it may be that Fermi as a stepping stone may lead to an architecture that AMD will have a hard time competing against. I suspect it has longer legs than AMD's current GPU technology does.

I agree that the Fermi architecture is interesting and has a lot of promise. I also agree that right now it's impossible to say who has made the correct choice regarding the difference in architectures between nVidia and AMD.

I don't doubt that Fermi has some long legs with regards to the base architecture. I wouldn't be surprised if a respin of the current architecture could be done in order to lower the power consumption and heat output. It reminds me of the Intel fiasco when the P4 was moved to 90nm. If I remember correctly, the first P4s on that process actually ran hotter and didn't cut down on the power consumption from the previous process. This was a huge stumbling block for Intel. Eventually, Intel was able to get lower power consumption on that process as well as less heat than the initial CPUs. This wouldn't be the first time we've seen what basically amounts to a simple rearranging of the transistors in the silicon for more efficient power use. I have pointed out where Intel has done it and I remember AMD doing the same thing in the Athlon64 days. In these cases, the architecture was kept the same but a rearranging of how the different parts of the CPUs were laid out on the die made a big difference.

Although I have no information regarding the different architectures between GPUs, and my ability to see the future is rather limited, I don't think we'll see nVidia making a huge jump over AMD anytime soon. nVidia's architecture is currently too power-inefficient to be able to make any great strides even if it is the correct choice. This is going to give AMD extra time to come up with something to meet or beat nVidia if something isn't already in the works.

My personal belief is that we're going to see performance between the two stay very similar unless one or the other of the companies royally screws up similar to the old FX line. This will definitely not be a bad thing for consumers.
 
Between the reviews, heat, and price point of the GTX 480, I have to say: fix the card, make it cheaper, and I might be interested. I don't think this statement is unreasonable.
 
This settles it. I'm going to buy a GTX 480 as soon as the eVGA ones are in-stock on Newegg. No, I am not going to upgrade my power supply.
 
Maybe have all 512 cores working. I have 3 280s in SLI with a physics card, so there's no reason to upgrade till DX11 has more support in games. Last reason: I lost my job. Getting laid off sucks big time. I guess I am waiting a while regardless.
 
No surprise.

So I guess it's true that partners are really pissed at them.

Nvidia set EOL for the 2xx series, and it's come back to bite them in the ass since partners were promised the Fermi chips were ready and are now almost left high and dry.

So I guess we wait another week or two while partners get their cards to retailers.
 
So I guess it's true that partners are really pissed at them.

Nvidia set EOL for the 2xx series, and it's come back to bite them in the ass since partners were promised the Fermi chips were ready and are now almost left high and dry.

So I guess we wait another week or two while partners get their cards to retailers.

I'd thought the week of April 12th or so was the earliest any of us might see these things to begin with.
 
Since the news of A3 being the last silicon Fermi will see, I guess this is all we get.

I gave up on waiting for the HTPC card to come from Nvidia. I know this isn't very [H], but I went with one of those little fanless video cards I can put in my HTPC for a cheap upgrade. Knowing what Nvidia did to all of those waiting for them last time (which was for FREAKIN EVER), I'm sure we will see the same; ATI also screwed us in this sense as well.

Oh well, I wonder if Nvidia will do something with the GTX 480 like ATI did with the 4870: improve silicon efficiency instead of doing another revision, and make us a better card.
 
I'm guessing they will, but they won't admit it, so that those who would otherwise wait for the improved version keep buying the current ones.
 