Apparently there is a bug in Metro 2033's DX11 Very High mode with Catalyst 10.3b (and earlier) if VSync is off.
That may be why, with no PhysX at 4xAA/16xAF, the HD5870 gets 3 fps while the GTX480 gets 21 fps, since reviewers all seem to have VSync off for those benchmarks.
With AAA at the same 1920x1200 resolution the performance is more in line with other game benchmarks, where the HD5870 usually almost ties the GTX480; HardOCP's review states that 4xAA, however, was not playable, and I am sure they had VSync off for benchmarking!
At that same resolution with 4xAA but on the DX10 codepath (missing tessellation and DoF, I guess) it gets 26 fps.
Is that really the defining benchmark, where you literally CANNOT play at 4xAA without a GeForce, or is it the VSync bug?
"I know, it sounds bonkers - but apparently the 'logic' runs thus: The game, at the moment, is running as if you have 3D glasses enabled, so it is actually drawing two versions of the main image all the time and then rejecting one when it sees you haven't actually got 3D enabled. Forcing Vsync on stops it doing this so - while technically you do get a performance hit from Vsync being on, you get a far greater bonus because of the non-drawing of the phantom second image."
If this is truly the case, then it sounds like a conspiracy, since this is the only game where basically every review has trashed the HD5870; even in Heaven 2.0 it at least seems competitive, with the GTX480 only 1.6x faster, but 7x is ridiculous!!
Scheme: add a "3D-Vision enhancement" gone wrong. When the code was given to nVidia to "optimize" with PhysX, they slipped an intentional bug into the physics renderer to enable 3D-Vision quad-buffering(?) if VSync is off (may be specific to DX11 4xAA), because they knew reviewers would use that setting and it would cripple everything but the GTX480, maybe only because of the better 3D-Vision frame-rejection optimizations in the 197.17 Forceware driver.
GTX480 is not enough to cook the egg:
http://www.youtube.com/watch?v=ASu3Xw6JM1w
But the noise, the noise...
The only game where I've ever seen PhysX effects that amounted to anything noticeable is Batman: Arkham Asylum. Aside from that, nothing. Most of the time you only really even see those effects in the Scarecrow dream/hallucination sequences. You can still use PhysX with an ATI card. Just get a lower-end NVIDIA card and use the workaround to keep PhysX support.
You cannot use the hardware-accelerated level of PhysX on your CPU unless you want 10-20 fps. It doesn't matter how powerful the CPU is.
Even without an nVidia card you can use PhysX on a powerful CPU in software emulation mode.
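For context, in the PhysX 2.x SDK of that era the software-vs-hardware choice is made when the game creates a physics scene; a rough sketch is below (identifiers recalled from the 2.8-era SDK, so treat the exact names as an assumption rather than gospel):

// How a PhysX 2.x title might pick CPU (software) vs GPU/PPU (hardware) simulation.
#include <NxPhysics.h>

NxScene* createGameScene(NxPhysicsSDK* sdk, bool hardwarePhysXAvailable)
{
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

    // NX_SIMULATION_HW hands the simulation to a GeForce (or the old Ageia PPU);
    // NX_SIMULATION_SW keeps everything on the CPU, which is where the 10-20 fps
    // numbers come from once the heavy GPU-level effects are enabled.
    sceneDesc.simType = hardwarePhysXAvailable ? NX_SIMULATION_HW : NX_SIMULATION_SW;

    return sdk->createScene(sceneDesc);
}

So the software mode is real; the catch is that the effect density used for the GPU path is far too heavy for it.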
I'm going to throw up the BS flag here, with a little hesitation. Just to double-check, I went back and looked at some recent reviews of CPUs and motherboards to make sure my memory wasn't wrong.
That said, I know there are barely any applications out there that utilize CUDA or OpenCL, but they do exist and they ARE pertinent, as you pointed out in your short post with videos about AMD and OpenCL back in January. To say you only concentrate on gaming is just outright wrong.
Kyle, I've been reading your site for a very long time, but it is overly obvious at times where bias exists. It was almost difficult for me to read this review due to the obvious bias. At the end of the day, the GTX480 is mostly faster, especially in SLI configurations. I understand and appreciate the drawbacks being brought forth (power, cost, etc.), but I much prefer reading an article that isn't LOOKING for problems.
I'm also disappointed in the lack of CUDA reviews across the entire web (there are a few), as I've been waiting for Fermi just for CUDA (I've wandered away from gaming in the last few years).
At the end of the day, I think the OP you responded to had a good point, but that's just my $0.02. I agree with your assessment that the best choice for gaming right now lies with AMD, but give credit where credit is due. Fermi is a beast of technology with more to offer than just gaming.
There is likely a reason why you see so few examinations of CUDA. Besides the fact that it is proprietary, so using it to contrast GPGPU performance with the competition is pretty useless and will paint a false picture, I think it's plain to see that likely >99% of everybody who buys a high-end 3D card from either ATi or nVidia does so out of interest in 3D gaming. The GPGPU aspect for most is entirely secondary and for many completely inconsequential.
The problem, though, is that >99% of the people who will consider buying Fermi at present will be concerned with its 3D gaming characteristics above all else.
Well, one reason you see so few examinations of CUDA is probably that most of the guys who review these cards aren't programmers. Even if they have knowledge on that front, they may not have CUDA-specific knowledge. And from a content standpoint, review sites know that 99% of the people who buy these cards are going to do so for gaming. It doesn't make much sense to spend a considerable amount of time reviewing a feature that few enthusiasts care about.
I don't give two squirts of piss about CUDA myself. Yeah, it's cool and it can do a bunch of neat things, but I buy cards with gaming performance in mind. Everything else is secondary, as you said.
What I meant by my last statement is that whichever company made the right gamble will find themselves ahead of the other by a significant margin at some point, and possibly dominate for a couple of generations the way ATI dominated the GeForce FX series with R300. We won't know for a while which company made the right call, architecturally speaking. Hell, none of that may happen. For all we know, ATI's next architecture will have nothing to do with their current architecture and will blow Fermi and its descendants away.
You can never tell with this industry. Still, Fermi as a stepping stone may lead to an architecture that AMD will have a hard time competing against. I suspect it has longer legs than AMD's current GPU technology does.
No surprise.
So I guess it's true that partners are really pissed at them.
nVidia set EOL for the 2xx series and it's come back to bite them in the ass, since partners were promised the Fermi chips were ready and are now almost left high and dry.
So I guess we wait another week or two while partners get their cards to retailers.
I'd thought the week of April 12th or so was the earliest any of us might see these things to begin with.
That was IF the chips/reference cards made it before Easter.
Links not working... and who the hell is shanebaxtor?