Benchmark Wars

Interesting criticisms directed at AMD's chosen way of dealing with tessellation from Anandtech's forum. How real and objective is this? Anyone know?

Source:
http://forums.anandtech.com/showpost.php?p=30637112&postcount=224

Not sure. I have read arguments for both. On one hand, AMD is right: their method is faster on both Nvidia and AMD cards with no visual impact. On the other hand, people are arguing that Nvidia's solution is more in the true spirit of what tessellation is meant to do.

The only thing I can see here for sure is this: if there is a solution that works well on both vendors and they pick one for a particular vendor, then it's the customers that are losing. But even then I don't know for sure what is happening. After that DX10.1 issue, though, Ubisoft is automatically suspect, so even if they do release a statement it will take someone of better credibility than them to settle this.
 

http://www.rage3d.com/board/showthread.php?t=33964302&page=46

caveman-jim said:
That's all well and good, but it completely neglects that sub-16-pixel triangles don't show a visual increase in detail, meaning that by going to tessellation factors of, say, 16, you're just stalling the rasterizer and tanking efficiency for no IQ gain. This is true on both AMD and NVIDIA architectures.
caveman-jim said:
The issue here is HAWX 2 is using a very high level of tessellation, creating very small polys - 6 pixels, in fact (per NV email). This is below the optimal efficiency threshold for GPUs that process four quads of pixel groupings - i.e. NVIDIA and AMD current designs. You know quad quads better as 16 px/clock in specifications.

So when you tessellate down below 16 pixels, you create a scenario where the rasterizer ('pixel processor') has to reprocess pixels because a new poly lands in them, despite the fact they've already been processed. This stalls the rasterizer pipeline and causes problems.

This would be fine and down to 'architecture implementation of specification' if it weren't for the fact that you can't actually tell the difference as tessellation factors go up, especially if you have an HD display. Ultimately it is a product differentiator between AMD and NV high end, just as forced TSAA was. Does it make a difference to consumers? That's a personal decision.

I got both emails from NV and AMD, and decided it was a stunt to cause controversy in the week of the next-generation launch.
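
To make the quad-granularity argument in that quote a bit more concrete, here is a toy sketch (mine, not any vendor's actual rasterizer, and it only models the 2x2 shading-quad part of the story, not the larger 16 px/clock raster blocks caveman-jim mentions): it rasterizes one right triangle at shrinking sizes and a spread of placements, counts the 2x2 quads touched, and reports what fraction of the shaded pixels actually land inside the triangle.

```c
/* Toy model, not any vendor's actual hardware: GPUs shade pixels in 2x2
 * "quads" (and rasterize in larger blocks still), so a triangle that only
 * covers a handful of pixels pays for whole quads anyway. */
#include <stdio.h>

/* Signed edge function for edge a->b evaluated at point p. */
static double edge(double ax, double ay, double bx, double by,
                   double px, double py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

/* Efficiency = covered pixels / (4 * quads touched) for one placement. */
static double quad_efficiency(double s, double ox, double oy) {
    double x0 = ox,     y0 = oy;      /* right triangle, legs of s pixels */
    double x1 = ox + s, y1 = oy;
    double x2 = ox,     y2 = oy + s;
    int covered = 0, quads = 0;
    int size = (int)s + 4;

    for (int qy = 0; qy < size; qy += 2) {        /* walk aligned 2x2 quads */
        for (int qx = 0; qx < size; qx += 2) {
            int hit = 0;
            for (int dy = 0; dy < 2; dy++)
                for (int dx = 0; dx < 2; dx++) {
                    double px = qx + dx + 0.5, py = qy + dy + 0.5;
                    double e0 = edge(x0, y0, x1, y1, px, py);
                    double e1 = edge(x1, y1, x2, y2, px, py);
                    double e2 = edge(x2, y2, x0, y0, px, py);
                    /* pixel center is inside if all edge values share a sign */
                    if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                        (e0 <= 0 && e1 <= 0 && e2 <= 0)) {
                        covered++;
                        hit = 1;
                    }
                }
            quads += hit;
        }
    }
    return quads ? covered / (4.0 * quads) : 0.0;
}

/* Average over sub-quad placements, since mesh triangles land anywhere. */
static double avg_efficiency(double s) {
    double sum = 0.0;
    int n = 0;
    for (double ox = 0.1; ox < 2.0; ox += 0.4)
        for (double oy = 0.1; oy < 2.0; oy += 0.4) {
            sum += quad_efficiency(s, ox, oy);
            n++;
        }
    return sum / n;
}

int main(void) {
    double sizes[] = { 64, 32, 16, 8, 4, 2 };
    for (int i = 0; i < 6; i++)
        printf("triangle legs ~%2.0f px: %3.0f%% of shaded pixels are useful\n",
               sizes[i], 100.0 * avg_efficiency(sizes[i]));
    return 0;
}
```

The numbers are illustrative only, but the trend is the point: big triangles keep most of each quad busy, while few-pixel triangles burn much of their shading work on helper pixels outside the triangle, and the real hardware penalty (coarse raster blocks, triangle setup rate) only makes that worse.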




I think Scali should have waited for the results before making that statement. Initial tests show that the 6870 is on par with the GTX 460 in the Heaven benchmark with extreme tessellation:
http://hardforum.com/showthread.php?p=1036321789#post1036321789

Heaven is a completely useless synthetic benchmark that doesn't relate to real-game tessellation performance at all.
 
Wait, isn't this just the PhysX situation all over again, with DX11 tessellation instead of x86 code?

Nvidia has specifically gimped PhysX on CPUs and is asking Ubisoft to do the same with tessellation and DirectX 11...

This isn't surprising; it's how Nvidia competes. They did this to compete with the CPU over PhysX, and it's no surprise they are supporting a developer to do the same thing with tessellation.


For those who do not know what the PhysX cripple issue is, I recommend reading the article.

It boils down to the fact that Nvidia purposefully uses the ancient x87 instruction set for PhysX on the CPU. They specifically found a weak point in the CPU and exploited it, making the CPU seem unable to process the physics nearly as well as the GPU - artificially crippling PhysX.
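
As a rough illustration of why the instruction set matters (my own sketch, not PhysX source code, and the file and function names are made up): the claim was that the CPU path was built as scalar x87, when the same float-heavy loops could target SSE and even be auto-vectorized. With GCC you can experiment with that choice on any simple physics-style loop just by switching code-generation flags.

```c
/* Illustrative only -- not PhysX code. A per-frame integration loop of the
 * kind a physics engine runs over thousands of bodies. Compile it two ways
 * and compare the generated code:
 *
 *   gcc -O2 -m32 -mfpmath=387 step.c   -> legacy x87 stack-FPU code
 *   gcc -O3 -msse2 -mfpmath=sse step.c -> scalar SSE, and at -O3 the loop
 *                                          can be auto-vectorized (4 floats
 *                                          per instruction)
 */
#include <stddef.h>

void integrate(float *pos, float *vel, const float *force,
               float inv_mass, float dt, size_t n) {
    for (size_t i = 0; i < n; i++) {
        vel[i] += force[i] * inv_mass * dt;  /* acceleration -> velocity */
        pos[i] += vel[i] * dt;               /* velocity -> position     */
    }
}
```

The two builds run the exact same C; only the instruction set the compiler is told to emit changes, which is roughly the choice being criticized here.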

Nvidia is using the CPU PhysX code from Ageia. Ageia didn't do jack with making PhysX work well on the CPU, and Nvidia has no reason to update it. It's not in their best interest to make it work on the CPU as well as it does on the GPU, just like it's not in Intel's best interest to release a GPGPU version of Havok, especially with Larrabee being canned.

That said, I seem to recall reading somewhere a couple of months back that Nvidia is working on updating the CPU code for PhysX sometime in the future. No clue where I read it, so I can't say if it's reliable information or not.
 
Are you guys going to press AMD on why not to use it? It'd be nice to have some "official" confirmation that benchmarks can be skewed.
 

I'm no dev and I don't know that fellow from Adam. To my untrained mind, it sounds like his theory is that AMD's implementation of tessellation doesn't fully live up to the way tessellation is technically intended to work. He seems to be suggesting they aren't doing real-time calculations in some cases, or that they may be artificially limited. I don't have the technical or insider knowledge to comment on it. That's why I asked. I'm always open to being educated and find these sorts of things fascinating.

It won't affect my buying decisions as long as IQ and FPS are appropriate in actual gaming. I was simply curious whether there was any truth to his theory, just for the hell of knowing.
 

From what I am gathering, it's more a question of scope. AMD's solution falls off at a point, but it's a point where it should not matter; i.e., if there is no visual improvement from doing something, why dedicate the hardware to it? Nvidia's solution can scale down that far without the same penalty, so they are trying to exploit that as a weakness. All this is well and good, but I have yet to hear anyone of authority speak out on it, with the exception of Kyle, and he only stated that he has not once seen it matter in real-world tests (i.e., the Heaven benchmark and such have no real bearing). To add to that, both Ubisoft and Nvidia are well known for doing some rather underhanded things. But AMD isn't a saint either and has had some driver issues; there are rumors as to why as well: http://www.brightsideofnews.com/news/2010/10/20/vpp-amd-prepares-new-video-processing-engine.aspx

I do agree with you: this is interesting, and I really hope more people weigh in on it.
 
There is a whole lot unsaid about the behind the scenes politics of this particular benchmark war.

I agree with calling out Ubi, Nvidia, and AMD to give their apparent shortcomings some context.

I do not agree with the obvious and strong bias in the way this was reported to us by Kyle. Each of us has a right to our opinions, the owners and operators of [H] included. A journalist reporting information to the public has a responsibility to try their very best to take a neutral stance. When opinion and analysis are given, they should be labeled as such.

To be fair to Kyle, he has been very forthright about his dislike of Nvidia (along with his reasons why). More recent video card reviews make me believe Kyle has made some effort to be unbiased.

I enjoy being able to read new news and information daily and very much appreciate the time and effort it takes to bring us, the [H] community, benchmarks, apples-to-apples comparisons, and performance data on the hardware we all love to build and use. The one strong critique I would give the [H] team is the lack of neutrality in the AMD/ATI vs. Nvidia reporting.
 

Nv stopped sucking with the 4xx series launches, thus Kyle and the crew seem less biased to Nv fans now. It has happened before and will again.

Nv's "exclude every one, even our own customers", business practices, are bound to show through from time to time in the way non Nv fanatics talk about Nv.
 
Actually, they both pulled fast ones during the Q3 era. When Ati did it, it was cheating; when Nv did it, it was called optimizing. Renaming the .exe back then got you a very different experience performance-wise. Though to be honest, the IQ loss was minimal to hardly noticeable when you were zipping along at 100 fps trying your best to slaughter all that stood before you in a fast-paced DM. Everyone optimizes on a per-game basis these days anyhow; that is not the issue. Batman: Arkham Asylum AA-style shenanigans are the issue. This benchmark, if AMD's assertions are truly correct, would be another one of those problematic shenanigans.

Though I am currently more inclined to believe that this is more of an "oops, we ran out of time" thing rather than a deliberate gimp on Ubi's part, even with Ubi's less-than-stellar history of fair play. If that turns out to be the truth, then Nv is simply trying to take advantage of a broken and meaningless benchmark to try and reduce any damage this launch does to their market share. Disappointing, but not unexpected.

I agree that Quake/Quack is an archaic discussion... Yet somehow a lot of people on the forum lay the issue at NVIDIA's feet as another reason why they're "evil." I only mention it because of the number of posts in this thread about it, blaming NVIDIA and not ATI.

You and I disagree on the nature of what counts as an optimization and what counts as a cheat.

NVIDIA optimized and did not compromise IQ in Q3A.

ATI cheated and turned Q3A into an unfiltered blurry mess... and the 8500 was still slower than the GeForce 3.

Any tweak that gives better performance, yet does not impact IQ negatively is cool. That would be an optimization. A tweak that gives better performance at the cost of noticeably impacted IQ is a cheat.

The Quake/Quack issue is one of the reasons I started reading [H] a lot. It's been ten years, I'm still here, and their reporting is still some of the best out there for enthusiasts.
 

I don't see how we disagree on Quack. I merely did not go into specifics and tried to shift the conversation away from something that really no longer matters. Both Nv and AMD have shifted away from driver optimizations that trade a great deal of IQ for speed. They seem to have moved on to buying optimization placements, or bombs for their competitors, in the games themselves. Or at least Nv seems to have gone there.

I will not buy Nv products until I can use the cards I bought for the purpose Nv stated they could be used for. The PhysX lockout is bullshit, and they lost a customer over it. If Batman: AA-style issues keep popping up, I can't see how I could ever reward their bad behavior by supporting them.

Though, again, I don't think this demo bench BS is as premeditated as AMD would have us believe. Then again, even if AMD's accusations are completely unfounded, a demo benchmark is generally only good for conversation anyway. It is not like I would assume that AMD or Nv would not optimize for the game in short order if it were popular. I would likely never have purchased it until it hit the $5 Steam sale and the DRM outside of Steamworks had been removed anyway, so on a personal level it is a really meaningless bench.
 