Benchmark Wars

It's a mixed bag of issues. While I'm pretty sure NVIDIA has their hand in Pande Group's back pocket, the secondary problem is that ATI/AMD decided to ditch Brook+, which is inefficient as hell for GPGPU, and moved to OpenCL, which means they have to rewrite the client from the ground up. They currently have an OpenCL client being tested internally; it will hit the closed beta testers within the next two months, be tested by them for another two months, and then eventually be released to the public. But this also falls on Pande Group for not hiring coders who actually know what they are doing. There are two BOINC projects that have already proven the 5870 eats NVIDIA cards for breakfast when it comes to GPGPU. For example, in Collatz it takes an overclocked GTX 480 about 8 minutes to finish a WU, compared to roughly 5 1/2 minutes for a stock 5870 and roughly 4 1/2 minutes for an overclocked 5870 on the same WU. MilkyWay@home is the other BOINC project that shows the same difference, with a 5870 taking less than half the time a GTX 480 needs to complete the same WU.
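
For a rough sense of the throughput gap those times imply, here's a quick back-of-the-envelope calculation (purely illustrative, using only the WU times quoted above):

#include <cstdio>

int main() {
    // WU completion times quoted above (minutes per Collatz WU).
    const double gtx480_oc = 8.0;   // overclocked GTX 480
    const double hd5870    = 5.5;   // stock Radeon 5870
    const double hd5870_oc = 4.5;   // overclocked Radeon 5870

    std::printf("WUs per hour: GTX 480 OC %.1f, 5870 %.1f, 5870 OC %.1f\n",
                60.0 / gtx480_oc, 60.0 / hd5870, 60.0 / hd5870_oc);
    std::printf("Stock 5870 vs OC GTX 480: %.0f%% more WUs per hour\n",
                (gtx480_oc / hd5870 - 1.0) * 100.0);
    return 0;
}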

So ATI owners will trounce NVIDIA on F@H next year. Nice, but for the time being the only way to fold $ for $ is an nVidia card. Given the above numbers with BOINC, owners of ATI 5870s should be pulling over 35k PPD in F@H! I thought there was something fishy about the ATI cards having four times the stream processors but pulling half the PPD of cards two years older. In the meantime, Folding@Home might as well have a friggin' nVidia splash screen that says "The Way It's Meant To Be Played".
 
Don't count on getting anything out of that. They allocated $2 million to be distributed across all claims. I believe that is $2 million before lawyers' fees, so you will be lucky to get anything.

Going by what I read about the class action, it could lead to a new working motherboard for my laptop or $50 (or less). A lot of it depends on how many people file to be part of the lawsuit. I don't care if I only get $2. That $2 will help buy a 6000 series card for my computer.
 
So AMD cards have poor tessellation performance. An upcoming benchmark includes lots of tessellation, and AMD asks the developers to tune it down so the benchmark runs better on their products. The developers refuse, and AMD decides to degrade the tessellation quality via their drivers.

And all you people blame NVIDIA? WTF?

Are you so sure it will be a downgrade? If AMD has a nicely adaptive implementation, they could achieve equal or higher image quality with far fewer triangles than the "brute force" approach Nvidia may be advocating.

Or you could be entirely correct. Time will tell.
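
For what it's worth, here's a rough sketch of what an adaptive implementation could look like — scaling each edge's tessellation factor by its projected screen size instead of using one brute-force factor everywhere. Purely illustrative; this is not AMD's actual method, and all the names are made up.

#include <algorithm>
#include <cmath>

// Pick a per-edge tessellation factor so subdivided segments end up roughly
// 'targetPixels' across on screen, instead of one fixed global factor.
// Purely illustrative -- a real implementation would do this in the hull shader.
float adaptiveTessFactor(float edgeWorldLength,     // edge length in world units
                         float distanceToCamera,    // eye-space distance to the edge
                         float verticalFovRadians,
                         float screenHeightPixels,
                         float targetPixels = 8.0f,
                         float maxFactor = 64.0f)   // D3D11 hardware limit
{
    // Approximate how many pixels the edge covers on screen.
    float pixelsPerWorldUnit =
        screenHeightPixels / (2.0f * distanceToCamera * std::tan(verticalFovRadians * 0.5f));
    float edgePixels = edgeWorldLength * pixelsPerWorldUnit;

    // Subdivide just enough that each segment is ~targetPixels long.
    return std::clamp(edgePixels / targetPixels, 1.0f, maxFactor);
}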
 
Are you so sure it will be a downgrade? If AMD has a nicely adaptive implementation, they could achieve equal or higher image quality with far fewer triangles than the "brute force" approach Nvidia may be advocating.

Or you could be entirely correct. Time will tell.


Agreed.

It really could be either way right now. I just don't think we have sufficient information at this point.

To your (Kaldskryke's) earlier point, it would be worthwhile doing an image quality assessment of both once AMD releases their supposed driver fix.
 
So ATI owners will trounce NVIDIA on F@H next year. Nice, but for the time being the only way to fold $ for $ is an nVidia card. Given the above numbers with BOINC, owners of ATI 5870s should be pulling over 35k PPD in F@H! I thought there was something fishy about the ATI cards having four times the stream processors but pulling half the PPD of cards two years older. In the meantime, Folding@Home might as well have a friggin' nVidia splash screen that says "The Way It's Meant To Be Played".


No, you can't compare BOINC to F@H; the WUs are completely different, and BOINC is a different distributed computing program. But based on BOINC, I wouldn't be surprised to see the 6970s breaking the 20k PPD barrier. Odds are that if that happens, we will magically see a points system change, because, you know, ATI cards can't be faster than NVIDIA cards.
 
Zarathustra[H];1036318387 said:
Agreed.

It really could be either way right now. I just don't think we have sufficient information at this point.

To your (Kaldskryke's) earlier point, it would be worthwhile doing an image quality assessment of both once AMD releases their supposed driver fix.

That could be interesting for sure.

I wonder if we'll ever get to the point where tessellation is applied so adaptively that it is throttled to target specific framerates. Wouldn't that cause havoc for reviewers? Suddenly they'd have to compare image quality or triangle counts instead of FPS. :p
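
It's not hard to imagine, either. Here's a crude sketch of a frame-time feedback loop that nudges a global tessellation scale toward a target frame time (hypothetical; I'm not claiming any shipping engine does exactly this):

#include <algorithm>

// Nudge a global tessellation scale each frame so frame time converges on a
// target. Hypothetical sketch; a real engine would filter frame times and
// limit the rate of change far more carefully.
struct TessellationGovernor {
    float scale = 1.0f;           // multiplier applied to every tess factor
    float targetFrameMs = 16.7f;  // ~60 fps

    void update(float lastFrameMs) {
        // Proportional control: frame too slow -> dial tessellation down,
        // frame faster than target -> allow more geometry.
        float error = (targetFrameMs - lastFrameMs) / targetFrameMs;
        scale = std::clamp(scale * (1.0f + 0.1f * error), 0.25f, 1.0f);
    }
};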
 
.....

you win

I'm sorry. I was not trying to demonstrate any sort of intellectual superiority with my "clarification" - I was just so unimpressed by the content of the original article (a "DUUUUH" moment, IMO) that I had to comment on SOMETHING. Graphics companies fixing benchmarks? Say it ain't so!
 
So AMD cards have poor tessellation performance. An upcoming benchmark includes lots of tessellation, and AMD asks the developers to tune it down so the benchmark runs better on their products. The developers refuse, and AMD decides to degrade the tessellation quality via their drivers.

And all you people blame NVIDIA? WTF?

Yeah, you pretty much nailed it. Since when are previews in the software industry equivalent to some sort of evil conspiracy? What AMD needs to do is provide details on why they think Ubisoft's implementation is suboptimal instead of whining like little bitches. We need facts, not allegations.
 
Ok, so there's a game that has a benchmark. The benchmark favors one GPU manufacturer over its competitor. The manufacturer whose hardware performs poorly says don't use it, while the manufacturer whose hardware performs well says we should....

All the while, fanboys of the manufacturer with the poorly performing hardware are crying foul....

Sounds like business as usual to me!
 
I'm sorry. I was not trying to demonstrate any sort of intellectual superiority with my "clarification" - I was just so unimpressed by the content of the original article (a "DUUUUH" moment, IMO) that I had to comment on SOMETHING. Graphics companies fixing benchmarks? Say it ain't so!

I wasn't being sarcastic. I did thoroughly enjoy your comment. It made me chuckle. I also agree with you that this was overblown. Any company will spend money to make their products look better. Do you really think Gatorade works better than Powerade just because the NFL says it's their drink of choice?

Nvidia is willing to spend dollars to improve the performance of their product. More power to them. I'm willing to bet that if AMD offered up more cash than Nvidia, this whole thing would be flipped. There is no morality in corporations, and it's foolish to expect any. Each company does what it can for its bottom line. Nvidia can afford to throw more dollars at devs than AMD.
 
No, you can't compare BOINC to F@H; the WUs are completely different, and BOINC is a different distributed computing program. But based on BOINC, I wouldn't be surprised to see the 6970s breaking the 20k PPD barrier. Odds are that if that happens, we will magically see a points system change, because, you know, ATI cards can't be faster than NVIDIA cards.

OK, so they're not the same, but I call foul because this is science, not a benchmark, and it has become something of a competition. There is no reason to favor one GPU over the other in the name of science.
Someone is getting a LOT of benefit from this offhanded shit, and, as usual, it's not AMD.
 
Ok, so there's a game that has a benchmark. The benchmark favors one GPU manufacturer over its competitor. The manufacturer whose hardware performs poorly says don't use it, while the manufacturer whose hardware performs well says we should....

All the while, fanboys of the manufacturer with the poorly performing hardware are crying foul....

Sounds like business as usual to me!

It's not an issue of X being better than Y.

"AMD has demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs, but the developer has chosen not to implement them in the preview benchmark. "

Rather, it's that X is better than Y without the improvements but not compared to Y with the improvements, so Y's improvements get discarded in the benchmark.
 
I will :) (even though English is not my primary language)

That doesn't seem to stop those of us for whom English IS our first language from typing some hideous crimes against our mother tongue from time to time.
 
Aside from Civ 5, we have yet to see many games adopt tessellation as anything more than a checkbox feature. Part of the problem lies in the difficulty of creating art assets that can be tessellated, since "standard" polygonal meshes don't always work well.
(...)
Another part of the problem is the lack of guides and examples for writing tessellation code. (...)

There's a third problem. If you were taking a walk through the streets of Unigine Heaven, you wouldn't be seeing your feet half the time, because tessellation changes surface heights, and that isn't fed back to the original object mesh used to calculate collisions. Actually, this is not at all a trivial problem to solve.
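
To make that concrete, here's a hedged sketch of the kind of CPU-side workaround it takes — sampling the same displacement map the tessellation uses so collision heights match the rendered surface. All names here are hypothetical.

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the GPU displaces tessellated vertices with a heightmap,
// so a CPU collision query has to sample the *same* heightmap, or characters
// sink into (or float above) the ground that is actually rendered.
struct Heightmap {
    std::vector<float> samples;   // row-major height values; worldX/worldZ >= 0 assumed
    std::size_t width = 0, depth = 0;
    float worldScale = 1.0f;      // world units per texel
    float displaceScale = 1.0f;   // must match the shader's displacement scale

    float heightAt(float worldX, float worldZ) const {
        // Nearest-texel lookup; a real engine would filter exactly like the
        // sampler used by the domain shader.
        std::size_t x = std::min(width - 1, static_cast<std::size_t>(worldX / worldScale));
        std::size_t z = std::min(depth - 1, static_cast<std::size_t>(worldZ / worldScale));
        return samples[z * width + x] * displaceScale;
    }
};

// Collision height = base mesh height + the same displacement the GPU applies.
float groundHeightForCollision(float baseMeshHeight, const Heightmap& hm,
                               float worldX, float worldZ) {
    return baseMeshHeight + hm.heightAt(worldX, worldZ);
}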

I'm genuinely curious what exactly HAWX2 does that performs so slowly on AMD hardware aside from simply going overboard on tessellation. Hopefully someone does some before/after analysis once AMD gets its driver in shape for it.

A rumour I heard is that the HAWX2 demo pushes the tessellation factor to unreasonable levels, where it provides no tangible image quality improvement while using a lot of tessellation power. If that's true, then it is a despicable tactic, and it will be easy for AMD to provide a fix. If it's true.
 
Wow, that took balls the size of Texas to post what Kyle posted.

I read with interest the last 5 pages, which basically boil down to:
- if a vendor pays a dev to improve the game for their platform, it is ok.
- if a vendor pays a dev to purposely worsen a game on the competitor's platform, it is NOT ok.

I am not really sure how I feel about the whole thing, though. As a PC gamer, I get "the feeling" that what is going on might be making the platform less competitive.

Specifically, like with consoles, where you buy into Sony, Nintendo, MS, etc...

I hope we don't end up in a situation where, at some point, we game on NVidia or AMD but not interchangeably. That would really suck.

Kyle, please, can someone at [H] write an editorial on the situation? The facts that you posted are priceless and speak very loudly, but I am sure a lot of your readers would love an educated editorial.

Either way, thanks for posting what you did, it sure is making a lot of us think.
 
Code bombs are real, dudes; do not try to deny it.

A piece of code that looks for competing hardware and disables or slows down multiple functions, the prime example being the PhysX implementations by NVIDIA; but Batman: AA is even more grotesque, as the code SEARCHED for ATI cards specifically to slow things down.

A decision to use the slower code/method to make a certain brand not appear slower in benchmarks, the prime example being the Assassin's Creed DX10.1 debacle.

The claim here is that NVIDIA is ASKING sites to use a canned benchmark to review a competing product, and people are still wondering if this is fair game?

Fair game would be Ubisoft releasing the source code so the AMD driver team can locate the code bombs and remove them. :)
 
Translation: AMD cards still suck in newer games.


Get your shit together, AMD, and have working drivers; stop blaming everyone else. StarCraft 2, Batman: AA, etc., etc.
 
The one and only question everyone should be asking at this point is "Why didn't they use the code?" Anyone who automatically jumps to a conclusion without more facts has a clear bias. We know why Nvidia is pushing it, and we know why AMD doesn't want people to use it. Both reasons are entirely selfish and entirely PR driven. Anyone who sees them as something else is delusional.

Now that brings us back to Ubisoft. Is it possible they didn't use it to make Nvidia look better? Oh hell yes. That is a real possibility and not beyond the realm of belief. However, it is only one of a number of possibilities. A few others (off the top of my head): the code made changes within the game that broke something else; the code arrived too late for them to test it properly and put it in; the code pushed something out of DX11 spec; Ubisoft didn't think the code was any good. And so on and so forth.

Neither company is above using underhanded tactics to get what it wants, and both have a big stake in the performance of these new GPUs.
 
Translation: AMD cards still suck in newer games.


Get your shit together, AMD, and have working drivers; stop blaming everyone else. StarCraft 2, Batman: AA, etc., etc.

Wow, talk about uninformed. You seriously think Batman: AA was bad drivers? It's been verified that they used a VendorID detection method to make it so it wouldn't work on ATI cards.
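
For the curious, the check itself is trivial to do. Here's a minimal sketch of reading the adapter vendor ID through DXGI — illustrative only, not the actual Batman: AA code, just what a vendor-ID gate would look like:

// Minimal DXGI adapter enumeration -- the field the alleged check keys off is
// DXGI_ADAPTER_DESC::VendorId (0x10DE = NVIDIA, 0x1002 = ATI/AMD).
// Illustrative sketch only; Windows + dxgi.lib required.
#include <dxgi.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory),
                                 reinterpret_cast<void**>(&factory))))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        const bool isAti = (desc.VendorId == 0x1002);
        std::wprintf(L"Adapter %u: %ls (VendorId 0x%04X)%ls\n",
                     i, desc.Description, desc.VendorId,
                     isAti ? L"  <- a vendor-locked code path would bail out here"
                           : L"");
        adapter->Release();
    }
    factory->Release();
    return 0;
}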
 
I don't understand why you guys want this artificial performance block to stay in; it hurts both AMD and nVidia to have tessellation where it's going to be wasted; i.e. having to do multiple passes on something that's already been done before.
 
Does anyone know where we can download this benchmark and try it for ourselves?
 
Remember that BOTH NVIDIA and ATI were caught cheating on the Futuremark benchmark.

I'll throw out an alternative explanation. What if ATI's tessellation performance is so bad that they are desperate to keep people from running the benchmark until they fix up some driver issues (remember all the driver issues they've been having in CrossFire recently, some of them ignored until [H] put a spotlight on them)?

I don't know if that's the case or not, but it's something that popped into my head reading the ATI explanation.
 
I say run it anyways. It would also benefit AMD when we show the before and after scores demonstrating how well AMD did implementing this driver-based solution they mentioned. If they're sure of themselves about the driver-based solution and how it'll increase performance, then they should have no objection to this.

100% agreement, running it anyways would be best.
 
I don't understand why you guys want this artificial performance block to stay in; it hurts both AMD and nVidia to have tessellation where it's going to be wasted; i.e. having to do multiple passes on something that's already been done before.

If it is an actual artificial performance block, then by all means, remove it.

The way I feel about things is that we can't trust either Nvidia or ATI in this stuff, so we have to be critical of everything they tell us.

It could be:

1.) Intentional pressure by nvidia to make AMD look bad.

2.) Simple poor coding on behalf of the Hawx developers, and an inability to update it to AMD's suggestions.

or

3.) AMD trying to cover up the fact that their boards are weaker at tessellation than Fermi.

My guess is that we are seeing a little of each:

The HAWX developers coded a product that bogs down cards with lesser tessellation abilities. The benchmark taxes all video cards harder on tessellation than needed, but this is noticed less on Fermi, due to its higher tessellation performance. Nvidia noticed this and started distributing the benchmark as a marketing tool, to try to take the momentum away from AMD's upcoming 68xx launch. AMD fired back with a tweak that may or may not impact image quality. The HAWX devs rejected the tweak due to timeframe, compatibility with their code, or some other reason.

My guess is that down the line we will see updated code in HAWX. But I have learned that updating code takes time, especially the part where you test whether it has any unintended consequences.
 
Remember that BOTH NVIDIA and ATI were caught cheating on the Futuremark benchmark.

I'll throw out an alternative explanation. What if ATI's tessellation performance is so bad that they are desperate to keep people from running the benchmark until they fix up some driver issues (remember all the driver issues they've been having in CrossFire recently, some of them ignored until [H] put a spotlight on them)?

I don't know if that's the case or not, but it's something that popped into my head reading the ATI explanation.

Out comes the Magical Balance Fairy...not just in this post, but it's an example.
 
Better be careful, or they're not going to send you any review cards, and they'll file a lawsuit for defamation, raise the card MSRP $10, and ban you from Best Buy. (The irony here is that none of that would be a shocker coming from NVIDIA's senile CEO, lol.)
 
I really, really liked it when someone on staff here (can't remember who it was) asked the forums for some benchmark suggestions.

I would love to see some benchmarks on games that don't have a monetary affiliation with graphics card companies (a la TWIMTBP), and I'm glad to see that HardOCP is committed to fairness.
 
If you scratch Ubisoft titles off the Christmas list, suddenly the software gets fixed.
Too bad we can't get that done with F@H.
 
Is that the chicken or the egg? Is it not being implemented because it's not needed, or because only half the cards out there can run it?

Zarathustra[H];1036318293 said:
Good point. This really could just be a matter of HAWX2 being particularly tessellation-heavy, and AMD just trying to cover up the fact that their parts are weaker at tessellation than Fermi.

So AMD cards have poor tessellation performance. An upcoming benchmark includes lots of tessellation, and AMD asks the developers to tune it down so the benchmark runs better on their products. The developers refuse, and AMD decides to degrade the tessellation quality via their drivers.

And all you people blame NVIDIA? WTF?

And how did you come to the conclusion that Nvidia's cards have such superior tessellation performance in real gameplay? I would love to see some tests.

Unigine Heaven is useless because it has no relevance to real use. Google how Fermi does tessellation to see why that is. As for the linked Civ V test, it is a) a benchmark, not gameplay, and b) with tessellation disabled (or set to lowest), the relative order of the cards remains the same (i.e., the 460 is still faster than the 5870), so there's something else affecting performance.
 
First PhysX, now this. Sighz. Good thing we have [H] to do real game play benching. We love you guys!
 
@Meeho, if nVidia had no advantage then what's AMD whining about? There was also a recent Huddy interview where he criticized nVidia for asking devs to use heavy tessellation in their games.
 
Wow, talk about uninformed. You seriously think Batman: AA was bad drivers? It's been verified that they used a VendorID detection method to make it so it wouldn't work on ATI cards.

Don't feed the troll. I've seen him spewing shit all over the AMD subforum.
 