Project CARS Devs Address AMD Performance Issues

This is one of the most retarded posts in this entire thread.

Where were you when people were giving Nvidia flak for keeping CPU PhysX as x87 instructions only?

To this day, the CPU-accelerated PhysX library is crippled by it being pure, non-SIMD x87 floats.

Since Kepler and later NV GPUs sometimes have fits with the old Ageia PPU code, Nvidia has deprecated it. CPU PhysX has been, and will always be, a performance-killing dead end.
in 2009-2010 I was enjoying awesome physx in batman arkham asylum and reading the same people in this thread pissing and moaning about how useless physx was and how there was only a handful of titles and, ironically, that it should be given to the community so that AMD free riders could benefit from the non-beneficial eye candy it didn't provide.

as for my "retarded" post vs. your response:

physx used x87...over 5 years ago. they didn't bother compiling it against SSE because they were moving forward with development on physx 3.
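for anyone wondering what the x87 vs. SSE complaint actually means in code, here's a rough, hypothetical illustration (a toy integration loop, not PhysX source): the scalar version is roughly what an x87-only build executes one float at a time, while the SSE2 version handles four floats per instruction.

#include <emmintrin.h> // SSE2 intrinsics
#include <cstddef>

// Scalar loop: roughly what an x87-only build boils down to, one float at a time.
void integrate_scalar(float* pos, const float* vel, float dt, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE2 loop: the same work, four floats per instruction (n assumed to be a multiple of 4 here).
void integrate_sse(float* pos, const float* vel, float dt, std::size_t n)
{
    const __m128 vdt = _mm_set1_ps(dt);
    for (std::size_t i = 0; i < n; i += 4)
    {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
}

that's the kind of easy win people were complaining the old 2.x CPU path never got.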

nVidia released physx 3 as open source, freely available on github for anyone and everyone to use, benefit from, fork to their heart's content. your information is outdated. quit relying on a blog post from 2010 for your information.

this kind of response (relying on information from years ago, asking how someone with access to the source code could have optimized their drivers and hardware to better handle the physics, while ignoring the fact that reducing the in-game physics settings does in fact reduce the physx demands on the cpu) is exactly why adults can't and won't take your opinions seriously.
 
Crap, I forgot to add: It does not lock anyone out either.
directx doesn't lock anyone out? are you serious? it doesn't lock out anyone on linux or osx? it doesn't lock out anyone who doesn't play ball in microsoft's court? microsoft even locks out their own customers for pete's sake to force Windows version migration!

wow! you guys are so misguided and misinformed I feel like I should pity you more so than anything else.
 
nVidia released physx 3 as open source, freely available on github for anyone and everyone to use, benefit from, fork to their heart's content. your information is outdated. quit relying on a blog post from 2010 for your information.

No you can't, look at the license...

You cannot distribute a change without Nvidia's explicit permission.
 
No you can't, look at the license...

You cannot distribute a change without Nvidia's explicit permission.
what are you talking about distributing a change?

the conversation is about whether nVidia's physx is a blackbox and whether AMD is simply locked out from optimizing their implementation of it on CPUs. Both claims are false.

The argument never was, and would be even more silly, whether AMD can take their code and redistribute it to someone (who doesn't even exist given that there are only two GPU vendors on the planet atm, lmfao)
 
what are you talking about distributing a change?

the conversation is about whether nVidia's physx is a blackbox and whether AMD is simply locked out from optimizing their implementation of it on CPUs. Both claims are false.

The argument never was, and would be even more silly, whether AMD can take their code and redistribute it to someone (who doesn't even exist given that there are only two GPU vendors on the planet atm, lmfao)

AMD can't optimize physx for CPUs. The source code is available for viewing, not editing as you wish. Additionally, how is that AMD's responsibility?
 
nVidia released physx 3 as open source, freely available on github for anyone and everyone to use, benefit from, fork to their heart's content. your information is outdated. quit relying on a blog post from 2010 for your information.
No, NVidia released the CPU code only as source-available, not open source, as there's a EULA and NVidia approves all changes (pretty much the opposite of open source). And please, Mope, you promised you were going to make us all feel so dumb by explaining to us exactly how AMD, through driver optimization, if they weren't lazy, would offload Physx from the CPU onto their GPU, or how you assured us that Physx is disabled in game on medium settings. ;)

Pro-tip: The only way AMD would be able to do anything about this is if NVidia released physx in its entirety (meaning GPU) in open source so AMD could make an openCL port which should work great on APUs too. As it is now, the only option available to AMD is to approach NVidia for licensing, and since they are the only real direct competitor obviously they would be priced out of competition. But your claims that you can disable physx in this game or that AMD just has to not be lazy and optimize drivers is hilarious.
 
what are you talking about distributing a change?

the conversation is about whether nVidia's physx is a blackbox and whether AMD is simply locked out from optimizing their implementation of it on CPUs. Both claims are false.

The argument never was, and would be even more silly, whether AMD can take their code and redistribute it to someone (who doesn't even exist given that there are only two GPU vendors on the planet atm, lmfao)

You cannot touch the implementation to try and do something like optimize the runtime, or, for example, make it run on OpenCL without Nvidia's agreement. AMD can't do it, Intel can't do it, you can't do it - not without Nvidia saying you can. Furthermore, you're agreeing that they own all rights to any modifications to the code you made if you even got that far somehow.


You cannot simply go and fork it, improve it for your platform, and use it without violating the license.
 
I've actually read the license and I suspect that you have not.
Did you read it before or after you figured out that setting medium graphics setting on the game disables physx (wrong) or that an AMD driver update could somehow offload physx to the GPU (wrong)? :D
 
Literally copy pasted chunk of it.

Yeah man this somehow totally disagrees with what I said

tjsshhff said:
// Notice
// NVIDIA Corporation and its licensors retain all intellectual property and
// proprietary rights in and to this software and related documentation and
// any modifications thereto. Any use, reproduction, disclosure, or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA Corporation is strictly prohibited.
//

You cannot go and make your own awesome version and let it loose on the world without their permission.
 
Literally copy pasted chunk of it.

Yeah man this somehow totally disagrees with what I said



You cannot go and make your own awesome version and let it loose on the world without their permission.
looks like a standard EULA to me. what's your problem with it?

the argument for years, and in this thread, has been that nVidia created a proprietary black box SDK that AMD couldn't access in order to optimize their own drivers against and that this somehow unfairly crippled their cards' performance.

I point out that nVidia actually freely provides the source code to their proprietary developments and now you guys are going to cry that they should be releasing it under an open source license so that AMD can profit off their hard work? And even doing that isn't enough? You want nothing short of nVidia dropping the source code for their dGPU physX implementation? Your entitlement knows no bounds! :eek:

Instead, AMD is perfectly capable of taking the source, tweaking it, forking it, doing whatever they want with it; they could even come up with their own dGPU implementation, and the *only* thing they have to do in order to distribute it is license it from nVidia?

Are you guys still in high school? Those are more lenient terms than some open source licenses! "Open source" doesn't mean "free to use" as in the developers can't require compensation. It simply means here is our brain work for everyone to look at and learn from. nVidia bought the tech, developed it into version 3, released version 3 for free, and developed that into the dGPU implementation, and AMD, if they have the ability and desire, can do that themselves or license the tech from nVidia. This has been nVidia's official stance since at least 2009.

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can’t really give PhysX away for “free” for the same reason why a Havok license or x86 license isn’t free—the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.
-- https://forums.geforce.com/default/...nswers-its-first-week-of-questions-oct-23rd-/

Yours is a completely unreasonable and untenable position for a business to follow. It doesn't even make any sense. And ducman's little potshots appear juvenile, but more importantly they underscore just how little he understands about physics engines or code development.
 
After my ranting about AMD and drivers, it has come to my attention that the devs lied to everyone, and basically this is an Nvidia GameWorks game with built-in PhysX.

The devs did admit to it, and basically lied to everyone.

Hopefully this is not the state of games going forward.

So I will eat crow now since I was wrong
 
After my ranting about AMD and drivers, it has come to my attention that the devs lied to everyone, and basically this is an Nvidia GameWorks game with built-in PhysX.

The devs did admit to it, and basically lied to everyone.

Hopefully this is not the state of games going forward.

So I will eat crow now since I was wrong
oh that's rich...yep, they lied even though nVidia PhysX was announced two years ago for both projectcars and star citizen :rolleyes:

the funny thing?
here's SirPauly talking about that announcement back in 2013
http://forums.anandtech.com/showthread.php?t=2348781

and here's SirPauly moaning about nVidia "sneaking in GPU physX."
http://forums.anandtech.com/showthread.php?t=2430693&page=13

maybe he forgot that he knew about it in 2013? :confused:

along with CarFax stating the obvious (to everyone except ducman apparently):
"These games will use the latest PhysX 3.xx, which will run much faster on multicore CPUs. It will require an NVidia GPU to play on the highest level most likely, but I think it should run fine on low and medium settings on a fast CPU."


same people moaning about the same issues for years now. The only thing that would satisfy them is if we didn't have dGPU physics acceleration or nVidia just gave away all the tech they purchased and developed. sounds like a real rosy picture.

how about you spend a fraction of the energy you expend on these forums over the years and contact AMD and tell them you'd like them to either license the tech from nVidia or develop their own dGPU acceleration...oh that's right, you won't because PhysX doesn't bring anything to the table and that's why you want it so badly...wait, er, what?
 
oh that's rich...yep, they lied even though nVidia PhysX was announced two years ago for both projectcars and star citizen :rolleyes:

the funny thing?
here's SirPauly talking about that announcement back in 2013
http://forums.anandtech.com/showthread.php?t=2348781

and here's SirPauly moaning about nVidia "sneaking in GPU physX."
http://forums.anandtech.com/showthread.php?t=2430693&page=13

maybe he forgot that he knew about it in 2013? :confused:

along with CarFax stating the obvious (to everyone except ducman apparently):
"These games will use the latest PhysX 3.xx, which will run much faster on multicore CPUs. It will require an NVidia GPU to play on the highest level most likely, but I think it should run fine on low and medium settings on a fast CPU."


same people moaning about the same issues for years now. The only thing that would satisfy them is if we didn't have dGPU physics acceleration or nVidia just gave away all the tech they purchased and developed. sounds like a real rosy picture.

how about you spend a fraction of the energy you expend on these forums over the years and contact AMD and tell them you'd like them to either license the tech from nVidia or develop their own dGPU acceleration...oh that's right, you won't because PhysX doesn't bring anything to the table and that's why you want it so badly...wait, er, what?


Amd users don't want physx lol they want games to work on both brands of gpus. What don't you get about that?

This game was designed only with nvidia in mind. If that's what they knew they were doing, why lie about it, and why sell it to amd customers?
 
Amd users don't want physx lol they want games to work on both brands of gpus. What don't you get about that?

This game was designed only with nvidia in mind. If that's what they knew they were doing, why lie about it, and why sell it to amd customers?
Because when I point out that you can simply turn the physics down in the settings that's not good enough. And when I point out that AMD can optimize their drivers against the open source code the response is that nVidia should give away all their developments over the last decade. So clearly nothing is satisfying except a free ride for AMD and their consumers despite how often you guys switch your demands back and forth depending on the answers you're given.

And there wasn't any lying. In 2013 the devs announced this would be a PhysX game.
I can't really answer why they would sell their game to consumers; I'm as befuddled as you on that one :rolleyes:
 
Because when I point out that you can simply turn the physics down in the settings that's not good enough. And when I point out that AMD can optimize their drivers against the open source code the response is that nVidia should give away all their developments over the last decade. So clearly nothing is satisfying except a free ride for AMD and their consumers despite how often you guys switch your demands back and forth depending on the answers you're given.

And there wasn't any lying. In 2013 the devs announced this would be a PhysX game.
I can't really answer why they would sell their game to consumers; I'm as befuddled as you on that one :rolleyes:


Turning down settings isn't the same as disabling physx. There is no reason my r9 290 should be performing the same as a 660ti in this game when it trades blows with a 970 in everything else I have thrown at it. You wouldn't think something dodgy was going on if you bought a game with an amd "feature" that made your 970 or 980 run like a 7770?? Lol. If you can't be honest about that then there's no point going any further in this conversation.
 
Turning down settings isn't the same as disabling physx. There is no reason my r9 290 should be performing the same as a 660ti in this game when it trades blows with a 970 in everything else I have thrown at it. You wouldn't think something dodgy was going on if you bought a game with an amd "feature" that made your 970 or 980 run like a 7770?? Lol. If you can't be honest about that then there's no point going any further in this conversation.
Do you actually own the game? Or are you repeating what you've read?
 
Do you actually own the game? Or are you repeating what you've read?

So you're going to avoid the question I asked?

I own it, and a mate of mine who owns a 770 owns it too. I took my wheel around to his place expecting it to struggle to run it; I wasn't aware of physx being handled differently in this game, so I was quite surprised when it ran better on his 770 than on my 290, when I normally average around 20+ fps more than him in all games. So I googled. At this stage I am seriously considering running it on my backup system that has a 750ti. Probably get similar performance lol.
 
Turning down settings isn't the same as disabling physx. There is no reason my r9 290 should be performing the same as a 660ti in this game when it trades blows with a 970 in everything else I have thrown at it. You wouldn't think something dodgy was going on if you bought a game with an amd "feature" that made your 970 or 980 run like a 7770?? Lol. If you can't be honest about that then there's no point going any further in this conversation.
We need to take a few steps back here because there are some trolls in this thread really muddying the waters.

This "saga" has been going on for over 5 years. I'll take you back to Batman.

There was a checkbox to use "PhysX" in the game settings. You could turn PhysX on or off. But no game has multiple physics engines. It's not like you use PhysX on your GPU or turn it off and use Havok on your CPU--what you did was route the calls through the CPU, and you didn't get as good physics in your game experience.

The AMD crowd, some of them in this very thread and over on anandtech forums in fact, bayed and whined that they were being left out of the picture because PhysX (and physics in general) could be calculated on the CPU. nVidia was claiming that it couldn't be done as effectively because of the parallel calculations that GPUs were capable of doing, whereas AMD was marketing to its consumers that nVidia was lying. Some people even managed to get PhysX to run when they were using AMD cards, thereby "proving" to themselves and people who wanted to believe it that nVidia was simply needlessly vendor locking their PhysX to nVidia GPUs to increase sales. nVidia claimed that they needed to vendor lock to prevent that kind of shenanigans because PhysX ran like shit when routed through the CPU using the complex physics calls they wanted routed through their GPUs, and it made PhysX look bad (and was particularly harmful to their brand during its infancy).

The response from AMD consumers was that it ran like shit, that it didn't really add anything anyway, and that nVidia was an evil company. nVidia consumers enjoyed Batman and talked about the cool ripples in his cape. If you think I have the history on that incorrect, search the forums; you can probably plug my username into your search terms to help locate the threads, because the arguments then were the same as now.


Now we've got over 5 years of development into PhysX, branding, enhancements, SDKs, source code, etc., but the situation is the same: if you want the full enchilada that PhysX has to offer, you need an nVidia GPU.

Now when you turn the settings to high or whatever the threshold is (I actually posted three different ways to turn the physics up or down according to your card's capabilities and desired performance level), you are going to get PhysX that is meant to be processed through an nVidia GPU. You can't get high physics without using PhysX the way it's intended to be used and still get decent performance. There is no physics engine other than PhysX in the game and that's not because nVidia pulled a fast one on anyone--that's how physics engines are used in a game engine!

What you can do, and what has always been the case when PhysX is used as the physics engine, is turn the physics complexity down and get some sort of playable PhysX emulated on a CPU. This time, however, a "clever" (too clever by half, it turns out) AMD developer managed to push the whole enchilada onto the CPU so you could actually select high physics without an nVidia GPU. Years ago, remember, the game locked AMD customers out from doing this and they shit their pants. The results are predictable--it runs like shit when you do that...so you should not do that.

The solution is to turn down the physics complexity until you get your desired framerates. I'm sorry if you'd like to turn physics completely off in a racing game; that's probably not possible in the literal, pedantic sense that ducman has a special talent for dragging these conversations into the weeds over, because reasonable people would never expect that to even be an option, or desire that option even if it were.

When the developer or myself say, turn PhysX "off" we're referring to making sure it's not set at a level that expects and needs an nVidia GPU to operate fluidly. There is still PhysX operating on the CPU. PhysX on a CPU is not the problem...cranking it up to the max and expecting it to run the same way on a CPU is the problem.

No game, at least that I know of, uses multiple physics engines and switches them on the fly depending on GPU vendor. Requesting that is ludicrous. And the alternatives (that we simply have no accelerated physics on GPUs, or that nVidia simply give their tech away at no cost to the competition or to people who aren't their customers) are equally ridiculous for what should be fairly obvious reasons.
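To make the "routed through the CPU" part concrete, here's roughly what the choice looks like when a game sets up a PhysX 3.x scene. This is a from-memory sketch, not anyone's shipping code, and exact signatures vary between 3.x SDK releases:

#include <PxPhysicsAPI.h>
using namespace physx;

// Sketch of PhysX 3.x-style scene creation. The CPU dispatcher always exists;
// the GPU dispatcher is only available when a CUDA context can be created,
// i.e. on nVidia hardware. Signatures differ slightly between 3.x versions.
PxScene* createScene(PxFoundation& foundation, PxPhysics& physics, bool wantGpu)
{
    PxSceneDesc desc(physics.getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.filterShader  = PxDefaultSimulationFilterShader;
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4); // physics work spread over 4 CPU worker threads

#if PX_SUPPORT_GPU_PHYSX
    if (wantGpu)
    {
        PxCudaContextManagerDesc cudaDesc;
        PxCudaContextManager* cuda = PxCreateCudaContextManager(foundation, cudaDesc);
        if (cuda && cuda->contextIsValid())
            desc.gpuDispatcher = cuda->getGpuDispatcher(); // offload eligible work to the GPU
    }
#endif

    // With no valid CUDA context (any non-nVidia card), everything runs on the CPU dispatcher.
    return physics.createScene(desc);
}

Turning the in-game physics complexity down just means less work hitting that CPU dispatcher, which is the entire point of what the developer and I have been suggesting.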
 
So you're going to avoid the question I asked?

I own it, and a mate of mine who owns a 770 owns it too. I took my wheel around to his place expecting it to struggle to run it; I wasn't aware of physx being handled differently in this game, so I was quite surprised when it ran better on his 770 than on my 290, when I normally average around 20+ fps more than him in all games. So I googled. At this stage I am seriously considering running it on my backup system that has a 750ti. Probably get similar performance lol.
I'm not avoiding your question. I'm trying to help you out. If you get defensive and turn this into a pissing contest, then I don't need to waste my spare time helping you out. Got it?

Did you try turning the settings down like I posted earlier in the thread?

To be clear: you have the game, you have an AMD card, and did you turn the physics complexities down as much as you could and it still runs like crap?
 
*physX is not being "handled differently" in this game.

It's the same as it always has been: physx runs best when you have an nVidia card. If you don't you have to turn your physics settings down because it's going to use the older API and emulate it on the CPU.

The only thing that might be considered different is that an AMD developer managed to push all the calls through the CPU so it's tanking your processor. In the past, you couldn't select those high settings and AMD consumers shit their pants. Then developers simply allowed you to select those high settings but didn't actually implement them (and AMD consumers came to the boards to tell all of us how useless those settings were, go figure) because they got tired of the shitstorms.
 
After my ranting about AMD and drivers, it has come to my attention that the devs lied to everyone, and basically this is an Nvidia GameWorks game with built-in PhysX.

AMD said they would fix it. How could they fix it if it's not their fault? If Gameworks were truly blocking them, there would be no fix possible.
 
We need to take a few steps back here because there are some trolls in this thread really muddying the waters.
Couldn't have said it better myself.

To recap, it was explained to mope pages back by myself and several others that you cannot disable physx in this game, because they made it integral to the engine, so all you can do is reduce the graphics settings to the level a far inferior, outdated NVidia card plays at. Mope insisted that if you reduce the graphics to the "medium" setting, physx would be disabled; we explained to her why that is wrong, and now she is pretending she knew that all along and is repeating what we told her as if it were her idea, lol!

Further, she goes on to say that AMD, if they weren't so "lazy", could somehow offload physx processing to the GPU, and we have explained to her as well why this is ridiculous, and that Physx is not open source, it's CPU-only source-available, and so completely out of AMD's control.

She goes on further to insist that physx on the CPU is not the problem, when it's shown even on NVidia cards that if you offload to the CPU, physx processing on the CPU obviously is the problem. NVidia has locked competitors out of any other option, including running a cheap NVidia card for physx processing, due to a check for whether an AMD card is present, so there is absolutely nothing AMD can do to fix NVidia's product.

And NVidia and its sponsored shill games don't want it fixed. As was pointed out, it is fine if NVidia wants to sponsor some games that are rendered effectively unplayable on non-NVidia hardware. We just ask that they be honest about this, and not lie and say that AMD was out of communication, which they were not, or that this is somehow an AMD driver problem, when testing shows it's simply a Physx problem.

And yes, I know the trolling misinformation is annoying, but arguing will just result in pages of nonsense burying any useful discussion.
 
I quoted the developer and forum users detailing how to reduce the settings in order to disable the problematic PhysX calls. "Disabling PhysX" can only be understood in context to be referring to the problematic calls because one could not, and would not want to, completely disable the physics in a game. That's never been the case even when there was an explicit "disable PhysX" button in the game--it merely referred to disabling PhysX rendering through the GPU.

That I suggested anything else is merely a strawman that you created.
Everyone else understood exactly what the developers, the forum users, and my posts intended to convey.

I never even hinted that AMD could somehow "offload physx processing to the GPU." You just completely fabricated that claim. Just another example of you shitting up the thread.

I said that if AMD weren't so lazy they would optimize their drivers to route the appropriate calls correctly. You claim to be an engineer, but it's clear you don't understand the first thing about game engines, physics engines, or programming at even its most basic level. Your posts, if they weren't so full of ridiculous misinformation and pointless trolling, would be laughable.

The whole point of the significance of nVidia opening their source code to their competitors is completely lost on you because of your complete disregard for good-faith discussion and utter lack of understanding of how coding APIs work. You changed your argument so many times it's damn near impossible to track what you're even claiming, so after many pages you just figure people will get lost in the weeds.

You first claimed the API was a blackbox, so there was no way AMD could optimize their drivers. Then, after I pointed out that it isn't a blackbox anymore because nVidia provides the source code, you shifted to claiming they need the source for how it's done on the GPU in order to do anything. Then, when I posted that nVidia offers to license their tech to anyone, including AMD, and has had that offer on the table for 6 years and counting, you simply dropped the whole thing and started a pedantic argument over the use of "open source", arguing about licensing terms to try and claim that open source means anyone can do anything they want with it and that proprietary means no one can do anything with it at all (you are hilariously misinformed on both counts, but of course you wouldn't know that because you clearly haven't ever coded under an open source or proprietary license and have no experience or expertise to draw from).

Just give it up. You're a free rider and your bottom line is that nVidia must give all of their IP away for free, just like your argument that modders should give all their IP away for free. That's your constant MO spanning years on this forum. That's your philosophy on life as you've demonstrated on this forum. Your behavior undermines your claim that you are a professional in a tech field or that you make 6 figures as an engineer. Your philosophy of entitlement, abrasive discussion "skills," and illogical claims are juvenile and taxing.
 
I'm also male, but you already knew that and consistently refer to me as a woman in an immature attempt to get under my skin as if being a woman would be an insult. It's only an insult to someone with your mentality, bud, so you can go ahead and try and find some other way to denigrate my posts.
 
I quoted the developer and forum users detailing how to reduce the settings in order to disable the problematic PhysX calls.
Yes, we already explained to you that you cannot turn off PhysX and can only reduce settings, but having high-end non-NVidia cards reduce graphics settings to those of archaic NVidia cards is a joke of a fix, as there are plenty of games with beautiful graphics on which AMD trumps NVidia cards that are over twice the price. So as was explained to you, this is an NVidia-sponsored game that is crippled for non-NVidia video cards due to their Physx implementation and its horrible CPU-based performance.
"Disabling PhysX" can only be understood in context to be referring to the problematic calls because one could not, and would not want to, completely disable the physics in a game.
Mope, you literally replied AFTER it was already explained to you by multiple people that you can't turn Physx off, only reduce graphics settings to reduce the load. Yet in response to that you insisted "Of course you can turn it off", indicating that you just have to reduce the settings to "medium", probably because you read that in a forum post somewhere else and didn't realize until later that you were wrong. Otherwise, there was no point in arguing with people saying what you now claim you "meant" all along. :rolleyes:
I never even hinted that AMD could somehow "offload physx processing to the GPU." You just completely fabricated that claim. Just another example of you shitting up the thread.
O RLY? :D
mope54 said:
AMD developers took a shortcut and pushed all the physics onto the CPU.
Please explain to us, champ: how exactly did AMD "push all the physics onto the CPU" when Physx, by design intent from NVidia, cannot run on AMD GPUs? What "shortcut" did they specifically take that would have allowed them to NOT "push all the physics onto the CPU", and where exactly would it go other than onto the GPU? Just admit that you are just reading the other forum thread, trying to understand it, and making mistakes because you don't know what you're talking about, and then, when busted on it, you try to pretend you either never made the claim or you backhandedly disagree with us, only to later repeat exactly what we were telling you and pretend that's what you meant the whole time, lol!
 
The AMD crowd, some of them in this very thread and over on anandtech forums in fact, bayed and whined that they were being left out of the picture because PhysX (and physics in general) could be calculated on the CPU. nVidia was claiming that it couldn't be done as effectively because of the parallel calculations that GPUs were capable of doing, whereas AMD was marketing to its consumers that nVidia was lying. Some people even managed to get PhysX to run when they were using AMD cards, thereby "proving" to themselves and people who wanted to believe it that nVidia was simply needlessly vendor locking their PhysX to nVidia GPUs to increase sales. nVidia claimed that they needed to vendor lock to prevent that kind of shenanigans because PhysX ran like shit when routed through the CPU using the complex physics calls they wanted routed through their GPUs, and it made PhysX look bad (and was particularly harmful to their brand during its infancy).

The response from AMD consumers was that it ran like shit, that it didn't really add anything anyway, and that nVidia was an evil company. nVidia consumers enjoyed Batman and talked about the cool ripples in his cape. If you think I have the history on that incorrect, search the forums; you can probably plug my username into your search terms to help locate the threads, because the arguments then were the same as now.
It wasn't PhysX in Batman: Arkham Asylum that was the issue. It was the fact that AA was only available to nVidia video cards. Nvidia had the developers include a VendorID lockout on AA to prevent AMD users from enabling it. And in addition, some of the AA calculations were still being done on the AMD systems even though they weren't being allowed to display it! This created unnecessary overhead and artificially slowed down any computer with an AMD card in it.

’Amusingly’, it turns out that the first step is done for all hardware (even ours) whether AA is enabled or not! So it turns out that NVidia’s code for adding support for AA is running on our hardware all the time – even though we’re not being allowed to run the resolve code!
So? They’ve not just tied a very ordinary implementation of AA to their h/w, but they’ve done it in a way which ends up slowing our hardware down (because we’re forced to write useless depth values to alpha most of the time…)!
it turns out that the code that was labeled "MSAA Trademark NVIDIA Corporation" was identical to the one recommended by both sides with one very important difference: nVidia’s code features a Vendor ID lock that allows for AA menu inside the game only if nVidia hardware is detected.
After carefully reviewing statements released by all sides, talking to developers, there isn’t much left. Forum members on Batman: Arkham Asylum Forums and AMD themselves all changed the vendor ID on their cards and got equal functionality as the one experienced on nVidia cards. This happened at the time when nVidia members both claimed that Batman AA code is proprietary to Eidos and that no vendor ID locks is implemented in the code. "Hands in a cookie jar" is the only parallel we can draw here.

http://www.vrworld.com/2009/11/04/batmangate-amd-vs-nvidia-vs-eidos-fight-analyzed/

And now here we are again today with Nvidia deliberately causing performance issues with AMD cards. As an AMD owner, I couldn't care less what Nvidia does to its own customers. But I get pissed when it actively sabotages gaming performance on MY system.
 
It wasn't PhysX in Batman: Arkham Asylum that was the issue. It was the fact that AA was only available to nVidia video cards. Nvidia had the developers include a VendorID lockout on AA to prevent AMD users from enabling it. And in addition, some of the AA calculations were still being done on the AMD systems even though they weren't being allowed to display it! This created unnecessary overhead and artificially slowed down any computer with an AMD card in it.





http://www.vrworld.com/2009/11/04/batmangate-amd-vs-nvidia-vs-eidos-fight-analyzed/

And now here we are again today with Nvidia deliberately causing performance issues with AMD cards. As an AMD owner, I couldn't care less what Nvidia does to its own customers. But I get pissed when it actively sabotages gaming performance on MY system.

Do you actually have proof that nVidia deliberately caused performance issues with AMD cards?

EIDOS reached out to AMD and nVidia for help with Arkham Asylum.
EIDOS pro-actively got an engineer from nVidia, for the majority of development.
EIDOS did not get an engineer from AMD.

Asylum is written on the UE3 engine. There was no built-in AA available for that engine under DirectX 9c.

EIDOS asked for assistance for creating an AA module for the engine.
EIDOS received proprietary code from nVidia, written, paid for, and QA tested by nVidia in house with EIDOS.
nVidia had a VendorID block on the code to do an IF (nVidiaCard) / Else (RunOtherAARenderingMethod)
EIDOS did not get anything from AMD. Thus there was never an ELSE.
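For reference, that IF(nVidiaCard)/ELSE gate is only a few lines against the D3D9 adapter identifier. Something in this spirit (a hypothetical sketch, not the actual Arkham Asylum code):

#include <d3d9.h>

// Hypothetical vendor ID gate, not the shipped game code.
bool adapterIsNvidia()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return false;

    D3DADAPTER_IDENTIFIER9 id = {};
    const bool isNvidia =
        SUCCEEDED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id)) &&
        id.VendorId == 0x10DE; // 0x10DE = nVidia, 0x1002 = ATI/AMD
    d3d->Release();
    return isNvidia;
}

// if (adapterIsNvidia()) { expose the in-engine MSAA resolve }
// else                   { nothing: the ELSE branch was never written }

The check itself is trivial; the point is that nobody ever supplied the ELSE.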

It turns out, though, that nVidia's AA code was well written and basic enough that it ran well on AMD hardware, something nVidia had not specifically intended. EIDOS' legal team is the one that enforced the VendorID, but it was all spun as nVidia's doing by AMD, specifically Richard Huddy. nVidia of course responded that there was some confusion regarding ownership of the code, but explicitly stated EIDOS had full rights to the source code; they licensed it to use however they saw fit. They then reached out to EIDOS to bypass the vendor lock, allowing ALL users to use their code since it was working well enough.

All the while, AMD was 100% focused on DirectX 11, according to AMD/ATi internal emails, and could not "spare" the resources for EIDOS.

AMD never added their own AA code, despite Richard Huddy dragging nVidia's name through the mud, even though he admitted AMD/ATi were too busy working on DX11 to bother with DX9 titles.

You have AA in Arkham Asylum thanks to nVidia.
 
You can't even have bothered to read the quotes I posted above if you're asking those questions. Please read through the article I linked and come back if you have questions that aren't answered there.

It's obvious you have an axe to grind with nVidia. Every single claim you have made has been more grandiose than the last.

Batman: Arkham Asylum.

Did nVidia write the Anti-Aliasing code? Yes.
Did nVidia make EIDOS remove the vendor ID filter? Yes.
Did nVidia provide an onsite engineer for EIDOS? Yes.
Did AMD provide an onsite engineer for EIDOS after request? No.
Did AMD respond to requests for an AA module for UE3? No.
Did AMD have its resources tied up with DX11? Yes.

Does AMD currently have its resources tied up with DX12 and Windows 10?
Project Cars performance seems to increase by 25~40%. So I would say yes.
 
Prime 1, Mope, and ragingcain, how many Titan Xs or 980 Tis did nvidia promise you?

I am currently signed up as a GameWorks developer (indie). But have no other affiliation with NVIDIA.

Full disclosure though, I own stock in AMD. I own a R9 290X II Devil 13, an FX 8370, an i7 4970K, and 780 Ti SLi.

Here is my AMD baby: http://www.bytemedev.com/introducing-my-amd-gaming-build-bloodsnow/

I am quite knowledgeable about both vendors and how they interact with developers. Nvidia's track record with developers has been stellar.

Even in this case with Project Cars, who did they call out as not being there or providing resources?
 
Cool, Ragingcain, but mope and Prime 1 do take shilling and fanboyism to the next level. But what I hate about SMS is that it's not the first time they did this. They are a bunch of amateurish nvidia fanboys.
 
Cool, Ragingcain, but mope and Prime 1 do take shilling and fanboyism to the next level. But what I hate about SMS is that it's not the first time they did this. They are a bunch of amateurish nvidia fanboys.

I am not trying to take away from your frustration. I just wanted to make it clear that the majority of you are rallying behind the idea that GameWorks and nVidia are responsible.

I am not hearing anyone going after the developer, who is ultimately responsible for the design decisions of a game... or maybe actually going after AMD for creating inconveniences for their customers.

Facts Are:
1.) AMD's Roy Taylor talked with Ian and said they can resolve issues.
2.) AMD hardware is running great on their latest Windows 10 build drivers.
3.) AMD has a case precedence of focusing the majority of their resources on up coming products and APIs.
4.) AMD has greatly slowed down driver releases. I believe the Omega Catalyst, whose binaries were dated from November, finally had an update in March. That's five months of AAA titles that almost always need driver updates, which were not received.

Bribing a developer to hinder the competition is extremely illegal, and it's exactly why Intel paid AMD $1.5 billion for their anti-trust / anti-competitive behavior.

AMD's stumbling does not have anything to do with nVidia. It has nothing to do with the Batman: Arkham Asylum issue created 7 years ago, and nothing to do with the GTX 970 4GB snafu.

Criticizing AMD is not exonerating nVidia for their past mistakes or future ones. It's just holding AMD accountable for AMD's actions. The only way AMD is ever going to "grow up" into a sustainable business is to take responsibility for their actions and stop finger-pointing at nVidia every time they slip up [looking at you, Richard Huddy -.-].

Not one developer to date has corroborated Richard Huddy's or AMD's claims about GameWorks. So either nVidia has bribed every developer on Earth using GameWorks, or it isn't true.
 
Raging, I understand your sentiments my friend, but why this game, when every other gameworks title works fine? Are nvidia so scared of the r9 390x that they are resorting to gimping the competition on purpose? gameworks is a blackbox system; on amd gpus it offloads the physics tasks to the cpu, while on nvidia systems the physics is shared between the cpu and gpu. What also irks me especially is that this ain't the first time SMS has done something like this.
 
Raging, I understand your sentiments my friend, but why this game, when every other gameworks title works fine? Are nvidia so scared of the r9 390x that they are resorting to gimping the competition on purpose? gameworks is a blackbox system; on amd gpus it offloads the physics tasks to the cpu, while on nvidia systems the physics is shared between the cpu and gpu. What also irks me especially is that this ain't the first time SMS has done something like this.

Good questions.

A.) Okay as a developer, the GameWorks libraries are not all black box. I mean they are for the really cheap "Indie" licenses, like the ones I have. But these AAA projects have full access to source code.

Let's break down why GameWorks performs worse on AMD:
1.) There is code that "makes things worse" on purpose. (Keep in mind it is designed to run on AMD hardware too.)
2.) It's the result of time and resources from nVidia working alongside the developer to make things run great on nVidia hardware.

Now, we know #2 is true from developer feedback; I am sure even AMD fans can admit to that.

#1 is harder to prove; I have certainly not seen anything like that, though mind you, I am essentially black box in my dealings with GameWorks. No AAA publisher or dev using GameWorks has come forward to say it's doing that.

Then add the fact that AMD hardware is running Project CARS like a champ in Windows 10, and that still uses GameWorks.

The PS4 and Xbox One both have GameWorks libraries very well optimized too since they are closed systems.

Also add the fact: SMS called AMD out for not being involved in the optimization process till now.

Also keep in mind that just about every console port lately has required insane amounts of VRAM and ran like crap on both nVidia and AMD.

Patch_Dogs anyone? Assassin's Creed: Unity - Inside Out Face Edition? Wolfenstein needing 4.5GB of VRAM for 1080p? Seriously?
 