PhysX on nVidia cards won't work if an ATI card is used

alg7_munif | Supreme [H]ardness | Joined: Oct 9, 2006 | Messages: 5,862
You could do it on Windows 7, but not anymore since the release of the 186 graphics drivers.

Saw this on another thread but I think that this deserves its own thread:
http://www.ngohq.com/graphic-cards/16223-nvidia-disables-physx-when-ati-card-is-present.html
For a variety of reasons - some development expense, some quality assurance and some business reasons - NVIDIA will not support GPU-accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs.

I hope this won't create problems for people with an nVidia GPU on AMD 790GX systems.
 
I guess they don't want people to buy a cheap 8600GT to pair with a high-end AMD card; I'm sure they'd prefer people to buy their high-end cards instead.
 
I see it as a retaliatory act. Why start blocking right as W7 is launching, and not while people were doing it in XP all this time?

The support cost argument doesn't hold water, as the money to be gained from selling cards to ATI owners to use as PPUs would easily exceed whatever support cost there is. Besides, having heard them claim SLI support is a hardware limitation, would you really believe them this time? Especially when the combination was clearly working without issues until their blocking.

They already said as much themselves: "...some business reasons..."
 
Company politics suck~

It's also the reason why I and other game devs are convinced that a non-proprietary physics API along the lines of OpenGL would be very desirable.
 
Yet some people still say that nVidia offered PhysX to AMD and AMD turned it down. It's a good thing that AMD turned it down, because nVidia would surely cripple PhysX performance on AMD cards, and some PhysX fanboys would justify it by saying that the performance is bad because it isn't native CUDA hardware.
 
Yet some people still say that nVidia offered PhysX to AMD and AMD turned it down. It's a good thing that AMD turned it down, because nVidia would surely cripple PhysX performance on AMD cards, and some PhysX fanboys would justify it by saying that the performance is bad because it isn't native CUDA hardware.

Indeed. It's not like AMD could have implemented the PhysX API themselves on top of their own GPGPU framework or anything. Of course not :rolleyes:

And fanboys should be ignored no matter what side they claim to be on :)
 
nVidia seems to be racking up the mistakes... they are going to mess things up for themselves, for a while, if they keep it up. Like taunting/insulting Intel... lol
 
If I'm not mistaken, ATI has some sort of physics acceleration software of its own? Only not that widely spread or developed? Correct me if I'm wrong?
 
If I'm not mistaken, ATI has some sort of physics acceleration software of its own? Only not that widely spread or developed? Correct me if I'm wrong?

No, they don't "own" physics acceleration software. They have, however, shown Havok (which they licensed from Intel) running on their GPUs through OpenCL, which is translated in the drivers.

And it's not widespread or developed at all. There's absolutely nothing to show it off.
 
Indeed. It's not like AMD could have implemented the PhysX API themselves on top of their own GPGPU framework or anything. Of course not :rolleyes:

And fanboys should be ignored no matter what side they claim to be on :)

But that way, evil NVIDIA (that eats babies) would cripple AMD's performance while laughing in the distance!! :)
 
No, they don't "own" physics acceleration software. They have, however, shown Havok (which they licensed from Intel) running on their GPUs through OpenCL, which is translated in the drivers.

And it's not widespread or developed at all. There's absolutely nothing to show it off.

So what is Havok?
 
Indeed. It's not like AMD could have implemented the PhysX API themselves on top of their own GPGPU framework or anything. Of course not :rolleyes:

And fanboys should be ignored no matter what side they claim to be on :)

If I remember correctly, ATI was never offered PhysX through proper channels (only through the media). In addition, it was under the condition that ATI start using CUDA instead of Stream. Besides that, the "support" from Nvidia for the guy who tried to run PhysX on ATI cards must have been really bad, since ATI's SDK can be downloaded for free...

We can all also see how Nvidia is playing with PhysX, as in the OP here. You can't run PhysX on an Nvidia GPU if it's not your primary card. I've seen people defending this, arguing that "Nvidia cannot guarantee stability and therefore chose to block it." It would have been funny if AMD and Intel did the same: "We cannot guarantee stability for PhysX on our CPUs, so if we detect an Nvidia card in the system running PhysX, we will shut down our CPU..." :p
 
Even running the Nvidia card as the primary card doesn't work... if it ever did.

The newest drivers that work are the 186.18s with the 185.68 nvapi.dll and nvapi64.dll files.

Works fine even if the Nvidia card is NOT the primary card.
 
So what is Havok?

Havok is not owned by AMD and it isn't "physics acceleration" software. It's a physics API, much like PhysX, only without PhysX's ability to offload to hardware components that DO accelerate physics computations.
 
If I remember correctly, ATI was never offered PhysX through proper channels (only through the media). In addition, it was under the condition that ATI start using CUDA instead of Stream. Besides that, the "support" from Nvidia for the guy who tried to run PhysX on ATI cards must have been really bad, since ATI's SDK can be downloaded for free...

We can all also see how Nvidia is playing with PhysX, as in the OP here. You can't run PhysX on an Nvidia GPU if it's not your primary card. I've seen people defending this, arguing that "Nvidia cannot guarantee stability and therefore chose to block it." It would have been funny if AMD and Intel did the same: "We cannot guarantee stability for PhysX on our CPUs, so if we detect an Nvidia card in the system running PhysX, we will shut down our CPU..." :p

You didn't put much thought into that answer, did you?

So NVIDIA must somehow hack into ATI's drivers on their own in order to guarantee proper functionality with their GPU doing physics computations, and ATI doesn't really need to do anything to reap the benefits of GPU-accelerated physics... Does that make sense to you?... Of course it does :rolleyes:

As for the "proper channels" nonsense, PhysX is open for license to ANYONE that wants to use it. AMD didn't want to. That's their problem, not NVIDIA's. If you want GPU physics, go ask them how their efforts on that front are doing.

As for the last part of your "comment", what you suggest is kind of hilarious, because you are comparing apples to oranges (not surprising). x86 instructions != CUDA, and ANY physics API defaults to the CPU anyway. No one is being blocked from using PhysX. But given that GPU physics is supported by the underlying CUDA instruction set, anyone who wants to use it either licenses the tech or just has to be happy with the CPU physics that ALL physics APIs support.
 
You didn't put much thought into that answer, did you?

So NVIDIA must somehow hack into ATI's drivers on their own in order to guarantee proper functionality with their GPU doing physics computations, and ATI doesn't really need to do anything to reap the benefits of GPU-accelerated physics... Does that make sense to you?... Of course it does :rolleyes:

I might ask you the same question. There is no need to "hack" into ATI's drivers. The Stream SDK is available and Nvidia is free to use it like everyone else.

As for the "proper channels" nonsense, PhysX is open for license to ANYONE that wants to use it. AMD didn't want to. That's their problem, not NVIDIA's. If you want GPU physics, go ask them how their efforts on that front are doing.

Proper channels means asking AMD if they want to use PhysX, not going to the media saying they could offer it to AMD if they wish.

As for the last part of your "comment", what you suggest is kind of hilarious, because you are comparing apples to oranges (not surprising). x86 instructions != CUDA, and ANY physics API defaults to the CPU anyway. No one is being blocked from using PhysX. But given that GPU physics is supported by the underlying CUDA instruction set, anyone who wants to use it either licenses the tech or just has to be happy with the CPU physics that ALL physics APIs support.

No, I am comparing business decisions like the one Nvidia has made here, and some hardcore Nvidia fans actually defending them.
Business decisions like "Nvidia cannot guarantee the results if the Nvidia card isn't the main rendering card, so they block users who want to use their Nvidia card as a PPU if the drivers find another card in the system".
AMD and Intel cannot guarantee the results of Nvidia's PhysX on their CPUs, so for the same reasons they could block their CPUs from working if PhysX is found in the system.
The same idiotic arguments used by those defending the blocking of PhysX can be used by others than Nvidia.
 
But that way, evil NVIDIA (that eats babies) would cripple AMD's performance while laughing in the distance!! :)

No baby eating is required.
Nv would not have to do anything to overtly cripple AMD cards while still maintaining a lead. They control PhysX; they know what is coming next for it at every stage of the game. If you think that is not a huge built-in advantage, or that Nv would not exploit it, you're nuts. And if they did ignore those built-in advantages and did not tailor PhysX to their hardware or their hardware to PhysX, I would call them nuts.

On the other side you have AMD. They don't want to permanently play second fiddle, so they are stonewalling and looking for another solution. They had one in the works until Intel bought out their partner. But since they also make CPUs, and all of the current, widely used physics implementations have a CPU fallback, their choice in the matter becomes an easy one to make: sit back, wait for OpenCL, and hope the majority of physics middleware players jump on board. They would be stupid to help PhysX or CUDA become the standard. If you control the standard, you control the market.

This ain't politics, this ain't red or green, this is business. And they are both acting accordingly: AMD with its stonewalling, Nv with its disallowing PhysX to run on an Nv card if an AMD card is also present, and apparently also gimping CPU PhysX to run on only one core to better showcase their GPU physics.
 
Is there any reason why we can't simply just keep on using the old 185 drivers which allow us to have both cards working together?
 
I might ask you the same question. There is no need to "hack" into ATI's drivers. The Stream SDK is available and Nvidia is free to use it like everyone else.

And how does that translate into having PhysX translated to something ATI GPUs understand?
Are you seriously suggesting that NVIDIA should be doing ATI's job?? Really??...

Tamlin_WSGF said:
Proper channels means asking AMD if they want to use PhysX, not going to the media saying they could offer it to AMD if they wish.

LOL, why would NVIDIA need to go to them? It's open for license. AMD just needs to license it, if they want to. They don't seem to want to. Is that NVIDIA's fault?

Tamlin_WSGF said:
No, I am comparing business decisions like the one Nvidia has made here, and some hardcore Nvidia fans actually defending them.
Business decisions like "Nvidia cannot guarantee the results if the Nvidia card isn't the main rendering card, so they block users who want to use their Nvidia card as a PPU if the drivers find another card in the system".
AMD and Intel cannot guarantee the results of Nvidia's PhysX on their CPUs, so for the same reasons they could block their CPUs from working if PhysX is found in the system.
The same idiotic arguments used by those defending the blocking of PhysX can be used by others than Nvidia.

The difference with that "analogy" is that you don't lose physics in a PhysX-powered game when you can't enable GPU physics. PhysX defaults to the CPU when the requirements for GPU physics are not met, while with the "CPU disabling" you wouldn't even be able to play the game.
And PhysX can be used by anyone who licenses the tech. Developers do it, so why wouldn't AMD need to do it? Should they get it entirely for free? Why isn't AMD sharing their OpenCL efforts to get GPU-accelerated physics through Havok with NVIDIA?
That's right, because 1) NVIDIA doesn't have a Havok license from Intel and 2) why would AMD share tech they've developed to work with their Havok license with a competitor?
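
To illustrate the CPU fallback mentioned above: with the PhysX 2.x SDK a game typically asks for a hardware (GPU/PPU) scene and simply retries in software if that fails. This is a rough sketch from memory of that SDK, so treat the exact type and field names as approximate:

Code:
// Rough sketch: hardware-then-software scene creation with the PhysX 2.x SDK.
// Names are from memory of that SDK and may not match exactly.
#include <NxPhysics.h>

int main()
{
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk)
        return 1;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = NX_SIMULATION_HW;        // ask for GPU/PPU simulation

    NxScene* scene = sdk->createScene(sceneDesc);
    if (!scene)                                  // no acceleration available (or blocked)
    {
        sceneDesc.simType = NX_SIMULATION_SW;    // fall back to the CPU
        scene = sdk->createScene(sceneDesc);
    }

    // ... create actors and step the scene with scene->simulate() as usual ...

    if (scene)
        sdk->releaseScene(*scene);
    NxReleasePhysicsSDK(sdk);
    return 0;
}

Either way the game still gets its physics; only the accelerated path is taken away.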
 
No baby eating is required.
Nv would not have to do anything to overtly cripple AMD cards while still maintaining a lead. They control PhysX; they know what is coming next for it at every stage of the game. If you think that is not a huge built-in advantage, or that Nv would not exploit it, you're nuts. And if they did ignore those built-in advantages and did not tailor PhysX to their hardware or their hardware to PhysX, I would call them nuts.

On the other side you have AMD. They don't want to permanently play second fiddle, so they are stonewalling and looking for another solution. They had one in the works until Intel bought out their partner. But since they also make CPUs, and all of the current, widely used physics implementations have a CPU fallback, their choice in the matter becomes an easy one to make: sit back, wait for OpenCL, and hope the majority of physics middleware players jump on board. They would be stupid to help PhysX or CUDA become the standard. If you control the standard, you control the market.

This ain't politics, this ain't red or green, this is business. And they are both acting accordingly: AMD with its stonewalling, Nv with its disallowing PhysX to run on an Nv card if an AMD card is also present, and apparently also gimping CPU PhysX to run on only one core to better showcase their GPU physics.

And I don't disagree with most of what you said. However, I do think that AMD made a poor choice of partner with all this, because Intel has no desire to let AMD use Havok and have their GPUs trounce Intel CPUs at physics calculations. The fact that nothing, I repeat, NOTHING has come out, except a video or two, regarding OpenCL + Havok for AMD GPU physics, is a clear indication of this.
Not to mention that Intel will eventually have their own discrete GPU, which makes things even worse for AMD on this front, even more so if their CPUs continue to lag behind Intel's.

You also forget to mention that NVIDIA supports OpenCL as much as any other company. In fact, they chair the group that is behind its development. They will have PhysX ported to OpenCL eventually too, but that's going to take a while, since OpenCL is still fairly new, and with the introduction of DX Compute we'll really need to see what happens and what gets adopted the most by developers.

Thus far, NVIDIA has everything covered. They own a physics API, support OpenCL and their own CUDA (of course), and they have even already released tools to work with DX Compute (and they are the only ones at this point in time).
 
You also forget to mention that NVIDIA supports OpenCL as much as any other company. In fact, they chair the group that is behind its development. They will have PhysX ported to OpenCL eventually too, but that's going to take a while, since OpenCL is still fairly new, and with the introduction of DX Compute we'll really need to see what happens and what gets adopted the most by developers.

DX is of course limited to Windows. No console, not even the X360, supports it. DX Compute will therefore also be limited to Windows, whereas OpenCL should run on any platform which has a semi-capable GPU. That would include the current consoles, plus Windows, Linux and Mac computers; the last of those already supports it. As consoles form the biggest non-casual game market, I don't see DX Compute as anything more than a funky gimmick few will ever really use.
 
DX is of course limited to Windows. No console, not even the X360, supports it. DX Compute will therefore also be limited to Windows, whereas OpenCL should run on any platform which has a semi-capable GPU. That would include the current consoles, plus Windows, Linux and Mac computers; the last of those already supports it. As consoles form the biggest non-casual game market, I don't see DX Compute as anything more than a funky gimmick few will ever really use.

To be used in games, or simply to leverage GPU power in other calculations, under a Windows platform?

It's not quite clear to me what will win, though at this point it's obviously a toss-up. Ultimately the tools supplied for development will be one of the main keys, much like the tools for DirectX game development, which made DirectX the preferred API to develop games on. Not to mention the fact that Microsoft has enough power (or money, however you want to look at it) to push Direct Compute enough to be used in every game that uses DirectX. The fact that Direct Compute isn't "DX11 hardware only" surely indicates that Microsoft is promoting it to be used in a much wider range of products.
 
To be used in games, or simply to leverage GPU power in other calculations, under a Windows platform?

It's not quite clear to me what will win, though at this point it's obviously a toss-up. Ultimately the tools supplied for development will be one of the main keys, much like the tools for DirectX game development, which made DirectX the preferred API to develop games on. Not to mention the fact that Microsoft has enough power (or money, however you want to look at it) to push Direct Compute enough to be used in every game that uses DirectX. The fact that Direct Compute isn't "DX11 hardware only" surely indicates that Microsoft is promoting it to be used in a much wider range of products.

Well, CUDA and to some extent Stream (Brook+) have cornered the GPGPU market, with OpenCL waiting to get its share of the pie. As GPGPU work (HPC) is usually done on non-Windows systems (a client of mine uses Linux for everything, and many universities/labs do too), that's one place DC won't ever get into.

Also, I don't believe that DX has really 'won'. The API wars of the 90s are still far from over. Things just never got to a climax because GPU manufacturers ended up supporting both APIs. DX was crap until version 9, so it didn't 'win' due to superiority. As you said, it may have been the SDK and having everything in one package (until MSFT stripped DX down to basically just graphics, that is).

I guess that DC will end up in the same role as GLSL with OpenGL: creating custom shaders. Welcome to the future, DX :p
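
For what it's worth, OpenCL's vendor neutrality shows up right at the host API level: the same few calls enumerate whatever OpenCL-capable GPUs are installed, whether the platform comes from NVIDIA, AMD or anyone else. A minimal C++ sketch (it only assumes an OpenCL SDK with the standard CL/cl.h header is installed):

Code:
// Minimal OpenCL host program: list every platform and count its GPU devices.
// The same code runs unchanged against NVIDIA, AMD or any other OpenCL implementation.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main()
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, NULL, &numPlatforms);

    std::vector<cl_platform_id> platforms(numPlatforms);
    if (numPlatforms > 0)
        clGetPlatformIDs(numPlatforms, &platforms[0], NULL);

    for (cl_uint i = 0; i < numPlatforms; ++i)
    {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        cl_uint numGpus = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 0, NULL, &numGpus);

        std::printf("Platform %u: %s, GPU devices: %u\n", i, name, numGpus);
    }
    return 0;
}

The actual kernels are compiled at runtime by whichever driver is present, which is exactly why it can target consoles, Linux and Macs as well as Windows.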
 
Is there any reason why we can't simply just keep on using the old 185 drivers which allow us to have both cards working together?


I'm thinking of buying a 9600 GSO for PhysX; can anyone tell me if I'll be able to use it in Windows 7 if I just stick to the old drivers?
 
Is there any reason why we can't simply just keep on using the old 185 drivers which allow us to have both cards working together?

Batman requires the newer drivers. There may be a way to hack around it, but you will have to look around for it if there is one.
 
Batman requires the newer drivers. There may be a way to hack around it, but you will have to look around for it if there is one.

Um... by new drivers do you mean the 09.09.0814 drivers? Are those the ones that disable PhysX if an ATI card is present? Or are you referring to the PhysX libraries inside the Batman: AA game folder?
 
Batman requires the newer drivers. There may be a way to hack around it, but you will have to look around for it if there is one.
AFAIK, the graphics drivers are responsible for the ATI blocking, not the PhysX System Software. You can install older GPU drivers and then the newest PhysX drivers separately.
 
AFAIK, the graphics drivers are responsible for the ATI blocking, not the PhysX System Software. You can install older GPU drivers and then the newest PhysX drivers separately.

I have the older graphics drivers, and the new PhysX libraries seem to block my ability to use my 9800GT.
 
You didn't put much thought into that answer, did you?

So NVIDIA must somehow hack into ATI's drivers on their own in order to guarantee proper functionality with their GPU doing physics computations, and ATI doesn't really need to do anything to reap the benefits of GPU-accelerated physics... Does that make sense to you?... Of course it does :rolleyes:

As for the "proper channels" nonsense, PhysX is open for license to ANYONE that wants to use it. AMD didn't want to. That's their problem, not NVIDIA's. If you want GPU physics, go ask them how their efforts on that front are doing.

As for the last part of your "comment", what you suggest is kind of hilarious, because you are comparing apples to oranges (not surprising). x86 instructions != CUDA, and ANY physics API defaults to the CPU anyway. No one is being blocked from using PhysX. But given that GPU physics is supported by the underlying CUDA instruction set, anyone who wants to use it either licenses the tech or just has to be happy with the CPU physics that ALL physics APIs support.

You didn't put much thought into that response now, did you?

It seems anytime someone is remotely critical of PhysX you get the nVIDIA response squad (you and Elledan) to the rescue.

PhysX and CUDA are libraries separate from the graphics API (DX or OpenGL). There is no need to hack into AMD's drivers. PhysX works based on the PhysX SDK and the CUDA driver implementation. The functionality is entirely dependent on the dedicated PhysX card and not the graphics card.

IT WORKED. I've USED IT. There was NO reasonable reason (double use of the word "reason" on purpose) or justifiable line of reasoning (human intellectual attributes) to disable it other than for pure marketing purposes.
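
For what it's worth, the CUDA layer that GPU PhysX sits on enumerates NVIDIA GPUs on its own, independently (as far as I know) of which adapter Windows treats as the primary display. A small C++ sketch against the CUDA runtime API (built with nvcc) shows the idea:

Code:
// Enumerate CUDA-capable devices via the CUDA runtime API.
// The list does not depend on which card is the primary display adapter.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
    {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }

    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

So the dedicated card is perfectly visible to CUDA either way; the block sits in the driver/PhysX layer above it.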
 
Um... by new drivers do you mean the 09.09.0814 drivers? Are those the ones that disable PhysX if an ATI card is present? Or are you referring to the PhysX libraries inside the Batman: AA game folder?


My bad, the new libraries.
 
And I don't disagree with most of what you said. However, I do think that AMD made a poor choice of partner with all this, because Intel has no desire to let AMD use Havok and have their GPUs trounce Intel CPUs at physics calculations. The fact that nothing, I repeat, NOTHING has come out, except a video or two, regarding OpenCL + Havok for AMD GPU physics, is a clear indication of this.

The one thing you need to take into consideration is that they have a common enemy in NV, with PhysX already entrenched. Intel has a vested interest in taking PhysX off the #1 spot with ATI's help for the immediate future. Not to mention that on the discrete GPU front NV is the 800-pound gorilla in the room. Only after that should Intel worry about how to take on the other guy. And since Intel is the one in control of Havok, it won't be that hard.

Not having a physics API to call their own really puts ATI in a bad spot; perhaps they should buy Bullet?
 
This seems pretty bogus. I've run a PPU with an ATI card since 2006; how can they say there are issues? Clearly there are no issues with PhysX and an ATI display adapter.
 
This seems pretty bogus. I've run a PPU with an ATI card since 2006; how can they say there are issues? Clearly there are no issues with PhysX and an ATI display adapter.

PPU != GPU

And PPU and GPU PhysX don't run 100% alike, trust me.
Even though the PCI PPU and PCI-E PPU run just the same :p

AMD rejected PhysX...deal with it.
 
PPU != GPU

And PPU and GPU PhysX don't run 100% alike, trust me.
Even though the PCI PPU and PCI-E PPU run just the same :p

AMD rejected PhysX...deal with it.

The hardware certainly isn't. We never got a good look at what's inside the PPU.

If NV wanted to push PhysX, they could very well have released a stripped-down version of the GPU driver, similar to how Tesla is run. They could even make Windows not realise that it's a video card, since Windows can't really figure out what a device is for sure. As long as there is a suitable driver for it, Windows will identify a device as anything the driver claims.
 