Nvidia has NOT supported Vulkan from day one. The reason is "simple": Vulkan is AMD's "baby", and Nvidia can't get the same "sweetheart" tricks/hacks in it that MSFT has allowed them in DX for many, many years.

They were ADAMANT against Vulkan in any way, shape, or form, especially the "multi-GPU" feature that could let a Radeon work with a GeForce (or lower-end cards link up with higher-end ones), among other things.

I remember reading something along the lines of Nvidia not wanting to support Vulkan because they feared AMD had leveraged something in the code that could hurt Nvidia products (even though Nvidia pulls this kind of thing all the time, and has for years).

Maybe Khronos told Nvidia something like, "Well, you may not want to use Vulkan, but we're done with OpenGL as of X date, so you might as well jump on board."

Anyway... if Nvidia will "properly" support Vulkan and not screw anyone else in the process, sweet deal. It might eventually mean more and more support for Linux gaming PCs in a more user-friendly way, without the crapware and stunts MSFT has been pulling with Windows the last couple of years.

The more the merrier when they "play nice", in my books ^.^

You have invented a very weird, teenage-fanboy alternate reality, as if you were somewhere behind the scenes: "maybe this happened and then maybe they said that". It's utter batshit to anyone who has actually followed Vulkan development for years and seen NV show up at every GDC to present the latest updates and progress in their support for it. There was never any animosity.

Predictable fanboy ankle-biting about "Ngreedia" is cool, but there's really no room for alternative facts when the real ones are easy enough to verify.
 
Ha!!! PhysX? That was Ageia's proprietary tech; NV just bought them. If memory serves, they also offered some portion of it to ATI, but ATI turned it down. (Admittedly, my memory's a bit hazy on that.)

GameWorks is debatable. It's a set of software tools; devs can take them or leave them. They don't prevent anyone from making a game.

G-Sync is decent tech, but I agree on that one. Some middle ground should be chosen there, but... ...that's not how it works these days.

Yes. Nvidia showed several times that they had no issue with AMD using PhysX. Third-party developers actually got GPU PhysX running natively on some AMD GPUs, and Nvidia gave them the green light (meaning they wouldn't have to worry about being chased by Nvidia in court), so they were just waiting for AMD's support. And AMD outright turned the proposal down; if I remember right, AMD said it made no sense for them to support PhysX when it had no marketing benefit for them.
 
They offered to license it to ATI for an undisclosed amount. Intel countered the offer with Havok at something far lower, so they went that route.

While Nvidia would have asked some kind of licensing fee for PhysX, I never heard about Intel offering AMD anything to counter Nvidia's PhysX offer. GPU-accelerated physics all started with HavokFX, which initially was supposed to run on any hardware. That plan was ruined when Intel acquired Havok for themselves. Nvidia knew HavokFX wasn't going anywhere, because HavokFX would lessen the CPU's importance in game physics processing, and Intel didn't want that to happen. So Nvidia acquired Ageia to push GPU-accelerated physics their own way. Since Intel acquired Havok precisely to keep GPU-accelerated physics from gaining traction, I doubt Intel ever offered AMD/ATI Havok as a platform to push it. If Intel really had, AMD would not have introduced GPU-accelerated physics with Bullet in 2009 (which is OpenCL-based). And in the end that initiative didn't go anywhere either, despite being fully open source.
 
Don't quote me on this, but I think Nvidia offered AMD the PhysX license for free, but they passed. BTW, AMD was supposed to have GPU acceleration for Havok before Nvidia even got Ageia, then Bullet, but that never happened.

I don't think it was free, but from my understanding AMD didn't even want to look at PhysX just because it was Nvidia tech. Though this kind of behaviour between the two is no longer as bad as it was back then. And yes, you're correct that AMD (or, more accurately, ATI?) was supposed to have GPU-accelerated physics even before Nvidia acquired Ageia: it was HavokFX (which also worked on Nvidia GPUs).
 
Nvidia has NOT supported Vulkan from day one. The reason is "simple": Vulkan is AMD's "baby", and Nvidia can't get the same "sweetheart" tricks/hacks in it that MSFT has allowed them in DX for many, many years.

They were ADAMANT against Vulkan in any way, shape, or form, especially the "multi-GPU" feature that could let a Radeon work with a GeForce (or lower-end cards link up with higher-end ones), among other things.

I remember reading something along the lines of Nvidia not wanting to support Vulkan because they feared AMD had leveraged something in the code that could hurt Nvidia products (even though Nvidia pulls this kind of thing all the time, and has for years).

Maybe Khronos told Nvidia something like, "Well, you may not want to use Vulkan, but we're done with OpenGL as of X date, so you might as well jump on board."

Anyway... if Nvidia will "properly" support Vulkan and not screw anyone else in the process, sweet deal. It might eventually mean more and more support for Linux gaming PCs in a more user-friendly way, without the crapware and stunts MSFT has been pulling with Windows the last couple of years.

The more the merrier when they "play nice", in my books ^.^

lol, where did you even read all this? Is there an official statement from Nvidia, or did you just hear some internet speculation and take it as truth? Nvidia were against the mixed multi-GPU that could happen with Vulkan? Please link me any article on this. Even if Nvidia didn't say it directly, at least something like "mixing GPUs in a multi-GPU setup is not recommended". I would love to see it. Mixed multi-GPU is not a new thing; it was tried long before Vulkan ever existed. Here is one of the actual products and the results:

https://www.anandtech.com/show/2910/4

If Nvidia really were against this mixed multi-GPU, they would have released an "updated" driver back then to prevent it from happening.

And please don't joke about the Khronos Group pressuring Nvidia to support Vulkan or else being done with OpenGL. Who do you think the current Khronos Group president is?
 
FWIW, they (Khronos) are still working on OpenGL, but I think they're shifting back toward a more professional-use focus. Of course, they're implementing features comparable to Vulkan's where possible/feasible.

The Khronos Group will never drop OpenGL completely in favor of Vulkan. The thing about OpenGL is that it's used a lot by professional graphics software. In fact, Vulkan was separated from OpenGL because of the level of control those professional users want over the OpenGL spec. The professionals are free to adopt Vulkan, but one thing that was guaranteed is that the new API's development would not be bogged down by professional needs.
 
Not really; nobody used OpenGL, so AMD didn't have the budget to put any real resources into it.

For games, maybe, but a lot of professional software uses it. They say it is one of the reasons Nvidia dominated professional visualization. The problem on the ATI (then AMD) side was OpenGL driver development. It was quite a mess to begin with, and when ATI was acquired by AMD, some of the original driver team left the company. The new team handling development didn't want to rebuild everything from scratch, so they just patched things here and there, trying not to make it more complicated than it already was. That's very different from Nvidia, which had a solid development team from the very start. Hence, with Vulkan, AMD hoped they could level the ground with Nvidia, or even do better, and they've been quite successful at that. But Nvidia's software development is not to be underestimated; software has always been their strong suit. Vulkan may not be that popular on Windows, but the adoption rate is really good on Linux: of the new games being ported to Linux, most either support Vulkan alongside OpenGL or are Vulkan-exclusive. And so far Nvidia's Vulkan performance on Linux is very good versus AMD.
 
Yes. Nvidia showed several times that they had no issue with AMD using PhysX. Third-party developers actually got GPU PhysX running natively on some AMD GPUs, and Nvidia gave them the green light (meaning they wouldn't have to worry about being chased by Nvidia in court), so they were just waiting for AMD's support. And AMD outright turned the proposal down; if I remember right, AMD said it made no sense for them to support PhysX when it had no marketing benefit for them.

I'm not sure where you got all THAT from. For years people clamored for Nvidia to allow PhysX to run on a secondary Nvidia card with an ATI card doing the primary rendering. After enthusiasts proved it worked perfectly with a hack to the drivers, Nvidia's response was to not only update their drivers to intentionally break this compatibility, but also to deliberately block PhysX from working in any system that even had an AMD card installed! Nvidia could easily have enabled support alongside AMD cards but chose instead to take their toys and go home.

https://hardforum.com/threads/how-to-activate-hardware-physx-to-play-with-an-ati-video-card.1456964
 
Weird to see nvidia embracing something that was based on an api that was originally amd's.

They are adding hardware-specific extensions to Vulkan, which is not good at all. Do you think these extensions are going to work with AMD or even older Nvidia cards? Once devs use them, the games will run worse on AMD cards.
 
They are adding hardware-specific extensions to Vulkan, which is not good at all. Do you think these extensions are going to work with AMD or even older Nvidia cards? Once devs use them, the games will run worse on AMD cards.

Nothing is stopping AMD or anyone from supporting ray tracing.
 
I don't think you understand what OpenGL is. It's not a benchmarking company. Note the Khronos Group's stance on extensions. https://www.khronos.org/opengl/wiki/OpenGL_Extension

I hope that helps.

I know that it is still a group, but I'm damned sure I've read that ATI/AMD had problems with control and with what did and didn't get approved... It hasn't been in the tech press for years, but I'm sure it was a thing back in the ATI days...

Hopefully someone can confirm or deny this...
 
And since then they have given us such open, vendor- and hardware-agnostic APIs as G-Sync, PhysX, GameWorks...



oh, wait
The only API you listed is PhysX, which, by the way, is now the most ubiquitous hardware-accelerated physics engine in games. GameWorks is middleware that doesn't use any feature not already built into DirectX 11 and DirectX 12.
Not really; nobody used OpenGL, so AMD didn't have the budget to put any real resources into it.
That was Microsoft's doing, by the way. Microsoft was developing DirectX and threatened to not support OpenGL in Windows if it was marketed to software developers for gaming.
 
I'm not sure where you got all THAT from. For years people clamored for Nvidia to allow PhysX to run on a secondary Nvidia card with an ATI card doing the primary rendering. After enthusiasts proved it worked perfectly with a hack to the drivers, Nvidia's response was to not only update their drivers to intentionally break this compatibility, but also to deliberately block PhysX from working in any system that even had an AMD card installed! Nvidia could easily have enabled support alongside AMD cards but chose instead to take their toys and go home.

https://hardforum.com/threads/how-to-activate-hardware-physx-to-play-with-an-ati-video-card.1456964

If you're saying this, then you most likely don't know what really happened back then.

https://www.techpowerup.com/64787/r...dia-offered-to-help-us-expected-more-from-amd

It was AMD that refused to get involved in anything PhysX-related. As for blocking hybrid PhysX, I think I can understand Nvidia's perspective at the time. PhysX is Nvidia tech, so they are fully responsible for it working as intended, even when the primary GPU in use isn't Nvidia-based. What if the Nvidia and AMD drivers had conflicts and GPU PhysX couldn't run properly? Nvidia would be the only one having to figure out how to make it work, even on systems using an AMD GPU as the primary renderer. If Nvidia chose to ignore this, they'd be called out; worse, some angry gamer might take the issue to court over Nvidia's refusal to properly optimize or fix a "hybrid PhysX system". So before things escalated to that level of complexity, they simply blocked hybrid PhysX. You still want to use a hybrid system? Proceed on your own with the hacked driver. If Nvidia really wanted to stop hybrid systems at all costs, they probably would have taken the guy/team behind the hack to court, but that never happened. It's like saying, "You can do that if you want to, but don't come to us demanding a fix when there's a problem."
 
They are adding hardware-specific extensions to Vulkan, which is not good at all. Do you think these extensions are going to work with AMD or even older Nvidia cards? Once devs use them, the games will run worse on AMD cards.

Incorrect. If you are using an AMD card, it just won't use the extension. If AMD puts in their own hardware accelerator, they can expose their own RT path, or they can decide to duplicate what Nvidia is doing.

Nvidia used to dump a ton of development into OGL, and for content creation that fits with how Nvidia has mainly marketed their RT technology.
 
They are adding hardware-specific extensions to Vulkan, which is not good at all. Do you think these extensions are going to work with AMD or even older Nvidia cards? Once devs use them, the games will run worse on AMD cards.

But that is simply the nature of Vulkan. Why do you think Vulkan does so well as a low-level API? It's thanks to those extensions. Even id Tech highlights this advantage over DirectX.
 
But that is simply the nature of Vulkan. Why do you think Vulkan does so well as a low-level API? It's thanks to those extensions. Even id Tech highlights this advantage over DirectX.

The same extension capability exists in DX12; that's how ray tracing is implemented there, via DXR. All of the games slated to launch with ray-tracing support in the near future are using DXR, as it has been further along than VKRay.
 
They are adding hardware-specific extensions to Vulkan, which is not good at all. Do you think these extensions are going to work with AMD or even older Nvidia cards? Once devs use them, the games will run worse on AMD cards.
Adding extensions goes waaaaaay back to the days of OpenGL; they have been added since, like, forever. I think the first ones implemented date back to the original Quake.
 
The same extension capability exists in DX12; that's how ray tracing is implemented there, via DXR. All of the games slated to launch with ray-tracing support in the near future are using DXR, as it has been further along than VKRay.

Yeah, the real benefit behind Vulkan is the open development: Nvidia or AMD can get in and place their hardware extensions more easily than in DX12. This was another advantage of OGL; John Carmack essentially swooned over NV's dedication to OGL, mostly highlighting their performance with id Tech 4.
 
They are adding hardware specific extensions to Vulkan which is not good at all. Do you think this extension is going to work with AMD or even older Nvidia cards? Once devs use these extensions the games will run worse on AMD cards.

Also, Vulkan is littered with hardware-specific extensions NOT related to Nvidia.
 
Yeah, the real benefit behind Vulkan is the open development: Nvidia or AMD can get in and place their hardware extensions more easily than in DX12. This was another advantage of OGL; John Carmack essentially swooned over NV's dedication to OGL, mostly highlighting their performance with id Tech 4.

There's room for both open and proprietary development. Sometimes proprietary development can move faster and take more risk. We're still waiting for ray-tracing support in games, and we'd be waiting even longer for the Vulkan path to reach production readiness. And how well ray tracing pans out is still a huge unknown; it could be a flop.
 
There's room for both open and proprietary development. Sometimes proprietary development can move faster and take more risk. We're still waiting for ray-tracing support in games, and we'd be waiting even longer for the Vulkan path to reach production readiness. And how well ray tracing pans out is still a huge unknown; it could be a flop.
I'm not arguing for either, to be honest; they both have their place. If they didn't, one of them wouldn't exist.

As for RT, I'm hopeful. I don't really care about the RTX release until we get actual in-game ray tracing enabled; then we can see how it looks/performs. I've said for a while now that I'll upgrade for RT, not pure performance.
 
Fighting it? What are you even talking about? You're not keyed in to the history here. Nvidia is not Microsoft; they aren't interested in pushing an API like DX12 to force OS lock-in. They don't care which API uses their cards.

Once again, Nvidia has been all over Vulkan from DAY ONE. Even before day one while it was still in development they had engineers dedicated to it.

My comment was in response to another post that suggested that AMD is mainly responsible for the success of Vulkan.
 
Not really; nobody used OpenGL, so AMD didn't have the budget to put any real resources into it.


I'm not sure how long you've been around the PC scene but AMD (*ATI*) was as big as, if not bigger than, NV way back when OpenGL gaming was relevant. In the 90s NV was the underdog and ATI was the one we expected to fight 3dfx. ATI had the budget but they didn't have the leadership.

Back then just about every FPS used id's OpenGL Quake engines, and NV wasn't great there. From what I remember, ATI had working OpenGL drivers while NV's were completely borked. NV heaped tons of development work into their OpenGL drivers, and for those of us who had jumped on the Riva128 it felt like ages before we had real, usable drivers. The original drivers wouldn't even load textures; instead you'd get all white on the low-poly models of the time. That was fixed, and they worked on stability. That was fixed, and performance closed in on 3dfx. Then suddenly the drivers were there and people started moving from Voodoo cards to the TNT, and then, BOOM, Quake 3 released and droves of people moved to the new GeForce 256 (the card that basically created consumer hardware T&L and the GPU moniker). 3dfx withered away, and while ATI had released some sweet cards, they never really put the time into their OpenGL driver that NV did, and by that point DX was really taking off anyway.

I may be missing or mis-remembering some aspects but this was almost 20 years ago. It was the golden age of PC hardware and software IMO. It was where NV earned their driver development powerhouse reputation and ATI earned the bad drivers and second-tier card reputation that still somewhat follows them today.
 
I'm not sure how long you've been around the PC scene but AMD (*ATI*) was as big as, if not bigger than, NV way back when OpenGL gaming was relevant. In the 90s NV was the underdog and ATI was the one we expected to fight 3dfx. ATI had the budget but they didn't have the leadership.

Back then just about every FPS used id's OpenGL Quake engines, and NV wasn't great there. From what I remember, ATI had working OpenGL drivers while NV's were completely borked. NV heaped tons of development work into their OpenGL drivers, and for those of us who had jumped on the Riva128 it felt like ages before we had real, usable drivers. The original drivers wouldn't even load textures; instead you'd get all white on the low-poly models of the time. That was fixed, and they worked on stability. That was fixed, and performance closed in on 3dfx. Then suddenly the drivers were there and people started moving from Voodoo cards to the TNT, and then, BOOM, Quake 3 released and droves of people moved to the new GeForce 256 (the card that basically created consumer hardware T&L and the GPU moniker). 3dfx withered away, and while ATI had released some sweet cards, they never really put the time into their OpenGL driver that NV did, and by that point DX was really taking off anyway.

I may be missing or mis-remembering some aspects but this was almost 20 years ago. It was the golden age of PC hardware and software IMO. It was where NV earned their driver development powerhouse reputation and ATI earned the bad drivers and second-tier card reputation that still somewhat follows them today.

There was also a lot of fighting with developers to NOT promote Glide over standard DX/OGL extensions. When Nvidia released the original TNT, you had the "dream" combo with a Voodoo2 if you wanted to enable Glide or enable other renderers. To me Glide looked like crap once DX caught up: the lighting was dulled, textures blurred, shadows fuzzy, but if you enabled DX8 (omg!) everything was sharp, the lighting didn't look like some '80s TV drama, and the shadows were defined again.
 
Look up Nvidia Doom3 performance versus ATI. Those initial benchmarks were devastating.

This is why I swapped from my brand new X800 card to a 6800GT (followed by a second for SLI). id games are a staple on my machines since Wolfenstein 3D so this wasn't a small thing to me.
 
I'm not sure how long you've been around the PC scene but AMD (*ATI*) was as big as, if not bigger than, NV way back when OpenGL gaming was relevant. In the 90s NV was the underdog and ATI was the one we expected to fight 3dfx. ATI had the budget but they didn't have the leadership.

Back then just about every FPS used id's OpenGL Quake engines, and NV wasn't great there. From what I remember, ATI had working OpenGL drivers while NV's were completely borked. NV heaped tons of development work into their OpenGL drivers, and for those of us who had jumped on the Riva128 it felt like ages before we had real, usable drivers. The original drivers wouldn't even load textures; instead you'd get all white on the low-poly models of the time. That was fixed, and they worked on stability. That was fixed, and performance closed in on 3dfx. Then suddenly the drivers were there and people started moving from Voodoo cards to the TNT, and then, BOOM, Quake 3 released and droves of people moved to the new GeForce 256 (the card that basically created consumer hardware T&L and the GPU moniker). 3dfx withered away, and while ATI had released some sweet cards, they never really put the time into their OpenGL driver that NV did, and by that point DX was really taking off anyway.

I may be missing or mis-remembering some aspects but this was almost 20 years ago. It was the golden age of PC hardware and software IMO. It was where NV earned their driver development powerhouse reputation and ATI earned the bad drivers and second-tier card reputation that still somewhat follows them today.
In the '90s very few games used OpenGL; it was primarily used for workstations. Here is a quote from John Carmack regarding GLQuake:

Theoretically, glquake will run on any compliant OpenGL that supports the texture objects extensions, but unless it is very powerfull hardware that accelerates everything needed, the game play will not be acceptable. If it has to go through any software emulation paths, the performance will likely by well under one frame per second.

At this time (march ’97), the only standard opengl hardware that can play glquake reasonably is an intergraph realizm, which is a VERY expensive card. 3dlabs has been improving their performance significantly, but with the available drivers it still isn’t good enough to play. Some of the current 3dlabs drivers for glint and permedia boards can also crash NT when exiting from a full screen run, so I don’t recommend running glquake on 3dlabs hardware.
3dfx has provided an opengl32.dll that implements everything glquake needs, but it is not a full opengl implementation. Other opengl applications are very unlikely to work with it, so consider it basically a “glquake driver”.

DirectX 2 launched with Windows 95, and yes, at that point there were very few games available, but the ones that were available were pretty solid, the flagship title being MechWarrior 2. By the time DX3 launched it was considerably better than OpenGL's offerings, and Microsoft did all the hardware validation, testing cards and making entire test suites available so hardware manufacturers could test and validate their DirectX implementations. By the time DX9 rolled out, shortly after the release of Windows XP, the writing was on the wall.

You are forgetting that in the '90s and early 2000s games weren't developed only in DirectX or OpenGL: there was also 3dfx's Glide, S3's S3D, Matrox confusingly named theirs MSI, and ATI had CIF. That whole era was a crapshoot of graphics APIs, and more often than not games were software-rendered.
 
Ha!!! PhysX? That was Ageia's proprietary tech; NV just bought them. If memory serves, they also offered some portion of it to ATI, but ATI turned it down. (Admittedly, my memory's a bit hazy on that.)

They offered to license it to ATI for an undisclosed amount. Intel countered the offer with Havok at something far lower, so they went that route.

Don't quote me on this, but I think Nvidia offered AMD the PhysX license for free, but they passed. BTW, AMD was supposed to have GPU acceleration for Havok before Nvidia even got Ageia, then Bullet, but that never happened.

Yes. Nvidia showed several times that they had no issue with AMD using PhysX. Third-party developers actually got GPU PhysX running natively on some AMD GPUs, and Nvidia gave them the green light (meaning they wouldn't have to worry about being chased by Nvidia in court), so they were just waiting for AMD's support. And AMD outright turned the proposal down; if I remember right, AMD said it made no sense for them to support PhysX when it had no marketing benefit for them.

Nvidia didn't offer PhysX to AMD; it never happened. Nvidia said that if AMD ever contacted them about using PhysX on AMD GPUs, they would love to have a conversation. There would have been a license fee; it would not have been free. But it was just words for the press. They knew full well that AMD would never contact them about PhysX, because AMD would also have needed to support CUDA, which would have gone completely against OpenCL, something they were actively involved in, and against Havok, which they had been working with for a couple of years.

So don't make it out like Nvidia were offering any sort of olive branch or anything.

And Nvidia blocked access to PhysX when an AMD card was in the computer after a couple of years. Oh, they gave a BS excuse that it was too much development expense, that they were implementing quality control, etc. Then, a few months after they had disabled it, they released a beta driver where the ability to use PhysX with an AMD card in your system had been accidentally re-enabled. It got pulled pretty quickly, but it showed how they had actually lied about it. The real reason they disabled it was that AMD had the 5xxx cards out and Nvidia had nothing. It's as simple as that.

https://www.tweaktown.com/articles/3344/ati_radeon_hd_5870_5970_with_nvidia_physx/index.html

So to sum up: no, Nvidia never offered PhysX to AMD. It was nothing but a token gesture for the press at the time, safe in the knowledge that AMD would never contact them.
 
If you're saying this, then you most likely don't know what really happened back then.

https://www.techpowerup.com/64787/r...dia-offered-to-help-us-expected-more-from-amd

It was AMD that refused to get involved in anything PhysX-related. As for blocking hybrid PhysX, I think I can understand Nvidia's perspective at the time. PhysX is Nvidia tech, so they are fully responsible for it working as intended, even when the primary GPU in use isn't Nvidia-based. What if the Nvidia and AMD drivers had conflicts and GPU PhysX couldn't run properly? Nvidia would be the only one having to figure out how to make it work, even on systems using an AMD GPU as the primary renderer. If Nvidia chose to ignore this, they'd be called out; worse, some angry gamer might take the issue to court over Nvidia's refusal to properly optimize or fix a "hybrid PhysX system". So before things escalated to that level of complexity, they simply blocked hybrid PhysX. You still want to use a hybrid system? Proceed on your own with the hacked driver. If Nvidia really wanted to stop hybrid systems at all costs, they probably would have taken the guy/team behind the hack to court, but that never happened. It's like saying, "You can do that if you want to, but don't come to us demanding a fix when there's a problem."

Marketing rubbish. They stopped supporting PhysX when an AMD card was in the system to keep people buying Nvidia cards, because they were late with Fermi and had nothing to counter the 5xxx cards.

And come on, Nvidia never offered AMD PhysX. The noise they made for the press was the same noise AMD made when they were developing Mantle: "Oh, Nvidia can use it, all they have to do is contact us." Blah, blah. There was no way Nvidia was going to support Mantle when it was totally owned by AMD, just like there was no way AMD was ever going to support PhysX when they would have had to use CUDA too. Two techs completely owned by Nvidia.
 
Nvidia didn't offer PhysX to AMD; it never happened. Nvidia said that if AMD ever contacted them about using PhysX on AMD GPUs, they would love to have a conversation. There would have been a license fee; it would not have been free. But it was just words for the press. They knew full well that AMD would never contact them about PhysX, because AMD would also have needed to support CUDA, which would have gone completely against OpenCL, something they were actively involved in, and against Havok, which they had been working with for a couple of years.

When all this was happening, OpenCL didn't exist yet (or had barely come out). Back in 2008 a group of developers successfully ran GPU PhysX natively on an AMD GPU. Feel free to speculate, but don't take it as definite truth: it might have been just words for the press, but the intention could also have been real. What Nvidia has let other people do with CUDA over the years is proof enough that if AMD had really wanted to license PhysX, Nvidia would have allowed it. If anything, AMD should have known that going the PhysX route was much better than going with Havok, since there was no way GPU-accelerated physics would become a reality with Havok owned by Intel. And in the end they decided to make their own choice instead of accepting PhysX or Havok: Bullet.

So don't make it out like Nvidia were offering any sort of olive branch or anything.

I'm not saying it like that, but they do have their own agenda, and I look at all of this from a business perspective, not in terms of good or bad.

And Nvidia blocked access to PhysX when an AMD card was in the computer after a couple of years. Oh, they gave a BS excuse that it was too much development expense, that they were implementing quality control, etc. Then, a few months after they had disabled it, they released a beta driver where the ability to use PhysX with an AMD card in your system had been accidentally re-enabled. It got pulled pretty quickly, but it showed how they had actually lied about it. The real reason they disabled it was that AMD had the 5xxx cards out and Nvidia had nothing. It's as simple as that.

It could be, but blocking PhysX just because they had no competitor to the 5000 series doesn't really make sense. First, how many games actually had GPU PhysX? Even back then you could probably count them on one hand, and the list didn't improve much for almost a decade afterwards. So blocking PhysX had essentially no effect on gamers; doing so wasn't going to change anything, because gamers weren't going to go all-Nvidia just to have PhysX in a handful of games. Second, there were the hacked drivers. If this had really been about sales, Nvidia would have done everything in their power to stop those hacked drivers from spreading to the public, but they did nothing at all to counter them. Mind you, when it comes to sales Nvidia is very serious, even if they end up ruining relationships with their partners (look at what happened with the Microsoft Xbox, or more recently the Green Light program, where Nvidia voided manufacturer warranties if partners didn't follow Nvidia's guidelines and restrictions, even on factory-overclocked cards). The reason they blocked hybrid PhysX really is as simple as you said, and they stated it directly in one of their official statements, so there's no need to invent a secondary story about it being done to stop people from using ATI cards.

And come on, Nvidia never offered AMD PhysX. The noise they made for the press was the same noise AMD made when they were developing Mantle: oh, Nvidia can use it, all they have to do is contact us, blah, blah. There was no way Nvidia was going to support Mantle when it was totally owned by AMD, just like there was no way AMD was ever going to support PhysX when they would have had to use CUDA too. Two techs completely owned by Nvidia.

Interesting that you mention Mantle. If you had been following Mantle's development from the very beginning, you'd know Nvidia didn't say anything about it for a few months, not until AMD really showed who they actually preferred to give Mantle access to. If anything, both Nvidia and Intel engineers had been asking about access to Mantle from the very beginning, when AMD said anyone could have a look and eventually use it (though Intel was more persistent about it, until they actually had working DX12 drivers in the lab before ever getting access to Mantle). I'm not trying to be pro-Nvidia or anti-AMD here; the reality is simply what we saw. When Nvidia said anyone could do anything they wanted with CUDA, did you ever hear of people being denied access to it for whatever reason? That's what happened with Mantle. AMD gave a "beta" excuse for not letting other IHVs even look at Mantle, and yet if you were a game developer you got pretty much instant access to the Mantle program, and even the right to use it in commercial games. And that was while AMD was preaching Mantle as an open API, something Nvidia never claimed about CUDA.
 
Look up Nvidia's Doom 3 performance versus ATI's. Those initial benchmarks were devastating.

 

LOL, sorry, but could you twist the facts any further?

Fact 1. There was no offer from Nvidia to AMD to use PhysX.
Fact 2. AMD would never have used it, as it was completely run by Nvidia (there was no way they were going to use CUDA).
Fact 3. Nvidia never went looking to use Mantle; in fact, they said it was useless to them with DX12 coming. That was more lies, because as soon as it became Vulkan they were all about how wonderful it was, and DX12 still hasn't taken off. The real reason they never went looking at Mantle was that it was an API totally owned by AMD.
Fact 4. The beta driver I mentioned was not a hacked driver; it was an official driver from Nvidia, and it showed how much they were lying about development expense, etc. Blocking PhysX was purely to lock users into their hardware when AMD's 5xxx cards were released. That's the real truth, not this BS about development expenses.

You say you aren't pro-Nvidia or anti-AMD, then you twist the facts to make AMD look like the bad guys and Nvidia the good guys in both the Mantle and PhysX cases. It's pretty obvious that Nvidia blocked PhysX to prevent ATI/AMD users from getting it. And it was also pretty obvious that Mantle was going the open route, which it did: AMD donated it to the Khronos Group, where it became the basis of Vulkan.
 