The Streamline is a Lie

Have you already forgotten the GeForce Partner Program (GPP)? I couldn't link the OG HardOCP article because I couldn't find it.

Bennett summarizes the opinions as follows:
  • The terms of the GPP agreement are potentially illegal
  • The GPP will hurt consumer choices
  • The GPP will hurt a partner's ability to do business with other companies like AMD and Intel
The article isn’t there anymore… it was removed.

But no, I hadn’t forgotten. GPP was a shitty marketing ploy; it was anti-AMD for sure and would have squeezed AMD out. It was a play-by-play of the old dealership contracts that were used to spur competition, but that ultimately only works on a level playing field, and AMD doesn’t have the resources to compete on that front. Had they implemented it, they would have gotten sued for sure.
 

They might end up like Intel, found guilty of illegal anticompetitive tactics and paying fines that amount to a fraction of the profits generated by those tactics.
 
Intel finally managed to get one of the bigger ones overturned.

https://thehill.com/policy/technolo...th-intel-in-appeal-of-12b-antitrust-fine/amp/

But yeah, it’s entirely possible they do. Nvidia’s not used to being pressured, so it’s interesting to see what they do when they are.
Personally I haven’t seen them do anything outrageous yet, but there’s always tomorrow.
 
Yes, it is the default physics engine for Unreal 4 and Unity 3D. It’s also open source; I’m not sure under which license, but source and binaries are available from their GitHub.
https://github.com/NVIDIAGameWorks/PhysX
PhysX isn't open source; only the free SDK is. All the stuff that actually makes PhysX work is proprietary and is part of NVIDIA GameWorks. Also, they're up to 5.1, but the GitHub is at 4.1. Not every game that uses Unreal Engine uses PhysX; Borderlands 3 straight up doesn't have it. In-house engines or Havok pretty much took over. Doing a bit of Googling, the last game I found that uses PhysX was Mount & Blade II: Bannerlord, released in 2020, a game I've never heard of. PhysX was used extensively in the past, but it's pretty much dead today.
Anything running on an iOS device is using it, as are thousands of desktop applications. MoltenVK was a stopgap until development libraries caught up for non-Xcode environments; they have now.
Not a huge milestone for iOS devices to use it, since the most extensive graphics seen on them is Among Us. On macOS you don't have many applications, and the only third-party application I can think of that actively supports it is World of Warcraft, which has been bought by Microsoft. "According to Apple, more than 148,000 applications use Metal directly, and 1.7 million use it through high-level frameworks." They mean MoltenVK when they say "high-level frameworks". How's that stopgap working out?
 
If you find Mount & Blade 2 on sale, grab it; it’s a lot of fun. You want to find a lance and a nice horse, then run around with the lance couched, mowing people down. It’s everything I should have done in my first divorce.
 
Slow learner?
 
... Closed source software is certainly viable. It's the way in which Nvidia uses its software to create lock-ins and lock-outs that is the problem. ...
From how I understand it, this prevents any "lock-outs" because the interface from the game engine to the graphics pipeline will have some standard hooks/APIs.

The game devs can begin using a single set of hooks that are supported (I'm sure AMD will add support) by all three GPU vendors.

That's it.

The driver processing for effects can stay "closed" if they want, or they can be open. Doesn't matter to the game dev, doesn't matter to the gamer.

It allows proprietary and open drivers to share an interface with the game engine, which makes the closed vs. open argument moot. In this way it helps all involved. You can argue this allows nVidia driver portions to remain "closed", while allowing the ecosystem of game engine/driver/GPU to all still work together in spite of it. So in this respect, nVidia wins and can keep some IP closed. But game devs also win by gaining a single interface to all available technologies, and gamers win as a result as well, regardless of the GPU platform they play on.
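To make the idea concrete, here's a rough sketch of what such a shared hook could look like. Every name below is made up for illustration; it's the shape of the idea, not the actual Streamline API.

```cpp
// Hypothetical sketch only: made-up names to illustrate a shared hook between
// a game engine and vendor upscalers. This is NOT the real Streamline API.
#include <memory>
#include <string>
#include <vector>

// Data the engine hands to whichever upscaler plugin is active.
struct UpscaleInputs {
    const void* color         = nullptr;  // low-res rendered color buffer
    const void* depth         = nullptr;  // depth buffer
    const void* motionVectors = nullptr;  // per-pixel motion vectors
    unsigned    renderWidth  = 0, renderHeight = 0;
    unsigned    outputWidth  = 0, outputHeight = 0;
};

// The single interface the game codes against, regardless of vendor.
class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual std::string name() const = 0;          // e.g. "DLSS", "FSR", "XeSS"
    virtual bool supportedOnThisGPU() const = 0;   // the vendor plugin decides
    virtual void evaluate(const UpscaleInputs& in, void* output) = 0;  // closed or open inside
};

// In a real SDK this would discover plugins shipped with each vendor's driver;
// here it returns nothing so the sketch stays self-contained.
inline std::vector<std::unique_ptr<IUpscaler>> loadUpscalerPlugins() {
    return {};
}
```

Whatever happens behind evaluate() can stay proprietary; the engine only ever sees the interface.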

Y'all should be celebrating.

But, this is a forum on the internet, and hate/trolls/fighting has been the order of the day forever. Carry on.
 

That's a very pro-Nvidia take on the situation. I expect no less based on your previous posts over the years.
 
Proprietary software works great for the moment you're using it. Give it 3 or 5 years and none of those features in the game work on newer hardware with newer OSes. All this does is cause headaches for anyone trying to play these games, because it turns out you need to download this obscure file from a website that'll probably give you the digital version of HIV, while having to edit a registry entry to even get the game to boot. Even then it'll just crash every so often.

Just because it's partially open source doesn't make it any less proprietary. Without it being completely open source, no other GPU manufacturer will support it. Open source alone isn't enough to prevent bitrot either, because if Nvidia is in charge of the development then it's still going to bitrot. We don't need Nvidia to simplify upscaling when this is something that should be handled by the Khronos Group or by the OS. Nothing Nvidia does has ever lasted; it just causes headaches for people who want to play their games on Windows 12.
 
If it works, then great. All I know is that the Wayland/Mesa devs tried to work with Nvidia on a standard API and Nvidia wouldn't budge... still hasn't, AFAIK.
 
It's amazing that you know so many of the finer details of something that's apparently nothing more than a concept and PR piece at this point. There are a number of ways they could implement something like this, and some are far from hardware agnostic.
 
The question isn't what 1.0 will look like. That will look fine, and probably well designed for all current hardware.

What does 4.0 look like?

Will NV continue to try to make something flexible and agnostic? Maybe a tiny nudge for one approach vs the other? Dunno. But it is a concern if it is not an independent standard.
 
I forgot that they actually released something*, so calling it just a concept was inaccurate. But the only other vendor that has said anything about using it hasn't released the hardware or drivers that would use it, so the multi-vendor aspect is nothing more than a concept and we have no way of actually knowing whether it will be agnostic in practice. I've seen a few articles talking about this, and all of them seem to be based on a PR release by Nvidia and are very light on details, which makes me doubt that anyone has taken the time to go through all of it and look for the predictable issues, much less the unpredictable or less obvious ones.

Based on the scope of comments here and elsewhere, I'm skeptical that any of the people defending it are doing anything other than repeating talking points and making assumptions, just as I'm skeptical that Nvidia would ever do anything truly altruistic that doesn't have a hidden, self-serving ulterior motive.

*Edit: I have yet to see any serious discussion about what they've released or anybody cite anything in it specifically as a source for information.
 
But so far no one else has brought forth any such solution. Games being able to support multiple upscaling paths with little extra work will benefit gamers too (if one day 'AMD BetterScaling' is better than 'NV JacketScaling', or vice versa). If it can be done in-engine (as said in this thread), then those developers should now know that it's possible to insert their own form of this. But I don't think NV "owns" this tech the way people are suggesting anyway, as it's just a way to call upon each brand's tech. I think Nvidia is worried that FSR will get close enough to DLSS that devs will only need to support FSR, which would render the DLSS branding obsolete; this will allow DLSS to be available in those "FSR only" games. Roughly the idea, as a sketch below.
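The backend names and support checks here are invented stand-ins, not any vendor's real API; the point is that the game ships one code path and the best supported backend wins at runtime:

```cpp
// Illustrative only: made-up backends standing in for whatever each vendor ships.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct UpscalerOption {
    std::string name;                 // "DLSS", "FSR2", "XeSS", ...
    std::function<bool()> supported;  // the vendor plugin would report this
};

int main() {
    // In a real engine these would be discovered from installed vendor plugins.
    std::vector<UpscalerOption> options = {
        {"DLSS", [] { return false; }},  // pretend: no RTX card present
        {"XeSS", [] { return false; }},  // pretend: no Arc card present
        {"FSR2", [] { return true;  }},  // pretend: runs on anything
    };

    // One integration in the game; whichever backend is supported gets used.
    for (const auto& opt : options) {
        if (opt.supported()) {
            std::cout << "Using upscaler: " << opt.name << "\n";
            break;
        }
    }
    return 0;
}
```

If a better backend shows up later, it just becomes another entry in that list.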
 
I'm not against the concept, and I'd be happy to eat my words if this ends up being helpful and doesn't unfairly hinder their competition; I'm just skeptical that Nvidia would do something like that. I'm not convinced that it is bad, but I've also seen nothing that convinces me otherwise, and I trust Nvidia less than most corporations, which I already don't generally trust.

I do think that the endgame could simply be to keep DLSS in games as a feature they can advertise, and I mentioned that in an earlier post. If that's the case, it's a big meh in both positive and negative ways.
 
I get the vast skepticism, but I think Nvidia would certainly do something that helps people get access to motion vectors for their temporal upscaling (or motion blur and whatnot), for many reasons:

1) Their proprietary tech is far from a 100% success rate in the past: on-GPU PhysX, HairWorks and whatnot. CUDA did have a lot of success over time, but it is far from a perfect record.

2) Fierce competition almost forces it, so that they aren't the ones locked out. Many game engines have their own DLSS competitor, AMD has its own, and Intel has its own open-source one coming out. Because of the importance of the PS5/Xbox, a DLSS-only solution will never even be considered for AAA titles; they need to upscale on consoles, which is arguably the more important battle (consoles are more often feeding 4K screens, and virtually never natively). If something like this doesn't catch on, chances are good that DLSS support would just disappear or would need to be paid for to stay alive.

3) There is a lot of potential value for Nvidia if something like this catches on (i.e., if Intel/AMD go fully for it), as long as they consider themselves the hardware/software leader on extra features beyond rasterization, which is not a bad bet for them at all. It could make the next AI or other GPU-driven module much easier to ship and popularize quickly, instead of it taking years to catch on, no longer being special by the time it does, and fading away in favor of Havok and the others.

There is a scenario here where, without any lockout of competitors, Nvidia hurts them badly by going this route: Nvidia offers many extra features the others don't, and when the others create a competitor it arrives inferior. The next 3-4 years could be critical for Nvidia's competitors to stay alive.

The endgame is keeping DLSS alive and popularizing DLSS-like features: some GPU-driven AI with a cool name, for example, fast loading via the extra decompression hardware for DirectStorage, "use our new compressed file format," and other features that raise their moat, without having to make anything that prevents Intel/AMD from making their own plugins that plug directly into the same API, so that every game that uses it will also work on Intel/AMD as well, just a generation late and inferior.
 
The free CUDA handouts are a good way to show what they are up to with this move.
 
I find it fitting that even if Nvidia is being benevolent with this move, it fails because so many are too skeptical due to their past bullshit shenanigans.
I use Nvidia’s stuff, and for a number of business reasons I am completely in their camp, like it or not. But they deserve every ounce of shade thrown their way. Nvidia are assholes and Intel may be dicks, but … <Team America rant>… Nvidia may be trying to come off as altruistic here, but Intel is forcing their hand, and that’s the only reason this is a thing.

Intel may be new to the game, but if their datacenter tech and patents are any indication of what is to come, then Nvidia has their hands full; neither Nvidia nor AMD can compete with Intel’s ability to race to the bottom. Intel can subsidize their fabrication with the rest of their stack, which is a luxury that nobody else has. Furthermore, their investors and board understand that losses may be needed to grab a foothold in the market, which is a game that neither Nvidia nor AMD can afford to play for long.

Note: I’m going to give these guys a mulligan; they f’ed up this encounter so badly that instead of a TPK this is going to turn into a prison break.
 
Ok. And the same can be said for Frgmstr's post on [H], or any article we have yet read, or any post or opinion in this thread.

Time will tell and all that.

But, nVidia DOES benefit from this: they help make it easier for game engines to use GPU features, all GPU features, from any vendor. They also get to keep their tech partially closed, which they want to do. Anything that might give competitors insight into the inner GPU workings they keep close to the vest. That's their business. I don't care as long as the tech is in the games I play. If this helps get their tech into more games, great! That's good for me, and good for game devs.

nVidia isn't a saint, we can all agree. Show me 1 company that is. Show me one poster in this forum that is. I don't want a repeat of PhysX, one of their mistakes. The times they've done something that hurt the gaming experience piss me off as much as anyone.

Fact is, they have been leading in the GPU/Graphics innovations for a decade, everyone else just copies what they do. IMHO they've done more good than bad.

It might be better for game devs if things were more open, and I think this moves in that direction (from what I have read). Again, we will see.

In the past they've done everything they could to slow competitors down in achieving something equal to their offering. That's business. I doubt it really hurt their competitor tho... so many people will buy 'just because'.. The competitors hurt themselves by not innovating, only ever following. That's on them. Where is the innovation that they create, and then just "share"? Nowhere. And when they make something that's a copy, they have to make it open to have a snowball's chance in hell of it being adopted/succeeding (freesync). Basically everything new in graphics for the last 10+ years has followed this pattern.

Keep whining without proof. I will keep playing games with better tech, and be happy for it.
 
I never said that they couldn't benefit from a more open approach, just that it hasn't been their modus operandi.

Your claim that Nvidia has been the only one pushing GPU technology forward simply isn't true, though: AMD was behind hardware tessellation, ATI helped develop GDDR3, which was the first version of GDDR to really separate itself from DDR (Nvidia actually used it first but wasn't involved in the development), they took a risk (and failed on the consumer side) with HBM, their work on Vulkan is the main reason we have a new, more efficient DirectX, and I believe they were the first to play around with post-process AA.

I'd say that Nvidia's main strength has been execution more than innovation, though they have come up with new stuff as well (DLSS, real-time ray tracing, G-Sync [even if, in a reversal of the usual pattern, AMD was more responsible for making it mainstream]).
 
Apparently you didn't read my post...

...Fact is, they have been leading in the GPU/Graphics innovations for a decade, everyone else just copies what they do. IMHO they've done more good than bad....
and actually, it's probably closer to 15 years or more.

Your claims that nvidia has been the only one pushing gpu technology forward simply isn't true though; amd was behind hardware tesselation,
"The first consumer graphics card featuring hardware tessellation that made its way to the market was the ATI Radeon 8500 in 2001."

ati helped develop gddr3 which the first version of gddr to really seperate itself from ddr(nvidia actually used it first but wasn't involved in developement),
Despite being designed by ATI together with JEDEC, the first card to use the technology was nVidia's GeForce FX 5700 Ultra in early 2004...

they took a risk(and failed on the consumer side) with HBM, their work on vulkan is the main reason we have a new more efficient directx, and I believe they were the first to play around with post process aa.
AMD contributed to Mantle, designed to improve upon DX11 by adding console-like APIs. Mantle only ever worked on AMD GPUs and is long dead. Vulkan came from that later.
From AMD's webpage: "Forged by the industry. Developed by the Khronos Group, the same consortium that developed OpenGL®, Vulkan™ is a descendant of AMD's Mantle"
It helped in cases of CPU-bound games, which means PCs with older CPUs. I'm all for anything that helps, but this wasn't really anything new in graphics innovation. DX12 is the better choice if you are GPU constrained, which is always the case if you are pushing the graphics edge.
We can thank Mantle for pushing Microsoft to finally release DX12 at least, so I'll give you that. (Doesn't count as Graphics innovation though).
 
I gave several counterpoints, including several within that decade you're now fixating on. Your third point simply restates what I already said (in the portion you quoted), and we all know where MS got large chunks of DX12, not to mention the motivation to actually release a new 3D API. You're right, though, that I forgot to mention the Mantle step in all that.
 
What? ATI had the first consumer card with hardware tessellation in 2001 as GoodBoy said, and AMD didn't buy ATI until 2006. AMD had fuck all to do with hardware tessellation. You seem to be confusing ATI for AMD; they are not the same.
The 5850 was announced in late September 2009, which was the first card with real, *usable tessellation.

Edit: I believe the 5000 series was the last generation to carry the ATI name but was in actuality AMD.
*Edit 2
 
In the past they've done everything they could to slow competitors down in achieving something equal to their offering. That's business. I doubt it really hurt their competitor tho... so many people will buy 'just because'.. The competitors hurt themselves by not innovating, only ever following.
It is hard to innovate when your competitorS (both Nvidia and Intel) use dirty tricks that crush your sales, kill your revenue, and prevent investing in R&D, which stifles innovation.
 
** Flame suit on***

That might be the case, but it's still not nVidia. I view Nvidia like I view pre-2017 Apple: most of nVidia's stuff came from others, and very little of it was designed in-house.

The only thing I'll give them credit for is DLSS, but that's just an answer to pushing a technology that still isn't ready to be done natively.

The absolutely hilarious part is that DLSS on top of RT changes the fidelity, and even changes the object itself, producing an almost-realistic effect which ain't too far off from rasterization in today's engines. You're (we all are at this point) paying more for an almost-realistic effect that the human eye can't even discern without being told when or where it's applied. Why?
 

What GPU "stuff" came completely from AMD?
 
We can thank Mantle for pushing Microsoft to finally release DX12 at least, so I'll give you that. (Doesn't count as Graphics innovation though).
I’m not sure about that; the time between AMD announcing Mantle and Microsoft launching DX12 was too short. Even Microsoft can’t just shit out a fully functioning and documented API that fast. I’m thinking Microsoft was working on it behind closed doors well before AMD started mentioning Mantle.
 
PhysX isn't open source; only the free SDK is. All the stuff that actually makes PhysX work is proprietary and is part of NVIDIA GameWorks. Also, they're up to 5.1, but the GitHub is at 4.1. Not every game that uses Unreal Engine uses PhysX; Borderlands 3 straight up doesn't have it. In-house engines or Havok pretty much took over. Doing a bit of Googling, the last game I found that uses PhysX was Mount & Blade II: Bannerlord, released in 2020, a game I've never heard of. PhysX was used extensively in the past, but it's pretty much dead today.

Not a huge milestone for iOS devices to use it, since the most extensive graphics seen on them is Among Us. On macOS you don't have many applications, and the only third-party application I can think of that actively supports it is World of Warcraft, which has been bought by Microsoft. "According to Apple, more than 148,000 applications use Metal directly, and 1.7 million use it through high-level frameworks." They mean MoltenVK when they say "high-level frameworks". How's that stopgap working out?
Ghostwire: Tokyo uses PhysX, and that just came out.
 
I believe his claim was that Mantle finally forced Microsoft into launching DX12. The fact that Microsoft was working on it behind closed doors and released it only after AMD announced Mantle can still be true.
 
DX12 was likely in production before Mantle was even a thing, leaked internal documents have Microsoft starting on it back in 2011. Saying Mantle forced anything is just too much of a stretch. If anything Mantle was a desperate attempt to create their own proprietary API to emulate some of Nvidia’s success with them, or some way to keep themselves in the news so people would remember they existed. The DX12 launch was timed pretty well to line up with their OS launch and was happening regardless of anything AMD was doing at the time.

The only thing that Mantle did was jumpstart Vulkan, but that was done out of necessity, not design.
 
All three worked together on Vulkan. GL5 was WIP long before Mantle/DX12, and once Mantle was announced and the preliminary design was released, the DX and OpenGL working groups took an interest and talked to AMD about integrating some of the design ideas into their own (at the time WIP) APIs. GL5, at the time, was quite different, but it was early enough and designed in such a way that many parts of Mantle/Vulkan could be used there too (although through a very different API). At some point in all this, AMD started working with the (now) Vulkan working group on designing Vulkan, using Mantle (and maybe DX12 too) as its inspiration.

This is all going from memory, anyway. Some of my timeline might be off a bit.
 
I know Khronos announced their plans for a low-level API in 2014; at that stage OpenGL 4 was still pretty new. At the time, their announcement of wanting a low-level API was believed to be more a response to Apple’s Metal than to Microsoft’s announcement that DX12 was coming. But the members of Khronos would have known that DX12 was going to be a low-level API, so in retrospect that belief doesn’t really hold up.

I hadn’t heard of OpenGL 5; the place I was working at during that time was still waiting on the promised changes due in 4.5.
 
They had a lot of ideas they wanted to put into the next version, but kept pushing it back because there wasn't a whole lot of interest. Eventually someone finally said "we're doing it," and work began. Not sure when anymore.

Edit: Around 2014, apparently. https://www.phoronix.com/scan.php?page=news_item&px=MTc1OTA

And they didn't release 5.0, just kept with the 4.x versioning. 5.0 was being hyped at the time, so that's probably why I misremembered.
 
It is hard to innovate when your competitorS (both Nvidia and Intel) ~~use dirty tricks~~ make better products that crush your sales, kill your revenue, and prevent investing in R&D, which stifles innovation.
Fixed it for you.

AMD as a company is doing quite well. They have no excuse for not innovating in the GPU space. Being a "follower" is actually a sound business strategy. Doesn't make them a saint, though, as half of the commenters here seem to believe.
I know Khronos announced their plans for a low-level API in 2014; at that stage OpenGL 4 was still pretty new. At the time, their announcement of wanting a low-level API was believed to be more a response to Apple’s Metal than to Microsoft’s announcement that DX12 was coming. But the members of Khronos would have known that DX12 was going to be a low-level API, so in retrospect that belief doesn’t really hold up.

I hadn’t heard of OpenGL 5; the place I was working at during that time was still waiting on the promised changes due in 4.5.
I think you are correct. The "AMD forced Microsoft to release DX12" angle was something the news stories purported at the time. A more accurate statement, if there was any impact of one event on the other, would be "caused Microsoft to move up their timeline to release DX12".
It was coming eventually...

so that part of my post can be changed from:
We can thank Mantle for pushing Microsoft to finally release DX12 at least, so I'll give you that. (Doesn't count as Graphics innovation though).
To:
We can thank Mantle for accelerating Microsoft's timeline to release DX12, but that doesn't count as graphics innovation.
I gave several counterpoints, including several within that decade you're now fixating on. Your third point simply restates what I already said (in the portion you quoted), and we all know where MS got large chunks of DX12, not to mention the motivation to actually release a new 3D API. You're right, though, that I forgot to mention the Mantle step in all that.
Try again? Show me one innovation AMD has created since 2007 (15 years) that wasn't a copy of something nVidia did first.
 