Current state of PhysX? Is it well and truly dead?

NoxTek

I've come to a point where I'm fortunate enough to have several 'back up' GPUs at my disposal should my Asus Strix GTX1080 fail at a time when crypto miners have run the prices up to astronomical levels. I've got a couple of GTX 660s, a GTX 770, a GTX 780, a GTX 980, and I think even a lowly GTX 560 floating around here now.

So I started considering dropping one of the above extra cards in my machine as a dedicated PhysX card, and I thought surely the landscape has evolved in the few years since I last took a look. Seemingly nope. The last AAA title with PhysX support appears to be Fallout 4, and before that it's really just the Batman Arkham games and the Borderlands sequels, with a bunch of no-names sprinkled in.

Why the hell has PhysX stagnated so much? When it's done right it really does add awesomeness to a game (see: Batman Arkham Asylum/City/etc). Is it that hard to code for? Does nVidia charge an exorbitant licensing fee or something?

Meanwhile I'm still pondering throwing one of these extra cards in just to replay the few titles that do support it. Or maybe somehow offload streaming / encoding to the secondary GPU....
 
It's still alive, in some sense, but the GPU accelerated portion is not widely supported. On the CPU side, many popular engines do use PhysX (both Unity and Unreal do).

The basic issue is not the cost. It's free for PC, but I believe there is an extra fee for consoles (not sure if this changed).

Mainly the problem is that the GPU acceleration only works on Nvidia cards, and big engines like Unity and Unreal are hesitant to integrate proprietary features into their code. In the same vein, developers are not eager to spend time and money on a feature only a portion of their customers can use (even if Nvidia is fairly dominant on PC). Some studios have still gone with GPU PhysX, maybe due to a relationship with Nvidia, or because they want to push the envelope, but those titles are few and far between. The situation will likely not get any better, and engines like Unreal have their own GPU particle physics now, leaving developers with even less reason to integrate PhysX.

That said, some older games can still support it, and it's awesome when it's working. I had a great time with Alice Madness Returns and Mirror's Edge, and there are probably other games that are worth going back for. I just wouldn't expect many more titles given the current situation.
 
It gets GPU support when nVidia decides to pay a developer to implement it.

Similar to Gameworks.
 
Did you mean Hairworks?
I won't have you slandering technological marvels and their unprecedented effects just for fun.

#hairmatters

Ironically, I'm bald, LOL.

Hairworks is a subset of Gameworks. Technically, PhysX is also now under the Gameworks umbrella.

Not to be confused with TressFX, which is AMD's version of Hairworks.
 
Now I have to wonder whether your sense of irony is really up to par, or whether you mistook my jesting for an actual argument.
 
Now I have to wonder whether your sense of irony is really up to par, or whether you mistook my jesting for an actual argument.

I think I understand the meaning of irony, but if there was a joke there, I apologize; I missed it and thought you were trying to correct me.
 
I remember when PhysX first came out; I got myself an Ageia 128 MB PhysX card a few weeks before they went on sale.

The only games that had it at the time were Ghost Recon, a few demo apps from Ageia, and one other game whose name I've forgotten.

The fan died and I sent it back to BFG just as they were folding; they sent the card back unfixed.

I'm sure I could find a fan or ghetto-mod it, but by the time the fan broke PhysX was basically dead anyway.

I still have the card; maybe someday it will be worth some money to a collector?
 
I still have the card; maybe someday it will be worth some money to a collector?

Why sell the card when you can frame it as a reminder about early-adopting technologies?
 
I thought I'd chime in, because I'm a developer and know PhysX fairly well (far from an expert on the product itself, but I use it, and I've developed in C++ for various industries for decades).

It's far from dead. In fact, it's about to experience quite a resurgence.

Nvidia recently released the product as open source, though that only covers versions 3.4.2 and 4.x (now at 4.1), and the license excludes consoles (the source can still inform console development, but a title can't ship on console platforms under the open source license alone).

The Unreal engine uses PhysX by default. Unity just upgraded their implementation in their 2019.1 release to PhysX version 4.1. Any game built on these engines most likely uses PhysX without making that clear.

I can predict with about 85% confidence that the GPU implementation in the PhysX code will be augmented to support OpenCL or some other GPU compute tech. Nvidia never had an interest, but the "public" of developers includes many (myself among them) who can, and some of whom will, deploy a GPU implementation on non-Nvidia devices. I assume a number of corporations would throw resources at this, from Sony to any or all of the AAA publishers.

The recent version includes new joint types (articulations) that are far more stable, promising much better automobile and robotics simulation. Performance has ramped up over the last several versions, as has stability (which has a dual meaning in a physics engine, but here I refer to the software's reliability and resource utilization, especially RAM).

There are likely a lot more titles using PhysX than you're aware of, as many don't credit it specifically, only the game engine used (which incorporates PhysX somewhat anonymously).

PhysX isn't simple, but only in the sense that it is a physics engine, where complexity is just part of the deal. It is, however, well written (though it shows its roots from many years ago compared to modern C++ code), well documented, and rather comprehensible. If one uses a good game engine, the use of PhysX is almost hidden from view. Physics can be "scripted" or "modeled" in those engines. For example, in Unity, one can import a mesh, simplify a version of it for the physics simulation, set mass and friction (choose a physics material), and suddenly it's a rigid body that reacts in the scene. One hardly knows the physics engine is actually there, except that objects are colliding, bouncing, reacting to gravity, etc.
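To make that concrete, here is a minimal sketch of roughly what the engine is doing for you behind the scenes, written directly against the PhysX 4.x C++ API. The values (gravity, friction, the box standing in for an imported mesh) are just illustrative, and error handling is omitted:

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);            // worker threads
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // "Choose a physics material" boils down to static friction, dynamic friction, restitution.
    PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.3f);

    // A simplified box collider standing in for the imported, simplified mesh.
    PxRigidDynamic* crate = PxCreateDynamic(*physics,
                                            PxTransform(PxVec3(0.0f, 5.0f, 0.0f)),
                                            PxBoxGeometry(0.5f, 0.5f, 0.5f),
                                            *material,
                                            10.0f);                       // density -> mass
    scene->addActor(*crate);

    // Step the simulation; the engine copies the resulting poses back onto the visual meshes.
    for (int frame = 0; frame < 300; ++frame)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```

That's the shape of what those engines do on your behalf when you mark an object as a rigid body; they hide the boilerplate and copy the resulting pose back onto the rendered mesh each frame.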

That said, the more complex one's design, the more work involved. Try to create a robotic arm, for example, and you're in for a winding road with any physics engine (creating child links, adjusting motor powers, setting angular limits, solving for inverse kinematics in some cases). The old joints in previous versions of PhysX were simply not stable enough to create multi-jointed robotic arms without a LOT of effort. That isn't limited to robots, either; the same problem appears if you attach a trailer to a truck. Even a simple door can break its hinges if it is hit hard enough (and where that shouldn't happen, it's nearly impossible to prevent).
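To give a sense of the work involved, here's a rough sketch of a two-link arm using the PhysX 4.x reduced-coordinate articulations (the newer joints I get to in a moment). The limits, drive gains, and geometry are made up for illustration, and the exact joint API is worth checking against the version you build with:

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

// Sketch: a two-link "arm" with PhysX 4.x reduced-coordinate articulations.
// 'physics', 'scene', and 'material' are assumed to exist, e.g. from the earlier snippet.
void buildTwoLinkArm(PxPhysics* physics, PxScene* scene, PxMaterial* material)
{
    PxArticulationReducedCoordinate* arm = physics->createArticulationReducedCoordinate();
    arm->setArticulationFlag(PxArticulationFlag::eFIX_BASE, true);   // anchor the base to the world

    // Base link.
    PxArticulationLink* base = arm->createLink(nullptr, PxTransform(PxVec3(0.0f, 1.0f, 0.0f)));
    PxRigidActorExt::createExclusiveShape(*base, PxBoxGeometry(0.2f, 0.2f, 0.2f), *material);
    PxRigidBodyExt::updateMassAndInertia(*base, 1000.0f);

    // Moving link, hinged to the base: a revolute joint with an angular limit and a drive ("motor power").
    PxArticulationLink* upper = arm->createLink(base, PxTransform(PxVec3(0.0f, 1.5f, 0.0f)));
    PxRigidActorExt::createExclusiveShape(*upper, PxCapsuleGeometry(0.05f, 0.4f), *material);
    PxRigidBodyExt::updateMassAndInertia(*upper, 500.0f);

    PxArticulationJointReducedCoordinate* shoulder =
        static_cast<PxArticulationJointReducedCoordinate*>(upper->getInboundJoint());
    shoulder->setJointType(PxArticulationJointType::eREVOLUTE);
    shoulder->setMotion(PxArticulationAxis::eTWIST, PxArticulationMotion::eLIMITED);
    shoulder->setLimit(PxArticulationAxis::eTWIST, -PxPi / 2.0f, PxPi / 2.0f);    // angular limits
    shoulder->setDrive(PxArticulationAxis::eTWIST, 1000.0f, 100.0f, PX_MAX_F32);  // stiffness, damping, max force
    shoulder->setDriveTarget(PxArticulationAxis::eTWIST, 0.5f);                   // target angle, radians

    scene->addArticulation(*arm);
}
```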

The new joints (articulations), however, are ideal for this. The same issue can impact vehicles (beyond the "standard" vehicle model provided). If you make a car that drives on a paved road, you're fine. If you want to make a "monster truck" that drives over objects, forget it. The standard vehicle approach does not really account for a car whose tire touches a wall before the vehicle does; the system really only considers the patch of the tire that contacts the ground. Hit anything like a bump or a rock and it's like driving a shopping cart with 1" wheels at 90 mph, not a car (you can't even get that over a door jamb). The tires you see are just visual representations; they're not what the physics engine uses.

That said, you have a good physics engine right there, so you can fashion traction-aware tires that are actually round (not just a ray cast toward the ground) and simulate a "monster truck" or a tank or whatever. For enthusiasts, students, and amateurs that would seem like a lot of work, but professional development teams expect it, and there are products one can add on to either the game engine (Unity) or PhysX itself to help streamline the concept. Many are "in house," though.

Havok is about the last of the "big ticket" commercial engines left; PhysX was one, once. Havok has a few advantages, but the price may not be worth it to many. Havok's primary advantage is determinism. Simply put, and this matters for networked (Internet) play, it means that all viewers/players will see the same simulation given the same inputs. PhysX is only partially deterministic in this sense. That's less of a problem than an amateur or student might think, but determinism does make networked game development simpler. In PhysX you can arrange for it, especially with immediate mode. This means the code runs a simulation on general objects that aren't particularly important to gameplay, then isolates key objects for simulation in a separate group under a subset of physics intended specifically for those objects/characters. This subset can be made deterministic (more so than the rest of the game), allowing for synchronization on key objects.
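One way to picture that in code (my sketch, not a recipe from any shipping title): keep the gameplay-critical actors in their own scene with the enhanced-determinism flag set, and everything cosmetic in a second, looser scene. The flag below exists in recent PhysX releases, but its exact guarantees should be verified against the version you use:

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

// Sketch: a tightly controlled scene for gameplay-critical actors, a looser one for eye candy.
// 'physics' is assumed to exist, e.g. from the earlier snippet.
void createSimulationGroups(PxPhysics* physics, PxScene*& keyScene, PxScene*& ambientScene)
{
    // Key objects: small scene, enhanced determinism, synchronized over the network.
    PxSceneDesc keyDesc(physics->getTolerancesScale());
    keyDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    keyDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(1);
    keyDesc.filterShader  = PxDefaultSimulationFilterShader;
    keyDesc.flags        |= PxSceneFlag::eENABLE_ENHANCED_DETERMINISM;
    keyScene = physics->createScene(keyDesc);

    // Everything else: debris, cloth, effects. Slight divergence between clients doesn't matter here.
    PxSceneDesc ambientDesc(physics->getTolerancesScale());
    ambientDesc.gravity       = keyDesc.gravity;
    ambientDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(3);
    ambientDesc.filterShader  = PxDefaultSimulationFilterShader;
    ambientScene = physics->createScene(ambientDesc);
}

// Per frame: both groups step with the same fixed timestep; only keyScene state goes on the wire.
void stepGroups(PxScene* keyScene, PxScene* ambientScene)
{
    const PxReal dt = 1.0f / 60.0f;
    keyScene->simulate(dt);      keyScene->fetchResults(true);
    ambientScene->simulate(dt);  ambientScene->fetchResults(true);
}
```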

Unity is building their own physics engine based on a partnership with Havok. Soon they'll offer a choice of PhysX (built in, as they have had for years), Unity's own engine, or the full Havok (at a price). It is part of the push for their "new" tech called "ECS", which isn't fully baked and won't be in wide use for at least a year or more. It's in alpha at present.

CryEngine has its own physics engine (never used it), if I recall correctly. Amazon bought complete rights to CryEngine, released that as open source, and thus allows for multiple physics engine options (if you work at it). Amazon calls their game engine Lumberyard.

I bring these up because they are, at present, about the entire collection of competitors to PhysX outside of the open source engines like Bullet, Newton, and ODE. Newton did seem to die off, but appears to have been forked or picked up by another team. ODE hasn't changed in years. Bullet, on the other hand, is still in constant development, and recently added (perhaps still at beta level) a GPU compute version. In practical use, Bullet can seem on par with PhysX (for a while it was even better, before PhysX 3.x), and some large projects have used Bullet. Contributors to Bullet have included Sony and IBM (limited parts). At this point, however, Bullet will have to advance to catch up to the state of PhysX 4.1.

I'd have to guess, but I doubt anyone has seen a game with PhysX more current than about version 3.3 (I don't think the newer versions have been out long enough for a new AAA title to be released with them).

Unity, for example, places PhysX behind a barrier. Scripts are written in C#, and physics is presented as a C# interface. The native C++ code of PhysX can't even be seen or accessed; it could be any physics engine inside and you wouldn't know. This also means the application is entirely dependent on Unity to get the configuration right. That's OK at release time, when it likely will work well, but two years out, hardware may exist that Unity's two-year-old code can't configure, and so resources go unused unless the vendor ships an update built on a newer release of Unity (or at least a patched one).

Unreal is scripted in C++ (they have lots of options, but the point is that PhysX is not on the other side of an impenetrable barrier). Same for Lumberyard (C++).

That said, even in Unity a developer can ignore the PhysX engine incorporated in Unity and use the latest version in a native C++ module. I do it. Nvidia is currently posting an alpha project for that, and there's a C++ library to support full scripting of Unity projects in C++, meaning that with enough work and determination one can bypass the barrier Unity puts up between code and PhysX. At that point, however, the developer takes on responsibility for configuration. If one queries the hardware's capabilities deeply and connects to resources with liberal checks and options, it can be quite flexible. If one just copies the example initialization code instead, the result is a single-threaded PhysX implementation with basic GPU support (if any).

Depending on the application using PhysX, the configuration can be tricky. Older products that one might toss onto a new computer may not recognize the GPU, or may not configure themselves to use resources that weren't known at the time the original product was built. For example, some might assume no more than a quad-core CPU and can't make use of an 8-core machine. This is a key point, because while you assume PhysX is running on the GPU, what you might not realize is that only SOME of the work runs on the GPU (the bulk calculation work). There is still a significant amount of work required on the CPU to connect the results of the physics calculations to the visual models of the game, stream data, handle user input, etc. That work can be threaded, but if the product never envisioned an 8- or 16-core CPU, it may not even work as well as it did on the older hardware the code understands. Even drivers and GPU compute capabilities that don't match older versions (even though the newer ones are vastly superior) may go unused because the software doesn't recognize them and ignores them. This is less about PhysX itself and more about how PhysX was incorporated into the product.
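This is exactly why, if you go the native route, it pays to query the machine at startup rather than bake in assumptions. A sketch of that kind of defensive setup (thread count taken from the hardware, GPU path guarded with a fallback; the CUDA-manager calls follow the 4.x SDK and are worth double-checking against your headers):

```cpp
#include <PxPhysicsAPI.h>
#include <thread>
#include <algorithm>
using namespace physx;

// 'foundation' and 'physics' are assumed to exist, e.g. from the earlier snippet.
PxScene* createSceneForThisMachine(PxFoundation* foundation, PxPhysics* physics)
{
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity      = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;

    // Don't assume "quad core": ask the machine, keep a couple of cores for rendering and game logic.
    const unsigned hw      = std::max(2u, std::thread::hardware_concurrency());
    const unsigned workers = std::max(1u, hw - 2u);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(workers);

    // Try to bring up CUDA; fall back to CPU-only if there is no usable NVIDIA GPU.
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cuda = PxCreateCudaContextManager(*foundation, cudaDesc);
    if (cuda && cuda->contextIsValid())
    {
        sceneDesc.cudaContextManager = cuda;
        sceneDesc.flags             |= PxSceneFlag::eENABLE_GPU_DYNAMICS;   // bulk solver work on the GPU
        sceneDesc.broadPhaseType     = PxBroadPhaseType::eGPU;
    }
    else if (cuda)
    {
        cuda->release();   // manager came up but found no usable context
    }

    return physics->createScene(sceneDesc);
}
```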

On another front, however, the same mismatch can come from different ABIs (the binary code for a GPU), which are unique to each model. This doesn't work like AMD/Intel CPUs, where the ABI (binary code) is nearly identical on either brand. Even within a single brand, like Nvidia, the binary generated for the GPU can differ vastly across models and generations. This means a compiler has to ship with the GPU (in the driver). It isn't a great one, though. There are better "offline" compilers that aren't part of the driver, but those can't possibly be incorporated into an older product. This means only VERY compatible code can be recompiled for a GPU newer than anything known during development, and then only with the driver's compiler, not the best optimizing compiler for that GPU.

In other words, as I said at the start, it's complex in the sense that physics is complex to deploy in a game. That's just the cost of that fact.

PhysX, on the other hand, JUST NOW got open sourced (well, OK, it was a few months ago, but no widely acclaimed released product has used this new version yet).

Expect the new open source release to spawn a rapid evolution of PhysX, and in far more directions than anyone ever expected NVidia to support or allow.
 
Thanks for sharing. I think that all makes sense (I'm a developer as well).

The confusion is that Nvidia (at one point) was heavily marketing PhysX as a GPU solution, even to the point of allowing customers to add second or third cards (in an SLI system) to power PhysX.

While the software (CPU) version of PhysX was and still is popular, it is the hardware (GPU) acceleration that most people associate with Nvidia and PhysX. On that front, things are well and truly dying, or dead.

The problem here was two-fold. One, it was proprietary and locked to Nvidia brand hardware. Many developers did not want to exclude parts of the market, or spend time/money on features many people couldn't see (for example AMD or Intel GPU users).

Secondly, the usefulness of hardware PhysX was vastly over-hyped. In reality, it couldn't be used to make games with more complex gameplay-affecting physics as it was too costly to read data back to the CPU.

So you end up with generally cooler effects like capes flowing in the wind, or explosion debris, or glass breaking etc, but not anything that can enable more complex games (such as maybe a Rube Goldberg kind of game).

As we have seen time and time again, these vendor locked proprietary features typically don't become standard, and usually only survive so long as the owner invests in the software (e.g. by sponsoring games, etc.) not because of natural market forces.

And it is sad. Physics in games could be somewhere completely different today if things had evolved in another way.
 
While the software (CPU) version of PhysX was and still is popular, it is the hardware (GPU) acceleration that most people associate with Nvidia and PhysX.

The problem ... it was proprietary and locked to Nvidia brand hardware. Many developers did not want to exclude parts of the market...


These are key points. The same game on a comparable AMD GPU would be described, infamously, as running "slow as a dog" because users knew PhysX couldn't run on that GPU.

Secondly, the usefulness of hardware PhysX was vastly over-hyped. In reality, it couldn't be used to make games with more complex gameplay-affecting physics as it was too costly to read data back to the CPU.

Here's where we diverge a bit. This issue was much worse on the older bus interfaces, and worse still with the older APIs (older OpenGL, DX9 through DX11). During that era, too, the GPUs were less powerful, the CPUs were less powerful, and RAM itself was slower (this reaches back to the era of DDR2 or before on the CPU side).

My point is that I'd have to append the caveat that the cost of moving data from the GPU back to the CPU was high on affordable GPU devices. Nvidia was trying to use the "advantage" of GPU compute resources to sell high-end (expensive) cards. It almost worked, because this "floated" Nvidia from a really tough PR era (the melting GPUs of the Xbox 360, for example) to a point where there are a lot of enthusiasts who insist on Nvidia hardware.

Edit: I thought I'd add one more point about bus transport cost. It wasn't just the bus cost in the older APIs; it was also the fact that data had to be copied before calculations could begin on the GPU. Those older interfaces present the GPU as a server, with the CPU as a client. The newer APIs allow the GPU to read RAM on the motherboard (with locks and permissions applied). This lowers the cost because the data doesn't have to be copied before calculations begin: the GPU can rip through system RAM to feed data into the compute engine, then write the results back out to system RAM, without copying everything first. Even when the GPU sits across a bus (on a card), the bus runs upwards of 30 GB/s (a speed similar to system RAM itself). The bus is no longer the real issue as a result; it's the copy associated with the older interfaces. With that copy removed in Metal, DX12, and Vulkan, the results of GPU calculations are available much sooner than with the older APIs, which require the data to be copied (even just to get the result set out of GPU memory into system memory).

This is why one new GPU can be observed to be faster than two older GPUs, assuming the software properly coordinates the timing and balance of compute power between graphics and physics.

If you look at the source of PhysX, you can see that what is put on the GPU are the large data-calculation functions. They're prefixed with macros indicating they are "CUDA callable." When building CPU versions of the engine, these functions route to a CPU implementation, but when building for GPU they route to a CUDA version. Most of the engine continues to operate on the CPU. One might say the GPU code is little more than a math library attached to the engine, but positioned at key performance bottlenecks. When those run under the new APIs, the GPU operates more like a vector complement to the CPU's vector engine than the older cards could. I long ago predicted that what we now consider a separate GPU compute engine would eventually be folded into the CPU's vector processing system with much higher parallelism than CPUs offer today. Perhaps I'm dreaming some, but AMD is closer to that than anyone already, just not in CUDA.
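For anyone curious, the pattern is small enough to show. This isn't the PhysX header verbatim (I believe the actual macro is spelled PX_CUDA_CALLABLE and covers a few more cases), but it captures the idea of one function body serving both the CPU build and the CUDA build:

```cpp
// When the file is compiled by nvcc, the marker expands to __host__ __device__,
// so the same routine exists on both sides; a plain C++ compile gets nothing.
#if defined(__CUDACC__)
    #define CUDA_CALLABLE __host__ __device__
#else
    #define CUDA_CALLABLE
#endif

struct Vec3 { float x, y, z; };

// A typical "bulk math" routine: written once, callable from CPU code or from CUDA kernels.
CUDA_CALLABLE inline Vec3 integrateVelocity(Vec3 v, Vec3 a, float dt)
{
    v.x += a.x * dt;
    v.y += a.y * dt;
    v.z += a.z * dt;
    return v;
}
```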

You can measure the performance difference, and what you get is an order of magnitude higher performance on those functions in the GPU build, despite the cost of data traffic over the bus. That data is packed specifically to transport quickly, unlike the rest of the scene data, which uses more complex structures.

I can see at most around 20K independent, awake objects colliding and moving under momentum in a CPU-based PhysX simulation (a typical 4-core Intel machine at 4 GHz) at 30 FPS, but upwards of 100K on the GPU implementation at the same FPS.

I do think, though, that physics modeling is not well attended to in many titles. I believe, without much evidence beyond my own use of various physics engines, that some developers overstuff the engine needlessly. For example, I've never found anyone using the concept of LOD in physics, only in graphics. In my own work I've used multiple engine groups, so that more distant objects are simulated in a separate group run at a lower frame rate (and therefore consuming fewer resources, while still keeping real time). When objects are far away, you can't see or interact with them in any way that would let you notice the difference between 60 FPS and 30 FPS (or even 20 FPS), because many of them don't move a fraction of a pixel over several frames.
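As a sketch of what I mean by physics LOD, assuming two scenes along the lines shown earlier (the names and rates here are arbitrary):

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

// Near objects step every frame at 60 Hz; the distant group only accumulates time
// and steps at ~20 Hz, which keeps it real-time but far cheaper.
void stepPhysicsLod(PxScene* nearScene, PxScene* farScene, float frameDt)
{
    static float farAccumulator = 0.0f;
    const float  nearStep = 1.0f / 60.0f;
    const float  farStep  = 1.0f / 20.0f;

    nearScene->simulate(nearStep);
    nearScene->fetchResults(true);

    farAccumulator += frameDt;
    if (farAccumulator >= farStep)
    {
        farScene->simulate(farStep);
        farScene->fetchResults(true);
        farAccumulator -= farStep;
    }
}
```

Nothing about this is PhysX-specific; it's just bookkeeping around which group gets stepped on a given frame.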


As we have seen time and time again, these vendor locked proprietary features typically don't become standard, and usually only survive so long as the owner invests in the software (e.g. by sponsoring games, etc.) not because of natural market forces.

This is where we're in full agreement, and your point is exactly spot on.

At this point, though, the PhysX code is no longer proprietary. It could be nearly trivial to produce an OpenCL- or Metal-based implementation.

In the modern APIs (Vulkan, Metal, DX12) there is less separation (less of a client/server model) between the CPU and the GPU compute resources. With high-end GPUs embedded, we may well see an era, soon, where GPU compute units are no longer called GPUs; they'll be more like an ultra extension of the vector processing system. From a technical viewpoint, we really only need the last 2D (fragment) stage of the video pipeline to be aimed right at the output signal to the displays. Everything else going on in the display pipeline is math, and so is all the other compute work required of the GPU to this point.
 
I just wish there was a way to run that ancient Cellfactor demo. Sure, you can launch it with PhysX disabled in the .ini, but I want full PhysX!
And I always wanted to run the "Hangar of Doom" demo, but it never launched :(
PhysX came and went, and I never got a chance to play those.
 
The Cellfactor demo was awesome. Way better than any other physics demo that has come out, before or after.
 
First time I had a PhysX game was one of the Sherlock Holmes games. Still got it and it even has the Ageia installer on it. Had it barely a month before Nvidia bought them out. I thought the idea was awesome and it became a feature I sought after when getting games.

JVene & cybereality: Thanks for the detailed explanations. I hope GPU acceleration does make a comeback. It's been pretty depressing since 2015 watching that go away and the CPU take over. I had great success until then with my dedicated cards, but these days even when I retest my old games that supported it, they don't seem to work anymore. The NV PhysX indicator still shows GPU for those games, but AB shows 0-1% usage and honestly zero performance gain over not using it.
 
With multicore CPUs finally becoming commonplace, I expect more CPU-accelerated physics than GPU-accelerated.
 
While I agree we'll likely see a lot of CPU-based physics in lots of games, I'd point out that while CPU core counts grew from dual, through quad, to 16 cores or more, the GPU has grown from a few dozen to a few thousand cores at a time, which means some physics simulation work just can't compare between CPU and GPU-accelerated implementations.

That said, the GPU is usually busy already, so it comes down to what the simulation requires versus what the rendering requires.
 
It's the most widely used physics middleware in games today. You just don't see it because it's tied to the engine and there are no options for it that are visible to the user anymore.
 
I just wish there was a way to run that ancient Cellfactor demo. Sure, you can launch it with PhysX disabled in the .ini, but I want full PhysX!
And I always wanted to run the "Hangar of Doom" demo, but it never launched :(
PhysX came and went, and I never got a chance to play those.

There may be hope; check out this page, it may help you get it running:

https://pcgamingwiki.com/wiki/Glossary:PhysX

I am currently playing some older PhysX games, and it can sometimes be difficult to figure out what PhysX software is needed, but if you keep looking you will find someone else has had the problem, and usually it can be fixed by installing some older PhysX software that then allows the game to work!

For example, Turning Point: Fall of Liberty (yeah, not the greatest game, I know) wouldn't even start and just crashed, but after installing some older PhysX software it runs fine now.
 
It's the most widely used physics middleware in games today. You just don't see it because it's tied to the engine and there are no options for it that are visible to the user anymore.
You are correct, but it's not in the same context as originally. Now it's just another physics engine that can run in software or with some GPU acceleration. It's no longer the separate hardware bits that it used to be.
 
Wasn't meant to be good or bad, was just a statement. It's good it's finally open source so others can use it, but they tried to lock it up for a long time before they had to change gears.
 
You are correct, but it's not in the same context as originally. Now it's just another physics engine that can run in software or with some GPU acceleration. It's no longer the separate hardware bits that it used to be.
In my opinion I'd welcome any physical simulation in a game, as it does add something for me, so PhysX, software, or any other solution is welcome.

Checking the internet for the past few days, there is so much info and fighting about PhysX that it's a plethora of entertainment. A search led me to [H]ere.

PhysX is interesting, and with it being open sourced now, maybe some charitable person will port it to be accelerated on AMD hardware, or improve its speed even on the CPU; one can hope.
 
In my opinion I'd welcome any physical simulation in a game, as it does add something for me, so PhysX, software, or any other solution is welcome.

Checking the internet for the past few days, there is so much info and fighting about PhysX that it's a plethora of entertainment. A search led me to [H]ere.

PhysX is interesting, and with it being open sourced now, maybe some charitable person will port it to be accelerated on AMD hardware, or improve its speed even on the CPU; one can hope.

I didn't mean it as a bad thing, as mentioned, more that it took them so long. Support a single vendor, or support something for all vendors. Especially after all the antics: purposefully degrading CPU performance, disabling PhysX if it detected an AMD GPU, etc. If they hadn't changed course it would have completely disappeared; especially since Nvidia stopped producing PhysX hardware, it has no benefit over the other competitors. By open sourcing it and allowing compatibility, it is still in the market and doing well. My point was that it's not the same PhysX from a few years ago, though, in a good way.
 
I didn't mean it as a bad thing, as mentioned, more that it took them so long. Support a single vendor, or support something for all vendors. Especially after all the antics: purposefully degrading CPU performance, disabling PhysX if it detected an AMD GPU, etc. If they hadn't changed course it would have completely disappeared; especially since Nvidia stopped producing PhysX hardware, it has no benefit over the other competitors. By open sourcing it and allowing compatibility, it is still in the market and doing well. My point was that it's not the same PhysX from a few years ago, though, in a good way.

Yes, as a gamer it's better for all of us if there's a level playing field for features, as we all get a great experience and less buyer's remorse when we see some cool feature our card doesn't support. All the closed stuff breeds more anger. Quite frankly, let the competition fight on a level playing field and let the best competitor win; and if AMD does it 20% slower but the card costs a little less, that's a good trade-off and better for the whole ecosystem.
 
Yeah, but Nvidia was pushing their hardware at the time, so if they made it run well in software they would lose sales. And disabling a feature if it detected a competitor's card was just plain stupid... I guess they were afraid you would buy their cheapest card for PhysX and a high-end AMD card for the graphics? It's Nvidia; it's not the first time they tried to lock something down and it won't be the last. One day something might even stick, lol.
 
Yeah, but Nvidia was pushing their hardware at the time, so if they made it run well in software they would lose sales. And disabling a feature if it detected a competitor's card was just plain stupid... I guess they were afraid you would buy their cheapest card for PhysX and a high-end AMD card for the graphics? It's Nvidia; it's not the first time they tried to lock something down and it won't be the last. One day something might even stick, lol.

A lot of devs agreed; check this post from Epic in 2014 concerning the Unreal Engine:

Hi Daminshi,

Darthviper is correct. This has been discussed multiple times here on the forums. Nvidia's Apex and PhysX is included with the engine in a cross-platform manner so that it runs equally on AMD and Nvidia GPUs. This is done by using it on the CPU rather than GPU. Any of Nvidia's tech included in their Gameworks is integrated by the developers making their games.

If you have any questions or concerns feel free to ask.

Tim


https://forums.unrealengine.com/com...ary-BS-Are-there-any-alternatives=&viewfull=1

I think ultimately Nvidia did the right thing, but like you say it was very late. I agree they probably did some sweetheart deals with Devs for hardware etc. to get them to include GPU PhysX since they were basically forcing most gamers to stick to Nvidia and maybe even invest in 2 GPUs, one for PhysX.

I've been testing lots of GPU PhysX games and CPU PhysX (Unreal Engine) games, and the CPU-centered PhysX games are very good; the GPU-accelerated ones may be a bit heavier on effects and particles, but both are really great.

I started looking into this because I have two gaming PCs, one with a 280X and one with a 980 Ti, and for games that don't use any PhysX the AMD card is great, and I have a PhysX-capable card too. I tried "CPU PhysX acceleration" with a 4770K for a game meant for GPU PhysX, Metro 2033, and it's OK except when lots of stuff is going on; the 980 Ti is obviously better. AMD still has a metric ton of great games, and any "CPU PhysX" (Unreal Engine) or other physics engine (Havok, Bullet) games work great, so it's still a great card.
 
Another Nvidia-specific API that nobody wants anymore... You'd think they would eventually get the point and start trying to move the industry forward instead of locking it down.

That would be the day, wouldn't it, when NV stops their BS and truly, with all their abilities, works WITH the industry at large, instead of the "we have to be the only one / we are the world leader" mantra they've been following, no matter what they (Jensen Huang) did or are doing.

Such a crying shame. Beyond anything else, at one point in time Radeon cards were "able" to run PhysX on the GPU, and did a much better job of it than NV's own hardware managed, while still having performance to spare, and in some cases at a good chunk lower cost to the end consumer (a lower price bracket of GPU).

Of course NV didn't want that to happen AT ALL (the same went for tessellation initially).

The industry needs to grow the fudge up; the hour is getting late, as they say. No matter how "amazing" something is or could be, it loses much of its luster when those who hold the keys crap all over everyone's floor and expect to be paid big $$$$ for it.

I personally don't have "faith" in NV to "play fair" in anything; they have shown their true colors time and time again, at any cost, even when THEY didn't pay to make it happen in the first place.. they should be asking "how the heck did you manage to get so much more than we were able to?"

Yeah right, lol.

I very much liked the concepts of tessellation / multi-GPU and such things, especially when done in a software/hardware-agnostic fashion, so that no matter whether you have AMD/ATi, Nvidia, or soon enough Intel, it just "works": sometimes better, sometimes not as well, but it worked. Not so much when tantrums are thrown because so-and-so does it so much better, or "they had a head start, SOOO unfair" crud.

The Batman games use PhysX as well, and apparently do a fantastic job with it. Shadow of War and Shadow of Mordor I believe do as well (software PhysX on the CPU, though I'm not certain).

It'll be a fine day when the industry makes sure the "bullying methods" stop... like that's going to happen anytime soon; at the same time, it can't happen soon enough.

IMHO
 
At this point I'd say SLI is more alive than PhysX. At least truly hardware-based PhysX, anyway. CPUs and engines are reaching new levels and a dedicated GPU just isn't needed, or so it seems. I would still love to see some tests to truly prove it, but current drivers show hardware GPU support even though MSI AB shows 0.01% usage when I tried pairing a 1080 with a 2080 Ti for both Batman: Arkham City and Metro: Last Light. It's dead.

With modern releases like Metro Exodus, Shadow of the Tomb Raider, and Red Dead Redemption 2, I can see CPU spikes for particles, objects, and effects, but unfortunately that means nothing because of what's happened to PhysX.
 
My favorite game to use PhysX was Borderlands 2. The fluid simulations it had were very impressive at the time and I haven't seen any other game really try to replicate it. It also used it for tons of particle effects and cloth physics. Heck, part of the reason Borderlands 3 didn't excite me much was because it didn't have PhysX which to me was a big part of what defined Borderlands 2.
 
I have to add that PhysX also works great for the particle effects in Control, which I assume is running on the CPU.
 