PhysX: a dying tech?

Only a small step forward from where we were then? I mean, all this talk, all this work, all this time and all we get is a few extra waving flags or bouncing boxes.

Yep. Blame people shifting to laptops and iPads en masse, while the industry cleans up people's wallets exploiting 5-year-old in-order/DX9-class console hardware. People like us with tricked-out desktops are quickly becoming a thing of the past. Everybody I know has switched to laptops and game consoles.
 
Those are the same people who spend less on computers anyway and never buy the cutting-edge tech whose price premium funds research.

The major change is that companies which used to run workstations are now handing out laptops, because then their workers can take them home. The funny thing is, laptops are noticeably slower than comparably priced desktops by a fair margin, so any additional work done at home may be canceled out by that inefficiency.


As far as OpenCL for physics goes, it'll happen, but may still be limited to eye candy. The communication lag increase exists for any GPU physics solution.

Normally, it is:
CPU -1-> GPU ---> output
With GPU physics, it becomes:
CPU -1-> GPU -2-> CPU -3-> GPU ---> output

Triple the communication latency: not good for first-person games, at least.
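To put that in concrete terms, a GPU-physics frame ends up looking roughly like the sketch below (CUDA, with made-up names like stepPhysics and frame; a toy illustration, not any real engine's code):

```cuda
#include <cuda_runtime.h>

// Toy physics kernel: gravity plus Euler integration, one thread per body.
__global__ void stepPhysics(float3* pos, float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// One frame: note the three hops from the diagram above.
void frame(float3* d_pos, float3* d_vel, float3* h_pos, int n, float dt) {
    stepPhysics<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);  // hop 1: CPU -> GPU
    cudaMemcpy(h_pos, d_pos, n * sizeof(float3),
               cudaMemcpyDeviceToHost);                          // hop 2: GPU -> CPU
    // ... gameplay code reads h_pos here (collisions, triggers, AI) ...
    cudaMemcpy(d_pos, h_pos, n * sizeof(float3),
               cudaMemcpyHostToDevice);                          // hop 3: CPU -> GPU
    // ... render from d_pos ...
}
```

Each of those cudaMemcpy calls crosses the PCIe bus, which is where the latency piles up.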
 
*sticks thumbs in ears* lalalalalalalalalalalalalalalalalalalLALALALALALAlalalalalalala! ;)
 
The problem is not so much that this CPU-GPU communication is necessary: inter-core CPU communication is very common and absolutely not an issue, as it's on the order of nanoseconds, which even for a GHz core is basically real-time.

The real problem is the PCIe bus which, from the POV of the CPU, is as dynamic and agile as a glacier at hundreds of ns. Even system RAM is pretty zippy in comparison (~50 ns). On top of that, VRAM access from the GPU also takes on the order of hundreds of ns: http://forums.nvidia.com/index.php?showtopic=80451

While the VRAM issue can be mitigated somewhat by smart access patterns for the relevant data, and the PCIe transfer latency can be dealt with via asynchronous memory transfers, it's hardly ideal. I have no idea whether OpenCL offers this kind of granularity anyway, as it seems to be a far more abstract API from what I have seen. My only hands-on experience has been with CUDA, which is what PhysX uses at this point for GPU acceleration.
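For what it's worth, here's roughly what that asynchronous-transfer trick looks like in CUDA; a minimal sketch assuming pinned host memory and a made-up updateChunk kernel, not PhysX's actual code:

```cuda
#include <cuda_runtime.h>

__global__ void updateChunk(float* p, int n) {  // stand-in physics kernel
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 0.016f;                  // dummy update
}

// Split the data into chunks and alternate between two streams, so the copy
// for one chunk overlaps the compute for the previous one, hiding PCIe latency.
// h_data must be pinned (allocated with cudaHostAlloc) for async copies to work.
void pipelinedStep(float* h_data, float* d_data, int n, int chunks) {
    cudaStream_t streams[2];
    cudaStreamCreate(&streams[0]);
    cudaStreamCreate(&streams[1]);
    int chunk = n / chunks;                     // assume n divides evenly
    for (int c = 0; c < chunks; ++c) {
        cudaStream_t s = streams[c & 1];
        float* h = h_data + c * chunk;
        float* d = d_data + c * chunk;
        cudaMemcpyAsync(d, h, chunk * sizeof(float), cudaMemcpyHostToDevice, s);
        updateChunk<<<(chunk + 255) / 256, 256, 0, s>>>(d, chunk);
        cudaMemcpyAsync(h, d, chunk * sizeof(float), cudaMemcpyDeviceToHost, s);
    }
    cudaDeviceSynchronize();                    // one sync per step, not per chunk
    cudaStreamDestroy(streams[0]);
    cudaStreamDestroy(streams[1]);
}
```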

Something like what AMD is doing with Bulldozer, essentially integrating a GPU core into the CPU die, is very interesting, as it could make the matrix calculations that physics code is filled with a very low-latency affair, and thus allow far higher throughput. I have no doubt it would crush even the fastest regular CPU at physics calculations, due to the large number of FP matrix operations Bulldozer and similar architectures can execute per cycle.

In short, there is hope for the latency issue, but I'm not sure OpenCL will play any significant role in the physics revolution due to its high-level approach. CUDA is becoming a lot easier to work with as well, even allowing C++ code to run natively instead of just a restricted subset of C. It's hardly as scary as it used to be. If only AMD would start adding CUDA support to its drivers :)
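To illustrate the kind of work I mean: rigid-body physics is full of batched small-matrix math like the toy kernel below (e.g. rotating local vectors into world space, one body per thread); layout and names are mine, not PhysX internals:

```cuda
#include <cuda_runtime.h>

struct Mat3 { float m[9]; };  // row-major 3x3 rotation matrix

// Apply each body's rotation matrix to a vector: out[i] = R[i] * v[i].
__global__ void rotateVectors(const Mat3* R, const float3* v,
                              float3* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const float* r = R[i].m;
    float3 a = v[i];
    out[i] = make_float3(r[0]*a.x + r[1]*a.y + r[2]*a.z,
                         r[3]*a.x + r[4]*a.y + r[5]*a.z,
                         r[6]*a.x + r[7]*a.y + r[8]*a.z);
}
```

Thousands of these tiny matrix ops run per frame; a wide on-die vector unit could chew through them without ever touching the PCIe bus.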
 
I don't foresee AMD or Intel ever buying a CUDA or PhysX license. Intel and AMD have a vested interest in not using it: Intel wants you to do it on their CPU, and AMD wants you to do it either on their CPU or on their GPU via OCL. I am not sure Nv even wants to license it anyway. The email and the one unofficial offer by a PR guy at some industry show hardly convinced me that they wanted AMD and Intel in on it. They have a nice lucrative supercomputing niche, and PhysX as a marketing tool, which, while prolly not profitable or even all that effective, means they may not care if it ever gets licensed at all.
 
Well, the situation would improve a lot if AMD, for example, would also offer a competing physics API, even if it only ran on their own GPUs. It would at least make GPU-accelerated physics a standard feature, no matter which GPU is installed. Intel doesn't even have an OpenCL/GPGPU-capable GPU/IGP, so no competition can be expected from that side.

During the late 90s/early 00s things were quite messy with regard to the graphics API situation. It took quite a few years to settle down from half a dozen competing APIs all vying for superiority into the situation we have now. All we need for GPU physics at this point is for a similar competitive environment to form :)

I think it would be good for AMD too, as they could show that their cards are just fine for GPGPU/HPC purposes as well. It should also increase the maturation rate of OpenCL, as at this point it's basically only Apple using it in its Macs.
 
The late 90's API wars sucked. It took 3DFX's death and MS's DX to end that little API war. I am not so sure I want MS and Direct Compute to kill CUDA and relegate OCL to alternative OSes, certain minority niche apps, and hardcore open-source developers, the way MS and DX did with Glide and OGL. Yeah, right now Direct Compute is a lot like DX3, but we all know where it eventually went.
 

You're thinking too low-level. Physics libraries are built on top of those languages :) PhysX can be implemented on top of CUDA, OpenCL, Brook+, DC... ad nauseam. The thing is that there is no competition for GPU accelerated physics. PhysX is the only game in town at this point. Havok and Bullet are firmly limited to CPU physics at this point, ergo they don't compete with PhysX.

On a sidenote, I don't see DirectCompute going very far. It's limited to Windows at this point, which means the most advanced use it will see is to add fancy effects to games, somewhat like how GLSL is used with OpenGL to write shader routines. Using OpenCL would be more logical from a pure platform-independence point of view, and CUDA makes more sense where performance matters.
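The layering looks something like the sketch below: the physics library talks to a thin backend interface, and CUDA (or OpenCL, or DC) just implements it. All names here are hypothetical, not PhysX's actual internals:

```cuda
#include <cuda_runtime.h>
#include <cstddef>

// What the physics layer codes against; it never sees CUDA/OpenCL directly.
struct ComputeBackend {
    virtual void* alloc(size_t bytes) = 0;
    virtual void upload(void* dst, const void* src, size_t bytes) = 0;
    virtual void integrate(void* bodies, int n, float dt) = 0;
    virtual ~ComputeBackend() {}
};

__global__ void integrateKernel(float4* b, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i].y += b[i].w * dt;  // toy update: w holds vertical velocity
}

// One of several possible backends; an OpenCL or DC one would slot in the same way.
struct CudaBackend : ComputeBackend {
    void* alloc(size_t bytes) override {
        void* p = nullptr;
        cudaMalloc(&p, bytes);
        return p;
    }
    void upload(void* dst, const void* src, size_t bytes) override {
        cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    }
    void integrate(void* bodies, int n, float dt) override {
        integrateKernel<<<(n + 255) / 256, 256>>>((float4*)bodies, n, dt);
    }
};
```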
 
PhysX on the CPU just makes way more sense, and that's why Havok is doing so well.
 
I am not so sure PhysX being the only game in town means much. 5 years have gone by and PPU/GPU PhysX is still nearly where it started, even though CPU PhysX has gained significant market penetration. I think some gamers and Nv give a shit about GPU-accelerated physics, and that is it. The AAA devs don't seem to give a crap, Intel certainly does not, and all AMD seems to do is pay lip service (coming someday soon, maybe) to GPU-accelerated physics.
I think CPU PhysX is here to stay as long as Nv wishes to support it. GPU PhysX, and GPU physics in general, looks like it is going to stay in its tiny niche. If we had 5 or 6 AAA titles a year that Nv did not have to code GPU PhysX into themselves, it might be a different story. But what we have is Nv coding GPU PhysX into 2 AAA titles a year, for what seems like marketing purposes only.

I did not think DX was going to go anywhere either. We had Glide and OGL, after all.

Time will tell...
 
I think that new architectures like Bulldozer and its APU concept are going to make 'GPU'-accelerated physics a reality. Just think about it, would you want to use the sluggish and scarce SSE units of the CPU, or move a few nm over on the die and use the massively-parallel vector processor embedded into the bloody die? Of course, at that point I'm not sure what the distinction between 'CPU' and 'GPU' physics is going to be, if any :)

I am convinced that graphical realism has hit a wall, and that we need real physics if we want to bring added realism to games. And lower development budgets while we're at it.

BTW, regarding DX, it really didn't go anywhere. It's firmly stuck in Windows land. The XBox doesn't use it, nor does any other platform. In comparison virtually every platform supports OGL (ES).
 
Both Xboxes used a subset of DX, iirc. Some of the functionality of Xenos made it over to DX9 and later DX10, though the actual Xenos unit lacked a lot of hardware virtualization (a big reason for the DMA --> CPU path).
 
Not a subset. The XBox APIs are inspired by the DX APIs, but they do not share any code. This does make porting between DX and the XBox APIs easier, but it's not the same code. In particular, MSFT would be stupid to use DX on the XBox: DX is meant to abstract the hardware, while with their consoles the hardware is fixed, so they can completely optimize calls and routines. Supporting the straight DX API wouldn't be impossible, but it would be very impractical.

Many console and handheld games use the native APIs instead of using higher-level APIs, including OpenGL ES, as it frees up more resources.
 
Huh, I guess I was fooled by the extreme ease of porting a project between console and Windows with XNA 2 through 3.1 (the last version for the Zune HD :().

Thank you for the info! :)
 
PhysX is now basically a physics engine that supports CUDA and the CPU. If they expand that to support OpenCL or DirectCompute, I see it staying around for a long time. If they keep supporting only Nvidia cards, I see PhysX being hamstrung compared to the competition.
 
You're welcome :)

XNA does make it very easy to develop for multiple Windows platforms, yeah. It has all the low-level and library stuff worked out for you, taking away the burden of the actual porting :)
 
PhysX? I thought it was scrapped years ago

Why? For having a monopoly position in GPU-accelerated physics and doing very well in CPU-only physics as well? What kind of business classes did you take?
 
One could be forgiven for thinking PhysX was dead, for all the relevance it has these days. I mean, have you seen a title where PhysX was implemented in a way that got anyone excited?
 
I think we already went over that one: the one title which uses PhysX to its full extent is also a title which cannot be played at all by anyone not running an NVidia GPU, and which will give shareholders an instant heart attack. Hence we're stuck with super-conservative physics engines, which try their utmost not to offend or exclude anyone, regardless of what pathetic excuse for a CPU and/or GPU they might be running.

And of course today's consoles, aside from the PS3, cannot even do anything approaching real physics, which in today's world of ports galore means a near-death sentence for real physics. Next generation's consoles (2012?) should have real physics capability, and we should see it take off then. Especially at the pathetic 1080p 'HD' resolutions of modern TVs, there's not much more that can be improved graphically, hence physics will be the logical next thing to throw more hardware and processing power at.
 
No question, not arguing that one at all. I've made it clear earlier what my thoughts on console ports are. All I was saying is that, from the outside, someone with no knowledge might very well think PhysX disappeared after the introduction of the original PPU hardware, because that's effectively where we are with hardware-accelerated physics. The initial buzz around the PPU was probably one of the biggest boosts physics in gaming ever got, but ironically all we've wound up with since then are CPU-based effects.
 
The worst part about CPU-based physics calculations is that they basically limit one to rigid-body effects, as something as basic as cloth or extensive soft-body calculations would make even an i7 980X grind to a halt. It's both amazing and totally expected that people accept this situation as 'realistic' physics.
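To show why cloth in particular maps so well to a GPU: every particle in the grid can update independently, one thread each. A toy mass-spring Verlet step (my own sketch, nothing like PhysX's actual cloth solver):

```cuda
#include <cuda_runtime.h>

// One thread per cloth particle on a w x h grid. Launch with a 2D grid of
// 2D blocks, e.g. clothStep<<<dim3((w+15)/16,(h+15)/16), dim3(16,16)>>>(...).
__global__ void clothStep(const float3* pos, const float3* prev, float3* next,
                          int w, int h, float rest, float dt) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int i = y * w + x;
    float3 p = pos[i];
    float3 f = make_float3(0.f, -9.81f, 0.f);      // gravity
    const int nb[4][2] = {{1,0},{-1,0},{0,1},{0,-1}};
    for (int k = 0; k < 4; ++k) {                  // structural springs
        int nx = x + nb[k][0], ny = y + nb[k][1];
        if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
        float3 q = pos[ny * w + nx];
        float3 d = make_float3(q.x - p.x, q.y - p.y, q.z - p.z);
        float len = sqrtf(d.x*d.x + d.y*d.y + d.z*d.z) + 1e-6f;
        float s = 50.f * (len - rest) / len;       // Hooke's law, stiffness 50
        f.x += s * d.x; f.y += s * d.y; f.z += s * d.z;
    }
    // Verlet integration: next = 2*pos - prev + f*dt^2
    next[i] = make_float3(2.f*p.x - prev[i].x + f.x*dt*dt,
                          2.f*p.y - prev[i].y + f.y*dt*dt,
                          2.f*p.z - prev[i].z + f.z*dt*dt);
}
```

On a CPU this loop runs serially (or across a handful of cores); on a GPU a 256x256 cloth is 65,536 threads running at once.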

Then again Quake II looked like absolute junk as well, looking back at it now :)
 
I hope DirectCompute takes off. Right now GPU-accelerated physics is like the proprietary graphics APIs we all used 15 years ago (Glide, RRedline, and the like). Bullet is a great idea, but because it's open source it'll be hard to develop for, with all the forks that'll likely appear if software developers start to take it seriously.

Microsoft is large enough to get both hardware and software developers to play nice and agree on a unified solution, which is the only way GPU-accelerated physics will ever become anything more than a gimmick.
 
Well, it'll probably always be treated as a gimmick, depending on how devs utilize it.

The bigger issue with PhysX right now is that nobody has really taken it to the edge, in terms of either realism or physics. I mean, even Crysis refused to use it, despite both games being TWIMTBP titles ($2 million USD from nVidia for the second one makes it a surefire TWIMTBP game).
I don't think MS will really try to push in on this until the next Xbox is finished and shipping. Despite how many features DX11 has over the Xbox, there is still some functionality on the Xbox that is not in DX11; it's MS's personal stomping ground.

EDIT: all above is just my opinion
 
What? DirectCompute has got nothing to do with physics... plus it's far more proprietary than OpenCL (100% vs 0%), and is limited to Windows, whereas OpenCL and CUDA (and even Brook+, GLSL, etc.) are cross-platform.

You should say 'it'd be nice if MSFT made everyone decide on a single, open API to use for physics, based on the open OpenCL API' :)
 
MSFT will have an open physics API when they make DirectX cross-platform; I do not see this ever happening.

As far as PhysX itself goes, I see only one situation in which it will be fully implemented:

A console, perhaps most likely to be the PlayStation 4 if Sony keeps its NVIDIA ties, ships with an NVIDIA GPU with hardware PhysX support, and a developer makes a platform exclusive title for said console.
 
Stream processors on the CPU die will likely open up physics processing in general once those processors become popular.

However... even if all of the new generation of processors in 2011 support it, don't count on games really utilizing it until EVERY processor out there is one of these models. Multi-core processors came out HOW many years ago? And how many games actually USE more than 2-3 cores?
 
Games only using a limited number of cores is due to the limited parallel nature of a game's threads. The only exception is the physics engine: the Havok/Bullet physics libraries can easily saturate an i7's cores. That CPU PhysX only uses a single thread is due to legacy cruft, and will be corrected in the next version of the library.

What we'll see more of in the future is adaptive physics levels in games, like we're seeing for GPU PhysX-enabled games already, whereby the level of physical accuracy is determined by the system's processing power.
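Something like the sketch below is all 'adaptive physics' really takes; thresholds and names here are made up for illustration:

```cuda
#include <cuda_runtime.h>
#include <thread>

// Pick a particle budget based on whatever hardware is actually present.
int pickParticleBudget() {
    int devices = 0;
    if (cudaGetDeviceCount(&devices) != cudaSuccess || devices == 0) {
        // No CUDA GPU: fall back to a CPU budget scaled by core count.
        unsigned cores = std::thread::hardware_concurrency();
        return 2000 * (cores ? cores : 1);   // rigid bodies only, basically
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Scale the effect detail with the number of multiprocessors on the GPU.
    return 50000 * prop.multiProcessorCount;
}
```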
 
What else requires dedicated hardware for physics besides soft-body and fluid simulation? Why should developers waste hardware resources just to make water look more realistic than a pre-scripted movement?
 
Everything?

Simulating 100% accurate physics is extremely demanding, especially when there are tens or hundreds of thousands of particles that need to be accounted for. GPUs are inherently better at calculating physics.
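Some back-of-the-envelope math on why: brute-force pairwise interaction is O(n^2) per step, so 100,000 particles means on the order of 10^10 pair evaluations per step, times 60 steps a second. A toy kernel of my own to show the shape of the problem (one thread per particle, w holds mass):

```cuda
#include <cuda_runtime.h>

__global__ void pairwiseForces(const float4* p, float3* force, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 f = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < n; ++j) {           // every particle against every other
        if (j == i) continue;
        float3 d = make_float3(p[j].x - p[i].x,
                               p[j].y - p[i].y,
                               p[j].z - p[i].z);
        float r2 = d.x*d.x + d.y*d.y + d.z*d.z + 1e-6f;
        float inv = rsqrtf(r2);
        float s = p[j].w * inv * inv * inv;  // gravity-like attraction
        f.x += s * d.x; f.y += s * d.y; f.z += s * d.z;
    }
    force[i] = f;
}
```

Real engines cut the n^2 down with spatial partitioning, but even then the arithmetic is exactly the wide, regular FP work GPUs are built for.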

Too bad that consoles are the main dev platform, people are too cheap to add a dedicated physics card, and ATI declined nVidia's offer to allow PhysX to work on their cards.

You know, everyone complains about PhysX being proprietary, but I never hear those same people complaining about DirectX over OpenGL. OpenCL is fine, but unless I've been living under a rock, I haven't seen a damn thing relating to physics calculation with OpenCL. Shit or get off the pot: I want some realistic physics in my games, and I honestly don't see that happening on today's CPUs.
 
I don't really want 100% accurate physics in games; I want Hollywood physics in my games. Regardless of what we want, we are going to get more of what we have been getting until the next crop of consoles launches.

Few care that DX is proprietary, because MS does not make video cards. Too few people cared about *nix gaming enough to cause a stir that mattered, it seems. Besides, by the time *nix really started getting any notice, DX was fairly well entrenched.
DX succeeded where PhysX is languishing because DX was arguably more needed, and Windows had the market share to actually force a standard. Though to be honest, I am uncomfortable comparing the two: while not apples and oranges, they are at least tangerines and oranges.
 
Direct physics computation in games would be akin to playing a game while it's being ray-traced. The APIs strike me as clunky and inefficient currently. It's a lot easier to script behavior that visually looks like fancy physics simulation than to actually implement the software that would compute the behavior in real time (not to mention the potential that a lot of players will simply leave that feature disabled to account for lower-end PCs anyway). Sure, the player might notice some strange behavior when they kick a barrel over, but they're likely to forget about it within a few seconds as the game progresses.

I think it depends less on the hardware and more on the software, moving forward. Once the APIs get fine-tuned we'll start seeing more realistic stuff. I think the current incarnation of PhysX, not to mention its hardware support, will be even less relevant once that happens.
 