Physics API war: AMD + Havok vs. Nvidia + PhysX vs. Intel Larrabee + Havok

LemonJoose

This whole physics-on-the-GPU war is going to be very interesting. It reminds me of the old 3dfx Glide vs. MS DirectX vs. OpenGL war. Glide was proprietary to 3dfx Voodoo cards, while OpenGL and DirectX were freely available for all GPU makers to support. 3dfx's Glide remained popular only as long as two things were true: 1) it offered a performance advantage over OpenGL or DirectX on Voodoo cards, and 2) 3dfx Voodoo cards had about 90% of the gamer market share for 3D video acceleration. Glide then died as OpenGL became better supported by id Software, and as Microsoft stepped in with big improvements to DirectX that made it a better-performing solution supporting all GPUs.

In this current physics battle, I see Nvidia attempting to use PhysX the same way 3dfx used Glide, since Nvidia currently has a market share advantage over AMD/ATI. Nvidia wants to convince developers to take its proprietary route to gain an advantage over AMD/ATI. The problem I see is that Nvidia's GPU gamer market share advantage over AMD/ATI and Intel is nowhere near as extreme as 3dfx's was. Also, developers hate having to support a bunch of different proprietary APIs, which is why they rapidly moved to hardware-independent APIs like DirectX and OpenGL.

The other problem for Nvidia is that Havok never required extra dedicated hardware the way PhysX did, and by all accounts Havok did a better job of running physics on the CPU than PhysX did even with dedicated PPU hardware, so Havok is already used by a larger number of PC games than PhysX is.

However, the difference between this physics API battle and the old 3D API battle is that Havok is also a proprietary API (owned by Intel), and there is currently no open physics API or MS-supported, hardware-independent physics API being put forward as a serious contender to provide a universal solution.

I believe that Intel views Nvidia as their main competitor on the graphics front and has decided to buy time for their Larrabee video platform by working with AMD/ATI to support Havok on AMD/ATI GPUs, hoping to increase Havok's market share in advance of releasing Larrabee, which is also sure to support Havok.

Whatever happens as this battle plays out, no matter which API (Intel's Havok or Nvidia's PhysX) appears to be on its way to winning, look for the one that appears to be losing to be released to the open-source community, in an attempt to neutralize the leader's advantage by providing an open, universal API that attracts more developer support. Then the pressure will be on the leading API to do the same. Because of this, I predict that any short-term proprietary physics advantage for either competitor will be very short-lived.

I believe AMD is actually in the catbird seat in this API battle right now: even though they don't control their own physics API, they can play Nvidia and Intel off each other and could potentially be the kingmaker. Or maybe they end up licensing both and become the only hardware maker to support both physics APIs. However, if Intel buys Nvidia, that could be very bad news for AMD, and it would be sure to spark antitrust protests from AMD.

I also imagine that if a proprietary physics API battle drags on for too long, developers will get frustrated and go to Microsoft and beg them to add physics instructions to their DirectX API.
 
I don't know that there will be a major war over gaming middleware. Development will still be constrained by the lowest common denominator, no matter who "wins." Do you count on everyone who buys your game having many CPU cores, or a GPU capable of CUDA/PhysX, or neither? With the diverse hardware configurations found in gaming PCs, games will need to scale down.
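To put that in concrete terms, here's a minimal sketch of the capability check a developer ends up writing. All the names here are hypothetical, invented for illustration, not from any real SDK:

```cpp
#include <thread>

// Hypothetical detail levels a game might expose in its options menu.
enum class PhysicsDetail { Low, Medium, High };

// Stub standing in for a real driver query (CUDA device enumeration,
// PPU detection, etc.); assumed for this sketch, not a real API call.
static bool HasGpuPhysicsAccelerator() { return false; }

// Pick the highest physics level the player's machine can sustain.
PhysicsDetail ChoosePhysicsDetail() {
    if (HasGpuPhysicsAccelerator())
        return PhysicsDetail::High;    // GPU/PPU path: cloth, fluids, debris
    unsigned cores = std::thread::hardware_concurrency();
    if (cores >= 4)
        return PhysicsDetail::Medium;  // multi-core CPU: more rigid bodies
    return PhysicsDetail::Low;         // the lowest common denominator
}
```

And note that anything gameplay-affecting has to behave identically at every level, which is why "scaling down" usually ends up meaning scaling down the eye candy only.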

Both options have strengths and weaknesses. Havok's view on using "CPU" physics falls in line with Intel's goals for Larrabee (it's a bit puzzling why AMD is lined up behind it, unless they also plan on making a Larrabee clone... or just want to appear to be doing something) and the general trend of increasing core counts per CPU in the future. Key words: in the future. PhysX's GPU physics takes advantage of a large installed base of cards to accelerate physics right now, and considering the speedups available, it is currently a much cheaper option. While Nvidia has a big lead in discrete GPUs, they are not the only game in town.

There is a difference between Glide and PhysX. Glide, while based on OpenGL, was not OpenGL. Even 3dfx's pitiful attempt at a miniGL driver was almost excusable for a while, because other early 3D manufacturers also had miniGL drivers before developing full ICDs. And Glide was jealously guarded by 3dfx as its own property; no one else was allowed to use it. CUDA and PhysX, by contrast, are available for other manufacturers to implement, as Nvidia has offered since the beginning of CUDA and Cg before it. Basically, only pride and development time prevent a competitor from adopting CUDA and GPU PhysX.

It's hard to count one or the other out, or even to see a replacement in the near future. PhysX has the advantage of being "free" or very low cost, but Havok is still an attractive and popular option for developers despite the price. I wouldn't be surprised if AMD is hedging its bets and developing a CUDA layer for its drivers. Nvidia doesn't make CPUs, but it automatically benefits from Havok because people run its GPUs on AMD and Intel CPUs anyway. In other words, it's not a disadvantage, because competing GPUs don't benefit directly from CPU physics either (i.e., plug either manufacturer's card into the same system and the CPU physics acceleration is the same).

While some might fret at this kind of stuff, competing development is exactly what's needed now. The sooner this stuff is out and being used in real games, the sooner it can be tweaked and improved. What works will prevail and what doesn't will die.
 
Havok and PhysX are different. Simply put, Havok is simpler, runs well on a CPU, is more widely adopted, and does what people think of nowadays as game physics (rigid body, ragdoll, kinematics, etc.); we've seen it in lots of current AAA titles. PhysX is more ambitious (cloth/hair/fluid dynamics/deformation in addition to the rigid body stuff), more suited to a GPU, and much less widely adopted. I would say PhysX has more potential, but unless Nvidia makes it more open, I'm not sure how many truly innovative integrations of PhysX into gameplay we'll see, since that would alienate half the market.
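For reference, the rigid-body path that both engines cover looks roughly like this in the PhysX 2.x SDK. This is a sketch from memory of the Ageia-era API, so take the exact names and signatures with a grain of salt:

```cpp
#include <NxPhysics.h>  // PhysX 2.x SDK header (Ageia-era)

void MinimalRigidBodyDemo() {
    // Initialize the SDK and a scene with gravity.
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    NxScene* scene = sdk->createScene(sceneDesc);

    // A dynamic 1 m box dropped from 5 m up.
    NxBodyDesc bodyDesc;
    NxBoxShapeDesc boxDesc;
    boxDesc.dimensions = NxVec3(0.5f, 0.5f, 0.5f);  // half-extents
    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&boxDesc);
    actorDesc.body = &bodyDesc;
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 5.0f, 0.0f);
    scene->createActor(actorDesc);

    // Per frame: kick off the step (software, PPU, or GPU, same calls
    // either way), then wait for the results.
    scene->simulate(1.0f / 60.0f);
    scene->flushStream();
    scene->fetchResults(NX_RIGID_BODY_FINISHED, true);

    NxReleasePhysicsSDK(sdk);
}
```

The point being that the calls don't change whether the work lands on the CPU, the PPU, or a CUDA GPU, which is what makes GPU acceleration a "simple drop-in" for titles already using PhysX on the CPU.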

More likely we'll see trivial PhysX implementations for visual effects rather than for gameplay-affecting uses, at least in the short term. I'm not sure how well PhysX will run on a quad-core CPU, but if that's good enough, and it's free to license, we may see more developers integrate it beyond the short term.

Another [H] thread about Physx with a good intro:
http://www.hardforum.com/showthread.php?t=1036230

[Edit: From Wikipedia, it appears as though PhysX is free for developers, though I'm not 100% sure about this.]
 
Yes, Havok is being investigated for use with ATI GPUs, but with Intel also entering the discrete GPU market at the end of this year, they will certainly push to make GPU hardware acceleration possible.

The big issue is that PhysX has been a catch-22: do you alienate a portion of your market (25% of the discrete GPU market is ATI, plus however much of the market is still on pre-8-series GPUs) and put in physics effects that change gameplay, or do you limit the physics processing but keep the game marketable to everyone?
 
Personally, I think PhysX is the more advanced engine. I don't have a PhysX card, but just from the demos I've seen, it's impressive (cloth, real-time fluids, bendable metals, etc.). It can scale, and a number of big games use it, like Gears of War, UT3, and GRAW. And that's just with the software driver, so it's not like developers would be cutting out half the market. ATI users could also have the option of buying a PhysX card if this physics engine ended up becoming popular. My money's on PhysX. I mean, remember Havok FX? What did they ever do with that?
 
PhysX runs just fine on a CPU as well (everything that Havok does, anyway; the advanced stuff needs the hardware). It's used in a lot of console titles for that very reason.
 
Yes, Havok is being investigated for use with ATI GPUs, but with Intel also entering the discrete GPU market at the end of this year, they will certainly push to make GPU hardware acceleration possible.

I was under the impression that discrete GPUs would be coming from Intel in 2009, not Q1, and that Intel themselves said this. That fact means little at this point between these two physics APIs, at least as far as the silicon itself goes.

As I see it, Havok and PhysX are two separate entities fighting to occupy the same space, although each approaches it differently. Havok is built into the game engine and is designed to run within the CPU's operational scope, making it streamlined and easy to implement. PhysX, on the other hand, is for complex physics, such as fluid dynamics, wind, and so forth, and because of this it requires more processing power than any "standard" general-purpose CPU can provide while still leaving adequate power for all the other tasks a CPU must handle during a game.

Both aim to accomplish the same task. PhysX was thinking a bit more out of the box with its dedicated hardware; the dedicated part sort of fell through, but it got picked up by Nvidia, and (apparently) their GPUs can handle everything the PPU did anyway. Both APIs can run on CPUs, which would be an optimal solution; whether that becomes the only solution depends on how much Intel puts into this, but as games advance, CPUs never seem able to do enough.

As far as I can tell, both platforms give developers full access to tools for integrating Havok or PhysX into their games, but both are really held back by seemingly weak levels of implementation and fanbase. Havok seems to have the upper hand when it comes to games that implement its technology, but PhysX has the edge when it comes to the actual technology at hand.

Nvidia has the upper hand, at least in the short term. Nine out of ten games I installed last year had an Nvidia "TWIMTBP" intro or a logo on the box; the only one for ATI was HL2 EP2 (I didn't play the rest of The Orange Box, but it's the same game engine and the same optimizations). That means Nvidia has a larger base of game developers on which to leverage its "optimizations," and one of those will be PhysX. Maybe it will run on every processing medium, but it may run better on Nvidia's solution; see what I'm getting at? Nvidia also has a larger budget with which to provide support and leverage itself into the gamer community, given the position AMD is in with expendable cash, and Intel's to-date nonexistent role in the API war, apart from buying the rights to Havok and then handing rights out to AMD. I'm not seeing how that maneuver is translating into games the way I'm seeing it with Nvidia.

It's too early to tell on this one. Nvidia is out to an early lead, one which depends on a constant supply of money and support to maintain, and which may run dry if Nvidia overextends, which I'm afraid they may be doing. Timing is going to be key, as is the list of games into which they choose to integrate their API.
 
Hope it's true, and that Intel allows Nvidia to use Havok in exchange for PhysX.
Havok is "CPU optimized." Havok dumped Havok FX. There is nothing to exchange, since Nvidia doesn't make x86 CPUs.
 
Poop then. I was hoping for a boost on the current and future Havok titles.
 
Well, here are my 2 cents. Physics will not advance on Nvidia's hardware alone, period. Game developers want their games to be scalable across as many systems as possible; simply put, there is no way Nvidia can make a CUDA-based physics API succeed on its own. They need ATI and Intel as well. Nvidia seems to be monopolizing the game, which is why I am starting to hate Nvidia; it has to be an open platform in order to succeed. Nvidia became a victim of its own success with G80 and G92, and they seem to be doing the same with PhysX. What they don't realize is that they are keeping the competition away from consumers. If they want to earn my respect, they should make it an open platform; ATI is a believer in open platforms as well. Nvidia seems not to care about making their technology mainstream and highly adoptable; they just want to play with their own toys like little kids. They need to grow up and act like adults. It's OK to make money, but it's not OK to keep the technology from advancing as rapidly as it should. They will only slow down PhysX.
 
That OC3d article seems kind of old, no? 6/10/2008 was a long time ago...
 
AMD could have bought Ageia. They chose not to; they thought Ageia was overinflating its worth after Intel bought Havok.
NVIDIA Responds To GPU PhysX Cheating Allegation
HotHardware: What has the adoption rate in the developer community been like, since you've rolled out Ageia technology and IP into your graphics product line?

NVIDIA, Taylor: The adoption rate has been fabulous among the developer community. Our surveys say that about 66% of developers not currently using PhysX are planning to adopt it in the future. Support for PhysX is across multiple platforms as well, including Xbox 360, PS3 and Wii, with moderate to no licensing fees. Put it this way, if you were a developer, limited to doing physics for your game or other application on the CPU, why wouldn't you support it on the GPU, especially if you can do better effects and more of them this way? Furthermore, if developers already use the CPU for PhysX then it will be a simple drop-in to GPU-enable the game.
 
I don't think they are cheating; they are simply taking advantage of their PhysX drivers. But they have no competition, and that score has no credibility in 3DMark Vantage when ATI hardware isn't running it, so you can't directly compare the two scores when one system runs the PhysX test on the CPU and the other runs it on the GPU. It's not cheating; it's just an unfair result without direct competition from ATI. It would be fair if ATI had CUDA-compiled PhysX API drivers for their cards.
 
I don't think they are cheating; they are simply taking advantage of their PhysX drivers. But they have no competition, and that score has no credibility in 3DMark Vantage when ATI hardware isn't running it, so you can't directly compare the two scores when one system runs the PhysX test on the CPU and the other runs it on the GPU. It's not cheating; it's just an unfair result without direct competition from ATI. It would be fair if ATI had CUDA-compiled PhysX API drivers for their cards.

And where might those be? What do you suggest they do, drop support for PhysX on their cards until ATI gets its shit together and writes a driver to take advantage of PhysX? And for one "game" which isn't even a game; it's just an over-glorified, over-hyped benchmark, like the ones built into every game made in the last few years, except this one just adds up the frames per second and gives you a number, and somehow that number gives the software legitimacy. Nvidia didn't tell Futuremark to factor physics calculations into the score for PPUs; Futuremark did that because Vantage is no longer just a GPU benchmark :rolleyes:. Nvidia just took advantage first, because they already had the framework from Ageia.

As mentioned before, the only things stopping ATI from being able to support PhysX are the time to write the drivers, and pride, and I think #2 might be getting in the way.
 
I also imagine that if a proprietary physics API battle drags on for too long, developers will get frustrated and go to Microsoft and beg them to add physics instructions to their DirectX API.

That's what will happen, eventually. Having two APIs, one for each brand of GPU, is a nightmare for developers. They either have to implement both APIs in their game engine, or choose one. If they choose one, the games will run much slower on the other brand of GPU. There will be "Nvidia games" and "ATI games," and while you'll be able to run an Nvidia game on an ATI card, it will run much slower and/or use dumbed-down physics. Not an ideal situation, because that would limit the market for new games.

Imagine if ATI used only OpenGL and Nvidia only Direct3D, and if you tried to run a Direct3D game on ATI, it defaulted to software rendering. That's the situation we might end up in with this "API battle."
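For what "implement both APIs" means in practice: most engines would hide the middleware behind a thin wrapper and pick a backend at startup. A bare-bones sketch follows (the interface and names are invented purely for illustration; this is neither Havok's nor PhysX's real API):

```cpp
// Hypothetical engine-side abstraction over whichever middleware ships.
struct Vec3 { float x = 0, y = 0, z = 0; };

class IPhysicsWorld {
public:
    virtual ~IPhysicsWorld() = default;
    virtual int  AddRigidBox(Vec3 halfExtents, Vec3 pos, float mass) = 0;
    virtual void Step(float dt) = 0;                 // advance one frame
    virtual Vec3 GetPosition(int bodyId) const = 0;  // read back for rendering
};

// One concrete backend per middleware, each translating these calls
// into its own SDK (double the integration and QA work):
// class HavokWorld : public IPhysicsWorld { /* hkpWorld calls */ };
// class PhysXWorld : public IPhysicsWorld { /* NxScene calls  */ };
```

Everything above that interface stays portable, but everything below it has to be written, tuned, and tested twice per game, which is exactly why developers would rather see Microsoft fold this into DirectX.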
 
That's what will happen, eventually. Having two APIs, one for each brand of GPU, is a nightmare for developers. They either have to implement both APIs in their game engine, or choose one. If they choose one, the games will run much slower on the other brand of GPU. There will be "Nvidia games" and "ATI games," and while you'll be able to run an Nvidia game on an ATI card, it will run much slower and/or use dumbed-down physics. Not an ideal situation, because that would limit the market for new games.

Imagine if ATI used only OpenGL and Nvidia only Direct3D, and if you tried to run a Direct3D game on ATI, it defaulted to software rendering. That's the situation we might end up in with this "API battle."


This won't happen, for two reasons. The first is that Havok is not GPU accelerated, so you could play any Havok game with an Nvidia GPU just fine. The second is that ATI will be supporting Physx in the future, at least according to the link posted on the previous page. So while there may be a war between Havok and PhysX, I wouldn't be too worried about games that only run on one company's hardware.
 