Sandy Bridge Physics

Mlongrie
Anyone think it will ever be possible to use integrated graphics for a physics processor?
 
HD Graphics 2000/3000 do not have any GPGPU functionality available. The IGP can't be used for physics, folding or any other GPGPU applications.

Intel owns Havok. Ivy Bridge will have Intel's first GPGPU capable IGP. I think GPU acceleration for Havok may happen after the release of Ivy Bridge via DirectCompute and/or OpenCL.
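To put that in concrete terms, GPGPU physics through OpenCL just means running the per-object math in a kernel across thousands of work items. A toy particle-integration kernel might look like this; it's purely my own illustration, not anything from Havok:

Code:
// Toy OpenCL C kernel: one explicit Euler step per particle.
// Illustrative only -- not Havok's code or any shipping engine.
__kernel void integrate(__global float4* pos,
                        __global float4* vel,
                        const float4 gravity,
                        const float dt,
                        const uint count)
{
    uint i = get_global_id(0);
    if (i >= count)
        return;

    vel[i] += gravity * dt;   // accumulate gravity
    pos[i] += vel[i] * dt;    // advance position

    // Crude ground plane at y = 0 with a damped bounce.
    if (pos[i].y < 0.0f) {
        pos[i].y  = 0.0f;
        vel[i].y *= -0.5f;
    }
}

The appeal over CUDA-only PhysX is that the same kernel source could run on an Ivy Bridge IGP, a Radeon, or a GeForce.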

GPU PhysX is locked to Nvidia now and probably for a long time.
 
GPU PhysX is locked to Nvidia now and probably for a long time.

Which means it won't be used in games very much... No one wants to lock into one vendor. I'd like to see a deal between NVIDIA and Intel on PhysX. There are tons of people with integrated graphics, and PhysX would definitely have a much larger audience.
 
Which means it won't be used in games very much.
False, for a couple of reasons. On the PC side, it is an inexpensive option compared to its nearest professional-level physics middleware competitor, Havok, which only runs on CPUs. PhysX is not GPU-only; it can also run on the CPU without GPU acceleration. At this point I'd have hoped that would be obvious, since it is multi-platform from handhelds to current-gen consoles.

From Wikipedia: "PhysX technology is used by the game engines Unreal Engine 3, Gamebryo, Vision, Instinct, Diesel, Unity 3D, Hero and BigWorld[21] and is the physics platform of more than 300 video games,[2] such as Bulletstorm, Need for Speed: Shift or Castlevania: Lords of Shadow. Most of these games use the CPU to process the physics simulations."
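To underline the "runs on the CPU" point: in the PhysX 3.x SDK, a scene built with only a CPU dispatcher never touches the GPU. Roughly, from memory (so treat the exact calls as approximate):

Code:
// Rough sketch of a CPU-only PhysX 3.x setup; signatures from memory.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    // CPU dispatcher: simulation runs on two ordinary worker threads.
    // No CUDA context is created anywhere in this setup.
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Typical game loop step: simulate 1/60 s entirely on the CPU.
    scene->simulate(1.0f / 60.0f);
    scene->fetchResults(true);

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}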

TBH, it's not professional game developers who are getting their panties in a bunch over hearing the word PhysX or Nvidia. So who is...? ;) It's a business choice, and the PhysX middleware is a viable option for many developers. It's a bonus when some include hardware GPU acceleration, which not all do in any meaningful way.
 
Anyone think it will ever be possible to use integrated graphics for a physics processor?
It's absolutely unnecessary. As pointed out, Havok can provide physics equal to or better than PhysX and run it on the CPU with minimal FPS loss. PhysX itself has been shown to be capable of being processed on the CPU quite nicely. However, Nvidia codes it so it runs better on GPUs than CPUs. They do this on purpose, by the way (obviously).

Dedicated physics processing is a marketing gimmick. When it was released several years ago, CPUs were still struggling. Nowadays CPUs have so much processing power that add-ons for physics are 100% unnecessary. On-die physics will never happen, and dedicated physics processing for gaming is nearly dead.
 
It's absolutely unnecessary. As pointed out, Havok can provide physics equal to or better than PhysX and run it on the CPU with minimal FPS loss. PhysX itself has been shown to be capable of being processed on the CPU quite nicely. However, Nvidia codes it so it runs better on GPUs than CPUs. They do this on purpose, by the way (obviously).

Dedicated physics processing is a marketing gimmick. When it was released several years ago, CPUs were still struggling. Nowadays CPUs have so much processing power that add-ons for physics are 100% unnecessary. On-die physics will never happen, and dedicated physics processing for gaming is nearly dead.

How do you know on-die physics won't happen? AMD is working on an OpenCL version of Bullet Physics, and with Fusion already being a huge success, you can bet it will go in this direction.
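For reference, here's what a plain CPU Bullet world looks like today (my own sketch, not AMD's port); the point of an OpenCL backend would be to move the work inside stepSimulation() onto the GPU behind the same sort of API:

Code:
// Sketch of a stock CPU Bullet setup (not AMD's OpenCL port).
#include <btBulletDynamicsCommon.h>

int main()
{
    // Standard CPU pipeline: broadphase -> dispatcher -> constraint solver.
    btDefaultCollisionConfiguration collisionConfig;
    btCollisionDispatcher dispatcher(&collisionConfig);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase,
                                  &solver, &collisionConfig);
    world.setGravity(btVector3(0, -9.81f, 0));

    // One falling 1 kg sphere, half-meter radius, dropped from y = 10.
    btSphereShape sphere(0.5f);
    btVector3 inertia(0, 0, 0);
    sphere.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState motion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody body(1.0f, &motion, &sphere, inertia);
    world.addRigidBody(&body);

    // Step at 60 Hz for two seconds; all of this is CPU time today.
    for (int i = 0; i < 120; ++i)
        world.stepSimulation(1.0f / 60.0f);

    world.removeRigidBody(&body);
    return 0;
}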
 
How do you know on-die physics won't happen?
You're right, I don't know; nobody does. I just look at the market. It's declining, which to me means there's no incentive to do it. Plus, like I said, I feel it's unnecessary since CPUs are so powerful and capable now, especially with all the cores they are getting, many of which sit idle or are not fully utilized. Use them instead.
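To illustrate, splitting even a naive particle update across the cores is trivial these days (toy code, obviously not how a real engine is structured):

Code:
// Toy example: spread a particle update over all hardware threads.
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Each slice of the array is independent, so every core can
// integrate its own chunk with no locking at all.
void integrateSlice(std::vector<Particle>& p,
                    size_t begin, size_t end, float dt)
{
    for (size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;    // gravity
        p[i].x  += p[i].vx * dt;  // explicit Euler position update
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

void integrateAll(std::vector<Particle>& p, float dt)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;            // fallback if count is unknown
    size_t chunk = p.size() / cores + 1;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < cores; ++t) {
        size_t begin = t * chunk;
        if (begin >= p.size()) break;
        size_t end = begin + chunk < p.size() ? begin + chunk : p.size();
        workers.emplace_back(integrateSlice, std::ref(p), begin, end, dt);
    }
    for (std::thread& w : workers) w.join();
}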
AMD is working on an OpenCL version of Bullet Physics, and with Fusion already being a huge success, you can bet it will go in this direction.
That may be, but Bullet still doesn't support GPU acceleration. In the current market, what incentive is there?

I do think, though, that in the slim chance GPU-accelerated physics takes off, it will be something OpenCL-based, because Nvidia has proved to us that brand-exclusive just doesn't work.
 
It's absolutely unnecessary. As pointed out, Havok can provide physics equal to or better than PhysX and run it on the CPU with minimal FPS loss.
That was not stated above, and Havok certainly eats CPU cycles, just as running PhysX on the CPU does. ;)

Depending on the complexity, either option above can eat a significant portion of CPU time, which is why games meant to run on a wide range of hardware either offer options for physics complexity (fewer or more elements simulated) or just dumb down physics for the lowest supported hardware (a dual-core 2 GHz CPU, for example).
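That "options for physics complexity" knob usually amounts to something this simple (hypothetical names, just to show the idea):

Code:
// Hypothetical detail setting: same simulation, fewer elements on weak CPUs.
enum PhysicsDetail { DETAIL_LOW, DETAIL_MEDIUM, DETAIL_HIGH };

struct PhysicsBudget {
    int maxDebrisPieces;   // rigid bodies spawned by destruction
    int clothIterations;   // solver iterations per cloth step
};

PhysicsBudget budgetFor(PhysicsDetail d)
{
    switch (d) {
        case DETAIL_LOW:    return PhysicsBudget{  50, 2 }; // dual-core 2 GHz floor
        case DETAIL_MEDIUM: return PhysicsBudget{ 200, 4 };
        case DETAIL_HIGH:   return PhysicsBudget{ 800, 8 };
    }
    return PhysicsBudget{ 50, 2 };
}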
 
I do think, though, that in the slim chance GPU-accelerated physics takes off, it will be something OpenCL-based, because Nvidia has proved to us that brand-exclusive just doesn't work.

I hope that is the case, or that someone cleverly writes a PhysX-to-OpenCL emulator of sorts (a previous April Fools' joke). I have both Nvidia and ATI cards, and I just want the same experience across all platforms!
 
Unless Nvidia is pushed to compete with an OpenCL-compatible Havok that doesn't exist yet, I have pretty much given up hope that Nvidia will port PhysX to OpenCL any time soon. And even if PhysX does move to OpenCL, I wouldn't expect other GPU architectures to run it at nearly the same performance level; Nvidia could also keep a higher-performance native CUDA version in parallel.

At this point, I'm counting on Intel (Havok) having a general GPU-accelerated physics engine first.
 