Accelerated physics for ALL Havok games?

Honestly, I don't know how to interpret that in terms of near-future end-user benefit... sounds promising, though, huh? Fingers crossed. On the other hand, when the next-gen GPUs come out I'm as likely to go Nvidia as I am to stay ATI; the most important criteria will be image quality and frame rate, with hardware physics support a distant third.
 
Old Havok games will not magically get GPU-accelerated physics. The point is that developers won't have to do anything special to support GPU-accelerated physics. That's great news for games going forward: developers who have purchased the GPU-accelerated middleware can apparently adopt it at any stage in the development cycle if they are already using the CPU-only Havok physics library.
 
And even if old Havok titles were magically hardware accelerated, the most you would notice is lower CPU usage and/or higher frame rates. They would not suddenly gain destructible environments, physically accurate explosions, water, cloth, hair, etc., if the devs didn't put them there in the first place.
 
I don't believe they will get any esoteric implementation of Havok. ;)

I do wonder if the Havok games will get GPU acceleration with the port to OpenCL. Looking at the post from MHouston, System Architect at AMD, he tells us this about how they accelerated Havok on the GPU:

AMD did the porting for the initial demos for GDC. We took the C functions that underpin the Havok API and ported them to OpenCL, i.e. some runtime OCL code and then the compute loops turned into OCL kernels.
http://forum.beyond3d.com/showpost.php?p=1281098&postcount=54

Looking at the quote in the OP, one can wonder whether all Havok games might get GPU accelerated through OpenCL.

If so, that would be great news. It might increase FPS in games played on laptops and low-end to mid-range systems.
 
From what I understand, Havok FX used a different API and would not have been compatible with the original Havok API. Unless AMD magically changed something, I expect this is still the case. PhysX never had this issue because it was designed from the start to use vector processors like the PPU, whereas Havok is CPU-only, without hardware acceleration as a design goal.
 
Havok FX? This is about the demo ATI made this year of Havok on OpenCL. It has nothing to do with Havok FX (which was meant to use SM3).
 
Don't you think that if Havok had been able to, they'd have made Havok FX backward compatible with Havok? As this is still Havok (the company) doing the dev work, they're likely to follow the same approach unless something magically changed. And no, OpenCL isn't a magic bullet.
 
I'd prefer that we don't venture into esoteric implementations of software. It makes the discussion hard, since I don't believe in magic. ;)

They are obviously not following the same approach as with Havok FX. It's not even mentioned that Havok on OpenCL will be its own product. Don't forget that Nvidia ported PhysX to CUDA back in the day and didn't create a new "PhysX FX".

AMD themselves did the porting to OpenCL, leaving the toolset unchanged (in other words, no Havok FX; the same toolset as the current CPU Havok). As mentioned, GPU acceleration doesn't have to be enabled from the developer's side. It has also been mentioned that queries can be made through OpenCL to determine the balance between CPU and GPU load. This leads me to believe that OpenCL is what determines whether to run on the CPU, the GPU, or both.
 
Well, OpenCL can be biased to prefer the CPU or the GPU, but from what I understand from the spec, and from some info in another PhysX-related thread, it doesn't give as fine-grained control over what runs where and how as CUDA and Brook+ do. This might explain why Nvidia's PhysX-on-OpenCL demo wasn't that impressive. TBH, OpenCL seems like a kludge with no real use in applications where performance matters.

But we'll see, I guess. It'll still take until next year before OpenCL gets released and begins maturing.
 

Of course you can select which compute device you wish to run the program on. I suggest you read the slide at the bottom of the page here:
http://www.khronos.org/opencl/

OpenCL is impressive, since it gives you more control over system resources.

It won't take until next year before OpenCL gets released. It has already shipped in Mac OS X Snow Leopard, and AMD supports OpenCL in it:
AMD’s video cards fully support the OpenCL version 1.0 implementation used in Apple’s just released Snow Leopard operating system.
http://www.slashgear.com/amd-radeon-graphics-cards-get-into-mac-computers-3154388/

Please note that a full OS has OpenCL implemented in it. That's a BIG vote of confidence from Apple.

I really don't know what you have against OpenCL. It's free. It's an open standard. It's multiplatform, and all major hardware developers aim to support it. AMD's Brook+ and Nvidia's CUDA are for specific hardware only.
 
Oh, I don't mind OpenCL, I just needed to see some more examples of its usage to be convinced of its usefulness. The Ars Technica review of Snow Leopard convinced me it's worth playing with. Sadly, I don't have any Macs here, nor am I inclined to buy one. So I guess I'll just wait until next year, when OpenCL becomes available on other platforms too :)
 
Fair enough. :)

I love open standards, though, that can work on all hardware. There are a lot of programs being developed on OpenCL now, and more will come. The thing about OpenCL is that it is a multiplatform solution for parallel computing that also gives access to all the compute devices in a machine through the same API. That makes distributing software much easier for developers.
 

Indeed, the Ars Technica review finally showed me its possibilities. It has been marketed as an alternative to CUDA/Brook+ and so on for ages, and the spec itself was quite vague on a lot of points as well, which all didn't help make me feel very interested. Now I am, though :)
 