Any new PPUs?

Are there any upcoming PPUs or physics cards? Or is that not possible since Ageia was acquired?
 
PPU development went kaput with Nvidia's acquisition of Ageia earlier this year. You should only expect hardware PhysX to be implemented on Nvidia GPUs for the foreseeable future.
 
PPU development went kaput with Nvidia's acquisition of Ageia earlier this year. You should only expect hardware PhysX to be implemented on Nvidia GPUs for the foreseeable future.

Yup... dead tech = PPU.
 
The tech is not dead, it's just being implemented differently.

As far as dedicated PPUs go... I agree that we won't see anything new.

As far as PhysX support, stick with any Nvidia card 8 series and up and you won't miss out.

As far as general physics support in the future, not all of the cards have been dealt yet. There is still some fluctuation that can occur once DX10.1 & 11 come out (both of which are said to include physics support). It is unclear what will happen to the current PhysX implementations once that happens.
 
Well, I'd love to stick to an Nvidia GPU running as a physics card, but it's still not an option for anyone running a mobo with only one PCIe x16 slot, or for someone with an ATI card. While it's not a booming market for sure, it'd be nice to see an upgraded PPU that could at least compete with the newer Nvidia cards in PhysX mode.
 
PhysX was obsolete the day it was released. Anyone who didn't see this coming needs to reconsider their need for a Killer NIC or _______ Fatal1ty product.
 
PhysX was obsolete the day it was released. Anyone who didn't see this coming needs to reconsider their need for a Killer NIC or _______ Fatal1ty product.

And you want to add hardware accelerated physics support to simulations and games how? Go away, troll.

To the OP: nVidia GPUs are now the new PPUs. You can run a second GPU as a dedicated PPU, or combine both functions on the same GPU. There are also dedicated acceleration cards from nVidia (based on GPU cores...), but those are aimed primarily at GPGPU purposes and aren't very cheap.
 
Well, I'd love to stick to an Nvidia GPU running as a physics card, but it's still not an option for anyone running a mobo with only one PCIe x16 slot, or for someone with an ATI card. While it's not a booming market for sure, it'd be nice to see an upgraded PPU that could at least compete with the newer Nvidia cards in PhysX mode.

If that's the only PCIe slot you have, you could go with a PCI 8400 GS (they go for about $50). If you have other PCIe slots (x8, x4, or x1), your options get a lot larger (not all video cards have to run in an x16 PCIe slot). And don't forget, if you have a good Nvidia card you may not even need a second card (I have seen good results with an 8800 GTX running both PhysX & graphics).
 
People have been telling me that an 8800 GT is about the lowest you can go for a PPU-only GPU without compromising on performance.

*shrugs* Ultimately it all comes down to what games you play and at what resolution :)
 
There is still some fluctuation that can occur once DX10.1 & 11 come out (both of which are said to include physics support).

Whoever said that must not have understood the material.
Firstly, DX10.1 has been out for quite a while. The runtime is in Vista SP1, and the Radeon 3000/4000 series support DX10.1 completely. nVidia's GTX series support the most compelling DX10.1 features through an extension of DX10.

Secondly, DX11 will NOT have any kind of physics support. What it WILL have is support for a new type of shader, which is called 'compute shader'. Basically it's much like what Cuda already offers (and OpenCL will offer soon, now that the standard is complete), a way to perform generic processing on the GPU.
So in theory one could implement physics on the DX11 (or OpenCL) compute shader system much like how nVidia implemented it using Cuda.
The problem however is that writing a good physics library is a very complex and specialist task. So in practice most developers will still need to rely on middleware like Havok or PhysX to take advantage of it.
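
To make that concrete: here's a toy Cuda-style kernel (names made up by me, this is not code from PhysX or any real SDK) that does one Euler integration step for a batch of particles. This is exactly the kind of data-parallel number crunching a DX11 compute shader or an OpenCL kernel could express just as well:

[code]
#include <cuda_runtime.h>

// Toy example: advance n particles by one timestep under gravity.
// One thread per particle -- classic data-parallel GPGPU work.
__global__ void integrate(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y += -9.81f * dt;    // apply gravity to the velocity

    pos[i].x += vel[i].x * dt;  // advance the position
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host side: one thread per particle, 256 threads per block.
// integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);
[/code]

Kernels like that are the easy part though; the value of the middleware is in everything around them (collision detection, contact solvers, stable stacking).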

Problem here is that Havok is owned by Intel and PhysX is owned by nVidia. Both companies will gain nothing by porting their physics solutions over to DX11/OpenCL, because it already runs on their own hardware. By porting it to DX11/OpenCL it would run on all competing hardware as well. So there is no incentive to port it over.
Which means some other company will have to take up the task of developing a physics library. So far nothing has been announced.
 
People have been telling me that an 8800 GT is about the lowest you can go for a PPU-only GPU without compromising on performance.

*shrugs* Ultimately it all comes down to what games you play and at what resolution :)

That is pretty accurate. I had an 8400GS and it was horrible for PhysX. Barely better than running it on the CPU. I got hold of a 9800GT and saw a HUGE improvement.
 
@ Scali2

Yes, I agree with your comments. I was being too general in the post you quoted me on. Physics had not been officially stated to be included in DX... just that it was possible/implied through the programmable shaders.
 
OK, so let's say you have a board with two PCI-e x16 slots, two PCI-e x1 slots, and one PCI slot, and one of the x16 slots is taken up by video. What would you get: an 8400GS PCI, or another PCI-e x16 card to use in an x1 slot? Also, if you did use it in an x1 slot, which one would you buy?
 
OK, so let's say you have a board with two PCI-e x16 slots, two PCI-e x1 slots, and one PCI slot, and one of the x16 slots is taken up by video. What would you get: an 8400GS PCI, or another PCI-e x16 card to use in an x1 slot? Also, if you did use it in an x1 slot, which one would you buy?

It would depend on what you have as the primary card... optimally, if it was good, you could get a second one so you could run either SLI or a 3D+PhysX combo. That way you gain a benefit depending on what you're playing (switching from one mode to the other is very easy). If it wasn't good, you could replace it with a better card and then run the old card as the PhysX card (assuming it was an 8 series or higher).

So far, the PhysX-enabled games are not so demanding that it would take more than one video card to render them... so the second one would be free to do the PhysX.
 
OK, so let's say you have a board with two PCI-e x16 slots, two PCI-e x1 slots, and one PCI slot, and one of the x16 slots is taken up by video. What would you get: an 8400GS PCI, or another PCI-e x16 card to use in an x1 slot? Also, if you did use it in an x1 slot, which one would you buy?

I'd stick to one good GPU... considering you pretty much want something along the lines of an 8800 GT for current and future PhysX-enabled games, using anything less would be quite ineffective.
 
I'll be getting a 9800GT from the BFG AGP-to-PCIe upgrade program. The problem I have is, I only have one PCIe x16 slot... Maybe I should switch to the LANParty Jr?
 
The "PPU2" has become nV's high-end DX10 cards, or an extra midrange card running dedicated.

So if I go for a "PPU2" it would be next year, when I replace my current system:
X2 4400+
2 GB
8600GT
PPU

with:
Vista U64
Core i7 920
6 or 12 GB
A basic dual SLI/CF X58 board
And GTX280 55nm in SLI

Besides, both nV and AMD support OpenCL. PhysX will become OpenCL compatible, I hope. Then AMD might do PhysX too, without hacking.
 
I don't think PhysX will be ported to OpenCL. nVidia has no reason to.
OpenCL is basically a ripoff of Cuda. It doesn't add anything extra on nVidia hardware.

I also doubt that AMD will ever do anything with PhysX, certainly no hacks. If anything, they'll have to do it the official way, by licensing the technology from nVidia (which nVidia has already offered, but AMD has declined).
 
I don't think PhysX will be ported to OpenCL. nVidia has no reason to.
OpenCL is basically a ripoff of Cuda. It doesn't add anything extra on nVidia hardware.

I also doubt that AMD will ever do anything with PhysX, certainly no hacks. If anything, they'll have to do it the official way, by licensing the technology from nVidia (which nVidia has already offered, but AMD has declined).

You make AMD sound like the bad guys. Remember what NVidia used to charge motherboard manufacturers in royalties to put an NVidia chipset on the board? Ever think that maybe they haven't changed their act?
 
You make AMD sound like the bad guys. Remember what NVidia used to charge motherboard manufacturers in royalties to put an NVidia chipset on the board? Ever think that maybe they haven't changed their act?

Where exactly do I make AMD sound like the bad guys?
I'm just saying that I doubt they will provide some kind of unofficial/unlicensed PhysX support, because of the legal issues involved (you don't want to give your biggest competitor good reasons to sue you). So they'll have to go down the legitimate path, and license nVidia's PhysX technology if they want to support PhysX at all.
Since nVidia has already offered AMD a license, and AMD turned it down, it seems to me that AMD is not interested in supporting PhysX at all. Not sure how that would make AMD 'the bad guys'?
 
IMO they didn't turn it down because of the tech, but because of the price of the licensing.
 
IMO they didn't turn it down because of the tech, but because of the price of the licensing.

I don't know why they turned it down, but PhysX might well prove to be priceless technology. In that case, if they did turn it down because of the price, that was a big mistake, perhaps the last mistake that ATi will ever make.
 
I don't know why they turned it down, but PhysX might well prove to be priceless technology. In that case, if they did turn it down because of the price, that was a big mistake, perhaps the last mistake that ATi will ever make.

Yeah, nobody knows why exactly they turned it down (classified information and such...). I agree that it might be something that'll come back to haunt AMD in a few years' time. Heck, they might be scrambling to add PhysX support by the end of next year, if not sooner.

We'll see :)

Also, weren't there some people who made PhysX acceleration work on AMD cards already?
 
Out of curiosity: is nVidia going to continue to provide driver support for the already released PPUs, or are they dead in the water?
 
Also, weren't there some people who made PhysX acceleration work on AMD cards already?

That turned out to be a hoax. They just posted some hacked screenshots, and were never heard from again.
Funny enough (just like with the DX10-on-XP stuff earlier), some people actually thought (or still think) that it was a working solution.
 
Out of curiosity: is nVidia going to continue to provide driver support for the already released PPUs, or are they dead in the water?

I think nVidia will continue to support them for the foreseeable future, seeing as they are not remotely a threat performance-wise to even a decent mid-range PhysX-capable GPU.
 
That turned out to be a hoax. They just posted some hacked screenshots, and were never heard from again.
Funny enough (just like with the DX10-on-XP stuff earlier), some people actually thought (or still think) that it was a working solution.

I see, I only caught some glimpses of this rumour, but never bothered to verify it :)

One could implement the PhysX API using OpenCL, though... AMD supports it too, so it may be a viable way :)
 
One could implement the PhysX API using OpenCL, though... AMD supports it too, so it may be a viable way :)

Sure you could... They could even implement it through their Stream SDK, or the Brook+/CAL stuff they used before that.
But who's going to... and you probably need some kind of license to implement the API, since nVidia owns the rights.
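
Technically it's "just" a matter of slotting in another backend, something like this hypothetical sketch (names invented by me, this is not the actual PhysX SDK interface):

[code]
#include <memory>
#include <utility>

// Hypothetical sketch only -- not the real PhysX interface.
// The public physics API sees an abstract backend...
struct PhysicsBackend {
    virtual ~PhysicsBackend() = default;
    virtual void simulate(float dt) = 0;  // run one timestep
};

// ...and each vendor path implements it behind the scenes.
struct CudaBackend : PhysicsBackend {
    void simulate(float dt) override { /* launch Cuda kernels */ }
};
struct CpuBackend : PhysicsBackend {
    void simulate(float dt) override { /* multithreaded CPU fallback */ }
};

// The scene the game talks to never knows which backend it got,
// which is why a Stream/OpenCL port is technically straightforward.
class PhysicsScene {
    std::unique_ptr<PhysicsBackend> backend_;
public:
    explicit PhysicsScene(std::unique_ptr<PhysicsBackend> b)
        : backend_(std::move(b)) {}
    void step(float dt) { backend_->simulate(dt); }
};
[/code]

The real obstacle is legal, not technical: the API itself is nVidia's property.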
 
Sure you could... They could even implement it through their Stream SDK, or the Brook+/CAL stuff they used before that.
But who's going to... and you probably need some kind of license to implement the API, since nVidia owns the rights.

I merely said it was technically possible :p I'd want to steer clear of nVidia's lawyers :)
 
You make AMD sound like the bad guys. Remember what NVidia used to charge motherboard manufacturers in royalties to put an NVidia chipset on the board? Ever think that maybe they haven't changed their act?

Nvidia certainly has done some scandalous things in the past, especially under Derek Perez's marketing helm, but I honestly don't get the feeling that he painted AMD in a particularly negative or positive light.
 
I don't know why they turned it down, but PhysX might well prove to be priceless technology. In that case, if they did turn it down because of the price, that was a big mistake, perhaps the last mistake that ATi will ever make.

OpenCL will not be insignificant. Apple's using it in Snow Leopard, and AMD is working with the JVM vendors to seamlessly offload certain processing to OpenCL graphics cards. Besides, OpenCL and PhysX are not mutually exclusive. The real deciding factor will be what Microsoft decides to do with DirectX. Whichever direction they go in terms of DirectX implementation, whether it's OpenCL, PhysX, Havok, or something else, the other options will eventually end up being like Glide.
 
OpenCL will not be insignificant. Apple's using it in Snow Leopard, and AMD is working with the JVM vendors to seamlessly offload certain processing to OpenCL graphics cards. Besides, OpenCL and PhysX are not mutually exclusive. The real deciding factor will be what Microsoft decides to do with DirectX. Whichever direction they go in terms of DirectX implementation, whether it's OpenCL, PhysX, Havok, or something else, the other options will eventually end up being like Glide.

OpenCL is not a physics API (neither is DirectX). I've said it before: who's going to make a physics API?
 
OpenCL is not a physics API (neither is DirectX). I've said it before: who's going to make a physics API?

It's all just floating-point math, but there are probably API calls which translate collision and other events to calculations and memory accesses. It will be done at the engine level, or people will just use PhysX or Havok, or some open-source alternative which will come out of the blue.

One thing I don't see is Intel and AMD paying Nvidia a fee to use their stuff when they can just write their own or collaborate on something they don't have to pay for (something like Havok, for instance).
 
It's all just floating-point math,

Lol yeah, who can argue with that kind of logic!
I'll give you one better: in the end it's all 0s and 1s!

Thing is, it's VERY hard to write a good physics API. Why do you think we only have two big players in the market (PhysX and Havok)? Anyone could easily have written a physics library for CPUs at any time... Very few people did... and even fewer actually succeeded.
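
To illustrate: the "it's all just floating-point math" part of collision detection takes five minutes. Here's a toy Cuda kernel (made up for this post, not from any real library) that brute-force tests every sphere pair for overlap:

[code]
#include <cuda_runtime.h>

// Toy brute-force sphere-vs-sphere overlap test, one thread per sphere.
// Each sphere is a float4: xyz = center, w = radius.
__global__ void bruteForceCollide(const float4* spheres, int n, int* hitCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    for (int j = i + 1; j < n; ++j) {
        float dx = spheres[i].x - spheres[j].x;
        float dy = spheres[i].y - spheres[j].y;
        float dz = spheres[i].z - spheres[j].z;
        float r  = spheres[i].w + spheres[j].w;
        if (dx * dx + dy * dy + dz * dz < r * r)
            atomicAdd(hitCount, 1);  // a real library would record a contact here
    }
}
[/code]

That's O(n^2) and falls over long before you reach interesting scene sizes. The years of work in Havok or PhysX are in everything this toy skips: broad-phase structures, contact generation for arbitrary meshes, stable solvers, joints... That's why so few succeed.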

I don't see your logic though... It will be done at engine level? What? Physics? Well yes, more or less... But what does that have to do with OpenCL or DirectX? Neither are engines.
Still comes back to the same problem: very few developers are capable of writing a physics library. Crytek could pull it off... but they didn't release their current CryPhysics to the public... even if they made a version using DX11 or OpenCL, they may not make it available to other developers. It might just be another selling point for their game engine.

One thing I don't see is Intel and AMD paying Nvidia a fee to use their stuff when they can just write their own or collaborate on something they don't have to pay for (something like Havok, for instance).

AMD now pays Intel for Havok... and it doesn't even give them GPU acceleration, unlike PhysX. So there goes that argument out of the window.

As for Intel... they have no interest in PhysX yet, because they don't have a GPU yet... and when they do, they have Havok, so they're going to try and compete. At some point either one will have to give up and support the competing technology... or a third alternative will have to come out and support all architectures.

As it stands however, PhysX is the only solution supporting game consoles, multicore PCs and GPUs. That, together with the fact that the SDK is free for use, makes it a very attractive option to developers.
nVidia has been working very hard to get PhysX in the hands of developers, and I think in a few months the PhysX games will outnumber the Havok games.
Then it's up to Intel to try and regain the market with a completely new GPU architecture and the first try at a GPU-accelerated version of Havok.
 
Why would AMD pay Intel for Havok? AMD is not developing any games. Btw, how long is your "a few months"?
 
So before 20th Dec 2009, the PhysX games will outnumber the Havok games? I'm so gonna bump this thread then. :p

Btw, I'm surprised that your "a few months" is that long. ;)
 