Best Current PhysX GPU?

pc1x1

I am thinking of getting SLI, and it's a bit cheaper now than what I budgeted, so I figured I'd get another card for PhysX only. What's the best GPU to get for dedicated PhysX? Does it max out a 9800 GTX? Or a GTS 250? Should I use a 260 for PhysX, or is there no point? And if so, do you think the next version of PhysX will max out and need a 260?

Cheaper the better :D

ps. I need something with a water block readily available. 9800 GTX looks like a winner.
 
A GTX285 will give you the best performance (but is a horrible value). The more power available to PhysX, the better (true now, but even more in the future). However, there aren't that many games that support it now, and most games use it only to enhance the non-PhysX experience (as they don't want to alienate users without PhysX). So if you want, you can pay $300 for the ultimate PhysX experience, or you can get a 9600/9800-based card and get about the same results most of the time.

Look up some of the benchmark threads in the Physics section.
 
I see, even dedicated? Wow, I didn't think PhysX was that demanding. I'll take a look, thanks!

I'll most likely have two GTX 295s, or maybe GT300s when they come out, as my SLI pair; I just wanted a simple card for PhysX.
 
The point with PhysX is that it can scale pretty much indefinitely, much like graphics (which technically are a subset of physics). So it is possible even now to create a game where even a GTX 285 as a dedicated PhysX card can't keep up.

Ergo, if the question of 'what is enough for PhysX?' is defined as 'for the 2009/2010 period at least', the answer is a high-end GF8/9 card or a lower-end GT200 such as the GTX 260. Especially if you plan on playing the more demanding FPSs coming out this year with a fair degree of PhysX elements in them, it pays not to skimp too much on the PhysX card :)
 
I'm considering buying a 9600GT for PhysX.
Not too shabby ;)
 
A 9600GT isn't a bad choice at all, it should give decent performance even with upcoming games :)

:cool:

Yep, that vid card is rather small, doesn't use that much power, and doesn't get super hot :)
 
Exactly :)

I hope to upgrade to a GT300-level card this year and use the 8800GTS I have now as PhysX card. I may use a GT200-level card instead, though :)
 
Right now the GTX 260 looks promising, mainly because it's easy to get water blocks for it.

Nice picture on your site, Ell; you need to post more stuff :)
 
Two-card SLI is usually only 50-75% efficient, so wouldn't one of the cards, i.e. the "slave", have sufficient power to handle PhysX on its own, assuming it's dual GTX 295s? Even dual 285s should be fine, shouldn't they?
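For what it's worth, the arithmetic behind that question sketches out like this. A tiny illustrative calculation, assuming the 50-75% efficiency rule of thumb quoted above (not measured numbers):

```python
# Back-of-the-envelope SLI math: with 2-card SLI only 50-75% of the
# second card's power is typically realized, so part of the "slave"
# card is effectively idle -- headroom that could, in principle, go
# to PhysX instead.

def sli_speedup(num_cards, efficiency):
    """Effective speedup of an SLI setup relative to a single card."""
    return 1.0 + (num_cards - 1) * efficiency

low = sli_speedup(2, 0.50)   # pessimistic: 1.5x a single card
high = sli_speedup(2, 0.75)  # optimistic: 1.75x a single card

# Fraction of total card power left on the table in each case:
idle_optimistic = 2 - high   # 0.25 of a card at 75% efficiency
idle_pessimistic = 2 - low   # 0.5 of a card at 50% efficiency
```

So on paper, the "wasted" quarter to half of a card is comparable to a midrange PhysX GPU; the catch is that the driver, not the user, decides how that headroom is split.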
 
Personally I wouldn't go much past a 9800 GTX.
I really don't have a lot of money to throw around right now either.
I think I would just use it for SLI, but that's just because I don't think PhysX is up to par with how it will be next gen.
It still seems buggy.

It's definitely a valid point. I don't have a lot of money to throw around myself, but I am trying to "finish" my current build at all costs, so I'm just a little crazy :).

Just saw the old 260 GTX, 96, for $125; it's getting close to being cheap enough to buy just for "fun".
 
I must admit to only aircooling my equipment :)

And thanks, I'll try to update my photo gallery with new shoots. The main issue is having to do everything myself with a tripod; having someone else take the pics would be a lot easier :p
 
I was referring to the artwork pictures on your site, heh (www.nyanko.ws). Do you do digital art yourself? I didn't see any regular photos on www.nyanko.ws; do you also do photography? I liked the layout of your software site, etc. :). Great job.

But back on topic: I wish Nvidia would give us standards on how far PhysX will scale, because right now we have no way to gauge what the ideal PhysX card will be. At least with Ageia, you knew games designed for it would be fine. Now it's just guess and check :(.
 
Yeah, I have a gallery on my personal site; I thought you were referring to that :p sorry. I do a bit of photography, although I spend far too little time on it :( The image on the Nyanko site was borrowed from someone with permission; I wish I could draw like that.

Look at it from a graphics perspective. Ten years ago people were drooling over the first GeForce GPUs; nowadays you'd be hard pressed to use such a card in even a budget system. Even IGPs (excluding sucky Intel IGPs) have many times the processing power of the first GF1/2 cards now.

What is a good graphics card now will be an ancient relic 5-10 years from now. Ditto for physics cards. Game developers like myself scale our games to fit the hardware available at the time (or well, they should *glares at Bethesda and the Crysis team*). There is no limit to how processing-intensive a game could be made with OpenGL/OpenAL/PhysX or similar, except by the limits of the hardware.

With the first PPU from Ageia (~GF8500GT-level hardware) there was only one PPU model, and since Ageia was only a small company, it didn't have the power to churn out a zillion PPU models every six months like nVidia and ATi. This made things seem relatively static, but that was only an illusion.

So, in summary, buy whatever hits the right performance/$ spot for you now, unless you have some money you really want to get rid of :)
 
It's ok :). I didn't know you had a personal site; I was checking out your work one, since you mentioned game development in a few threads. Interesting stuff in there, and cool projects you're doing, way above my programming level haha. BASIC FTW! I've always wanted to make a 2D physics/sprite-based fighting game, though. You definitely sound like an intelligent person, so post any cool physics stuff you come up with whenever possible. Also, I'm a bit of an art fan, so I thought that picture was very serene. You should post your photography in your work portfolio; you never know heh!

Topic: I do agree the sky is the limit, but to me that's the problem. It seems PhysX is still an afterthought; basically a "stick fly paper on the wall and see if it sticks" approach. We all know hardware physics is coming in some shape or form, whether by CPU, GPGPU, GPU, or dedicated hardware. I just find the lack of standards annoying. For example, as a common trend, a decent high-end or upper-midrange graphics card will tend to last about 1-2 years. I think it's somewhat safe to say (unless there's a revolution) that a 4870 or a GeForce 260/280/275/285 should run most games with good settings for the next year or two. We have roadmaps, and since the consoles don't do hardware physics, I think it's safe to say it's PC-platform only for now. So why don't they roadmap their intentions with physics? They have the "Way It's Meant to Be Played" program; they could say, ok guys, all games in the next year that target 200-series GPUs will utilize physics up to a 9800 GTX. I think that would:

1) Force developers not to be lazy, and/or to invest the right resources in optimizing. Cryostasis, I am looking at you!

2) Up the value proposition. If I know my high-end card today can be used for PhysX tomorrow, I am more prone to shell out more for it. I.e., I get my GTX 295 and use it for two years, then I get a GT400-series GPU and use the 295 for dedicated physics; the card now lasts me three generations, so it makes more financial sense.

I don't know about you guys, but personally, whenever I upgrade I give my previous parts to family. If I get a new graphics card (I normally run SLI/Crossfire), I give one of them to my parents or a sibling and keep the other for something else or as backup. With a proper roadmap, I could give one of my GTX 295s to my parents, keep one for physics, and when the physics requirements pass it by, pass it on and keep cycling them, in a way that makes me buy more graphics cards than I normally would. So I don't understand why they don't do that. If I pass a card to my dad, he normally wants another for SLI, or he may want another for PhysX if it proves to be cool. The "buffet" approach worries me because it may make developers lazy, and we'll get the "Crysis" effect, where no matter what, you can't max out the game. That's fine for one generation IMO, but for 2-3, something is wrong: the game wasn't optimized well enough. The Crysis engine is great, and is arguably the prettiest engine on the market today; however, it does not scale well with 4 cores, nor 4 graphics cards. That's been proven in benches and here at [H]. So my concern is that without standards and guidelines we may get exactly that: either games that use a lot of PhysX horsepower for no reason, or games that are optimized with few effects because someone may be using an 8xxx-series card.

We need a road map. Heh /rant.

With that said, I am aiming for a 9800 GTX or a 260 Core 216 as my PhysX card, for which I shall be waiting for price drops :) or until I have the $ haha.

BTW, when I say max out a game, e.g. Crysis, I am talking about my 2560x1600 res with all settings on Enthusiast, with decent AA and AF, at a minimum of 30 FPS, preferably a solid 60. That's my personal benchmark for maxed out. AFAIK that's impossible right now, no matter what graphics card or processor you use.
 
Actually there's one among the current consoles which can do PhysX hardware-accelerated: the PS3. While its SPEs suck for anything branchy like AI code, it has a lot of vector processing power with them, which would make it a very attractive target for physics simulation processing.

I don't think that forcing some kind of migration path on game devs would work... we went from software to hardware-accelerated graphics (a subset of physics) during the late 90s, a process which took many years and really only completed in the first years of this century. It'd be hopelessly optimistic to expect HW-accelerated physics to just wipe away 'the way it used to be' (tm). Look at some threads on this forum, especially those related to the new Ghostbusters game, to see what I mean by 'resistance'. I honestly don't expect HW physics to get really big until there's only one standard (not PhysX/Havok/a few dozen smaller ones) and over 90% of potential gamers have a decent or great physics card in their systems.
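To illustrate the point about the SPEs: physics integration is the same branch-free arithmetic applied across every object, which is exactly the shape of work wide vector units are built for. A minimal sketch in plain Python (lists standing in for vector registers; the numbers are arbitrary):

```python
# One branch-free Euler integration step applied to all particles at
# once. On an SPE or GPU each comprehension below would map to wide
# SIMD operations; there is no per-element branching to stall on.

def integrate(positions, velocities, gravity, dt):
    """Advance every particle by one fixed timestep."""
    velocities = [v + gravity * dt for v in velocities]
    positions = [p + v * dt for p, v in zip(positions, velocities)]
    return positions, velocities

pos, vel = [0.0, 10.0], [0.0, 0.0]   # two particles, 1D for brevity
pos, vel = integrate(pos, vel, gravity=-9.81, dt=0.1)
```

On real vector hardware each of those comprehensions would be one wide operation over many particles at once, which is why physics maps onto SPEs and shaders so much better than branchy AI code does.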
 
That makes sense; I was just hoping they would speed up the process by consolidating. But I think physics may come faster, simply because before hardware-accelerated graphics no one was used to the concept, whereas nowadays we have SLI, Crossfire, etc. So I think if they can prove the technology is "cool", people will adopt it faster. But I agree the rift between Havok and PhysX is definitely a problem :(
 
Best would be the most powerful one: GTX 295

Best in terms of price/performance it gives: 9600 GT or 8800GT/9800 GT
 
Considering the 9600GT and the 9800GT are both the same price on newegg ($95), I would go for the 9800GT ;)

GeForce 9800GT 512MB + Zalman VF830 Heatsink + Free FarCry2 + Free Shipping = $95
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125247


Edit: Whoa, I think I caught Newegg in the middle of a price drop. I went back to check on the 9600GTs, and they had suddenly dropped to as low as $75
http://www.newegg.com/Product/Product.aspx?Item=N82E16814143130

I still consider the 9800GT a better value here; it includes an aftermarket heatsink as well as a free game (and free shipping to sweeten the deal).
 
I think a 285 would be best because AFAIK you can't use a dual-GPU card for PhysX to its full capacity... but maybe I am wrong. :D
 
Nope!!

You are correct. PhysX does not support SLI, so only 1/2 of a 295 can be used for PhysX.

I did find more confirmation that PhysX can only calculate on one GPU.

http://www.nvidia.com/object/physx_faq.html

"How does PhysX work with SLI and multi-GPU configurations?

When two, three, or four matched GPUs are working in SLI, PhysX runs on one GPU, while graphics rendering runs on all GPUs. The NVIDIA drivers optimize the available resources across all GPUs to balance PhysX computation and graphics rendering. Therefore users can expect much higher frame rates and a better overall experience with SLI.

A new configuration that’s now possible with PhysX is 2 non-matched (heterogeneous) GPUs. In this configuration, one GPU renders graphics (typically the more powerful GPU) while the second GPU is completely dedicated to PhysX. By offloading PhysX to a dedicated GPU, users will experience smoother gaming".

The GTX 280 and 285 are the fastest dedicated PhysX processors we have.

http://www.hardforum.com/showthread.php?t=1420948
 
Well, the presence of vector processors (GPUs) helps a great deal in the acceptance of HW physics, you're right about that. At this point it all comes down to how long it takes for Havok to die off (since it doesn't support HW acceleration yet). The silly thing is that AMD is licensing Havok from Intel, yet they refused to license PhysX from nVidia (instead of, or in addition to, Havok), nuking an early and potentially very beneficial option for AMD: more people buying PhysX-capable cards from AMD, instead of everyone buying from nVidia.
 
What about using a 9800 GX2 as a dedicated PhysX GPU? I know it's a bit overkill, but still curious.
 
Actually it wouldn't be overkill; as posted above, you can only use one GPU for PhysX, so essentially you're using a 9800 GT+ for PhysX, which is about the standard.
 
Very good point. I think it's most likely a royalties and revenue-cut dispute more than anything. The other problem lies in CPU programming itself: to my knowledge, most apps only use two cores well at best, and most don't scale linearly, so dedicating one core of a quad-core processor to physics also makes sense :(. I think that's the main problem with the PC platform right now: too many options. Things need to consolidate and become efficient. I think it's high time all applications were dual-threaded or more. Photoshop, where's my GPU acceleration? You're a vector program, come on!
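That "one core for physics" idea sketches out roughly as a worker thread ticking the solver at a fixed rate while the main thread keeps rendering. Everything here is hypothetical naming for illustration, not any real engine's API:

```python
import threading

class PhysicsWorker(threading.Thread):
    """Hypothetical solver thread pinned (conceptually) to one core."""

    def __init__(self, steps):
        super().__init__()
        self.steps = steps
        self.dt = 1.0 / 60.0       # fixed 60 Hz physics tick
        self.time_simulated = 0.0

    def run(self):
        for _ in range(self.steps):
            # A real engine would run collision + constraint solving here.
            self.time_simulated += self.dt

worker = PhysicsWorker(steps=120)  # two seconds of simulation
worker.start()
# ...the main thread would render frames concurrently here...
worker.join()
```

The nice property is that the physics tick rate stays fixed regardless of the render frame rate, which is exactly what you want from a core dedicated to physics.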
 
Yeah, I like how Ghostbusters will use a CPU core for physics. Too bad AMD won't be able to support PhysX.
 
If you boys want to see some video of:

DirectX Compute presentation by Nvidia at GDC 09

http://developer.nvidia.com/object/gdc-2009.html

Click the green Presentation Video links.
I like them all, and I get the feeling PhysX is being better supported all the time, and has some new features through APEX.

Start with: CUDA and Multi-Core Gaming: Lessons from the Trenches
or : NVIDIA APEX: From Mirror’s Edge to Pervasive Cinematic Destruction to Real-Time Fluid Simulation

I don't have a grip on all of it yet... :D
More on APEX: http://www.nvidia.com/object/io_1237979569423.html
 
It should be an 8800 GTS 512 actually, since the GX2 uses two fully enabled G92 chips with 128 stream processors each.
 