Too Bad PhysX Lowers FPS in Games... is it worth these numerous trade-offs?

That's only because the GoldSrc engine (i.e. Quake) doesn't have a fixed tic rate for input polling (and movement simulation and so on). The "feel" changes as the frame rate changes because they aren't in direct synchronization.

If you were to play a more modern game with a fixed tic rate, you'd find that the difference in "feel" between running at a locked 60 Hz and a locked 120 Hz is non-existent. With a 60 Hz display, the experience itself would be identical.
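
To make the "fixed tic rate" idea concrete: it's basically the classic fix-your-timestep loop. Here's a minimal sketch of the concept (not any particular engine's code; the stub functions are made-up stand-ins):

```cpp
// Minimal fixed-tic loop sketch, assuming a 60 Hz simulation rate.
// Input polling and movement only ever advance in whole 1/60 s tics,
// so rendering at 60 or 200 fps changes smoothness, not game response.
#include <chrono>
#include <cstdio>

static void PollInput()              { /* read mouse/keyboard state */ }
static void SimulateMovement(double) { /* advance player physics by one tic */ }
static void Render(double)           { /* draw, interpolating between tics */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double tic = 1.0 / 60.0;          // fixed simulation step
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 1000; ++frame) {   // bounded so the sketch terminates
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Simulation catches up in fixed-size tics, independent of frame rate.
        while (accumulator >= tic) {
            PollInput();
            SimulateMovement(tic);
            accumulator -= tic;
        }

        Render(accumulator / tic);          // render as often as the hardware allows
    }
    std::puts("done");
    return 0;
}
```

The point is that input and simulation only ever advance in whole tics, so a higher frame rate changes how smoothly you see the result, not how the game responds.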
 
That's only because the GoldSrc engine (i.e. Quake) doesn't have a fixed tic rate for input polling (and movement simulation and so on). The "feel" changes as the frame rate changes because they aren't in direct synchronization.

If you were to play a more modern game with a fixed tic rate, you'd find that the difference in "feel" between running at a locked 60 Hz and a locked 120 Hz is non-existent. With a 60 Hz display, the experience itself would be identical.

I have played more modern games and I already know all this; the point is that some people still don't think more than 60 FPS matters in ANY game.
 
That's only because the GoldSrc engine (i.e. Quake) doesn't have a fixed tic rate for input polling (and movement simulation and so on). The "feel" changes as the frame rate changes because they aren't in direct synchronization.

If you were to play a more modern game with a fixed tic rate, you'd find that the difference in "feel" between running at a locked 60 Hz and a locked 120 Hz is non-existent. With a 60 Hz display, the experience itself would be identical.

Modern games, such as any Source-based game and CoD4 among others, all display superior playing characteristics when their frame rates are higher. Again, regardless of the user's display refresh rate.

If we had, say, a "Snipe-off" in Team Fortress 2 and you were my equal in skill, with both of us on 60 Hz monitors, except with you at 60 fps and me at 100+, you would get destroyed.
 
Not all modern games use fixed tic rates, but many do. If they don't, then they are by definition not very modern at all :)

EDIT: For whatever it's worth, I do actually believe that the Source engine does have a fixed tic for client-server updates. I don't think the "snipe-off" scenario is a valid one.
 
Not all modern games use fixed tic rates, but many do. If they don't, then they are by definition not very modern at all :)

I'm not particularly concerned with the reasoning. I'm more concerned with people preaching fallacy as fact.

Edit:
EDIT: For whatever it's worth, I do actually believe that the Source engine does have a fixed tic for client-server updates. I don't think the "snipe-off" scenario is a valid one.

As a long-time competitive CS:Source player, and a general fan of competitive play in FPS games, I can tell you for a fact that the difference in the Source engine is *quite* pronounced.
 
While this may be true, what people really don't seem to be able to understand is that a game running at 60 frames a second will feel very different from a game running at, say, 120 frames a second, regardless of your display refresh rate.

If you can't notice this, then you probably don't play games very seriously. Now, there's nothing wrong with that, but don't go preaching that frames over 27, 30, 60, whatever, don't matter. It's just not true. I don't care about scientific facts you dig up about the refresh rate of the human eye, reaction times, etc. You're just not seeing the whole picture, and you don't have the experience to say otherwise.

This is most especially important in multiplayer games. Player A with a 60 Hz monitor who is getting 60 fps just is not going to be as competitive as the equally skilled Player B with a 60 Hz monitor who is getting 100 fps. Proxy has the right of it, though input lag is not how I would describe it.

Since inevitably someone will mention it, note that I am *NOT* saying that getting a higher refresh rate display won't also affect how a game plays or "feels".

Input lag is more affected by using a USB mouse over a PS/2 mouse...or an LCD over a CRT:
http://www.hardware.fr/html/articles/lire.php3?article=632&page=1

And ignore science all you want...facts don't go away for that reason.
 
I just don't play enough games, nor do I have the personal experience with this topic to make any meaningful posts in this thread. Instead I shall resort to asinine forum tactics because you upset me. Have at thee brigand!

...What was that?

I'm done with this thread. You are wrong, but it's obvious from this point that you're just going to continue braying at anyone who disagrees with you.
 
I'm not particularly concerned with the reasoning. I'm more concerned with people preaching fallacy as fact.
If that's the case, then I think you should be quite intensely concerned with some of the things you've stated in this thread.

As a long-time competitive CS:Source player, and a general fan of competitive play in FPS games, I can tell you for a fact that the difference in the Source engine is *quite* pronounced.
I haven't noticed any. There are changes that do occur at varying server tic rates, but none I've noticed with respect to movement or input polling (or client prediction).
 
...What was that?

I'm done with this thread. You are wrong, but it's obvious from this point that you're just going to continue braying at anyone who disagrees with you.

Pot, kettle, black:

kett said:
You're just not seeing the whole picture, and you don't have the experience to say otherwise.

I post links...people (try to) dismiss science.
I post links...people (try to) dismiss facts.
You then try and go the network route...look at my job title...I have experience and expertise in that field...do you?

Don't point the finger at me when you accuse me first (and in an own-goal way)...
 
Input lag is more affected by using a USB mouse over a PS/2 mouse...or an LCD over a CRT:
http://www.hardware.fr/html/articles/lire.php3?article=632&page=1

And ignore science all you want...facts don't go away for that reason.

I beg to differ. I can go in and lock my frame rate in a game to my monitor's refresh rate, and still notice a difference without changing hardware. As was pointed out before, a "casual gamer" would probably never notice these things. My mouse movements become sluggish when I use VSync. And for the record, I have used both USB and PS/2 devices.

Here is something for you. Since LCDs have become a standard now, along with other new hardware, why is it that game devs still code games to be able to render well past what ANY LCD or CRT can display? Probably because it affects gameplay itself. Maybe describing it as input lag was not a very good comparison. The thing is, FPS affects gameplay, SP not as much as MP. And once again, I know how many frames the eye can see, and how many my monitor can display. Thanks.

And to think all of this started because I stated that I'd rather have higher amounts of AA/AF, instead of ONE game that can use PhysX, and nVidia shamelessly pushing PhysX.
 
To get back onto the subject of PhysX...

I watched the Batman AA comparison video a couple days ago, and I think this is genuinely the first time I've seen hardware PhysX play a real role in positively contributing to the overall atmosphere of a game. There are apparently a bunch of instances in which things are implemented pretty much just "because they can be", but for the most part, the effects fit within the atmosphere of the game itself and reinforce the idea that the world is physical and believable and that actions on objects have predictable and appropriate responses. Even the cloth simulation, which everyone seems to think is totally misplaced and inappropriate (?), adds a little extra dimension to the world.

Suffice to say that I'd be playing Batman with hardware physics on if I could afford the frame rate hit.
 
PrincessFrosty, please research this... no, CPUs are not as fast for this as a GPU, just like CUDA kicks the crap out of any CPU when folding, for example.

Proxy, I understand your point and it is your choice and preference. I feel otherwise but then again, if we were all alike there would not be an options menu. :)

I never said they were better. What I'm saying is that I'm getting an almost playable frame rate in Batman using the CPU for PhysX while the CPU is barely being used at all, which suggests that if they made better use of the CPU then a playable frame rate would be possible.

You assume perfect scaling across the cores. That just doesn't happen. As others have said, the GPU is much faster than CPUs at physics processing.

I'm not saying the scaling is perfect, it wouldn't have to be to provide a playable fps.

Are you seriously telling me that an FPS that HardOCP considers playable, about 35-45 fps (let's say 40 fps average, that's an additional ~15 fps), cannot be achieved using the 70% of the CPU's time that is currently sitting idle?

I think that's very intellectually dishonest

We've got some good evidence to say that physics scales very well across multicore CPUs; Valve did testing on this way back when quad cores were quite new and saw very close to linear scaling.

Source here - http://techreport.com/articles.x/11237/2

HardOCP's look at Ghostbusters shows multicore scaling of up to 80% and above, balanced across 4 cores.

http://www.youtube.com/watch?v=H9boF-JZKcU

Let me ask you something: do you have any reason to believe that it scales over the cores in such a highly non-linear way that the additional 70% of CPU time couldn't cover the additional ~15 fps needed for a playable frame rate?

Show me some kind of article that demonstrates highly non-linear scaling on the CPU or SOMETHING to make me understand your point of view.
 
To get back onto the subject of PhysX...

I watched the Batman AA comparison video a couple days ago, and I think this is genuinely the first time I've seen hardware PhysX play a real role in positively contributing to the overall atmosphere of a game. There are apparently a bunch of instances in which things are implemented pretty much just "because they can be", but for the most part, the effects fit within the atmosphere of the game itself and reinforce the idea that the world is physical and believable and that actions on objects have predictable and appropriate responses. Even the cloth simulation, which everyone seems to think is totally misplaced and inappropriate (?), adds a little extra dimension to the world.

Suffice to say that I'd be playing Batman with hardware physics on if I could afford the frame rate hit.

The only bad effect I think is the paper. The movement of it is so overly exaggerated, like they made it that way just so it would move around a lot. The rest is pretty good.
 
And to think all of this started because I stated that I'd rather have higher amounts of AA/AF, instead of ONE game that can use PhysX, and nVidia shamelessly pushing PhysX.

well if you have the hardware, why not have both high aa and gpu physx? that's really the point. the people that have the power can have their cake and eat it too. but yeah it merely comes down to preference on what one finds acceptable based on one's own hardware limitations. the beauty of it is you have the option to use it or not use it in the first place. just like one might want to play call of juarez with the dx10 effects. it's gonna cause a massive performance hit, but some might like the option if they find the performance acceptable to them based on their setup. some might prefer the effects over higher aa in terms of performance tradeoffs, and vice-versa. the same would apply here.

if you really think it's just one game that uses physx, you would be mistaken. if you really think it's only one game that uses hardware physx, you would also be mistaken. if you think that there's only one game that you might find enjoyable with hardware physx (in this case batman aa), that would be subjective and therefore a perfectly credible statement of opinion. as more and more games come out next year utilizing the tech., this may or may not change for you.

as far as "shamelessly" pushing physx, what companies don't market their competitive advantages? dx11/eyefinity anyone? seems to me gpu physx is merely pushing forward a new arena in which to improve immersion in games, and in doing so, hopefully provide the framework for new gameplay experiences in the future. the tech. is in its infancy and will continue to improve as time goes by.
 
well i posted about this before, but that would assume the devs had time to go back and decide which effects were feasible on the cpu and adjust in software accordingly. gpu physx appears to be something that was tacked on late in the development of the game. they probably weren't interested at that point in dealing with going back and trying to optimize some of the aforementioned effects for the cpu. otherwise, they would have taken the time to add them to the consoles as well, those being the primary platform. yeah, one game like ghostbusters is efficient with its physics engine, but that isn't dealing with cloth or fluids, which is certainly more taxing, so performance isn't going to be the same. that may or may not matter to you, but that's some of what is being added in the game.

could some of the physics in the game have been more optimized for the cpu? of course. there's always room for improvement. but again, would the devs gain anything by going back and taking the time (no matter how small) to scale and optimize the cpu physx based on what they tacked on in terms of additional physics effects (having been given an incentive by nvidia to do so in the first place)? maybe given the time frame, they didn't feel it was worth it to make the game look more similar to when gpu physx is enabled, by deciding what effects were acceptable on the cpu (or possibly implemented in a static form with scripted animations) and would scale well across not only single core - quad core but the cpus in the consoles as well. that's why it is pretty rare to see a game with scaleable physics (whether it be software or hardware based). just my 2 cents.
 
I never said they were better. What I'm saying is that I'm getting an almost playable frame rate in Batman using the CPU for PhysX while the CPU is barely being used at all, which suggests that if they made better use of the CPU then a playable frame rate would be possible.

I don't think you have any idea what you're talking about here. CPUs get beaten pretty badly by GPUs in regard to physics processing. What you're talking about is little more than a vague notion that the extra CPU power would somehow be "just as good" or that it would provide similar performance to GPU physics implementations. Or at the very least, you seem to suggest that you could get playable frame rates doing the same things using a multicore CPU for physics processing. What exactly do you base this on?

I'm not saying the scaling is perfect, it wouldn't have to be to provide a playable fps.

Scaling is never perfect in any x86 or x64/EM64T implementation. (Though this is getting better all the time.)

Are you seriously telling me that an FPS that HardOCP considers playable, about 35-45 fps (let's say 40 fps average, that's an additional ~15 fps), cannot be achieved using the 70% of the CPU's time that is currently sitting idle?

I don't know exactly how demanding Batman AA's PhysX effects are or whether they could be done even half as well on a CPU. Frankly, based on what you've said, you don't either. Based on what I know concerning the differences in raw computational power between CPUs and GPUs, I seriously doubt the same level of PhysX effects we see in Batman AA can be done with multi-core CPUs vs. GPUs. Now, that's with PhysX; I'm not sure if another physics processing engine could do it better. So far Ghostbusters is one title we can reference, but you can't really compare them directly given the fact that these two games aren't even apples vs. oranges. More like eggplants vs. carrots. Again, GPUs are much more powerful than CPUs in this area. There is no comparison, so what a GeForce GTX 295 may not break a sweat doing might yield unplayable results using a CPU to perform the same physics effects.

I think that's very intellectually dishonest

So disagreeing with your opinion is intellectually dishonest?

We've got some good evidence to say that physics scales very well across multicore CPUs; Valve did testing on this way back when quad cores were quite new and saw very close to linear scaling.

Source here - http://techreport.com/articles.x/11237/2

Way back when, huh? I remember that article. What was being done physics-wise in the Source 2 engine doesn't translate to what they can do now, three years later, in a game like Batman Arkham Asylum. Batman Arkham Asylum makes heavy use of physics-based effects. Much heavier usage of them than any game I've seen to date. You aren't just talking about the tech demos running in a given engine from three years ago either. We are talking about a new game, with tons of visual effects that probably weren't being utilized three years ago in the Source 2 engine, and again with the game's engine, AI, larger levels, and sound effects all running at the same time. You are talking about apples vs. olives.

HardOCP's look at Ghostbusters shows multicore scaling of up to 80% and above, balanced across 4 cores.

http://www.youtube.com/watch?v=H9boF-JZKcU

That's Ghostbusters. Different physics engine, different game engine, different everything. Again you can't compare the two directly. YES physics effects can be done on the CPU. We've known this ever since Alan Wake was first announced. There really isn't a debate here, GPUs are better than CPUs at physics processing. Period. Now you might be arguing that CPU physics may be "good enough" and that we'd be able to get playable frame rates in these games using that method. To that I say, screw that. I don't want a "playable" method for physics processing, or a "good enough" implementation of physics processing to satisfy those who don't seem to want the envelope pushed for visual quality. I want the best method.

Let me ask you something: do you have any reason to believe that it scales over the cores in such a highly non-linear way that the additional 70% of CPU time couldn't cover the additional ~15 fps needed for a playable frame rate?

It isn't that simple. Do you know how many GFLOPS it takes to get the same PhysX effects found in Batman Arkham Asylum and maintain playable performance? I doubt that you do. (I don't know either.) All I know is that I'm dedicating a Geforce GTX 280 OC to PhysX support in Batman Arkham Asylum and two more Geforce GTX 280 OC cards for graphics processing. I get about 60FPS at 2560x1600. I seriously doubt my Core i7 920 @ 4.0GHz can do the same amount of effects just as well and still provide the same frame rates I'm getting today.

Show me some kind of article that demonstrates highly non-linear scaling on the CPU or SOMETHING to make me understand your point of view.

You need to look at the differences between GPUs and CPUs in terms of physics processing / computing power. GPUs aren't ideal for every type of processing out there but in areas like physics and other scientific calculations the CPU doesn't hold a candle to the GPU.

A typical CPU is measured in GFLOPS. Video cards are breaking the TFLOP barrier all the time now. The difference is staggering.

http://www.tomshardware.com/forum/262886-28-core-gflops-benchmark
http://www.gpureview.com/GeForce-GTX-295-card-603.html

Granted the comparison isn't always 1:1, nor is it an apples to apples comparison as their individual architectures lend themselves to doing certain tasks better than the other. However, it is widely known that for physics processing, GPUs outclass CPUs, badly.

Some more articles on physics processing:

http://www.computerpoweruser.com/Ed...=articles/archive/c0610/28c10/28c10.asp&guid=

Comparisons between CPU and GPU PhysX in Cryostasis:

http://www.techarp.com/showarticle.aspx?artno=644&pgno=5
http://www.hardforum.com/showthread.php?t=1376657

This should showcase the difference between GPU and CPU physics capabilities. Yeah they can implement some physics effects via multi-core CPUs, but it won't hold a candle to what the GPU can do.

All I said was that scaling wouldn't be 100% perfect. It never will be. Sure you might be able to get close to it but I doubt that given developers' track records with that so far. It took forever to get developers to utilize two cores much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VMs or monitoring applications. Beyond that you make a big assumption that if you utilized 70% of the idle processing power across multiple cores that it would somehow equal or be "good enough" to get similar frame rates as a dedicated GPU already provides. I don't think that's going to be the case.

I think GPU physics processing is the way to go. However, I do agree that being tied to an NVIDIA card for PhysX sucks ass when NVIDIA is trying to disable their own cards' PhysX capabilities if you use an AMD card for rendering. I'd like to see a physics implementation like PhysX that isn't tied to one brand of cards or another. Though there isn't a technical restriction on running physics processing on AMD cards, it hasn't been done or pushed the way PhysX on NVIDIA cards has been pushed. So hopefully game developers will start to change this.
 

um, yeaaaaahhh. took all the words right out of my mouth, lol. j/k. much better explanation than i could ever hope of achieving.

just testing the physx fluids benchmark (physx power pack) on a single 8800gt vs in software on my dual core cpu (70-75% cpu usage), the fps difference is staggering. and this is just a simple simulation - not even a full fledged game.

as for the last part, i agree. well, amd was pushing havok on their gpus, but i guess now they are pushing for opencl bullet. we will see if it can gain the dev mindshare necessary to compete in the hardware physics arena to where nvidia will have to respond with how they want to continue to handle gpu physx. perhaps they will eventually port it to opencl like bullet (& havok?), though i don't know if it can perform as well as the cuda implementation. but for now, gpu physx is the only one being utilized in games today. i'm sure this will change in a few years time.
 
Well if PhysX is a better implementation than other physics processing engines (I suspect that it might be) then I would hope for it to dominate the market just because it's better. I'm not a programmer or a game developer so I can't really speak to that point. However, if it is better, and it does gain more market share I would hope that AMD would embrace the technology and license it for themselves. I know that it must suck to have to get something like that from your competitor, but NVIDIA did offer the technology to ATI previously. As it is AMD already has an x86 license agreement with Intel. A PhysX license from NVIDIA would be no different.

Likewise, if Bullet, Havok or any other technology proves to be better, or at least gain more market share, then I hope NVIDIA would embrace that as well. According to them they can do other forms of PhysX processing on their GPUs as it is.

Unfortunately what may happen is that NVIDIA and AMD will each have their own solutions, and game developers will have to pick one or the other, or worse yet, they'll have to try to put support for both physics engines into their games. This would mirror the OpenGL vs. DirectX days, where a game developer would choose one or the other, and owners of one brand of card would suffer to a degree because the developer chose a technology that their card's counterpart handled better. DirectX ran better on ATI hardware while OpenGL ran better on NVIDIA cards, etc. If that happens, the truth is we all lose. It could add development time for game titles and it also penalizes one camp or the other when their favorite game is optimized for the cards they didn't choose.
 

yeah, those are very interesting points. i would say that gpu physics will still remain a niche for devs for quite some time since they still have to cater to the consoles in terms of multiplatform releases. not to say that pc exclusives won't exist where it might be taken advantage of, but even then, those with capable hardware (whether ati or nvidia) would still be in the minority compared to those with much older gen hardware. so i think it may still be a long while before anything of what you suggested may take place.

if physx does overtake the marketplace for physics acceleration, then amd may have no choice. but i'm not sure how much it will make a difference at that point. let's say all the major physics engines that offer gpu acceleration become "open" and available for devs to start integrating in their games. it will still be awhile before the userbase exists to where the tech. will be utilized for more than just effects. i think software based physics will still be the most dominant and common implementation until new consoles "catch up" by being released with gpu physics support. so if there is a "battle" between havok, physx, and bullet vying for gpu physics supremacy among devs, it will still probably mean nothing more than optional physics effects for quite some time.

otoh, i see nvidia really trying to push gpu physx in games, even if it is just for additional eye candy. and it does seem to be gaining momentum. i think physx will dominate unless opencl bullet and/or havok can gain the mindshare of devs so that they are used in their games. but the support doesn't really exist as of right now for either of them. don't know what happened to havok (havok fx pt.2?) and bullet may eventually start to gain some ground with amd backing it, but we've seen how this kind of talk has gone so far. though software based bullet has been around for quite some time too (almost as long as physx perhaps), it still is a drop in the bucket compared to how much physx is used nowadays in comparison. so if devs continue to use physx, this gap will only continue to widen, and i guess nvidia will use the middleware as their trojan horse to gain more gpu physx support from devs, thus being able to dominate this new arena.

opencl bullet does have a chance, if the implementation is on par with physx (that still remains to be seen), since the hardware support will be broader. but the clock is ticking. the longer it takes for dev support to pick up, the more time physx will be around to take advantage. i don't even know of any games off the top of my head that use bullet as of now, though this link suggests trials hd and free realms use it:

http://www.bulletphysics.com/wordpress/

seeing as how nvidia will support bullet and probably continue to push physx, i don't know how much that will affect things, but i guess we will see. it really will still be up to the devs to decide which physics engine will become the most popular to use. i mean directx pretty much dominates over opengl on the pc platform. things may or may not play out the same way. as far as development time goes, i'm sure it will get longer regardless as devs eventually try to think of meaningful ways to integrate the new tech. into their games. but seeing as how most games these days take years to develop, i don't see how it could get particularly worse - especially if they work on valve or blizzard time, lol.
 
The problem I have with PhysX is Nvidia appears to have totally half-assed the CPU implementation. I've messed around with the PhysX library, and even using a room full of rigid boxes with 8 threads CPU usage never increased above ~15% as the FPS gradually started dropping. Not only that, but only 1 of the physics threads was really doing work, 1 was doing a little, and 6 threads were idle.

Now, I'm not saying CPUs are better at physics than GPUs, far from it. What I am suggesting is that Nvidia hasn't bothered to fix the CPU scaling that is supposed to be there, thus making the gap between CPU physics and GPU physics artificially larger.
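
To be clear about what "the scaling that is supposed to be there" would look like: the per-body integration part of a rigid body update splits across cores almost trivially. Here's a toy sketch in plain C++ (not PhysX code, and it skips collision detection, which is the harder part to parallelize) of the kind of work distribution I'd expect:

```cpp
// Toy illustration only: integrate a "room full of boxes" with every core
// taking a slice of the bodies. A real engine would keep a persistent thread
// pool and also parallelize collision detection, but the point is that this
// part of the work has no reason to sit on a single thread.
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>
#include <cstdio>

struct Body { float pos[3]; float vel[3]; };

static void integrateRange(std::vector<Body>& bodies, std::size_t begin,
                           std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        bodies[i].vel[1] += -9.81f * dt;                 // gravity
        for (int k = 0; k < 3; ++k)
            bodies[i].pos[k] += bodies[i].vel[k] * dt;   // simple Euler step
    }
}

int main() {
    std::vector<Body> bodies(100000);                    // lots of rigid boxes
    const float dt = 1.0f / 60.0f;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    for (int step = 0; step < 600; ++step) {             // ~10 simulated seconds
        std::vector<std::thread> pool;
        const std::size_t chunk = bodies.size() / workers;
        for (unsigned t = 0; t < workers; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end = (t + 1 == workers) ? bodies.size() : begin + chunk;
            pool.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
        }
        for (auto& th : pool) th.join();                 // every core does a slice
    }
    std::printf("body 0 fell to y = %.1f\n", bodies[0].pos[1]);
    return 0;
}
```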

Comparing chickens to motorbikes, eh?
Easy rigid body physics that disappears after 10 seconds is nowhere near the same league as interactive smoke.

If it were so easy, why haven't any Havok API games shown this?
(Hint: Because CPUs simply don't have the SIMD power to do so; this is another example: http://golubev.com/about_cpu_and_gpu_2_en.htm )

Havok has shown things like that; you just like to ignore them: http://www.youtube.com/watch?v=daZoXzBGea0

Also, the PhysX SDK has sample programs of collide-able particles and guess what? They can run on the CPU, too. So even the PhysX library is capable of doing interactive smoke on the CPU.
 
probably cause they don't have their own cpu to optimize for lol. though that's not to say that it can't be improved in terms of software physics. there's always room for improvement. but since they are a gpu company, that's what they are going to optimize for. though most games i've seen have scaled towards dual cores on the engine so far and not much beyond that. and yes everything running in hardware is capable on the cpu, but combine it all together collectively as a whole and one starts to see the limitations. only problem with that youtube link is it is a tech demo and not a game, while a few games that already exist allow me to interact with, rip, tear, shred, perforate, etc. the cloth.

the point is if these types of effects were able to run acceptably on the cpu, we would have seen them in games a long time ago, since multicore cpus for the pc have been around for several years now along with multicore cpus in the consoles, so devs have had ample time to showcase them through software based havok (or physx) - especially on the consoles since the devs don't have to deal with older single core cpus on those platforms. the tools exist, so i don't know what they are waiting for. i mean gpu physx has only been around for like a year and a half too so that's not an excuse.

if anything, someone should tell these devs that they don't know what they're doing since they haven't been taking advantage of the resources they have available to improve the game atmosphere, since the tools to utilize them in cpu physics have been around for a long time. i mean if a dev is using havok (or software physx) and they are developing a multiplatform game on the pc and the consoles with their multicore cpus, then they should be showing off all these effects already - particularly with the ps3 since the cell processor is better than other cpus in terms of physics processing. or maybe it's because these guys actually do know what they are doing, i dunno.
 
I don't think you have any idea what you're talking about here. CPUs get beaten pretty badly by GPUs in regard to physics processing.

No, you're simply not reading what I've typed. I said SPECIFICALLY that I don't think CPUs are better. What I'm saying is that in the case of Batman there is a good chance that, if Nvidia made better use of the CPU, people without Nvidia cards could enjoy a similar level of PhysX effects without having to go out and change their entire GPU solution to Nvidia.

What you're talking about is little more than a vague notion that the extra CPU power would somehow be "just as good" or that it would provide similar performance to GPU physics implementations.

In general, no. Specifically for what is used in Batman, yes. I've provided links to articles from game developers who have built engines which allow almost linear scaling of physics processing on multicore CPUs. I've also provided real-world numbers for Batman's specific use of physics, which clearly show that it only uses a tiny fraction of the CPU for PhysX; it's less than 30%, and the frame rate is almost playable already.

Or at the very least, you seem to suggest that you could get playable frame rates doing the same things using a multicore CPU for physics processing. What exactly do you base this on?

Based on evidence (I've posted) from developers showing near linear scaling of physics on multicore CPUs, with benchmarks for proof.

If 30% CPU usage can provide 25 fps in the game engine, then even if the scaling wasn't perfect (which we both agree on), the other 70% of CPU time could be used to provide a further FPS boost. Maybe it's highly non-linear and can only produce another 10 fps, but that would still be enough to make the game playable for people with decent CPUs and without Nvidia graphics cards.

Scaling is never perfect in any x86 or x64/EM64T implementation. (Though this is getting better all the time.)

Again, I've said several times it's not perfect. The links I posted show near perfect scaling, but I am perfectly aware that things don't scale perfectly; they don't need to.

30% for 25fps is 0.833 FPS per 1% of CPU time

Even if that dropped to something like 0.2 FPS per percentage of CPU time, which is a massive drop (over 4 times less efficient), you'd still get enough additional FPS for it to be playable. And this makes several assumptions which actually work against my argument; for example, the 30% usage is PhysX plus the rest of the game code, so the actual percentage being used for PhysX is smaller, which further offsets these numbers.
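
Spelled out, the back-of-envelope math is below. The 25 fps and 30% figures are the numbers measured above; the 0.2 fps-per-percent floor is an assumption I'm making to be deliberately pessimistic, not a measurement:

```cpp
// Back-of-envelope extrapolation from the measured numbers above.
// Assumes frame rate keeps rising as more CPU is used, which is exactly
// the point under dispute; the "worst case" efficiency is a guess.
#include <cstdio>

int main() {
    const double measuredFps     = 25.0;    // observed with CPU PhysX enabled
    const double measuredCpuPct  = 30.0;    // percent of total CPU in use
    const double idleCpuPct      = 100.0 - measuredCpuPct;

    const double fpsPerPercent   = measuredFps / measuredCpuPct;  // ~0.83
    const double worstCasePerPct = 0.2;     // assumed ~4x efficiency collapse

    std::printf("linear extrapolation:      %.1f fps\n",
                measuredFps + idleCpuPct * fpsPerPercent);        // ~83 fps
    std::printf("pessimistic extrapolation: %.1f fps\n",
                measuredFps + idleCpuPct * worstCasePerPct);      // ~39 fps
    return 0;
}
```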

I don't know exactly how demanding Batman AA's PhysX effects are or whether they could be done even half as well on a CPU. Frankly, based on what you've said, you don't either. Based on what I know concerning the differences in raw computational power between CPUs and GPUs, I seriously doubt the same level of PhysX effects we see in Batman AA can be done with multi-core CPUs vs. GPUs.

How does this make any sense?

You admit you don't know how demanding Batman's PhysX effects are (I suspect quite low), but then you go on to say that you don't think a CPU can match the GPU in Batman. How can you possibly know that without knowing how demanding the game is? It just doesn't make any sense.

Now, that's with PhysX; I'm not sure if another physics processing engine could do it better. So far Ghostbusters is one title we can reference, but you can't really compare them directly given the fact that these two games aren't even apples vs. oranges. More like eggplants vs. carrots. Again, GPUs are much more powerful than CPUs in this area. There is no comparison, so what a GeForce GTX 295 may not break a sweat doing might yield unplayable results using a CPU to perform the same physics effects.
I wasn't really comparing these games directly, which again I've said in my previous post. The reason I referenced Ghostbusters is that it's proof that physics engines can be written to scale across multicore CPUs and make use of large-scale physics.

Let me stress that I'm not saying it's as good as or better than a GPU doing it. What I'm saying is that we can make physics engines on the CPU, we can scale them to use almost all of the CPU's resources, and that PhysX doesn't scale like that; in fact it has very bad CPU usage.

So disagreeing with your opinion is intellectually dishonest?

I think you know that if we made use of the other 70% of the CPU we'd likely get playable frame rates in Batman. You've failed to produce any kind of evidence to show that above the current 30% CPU usage PhysX would stop scaling (becoming worse than 4x less efficient, as calculated above). I think that's intellectually dishonest; if this was anything else, I suspect you'd be inclined to at least question why the resource usage is so small.

It's not as if this is highly unlikely; Nvidia don't sell CPUs, so it massively benefits them that you can only do the effects on their cards.

Way back when, huh? I remember that article. What was being done physics-wise in the Source 2 engine doesn't translate to what they can do now, three years later, in a game like Batman Arkham Asylum. Batman Arkham Asylum makes heavy use of physics-based effects. Much heavier usage of them than any game I've seen to date.

I'm not talking about what effects are specifically being used; I'm talking about the scaling that is possible on multicore CPUs.

You aren't just talking about the tech demos running in a given engine from three years ago either. We are talking about a new game, with tons of visual effects that probably weren't being utilized three years ago in the Source 2 engine, and again with the game's engine, AI, larger levels, and sound effects all running at the same time. You are talking about apples vs. olives.

OK, so do you have any evidence or any reason at all to think that newer physics effects don't scale as well as older ones?

That's Ghostbusters. Different physics engine, different game engine, different everything. Again you can't compare the two directly.

I wasn't. I've said this over and over: it wasn't a direct comparison, it was used as a proof of concept that it COULD be done.

YES physics effects can be done on the CPU. We've known this ever since Alan Wake was first announced. There really isn't a debate here, GPUs are better than CPUs at physics processing. Period.

Yes, and I agree, but this is not what I'm saying; again, I've already said this before, so I'm really just repeating myself.

Now you might be arguing that CPU physics may be "good enough" and that we'd be able to get playable frame rates in these games using that method.

Yes! After misrepresenting my position for god knows how many paragraphs, you finally get it.

To that I say, screw that. I don't want a "playable" method for physics processing, or a "good enough" implementation of physics processing to satisfy those who don't seem to want the envelope pushed for visual quality. I want the best method.

This is asinine and quite frankly incredibly selfish; the world doesn't revolve around just what you want. You don't have to give up GPU processing to improve PhysX on the CPU; you can do both well in one package. You can have your super duper whizbang GPU solution, and those who cannot use GPU PhysX, like ATI users, get fallback effects.

Your argument would make sense if everyone could do GPU-based physics equally, but that's not the case.

It isn't that simple. Do you know how many GFLOPS it takes to get the same PhysX effects found in Batman Arkham Asylum and maintain playable performance? I doubt that you do. (I don't know either.)

No idea; neither of us knows. My argument doesn't rely on knowing the exact GFLOP usage; what it relies on is knowing the current frame rate and the current CPU usage and extrapolating from there.

All I know is that I'm dedicating a Geforce GTX 280 OC to PhysX support in Batman Arkham Asylum and two more Geforce GTX 280 OC cards for graphics processing. I get about 60FPS at 2560x1600. I seriously doubt my Core i7 920 @ 4.0GHz can do the same amount of effects just as well and still provide the same frame rates I'm getting today.

Again, you say you don't know the exact usage in Batman yet doubt your CPU could handle it; what are you basing this on? In fact, the i7 architecture is better than the Q9xxx range and you've got a faster clock, so I'm betting you'd get at least 30 fps if not a little more in the start area of Batman. You could actually try and provide some data; it would only take 5 minutes.

You need to look at the differences between GPUs and CPUs in terms of physics processing / computing power. GPUs aren't ideal for every type of processing out there but in areas like physics and other scientific calculations the CPU doesn't hold a candle to the GPU.

I never said CPUs were better. I've acknowledged several times that GPUs are faster at physics calculations, but not everyone can run GPU physics, so until we all can it makes sense to investigate CPU-based physics.

A typical CPU is measured in GFLOPS. Video cards are breaking the TFLOP barrier all the time now. The difference is staggering.

http://www.tomshardware.com/forum/262886-28-core-gflops-benchmark
http://www.gpureview.com/GeForce-GTX-295-card-603.html

I am aware of this, but I cannot run Batman's GPU PhysX on my card, so this is a moot point.

My options in Batman currently are:

1) No PhysX
2) Switch to Nvidia

Why can't there be a 3rd?

3) Run PhysX on the CPU, have Nvidia make it use more than 30% of the CPU, and get at least some PhysX effects.

Some more articles on physics processing:

http://www.computerpoweruser.com/Ed...=articles/archive/c0610/28c10/28c10.asp&guid=

Comparisons between CPU and GPU PhysX in Cryostasis:

http://www.techarp.com/showarticle.aspx?artno=644&pgno=5
http://www.hardforum.com/showthread.php?t=1376657

This should showcase the difference between GPU and CPU physics capabilities. Yeah they can implement some physics effects via multi-core CPUs, but it won't hold a candle to what the GPU can do.

Yes, I understand GPUs are better at physics than CPUs, but doing physics on the CPU is better than not having it at all. For people without the option of doing GPU physics, why not do what we can on the CPU? In Batman it's obvious that a lot of the effects could be done on the CPU.

All I said was that scaling wouldn't be 100% perfect. It never will be. Sure you might be able to get close to it but I doubt that given developers' track records with that so far. It took forever to get developers to utilize two cores much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VMs or monitoring applications.
Right, so make it an option: physics level 1 with 30% usage, physics level 2 with 60% usage, physics level 3 with 90% usage. We have options for GPUs of various speeds already, so we know it's possible, and we have options for almost every other graphics setting to allow us to balance frame rate vs. image quality. This is just a weak argument.
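
Something along these lines is all I mean. The names and numbers here are completely made up, just to show the shape of the option:

```cpp
// Hypothetical "physics detail" setting that scales the CPU physics budget.
// Nothing here is from a real game; it only illustrates that tiered CPU
// physics is the same kind of knob we already get for shadows, AA, etc.
#include <algorithm>
#include <thread>
#include <cstdio>

enum class PhysicsDetail { Low, Medium, High };

struct PhysicsBudget {
    unsigned workerThreads;   // threads handed to the physics solver
    int      maxDebris;       // cap on simulated debris/paper pieces
    bool     clothEnabled;    // soft-body cloth on or off
};

static PhysicsBudget budgetFor(PhysicsDetail level) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    switch (level) {
        case PhysicsDetail::Low:    return { 1u,                      200,  false }; // the "30%" tier
        case PhysicsDetail::Medium: return { std::max(1u, cores / 2), 800,  false }; // the "60%" tier
        case PhysicsDetail::High:   return { cores,                   2000, true  }; // the "90%" tier
    }
    return { 1u, 200, false };
}

int main() {
    const PhysicsBudget b = budgetFor(PhysicsDetail::Medium);
    std::printf("threads=%u debris=%d cloth=%d\n",
                b.workerThreads, b.maxDebris, static_cast<int>(b.clothEnabled));
    return 0;
}
```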

Beyond that you make a big assumption that if you utilized 70% of the idle processing power across multiple cores that it would somehow equal or be "good enough" to get similar frame rates as a dedicated GPU already provides. I don't think that's going to be the case.

I never said "same frame rates" as a GPU, I said playable frame rates.

You don't think that's going to be the case, so you must have some reason for thinking that. 30% usage provides 25 fps, but with the remaining 70% we can't muster another 10-15 fps? Do you have any specific reason to think that over a certain point we suddenly see a massive degradation in physics scaling? There must be some reason to think this; post an article, an interview with a developer, let me know why you think this so strongly.

No one has posted any reason to think that PhysX can't scale almost linearly on the CPU, like I've demonstrated has been done in the past.

I think GPU physics processing is the way to go.

And for the record, I happen to think that it's a good move for the industry as it gives us more options. What we need is a GPU-agnostic solution; while we have Nvidia in control of PhysX we have a highly biased situation which in the long run will just bite gamers in the ass, because at some point they're going to turn around and screw everyone over. There's already evidence of this, as you've mentioned, because they've already stopped supporting multi-vendor solutions.

However, I do agree that being tied to an NVIDIA card for PhysX sucks ass when NVIDIA is trying to disable their own cards' PhysX capabilities if you use an AMD card for rendering. I'd like to see a physics implementation like PhysX that isn't tied to one brand of cards or another.

I couldn't agree more. I'd love to run Batman's physics on my GPU and play it in all its glory, but while Nvidia are in charge that's always going to be a problem. I don't blame ATI for their stance on support, because they can't afford to let Nvidia corner the market like that, so for the time being there is no one-size-fits-all solution.

Though there isn't a technical restriction on running physics processing on AMD cards, it hasn't been done or pushed the way PhysX on NVIDIA cards has been pushed. So hopefully game developers will start to change this.

I'm not sure this is down to the game developers; I think AMD need to help integrate support and they're being fairly stubborn about it. But while I'd like to play Batman with GPU physics, if it means Nvidia being the sole party in the position of power for GPU physics, that's VERY BAD; they've already proven they're capable of screwing over AMD users.

Had Ageia remained independent and not been bought by Nvidia, I'm sure that AMD would be rushing to implement PhysX support, we'd all be able to use GPU PhysX, and the whole CPU thing would be moot.
 
The problem I have with PhysX is Nvidia appears to have totally half-assed the CPU implementation. I've messed around with the PhysX library, and even using a room full of rigid boxes with 8 threads CPU usage never increased above ~15% as the FPS gradually started dropping. Not only that, but only 1 of the physics threads was really doing work, 1 was doing a little, and 6 threads were idle.

Now, I'm not saying CPUs are better at physics than GPUs, far from it. What I am suggesting is that Nvidia hasn't bothered to fix the CPU scaling that is supposed to be there, thus making the gap between CPU physics and GPU physics artificially larger.



Havok has shown things like that; you just like to ignore them: http://www.youtube.com/watch?v=daZoXzBGea0

Also, the PhysX SDK has sample programs of collide-able particles and guess what? They can run on the CPU, too. So even the PhysX library is capable of doing interactive smoke on the CPU.

That demo is run via Havok OpenCL GPU physics, not CPU physics *ding* *ding*:
http://www.vizworld.com/tag/havok/


But AMD is not pimping Havok (owned by Intel) anymore.
They are now pimping Bullet...with nothing to show ;)
 
No, you're simply not reading what I've typed. I said SPECIFICALLY that I don't think CPUs are better. What I'm saying is that in the case of Batman there is a good chance that, if Nvidia made better use of the CPU, people without Nvidia cards could enjoy a similar level of PhysX effects without having to go out and change their entire GPU solution to Nvidia.

You still don't understand. You have provided absolutely zero proof that Batman Arkham Asylum's physics effects can be done utilizing multi-core CPUs while getting a similar level of performance and effects to what we see using GPU PhysX. I'm not convinced the performance would even be "good enough", though that term is somewhat relative as well. You seem to base your theory on the fact that the CPU usage is low while running the game. That's not proof; that's almost baseless conjecture.

In general, no. Specifically for what is used in Batman, yes. I've provided links to articles from game developers who have built engines which allow almost linear scaling of physics processing on multicore CPUs. I've also provided real-world numbers for Batman's specific use of physics, which clearly show that it only uses a tiny fraction of the CPU for PhysX; it's less than 30%, and the frame rate is almost playable already.

Again, different engines, different physics effects, and different circumstances aren't directly comparable.

Based on evidence (I've posted) from developers showing near linear scaling of physics on multicore CPUs, with benchmarks for proof.

And I've shown you proof that the performance you get with CPU physics performing comparable effects is often unacceptable. PhysX can be run on CPUs. So far the performance in even the tech demos is pretty awful. That's not even adding in a game engine, audio effects, player input, etc.

If 30% CPU usage can provide 25 fps in the game engine, then even if the scaling wasn't perfect (which we both agree on), the other 70% of CPU time could be used to provide a further FPS boost. Maybe it's highly non-linear and can only produce another 10 fps, but that would still be enough to make the game playable for people with decent CPUs and without Nvidia graphics cards.

You base this on what? Games are mostly GPU bound rather than CPU bound. Given the massive difference in performance between CPU and GPUs in terms of running games, your math doesn't remotely check out. Again you base this on a very vague notion and understanding of how game engines work.

Again, I've said several times it's not perfect. The links I posted show near-perfect scaling, but I am perfectly aware that things don't scale perfectly; they don't need to.

Scaling is one thing, actual performance is another. Again I don't think you would get acceptable performance in Batman Arkham Asylum using CPU cores instead of GPU processing for physics.

30% for 25fps is 0.833 FPS per 1% of CPU time

You make a ton of assumptions about how the game engine works. Again, I don't think that your math checks out. This is baseless theoretical nonsense. Again, games are more GPU bound than CPU bound at this point.

Even if that dropped to something like 0.2 FPS per percentage point of CPU time, which is a massive drop (over 4 times less efficient), you'd still get enough additional FPS for it to be playable. And this makes several assumptions that actually work against my argument; for example, the 30% usage is PhysX plus the rest of the game code, so the actual percentage being used for PhysX is smaller.
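Just to spell that extrapolation out (and to be clear, these are the rough forum numbers quoted above being extrapolated, not measurements of anything):

Code:
#include <cstdio>

int main() {
    // Figures quoted in the posts above: roughly 25 FPS at roughly 30% total CPU usage.
    const float fps        = 25.0f;
    const float cpuUsedPct = 30.0f;
    const float cpuFreePct = 100.0f - cpuUsedPct;

    const float fpsPerPct   = fps / cpuUsedPct;              // ~0.83 FPS per 1% of CPU
    const float linearGuess = fps + cpuFreePct * fpsPerPct;  // if scaling stayed linear
    const float pessimistic = fps + cpuFreePct * 0.2f;       // assume scaling 4x worse

    std::printf("linear: ~%.0f FPS, pessimistic: ~%.0f FPS\n", linearGuess, pessimistic);
    return 0;
}

Even the pessimistic figure lands near 40 FPS, which is the whole point; whether the engine would actually behave anything like this linearly is exactly what's in dispute, though.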

How does this make any sense?

What you are talking about here, doesn't.

You admit you don't know how demanding Batman's PhysX effects are (I suspect quite low), but then you go on to say that you don't think a CPU can match the GPU in Batman. How can you possibly know that without knowing how demanding the game is? It just doesn't make any sense.

Given that the game recommends a dedicated Geforce GTX 260 for PhysX processing, I'm inclined to believe that the PhysX effects are quite demanding. I also base this statement on the performance differences between GPUs and CPUs in regard to PhysX processing.

Now, that's with PhysX; I'm not sure if another physics processing engine could do it better. So far Ghostbusters is one title we can reference, but you can't really compare the two directly; these games aren't even apples vs. oranges, more like eggplants vs. carrots. Again, GPUs are much more powerful than CPUs in this area. There is no comparison, so what a Geforce GTX 295 may not break a sweat doing might yield unplayable results using a CPU to perform the same physics effects.

I wasn't really comparing these games directly, which again I've said in my previous post. The reason I referenced Ghostbusters is that it's proof that physics engines can be written to scale across multi-core CPUs and make use of large-scale physics.

I never said they couldn't. What I did say is that the level of physics effects that can be performed on GPUs far exceeds what can be done using multi-core CPUs.

Let me stress that I'm not saying it's as good as or better than a GPU doing it. What I'm saying is that we can write physics engines for the CPU, we can scale them to use almost all of the CPU's resources, and that PhysX doesn't scale like that; in fact it has very poor CPU utilization.
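For anyone wondering what "scaling a physics engine across cores" even looks like, here's a toy sketch of the usual approach: split the simulation into independent islands (groups of bodies that only interact with each other) and solve the islands on different threads. This is not Infernal Engine's or PhysX's actual code, just the general shape of the idea:

Code:
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// An "island" is a group of bodies that only interact with each other,
// so separate islands can be solved in parallel without locking.
struct Island {
    std::vector<float> bodies;              // stand-in for rigid-body state
    void solve(float dt) {
        for (float& b : bodies) b += dt;    // placeholder for a real solver
    }
};

void stepPhysics(std::vector<Island>& islands, float dt) {
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        // Each worker takes every n-th island; no shared state between them.
        workers.emplace_back([&islands, dt, t, n] {
            for (std::size_t i = t; i < islands.size(); i += n)
                islands[i].solve(dt);
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<Island> islands(1024, Island{std::vector<float>(64, 0.0f)});
    stepPhysics(islands, 1.0f / 60.0f);
    std::printf("stepped %zu islands across %u threads\n",
                islands.size(), std::max(1u, std::thread::hardware_concurrency()));
    return 0;
}

Collision detection between islands is the part that's harder to parallelize, which is one reason the scaling is never perfectly linear.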

The lack of CPU usage in Batman Arkham Asylum is NOT necessarily due to PhysX implementation. The game, like most others, is far more GPU bound than CPU bound.

I think you know that if we made use of the other 70% of the CPU we'd likely get playable frame rates in Batman. You've failed to produce any kind of evidence to show that above the current 30% CPU usage PhysX would stop scaling (worse than the 4x-slower case calculated above). I think that's intellectually dishonest; if this were anything else, I suspect you'd be inclined to at least question such low resource usage.

No, it's just that games are more GPU bound than CPU bound. This is not a hard concept to understand. It could be that CPU usage was kept low for a reason, but offloading physics processing to the CPU probably won't do what you think it will. Again, I have my doubts that the CPU's remaining 70% would be capable of matching the performance of GPU physics (or even delivering playable frame rates) under the same circumstances. You've provided no proof that it could.

It's not as if this is highly unlikely. Nvidia don't sell CPUs, so it massively benefits them if you can only do the effects on their cards.

It does benefit NVIDIA greatly to have things this way. It would also benefit them if AMD embraced PhysX too, which would make the API more relevant than it is today. But sadly that would mean that NVIDIA would always have to have an architecture that ran PhysX better than anything AMD has. I doubt they are willing to risk that. This way the two can't be compared directly quite so easily. They can claim it as a feature that AMD GPUs can't do.

AMD used to embrace the Havok and now Bullet physics APIs. Again, NVIDIA GPUs can supposedly do this too. This will be great for the industry.

I agree that having to use an NVIDIA card for PhysX sucks, but if they'd at least stop trying to disable their cards when AMD cards are being used as the primary rendering device, then AMD card owners could simply add a relatively low cost NVIDIA card to their systems and NVIDIA would still benefit. I simply don't understand their thinking here. This would allow them to sell additional GPUs even when the primary card in the system is an AMD part. That's really another topic.

I'm not talking about what effects are specifically being used; I'm talking about the scaling that is possible on multi-core CPUs.

Again I don't think their performance would be sufficient for the sheer amount of effects seen in Batman Arkham Asylum.

OK, so do you have any evidence or any reason at all to think that newer physics effects don't scale as well as older ones?

It isn't really just a question of scaling. I brought it up because scaling is never 100% perfect, but really that's not even the heart of the matter. It is a question of performance. CPUs don't hold a candle to GPUs in regard to physics effects processing. I am not sure I can make that any more clear.

I wasn't. I've said this over and over: it wasn't a direct comparison, it was used as a proof of concept that it COULD be done.

Yes, physics processing can be done on multi-core CPUs. I've never said otherwise. Again what I am saying is that the same level of physics won't necessarily yield acceptable performance. You seem to believe otherwise.

Yes, and I agree, but this is not what I'm saying. Again, I've already said this before, so I'm really just repeating myself.

Yes! After misrepresenting my position for god knows how many paragraphs, you finally get it.

I understand your position. I just don't believe you are right. Yes CPU physics processing is possible and it's already been done. Can we get the same levels of performance in Batman Arkham Asylum using it? I doubt it.

This is asinine and quite frankly incredibly selfish; the world doesn't revolve around just what you want. You don't have to give up GPU processing to improve PhysX on the CPU; you can do both well in one package. You can have your super-duper whizbang GPU solution, and those who cannot use GPU PhysX, like ATI users, get fallback effects.

Your argument would make sense if everyone could do GPU-based physics equally, but that's not the case.

Call it selfish or not, but I don't want technology held back by the lowest common denominator. I don't want Mercedes or BMW to stop innovating just because I can't afford to spend $80,000 on a car. Who's actually selfish here?

No idea; neither of us knows. My argument doesn't rely on knowing the exact GFLOP usage. What it relies on is knowing the current frame rate and the current CPU usage and extrapolating from there.

I think your argument is flawed. CPU usage doesn't indicate a frame rate by itself. I'd bet my frame rates are greater given the same settings, and my CPU usage is probably the same. Your math doesn't check out.

Again, you say you don't know the exact usage in Batman yet doubt your CPU could handle it; what are you basing this on? In fact, the i7 architecture is better than the Q9xxx range and you've got a faster clock, so I'm betting you get at least 30 FPS, if not a little more, in the start area of Batman. You could actually try and provide some data; it would only take 5 minutes.

I base this on the fact that the CPU is a poor substitute for a GPU in terms of physics processing. You are flat out ignoring the gulf between their respective performance. Yes, you could get some of the effects with CPU physics processing, but I doubt you'd get them all and I doubt they'd run just as well.

I never said CPUs were better. I've acknowledged several times that GPUs are faster at physics calculations, but not everyone can run GPU physics, so until we all can it makes sense to investigate CPU-based physics.

Actually they can. It's a matter of game developers implementing physics engines that can run on both AMD and NVIDIA hardware.

I am aware of this, but I cannot run Batman's GPU physics on my card, so this is a moot point.

My options in Batman currently are:

1) No PhysX
2) Switch to Nvidia

Why can't there be a 3rd?

3) Run PhysX on the CPU, with Nvidia making it use more than 30% of the CPU, and get at least some PhysX effects

I see your point, and on a technical level I agree that it would be possible, though I'm not 100% sure what the performance would be like. That would depend on a variety of factors. However, there is a better option:

Game developers need to use a physics processing engine that can run on both AMD and NVIDIA hardware. Since AMD refuses to adopt PhysX, it will have to be Bullet or something else. Hopefully game developers will start taking advantage of OpenCL and DirectCompute, and start using physics engines that can be accelerated on both companies' GPUs.
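As an aside, the vendor-neutral route already exists today on the CPU side. Here's a bare-bones Bullet setup (this is the plain CPU path; Bullet's GPU/OpenCL backends are still experimental, so treat the "runs on anyone's GPU" part as a hope rather than a given):

Code:
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard Bullet world setup: collision config, dispatcher, broadphase, solver, world.
    btDefaultCollisionConfiguration collisionConfig;
    btCollisionDispatcher dispatcher(&collisionConfig);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &collisionConfig);
    world.setGravity(btVector3(0, -9.81f, 0));

    // One falling 1 kg box, just to have something to simulate.
    btBoxShape boxShape(btVector3(0.5f, 0.5f, 0.5f));
    btVector3 inertia(0, 0, 0);
    boxShape.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState motion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody::btRigidBodyConstructionInfo info(1.0f, &motion, &boxShape, inertia);
    btRigidBody body(info);
    world.addRigidBody(&body);

    for (int i = 0; i < 120; ++i)            // two seconds at 60 Hz
        world.stepSimulation(1.0f / 60.0f);

    btTransform t;
    body.getMotionState()->getWorldTransform(t);
    std::printf("box height after 2s: %.2f\n", t.getOrigin().getY());

    world.removeRigidBody(&body);
    return 0;
}

It runs identically whether your rendering card is red or green, which is exactly the property PhysX currently lacks.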

Yes, I understand GPUs are better at physics than CPUs, but doing physics on the CPU is better than not having it at all. For people without the option of doing GPU physics, why not do what we can on the CPU? In Batman it's obvious that a lot of the effects could be done on the CPU.

I don't know what could be done on the CPU without taking a massive performance hit and frankly neither do you. All we can do is speculate. You seem to believe more can be done in this regard than I do. At least in regard to Batman Arkham Asylum. I say the performance hit for some of it would be too massive. If you are only getting 30FPS now then I think almost any performance hit would be unacceptable. You can disagree all you like and this is really a matter of opinion. Neither one of us can pretend to present facts here. I showed data that I believe supports my opinion and you did the same.

Who knows? A game developer that has worked with the PhysX API in detail would be better suited to answer this question. I think the smoke and fog effects would probably cause too much of a hit; paper might be OK, as would sparks and even the cloth effects, but I don't think the bricks and massive pieces of the level in the Scarecrow nightmare sequences could be run on the CPU with acceptable performance. Any one of those effects, excluding the rigid bodies in the nightmare sequences, could probably be done OK for the most part. That is, one effect at a time. Combine two or more of them and I think the system would be brought to its knees.

All I said was that scaling wouldn't be 100% perfect. It never will be. Sure you might be able to get close to it but I doubt that given developer's track records with that so far. It took forever to get developers to utilize two cores much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VM's or monitoring applications.

Right, so make it an option: physics level 1 with 30% usage, level 2 with 60%, level 3 with 90%. We have options for various speeds of GPU already, so we know it's possible, and we have settings for almost every other graphics option to let us balance frame rate against image quality. This is just a weak argument.

I have no problem with that, but again, I don't think your CPU usage calculations are correct. Your idea has merit though. Actually, PhysX in Batman: Arkham Asylum is done this way already; there are adjustable levels of PhysX processing, and the game shows which GPU is recommended for your configuration and settings. Mine recommends a Geforce GTX 295 and a Geforce GTX 260 for PhysX at 2560x1600 with everything but AA maxed.

I never said "same frame rates" as a GPU, I said playable frame rates.

That's a matter of opinion. I asked you to back that up with data, and the data you provided I have refuted. I do not believe the same level of effects rendered by a CPU would be playable. Certainly some of them might be, but it wouldn't be the same.

You don't think that's going to be the case, so you must have some reason for thinking that. 30% usage provides 25 FPS, yet from the remaining 70% we can't muster another 10-15 FPS? Do you have any specific reason to think that past a certain point we suddenly see a massive degradation in physics scaling? There must be some reason to think this; post an article or an interview with a developer and let me know why you so strongly believe it.

No one has posted any reason to think that PhysX can't scale almost linearly on the CPU, like I've demonstrated has been done in the past.

The CPU scaling isn't like that. I don't think 30 FPS = 30% CPU usage. Keep in mind that on a quad-core, a single fully loaded thread only shows up as about 25% total usage, so "30% CPU" can simply mean one core is already maxed out. This is so variable across systems and game engines. The performance offered by two different systems at 30% CPU usage, at 4.0GHz vs. 3.0GHz, varies. It also varies by CPU architecture.

And for the record, I happen to think that it's a good move for the industry as it gives us more options. What we need is a GPU-agnostic solution; while Nvidia is in control of PhysX we have a highly biased situation, which in the long run will just bite gamers in the ass, because at some point they're going to turn around and screw everyone over. There's already evidence of this, as you've mentioned, since they've already stopped supporting multi-vendor setups.

This I can agree with and I've already said as much in the post you just responded to.

I couldn't agree more. I'd love to run Batman's physics on my GPU and play it in all its glory, but while Nvidia are in charge that's always going to be a problem. I don't blame ATI for their stance on support, because they can't afford to let Nvidia corner the market like that, so for the time being there is no one-size-fits-all solution.

I'm not sure if this is down to game developers; I think AMD needs to help integrate support and they're being fairly stubborn about it. But while I'd like to play Batman with GPU physics, if it means Nvidia being the sole party in a position of power for GPU physics, that's VERY BAD; they've already proven they're capable of screwing over AMD users.

Had Ageia remained independent and not been bought by Nvidia, I'm sure AMD would be rushing to implement PhysX support, we'd all be able to use GPU PhysX, and the whole CPU thing would be moot.

Agreed.
 
more advanced physics effects - lower fps
I'm sorry, but rustling papers, cloth banners and dust effects should not cause a 50% FPS drop, especially since they are supported by a dedicated $100+ piece of hardware. All of that has been done on the CPU for years without a significant drop. Plus, the difference between CPU and GPU PhysX so far has not been that significant in terms of better IQ. In the games where it does look a little better with PhysX, it's painfully obvious those things are simply purposely lacking from software PhysX and would have been totally doable without the hardware.

PhysX is probably the most underperforming/unoptimized API to ever exist.
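For what it's worth, "cloth banner" style effects really have been CPU-cheap for a long time; a basic mass-spring sheet with Verlet integration is only a few dozen lines. This is only a toy (no collision, no tearing, arbitrary grid size and stiffness), so it's not claiming parity with what Batman does on the GPU, just that simple waving cloth doesn't need one:

Code:
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { float x, y, z, px, py, pz; };   // current and previous positions

struct Cloth {
    int w, h;
    std::vector<Particle> p;

    Cloth(int w_, int h_) : w(w_), h(h_), p(w_ * h_) {
        for (int j = 0; j < h; ++j)
            for (int i = 0; i < w; ++i) {
                Particle& q = p[j * w + i];
                q.x = q.px = i * 0.1f;
                q.y = q.py = 0.0f;
                q.z = q.pz = j * 0.1f;
            }
    }

    static void relax(Particle& a, Particle& b, float rest) {
        // Push/pull two particles halfway back toward their rest distance.
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < 1e-6f) return;
        float k = 0.5f * (d - rest) / d;
        a.x += k * dx; a.y += k * dy; a.z += k * dz;
        b.x -= k * dx; b.y -= k * dy; b.z -= k * dz;
    }

    void step(float dt) {
        // Verlet integration under gravity.
        for (Particle& q : p) {
            float nx = 2 * q.x - q.px;
            float ny = 2 * q.y - q.py - 9.81f * dt * dt;
            float nz = 2 * q.z - q.pz;
            q.px = q.x; q.py = q.y; q.pz = q.z;
            q.x = nx;   q.y = ny;   q.z = nz;
        }
        // Enforce structural constraints, then pin the top row like a hanging banner.
        for (int iter = 0; iter < 4; ++iter) {
            for (int j = 0; j < h; ++j)
                for (int i = 0; i + 1 < w; ++i)
                    relax(p[j * w + i], p[j * w + i + 1], 0.1f);
            for (int j = 0; j + 1 < h; ++j)
                for (int i = 0; i < w; ++i)
                    relax(p[j * w + i], p[(j + 1) * w + i], 0.1f);
            for (int i = 0; i < w; ++i) { p[i].x = i * 0.1f; p[i].y = 0.0f; p[i].z = 0.0f; }
        }
    }
};

int main() {
    Cloth banner(20, 20);
    for (int frame = 0; frame < 300; ++frame)   // five simulated seconds
        banner.step(1.0f / 60.0f);
    std::printf("bottom corner y after 5s: %.2f\n", banner.p.back().y);
    return 0;
}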
 
Hopefully game developers will start taking advantage of OpenCL and DirectCompute, and start using physics engines that can be accelerated on both companies' GPUs.

Do you believe they are able to do that? That would be the opposite of "money talks, bullshit walks." Hopefully.
 
I'm sorry, but rustling papers, cloth banners and dust effects should not cause a 50% FPS drop, especially since they are supported by a dedicated $100+ piece of hardware. All of that has been done on the CPU for years without a significant drop. Plus, the difference between CPU and GPU PhysX so far has not been that significant in terms of better IQ. In the games where it does look a little better with PhysX, it's painfully obvious those things are simply purposely lacking from software PhysX and would have been totally doable without the hardware.

PhysX is probably the most underperforming/unoptimized API to ever exist.

You have had:
-Interactive, tearable cloth.
-Interactive smoke
-Interactive paper

For years...in the same game...at the same scale?
That is BS...prove me wrong...and not with empty words.
You know: links to games, videos or other documentation.

And you might wanna look into why physics is so computationally heavy...you do know about Folding@home, right? :rolleyes:

Your highly subjective and factually flawed opinion is not a fact...if I didn't know better, I would ask if you were related to Charlie?

EDIT:
Prediction
Wind the clock forward to an unknown date.
AMD (finally) launches GPU Physics.
It shows that AMD's implementation causes the same performance hit.
(or an even bigger one, as it takes 2 AMD GPU FLOPS to do the same work as 1 NVIDIA GPU FLOP in Folding@home)
What will the excuse then be?

I bookmarked your post, to reference on that future date.
 
I may be the exception, but if I didn't know a game was running the PhysX engine, I not only wouldn't realize my FPS were lower, but I wouldn't know that there were physics enhancements in the game at all. It seems like such a minuscule benefit that unless you look for it, in most games it will go unnoticed. Hell, since buying my two GTX 280s, I haven't paid a bit of attention to physics.

The Batman game looks like it takes decent advantage. Some of the effects in the hallways looked cool, but some of them looked like generic, overblown PhysX demonstrations with hundreds of large polygons flying around.
 
I'm sorry, but rustling papers, cloth banners and dust effects should not cause a 50% FPS drop, especially since they are supported by a dedicated $100+ piece of hardware. All of that has been done on the CPU for years without a significant drop. Plus, the difference between CPU and GPU PhysX so far has not been that significant in terms of better IQ. In the games where it does look a little better with PhysX, it's painfully obvious those things are simply purposely lacking from software PhysX and would have been totally doable without the hardware.

PhysX is probably the most underperforming/unoptimized API to ever exist.

This post is full of FAIL. After having played Batman through with PhysX, there is NO way I could ever play it again with ATI hardware. The difference is astounding, night and day. On my rig I average 50 FPS at 1920x1200 with 4xAA and PhysX effects on high. It never slows below 32 FPS with the Xfire frame rate counter turned on. The game is simply a marvel, no pun intended.
 
Of course PhysX lowers FPS; it's taking resources from the GPU to do it, resources that could otherwise be used for rendering. Now, if the GPU had a dedicated physics block in the core, the hit would not be there, assuming the bandwidth to RAM was sufficient.
 
Well, if PhysX is a better implementation than other physics processing engines (I suspect that it might be), then I would hope for it to dominate the market just because it's better. I'm not a programmer or a game developer, so I can't really speak to that point. However, if it is better and it does gain more market share, I would hope that AMD would embrace the technology and license it for themselves. I know that it must suck to have to get something like that from your competitor, but NVIDIA did offer the technology to ATI previously. As it is, AMD already has an x86 license agreement with Intel. A PhysX license from NVIDIA would be no different.

Likewise, if Bullet, Havok or any other technology proves to be better, or at least gain more market share, then I hope NVIDIA would embrace that as well. According to them they can do other forms of PhysX processing on their GPUs as it is.

Unfortunately, what may happen is that NVIDIA and AMD will each have their own solutions, and game developers will have to pick one or the other, or worse yet, try to support both physics engines in their games. This would mirror the OpenGL vs. DirectX days, where a game developer would choose one or the other, and owners of one brand of card would suffer to a degree because the developer chose the technology that the competing card handled better. DirectX ran better on ATI hardware while OpenGL ran better on NVIDIA cards, etc. If that happens, the truth is we all lose. It could add development time for game titles, and it also penalizes one camp or the other when their favorite game was optimized for the cards they didn't choose.

I honestly don't think that we're going to see a single mainstream implementation of a physics API any time soon (unless someone like Microsoft does step in and finally helps to make a decision on it).

The thing I fear most (and sadly the one I think is most likely to come to fruition) is that we'll actually see three competing camps: PhysX with nVidia's support, Havok with Intel's support, and then Bullet (or whatever AMD is pushing at that point) with AMD's support.

It's funny, people complain now about the nightmare that is nVidia controlling PhysX and trying to push it as the top standard (and I'm in this camp, as I don't want any one company to have complete control over whatever the top physics API ends up being). But just imagine what kind of cross-linking Intel can do in regards to pushing Havok as the advantageous physics API to use (combining an Intel GPU with an Intel-supported physics API on an Intel processor). A lot of people here (seemingly mostly nVidia fans) scoff at the idea of Larrabee competing against the top GPUs from nVidia or AMD. While I agree it's still a bit of a ways off, I think a lot of people underestimate Intel's determination or ability to compete, lol.
 