How PhysX Makes Batman: Arkham Asylum Better

Listen you two.
This is the internet. Sane, legitimate, and intelligent discussions DO NOT belong here.
Now.

Get to flaming.:p
 
as far as cpu "gimping", i believe i already offered a compelling argument for this earlier, but i will try to reiterate as best i can. batman was designed to be able to run on at least a single core cpu (meaning not a core i7) per the system requirements. this means that whatever cpu physics they had in mind had to be able to run comfortably on a single core cpu without bogging down the game. so let's continue along this train of thought and go to what you might find to be a feasible situation. let's say rocksteady was humming along with the game. and along comes nvidia knocking on the door. they drop off a sack of cash to eidos and tell them to add gpu physx into the game. eidos accepts. now to backtrack a bit, this game had already been in development for almost two years prior to release according to this link:

Except that argument doesn't work. That isn't a reason to forcefully lock PhysX to a single core as by default it scales itself out to as many cores as there are. Also, that is why there are game options. The minimum requirements are only good enough to play on the lowest quality at a low resolution, but people would be crazy pissed if that was the *only* quality level. It, of course, isn't. Thus, there is no reason to forcefully lower the physics quality to accommodate those with slower CPUs. Instead they should do the same thing that other games do, and add a Physics Quality game option. They already have one internally (for the change from CPU to GPU accelerated PhysX), there isn't any reason they couldn't have made it a user option. And that STILL isn't an excuse for locking PhysX to one core, even if they didn't make the physics quality adjustable. It would at least give those with slower dual cores better performance (which are more common than fast single cores ;) )
 
yeah but i already mentioned that unless there is monetary incentive (usually) involved for a dev to go and make a game have more scaleable cpu physics, they aren't going to. in the majority of cases, they will usually prefer to cut down on development time and costs by having a one size fits all level of physics (from single core on up). that is why you see very few games even have an option for scaling cpu physics in a game. the only reasonable assumption here is that nvidia offered the necessary monetary incentive (motivation) for a dev to go back and expend additional time and effort to add what some might consider to be superfluous physics effects. it is only being assumed by some that the devs intentionally castrated these additional effects, when it could be just as reasonable that the devs only decided to add these additional effects as a result of nvidia asking and offering them compensation to do so, which would already cost them additional development time/costs. so the real legitimate complaint might be why the dev didn't go back and integrate said additional effects in a manner that would work on the baseline cpu requirement (considering there is no scaling option for cpu physics in the game), which i think i already responded to earlier in the thread. i think only in the future with either the porting of physics engines to opencl or perhaps more prevalent physics engines with scaleable cpu/gpu physics that are easy for a dev to implement will this be more commonplace in games.
 
But the devs DID create scalable physics effects. They just locked it to only work and show up in the options menu when you have an Nvidia GPU. Running the physics effects you see when using an Nvidia GPU on the CPU would require basically zero work on the part of the developers, as that is the whole point of the PhysX API. They intentionally locked the CPU out. It was very much a deliberate decision to limit the advanced physics effects to GPUs only.
 

they created scaleable gpu physx, which is different, considering that they were probably given incentive to do so by nvidia in the first place. i thought the point of the physx api was to offer a simple alternative to integrate a physics engine into a game, is it not? it doesn't mean that a dev just "throws" it in there and that's that. if that were true, every single game would have the same exact physics simulations and the same exact number and type of effects being used. additionally, if it was so easy to duplicate these gpu based physx effects into cpu physics effects and never have to worry about performance differences, then every game out there would have them running in software by now. since you seem to believe this is the case, you should go complain to all the devs making havok based games and demand that they make it happen in order to properly reflect this "reality".

as far as scaling on the cpu, i already mentioned they would have to go back and make those effects compatible with the cpu in the first place, which does require effort. cloth based physx isn't going to work exactly the same and perform the same on the cpu as it is on the gpu. it's not just as simple as flicking on a switch. the devs are going to have to rework it based on software physx, and scale the effects like you want across a wide variety of hardware, from single core cpu to multi core cpu users (something like off, low, medium, high). you can't possibly have me believe that doesn't cost time and money by lengthening the development cycle. who is going to compensate them for all that additional effort? no one, unless there is a cpu physics lobby group i don't know about.

furthermore, if what you are saying was indeed the case, then why delay the game by almost a month to integrate gpu physx by patching if it takes zero work at all to do so? mirrors edge came out two months later than the console releases. why the delay if these things are so easy to integrate, manipulate and test?

if they intentionally "locked" the cpu out, then you should go accuse a crapload of devs for doing the same thing given the evidence of so few games offering scaleable cpu physics. (you can also ask why don't all devs offer this so users with single core cpus can lower physics in a game so it performs better on their older hardware.) because, they are not going to take the time without incentive to do so to scale cpu physics. a few might perchance, but you cannot reasonably expect every single developer to do so, especially those coding multiplatform games who already have their hands full. if this wasn't the case, then practically every friggin game out there would have this option. given the dearth of even normally available graphic options in multiplatform games or console ports in comparison to pc exclusive games, then you would see that this is more than likely the case.

you do realize that if no gpu physx effects were ever even included in the first place, then the game probably wouldn't even have those effects in the game at all. they would never have existed in the first place and be gone - kaput. and then no one would enjoy them. because these devs aren't going to work for free. plain and simple. that is why they were offered incentive to add in these effects. otherwise, why would they waste any additional effort?

the game didn't "need" paper in it. but now with the incentive to do so, they went back in the game and added stuff like paper in. in doing so, they had to delay the game. but that's okay because they got compensated for their efforts. now you want them to go back in and put in "zero work" that doesn't benefit them at all, costing them time and money, delaying the game even further, missing their target date to market, and having to face competition against more highly anticipated games and costing them sales as a result? that's quite a bit much to ask, imo.

so unless you plan on offering something else besides just reiterating your stance, i don't see a point in continuing because i'm just going to offer mine and the thread will just be at a "stalemate". either way, nothing is going to change. we both are entitled to our views, and just like i responded to k6, we can just agree to disagree. i don't think i have anything else to add to this discussion anymore.
 
they created scaleable gpu physx, which is different, considering that they were probably given incentive to do so by nvidia in the first place.

Ah, but it really isn't any different. The PhysX API abstracts the details of how the calculations are done away. To the developer, there isn't "gpu physx" or "cpu physx" - it is just "physx".

additionally, if it was so easy to duplicate these gpu based physx effects into cpu physics effects and never have to worry about performance differences, then every game out there would have them running in software by now.

When using PhysX, every effect that you see when run on the GPU will work on the CPU - it'll just be slower. Your argument was that the dev coded for the lowest common CPU denominator. I said that they should just expose the scalability that is already in place in the game. Most people won't be able to increase the physics effects because they have slow CPUs, but some will. I have no idea where you got the ridiculous idea that I claimed there wouldn't be a performance difference :rolleyes:

as far as scaling on the cpu, i already mentioned they would have to go back and make those effects compatible with the cpu in the first place, which does require effort.

And you are, quite simply, wrong. The effects aren't coded for a CPU or for a GPU, they are simply actors in a physics simulation. PhysX handles where the code is run, not the developers. So there isn't any need to make the effects compatible with the CPU, as that is the entire point of PhysX.

it's not just as simple as flicking on a switch.

It really is. Telling PhysX to run in hardware or software is as simple as setting a *single* variable on the master scene. I am not even joking.

the devs are going to have to rework it based on software physx, and scale the effects like you want across a wide variety of hardware, from single core cpu to multi core cpu users (something like off, low, medium, high).

From the PhysX SDK Documentation:

By default, fine grained division of the simulation is not enabled. To enable fine grained threading, the user must specify the NX_SF_ENABLE_MULTITHREAD flag in NxSceneDesc.

When performing each simulation using a number of threads, there is a main simulation thread which controls the division of work and performs tasks which must be executed in serial.

To enable PhysX to scale across multiple cores simply requires specifying a flag telling it to do exactly that.

And again, from the developers perspective there is no "software physx" or "hardware physx" - it is all just PhysX.
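
To make that concrete, here is roughly what the scene setup looks like in the 2.x SDK. I'm writing this from memory, so treat member names like simType and internalThreadCount as approximate, and the whole thing as a sketch rather than production code (useGPU is just a stand-in for whatever the game's own option would be):

#include "NxPhysics.h"

// Sketch only - based on the PhysX 2.x API from memory; exact names may differ.
NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

bool useGPU = false; // stand-in for the game's own hardware/software option

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

// The "single variable" that picks hardware vs. software simulation.
sceneDesc.simType = useGPU ? NX_SIMULATION_HW : NX_SIMULATION_SW;

// The fine-grained threading flag quoted from the docs above, plus a
// worker thread count for the software solver.
sceneDesc.flags |= NX_SF_ENABLE_MULTITHREAD;
sceneDesc.internalThreadCount = 4;

NxScene* scene = sdk->createScene(sceneDesc);

// An actor is just an actor - nothing in its description says CPU or GPU.
NxBodyDesc bodyDesc;
NxBoxShapeDesc boxDesc;
boxDesc.dimensions = NxVec3(0.5f, 0.5f, 0.5f);

NxActorDesc actorDesc;
actorDesc.shapes.pushBack(&boxDesc);
actorDesc.body    = &bodyDesc;
actorDesc.density = 10.0f;
scene->createActor(actorDesc);

The actor setup is identical either way; the scene descriptor is the only place that decides where the work runs.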

if they intentionally "locked" the cpu out, then you should go accuse a crapload of devs for doing the same thing given the evidence of so few games offering scaleable cpu physics. (you can also ask why don't all devs offer this so users with single core cpus can lower physics in a game so it performs better on their older hardware.) because, they are not going to take the time without incentive to do so to scale cpu physics. a few might perchance, but you cannot reasonably expect every single developer to do so, especially those coding multiplatform games who already have their hands full. if this wasn't the case, then practically every friggin game out there would have this option. given the dearth of even normally available graphic options in multiplatform games or console ports in comparison to pc exclusive games, then you would see that this is more than likely the case.

But now you are arguing an entirely different point. The fact is that B:AA has scalable physics. The fact is that those scalability settings don't exist without an Nvidia GPU present. Thus, they are locking out the CPU.
 
I think you are misunderstanding me and I apologize if I am not communicating my thoughts as well as I should. When I say different, I mean the effect won't be the same because they perform differently on the cpu vs the gpu. Therefore they will have to change the effect to work better in software - like static vs dynamic due to performance differences. And if you want to just talk about scaling dynamic effects, what exactly would one consider to be enough to have acceptable performance? My question is, if what you are saying is completely true, why do so few games even offer physics scaling in the first place if it takes zero effort and time to do? Why don't all games have the option so that different systems can achieve different levels of performance? Sorry, but I just don't see it. If you want the exact same effects that perform at the same performance level, then something has to change to achieve the same level. Even with some effects, if you try to scale, it won't perform as well.

So sure, I see what you're saying about exposing scaleability. Let's just say they have the gpu effects available on the cpu. Now the same level and quantity of effects are running on the cpu, just much slower. What if that performance is too slow to be considered playable? So now they have to scale the physics like you say for different levels of hardware. They still have to optimize it for acceptable performance for different levels of cpu, just like they have done with different levels of gpu. Optimizing for different levels of hardware due to scaling differences in performance doesn't take any time or effort? If it doesn't, then I have been misled and apologize. Otherwise if it is true, then that goes back to my question of compensation for additional work. Btw, thanks for discussing and offering your own insight and perspective on this.
 
You are getting caught up in things that are irrelevant. The work that a developer must do for scalable physics is to include optional things that aren't necessary to gameplay that can become physics objects. In the case of B:AA, that basically amounts to paper you can kick. They already did the hard part of making scalable physics options. THAT is the part most games don't have, THAT is the hard part - the extra physics objects that can be enabled are where the work is. B:AA has various levels of physics. It's already there, in the code. They did the work.

The shitty bit is that they only expose those levels if you have hardware PhysX. Those with more powerful CPUs (and kicking paper is something quad core CPUs can easily handle) should have the option to enable those. Will most be able to enable it? Of course not. It is irrelevant if it's playable for most people, just like it's irrelevant that 8xAA isn't playable for most people, or high resolutions, or everything maxed, etc... To enable those extra "GPU Only" physics levels to work on the CPU is what requires zero developer work. The PhysX library takes care of that.

What is playable for most people is completely irrelevant, and you seem to be stuck on that.
 
alright, i think what you are saying seems plausible. so i will just take your word for it since ultimately neither of us can know 100% for sure if that is what the intent was. other than that, i really don't have anything more to add to this discussion, so feel free to linger in the thread. peace out.
 
i wonder how the stand alone hardware ppu's do with this game and if they suffer from the nvidia lockout issue. i'm talking about the cards with dedicated processors that came out before nv gobbled up ageia
 
Not as well, I think. The new physX requirements say that an 8600gt is minimum, and the BFG ppu was almost exactly 1/2 of that in terms of rop's etc...
 
Sure they can. The Nvidia PhysX SDK has examples of exactly that running just fine on my i7 920 :)

Yes, but if you make it a room full of them? Overkill is where hardware accelerated physics shines :D
 
personally, i think all this nvidia hate is kind of silly. i mean, it's a business, yeah? if you're going to hate on people for conducting business, why not hate on rocksteady/eidos for taking nvidia's money to put physx in the game? i don't see how nvidia is more at fault since they're trying to run a business and make a buck as much as the next company...
 
Well, you can be annoyed that they're stagnating the market, like proprietary solutions always do. That said, you're correct, Eidos is just as guilty, and I won't be buying the Batman game until it's <$15 :).
 
Maybe we should title this thread:
How does AntiAliasing, Aniso, 32bit Color, Shadows, High Res Textures, etc. make X game better?

Every time it's the same boring things...when 32 bit color came out, some morons started arguing that it was not worth it, performance hit for eye candy...blah blah blah. Same now with PhysX...go back to playing pong or disable all your eye candy on your GTX 295 so you can get max FPS...I mean, it seems that is all you care about...
 
Someone didn't read the thread...
 
Yes, but if you make it a room full of them? Overkill is where hardware accelerated physics shines :D
made me lol

Maybe we should title this thread:
How does AntiAliasing, Aniso, 32bit Color, Shadows, High Res Textures, etc. make X game better?

Every time it's the same boring things...when 32 bit color came out, some morons started arguing that it was not worth it, performance hit for eye candy...blah blah blah. Same now with PhysX...go back to playing pong or disable all your eye candy on your GTX 295 so you can get max FPS...I mean, it seems that is all you care about...

seriously, who let this guy in here?
 
I didn't notice any difference in Batman with physx on high vs disabled, except for a few papers not flying around with physx disabled. I was using a 9600gt for dedicated physx with 5850 as render gpu. Have since removed the 9600gt and plan to sell it or keep it as backup.

I have to give nvidia's marketing team credit for hyping up physx.
 
Most PhysX titles suck. They remove the ability to boost the Resolution and AA levels (as the added crap from the PhysX engine makes most of them unplayable).

I have a 9800GT dedicated to PhysX, therefore I am not bashing PhysX because I don't own a card that supports the wretched technology. None of the things we see in Batman Arkham Asylum warrant needing a GPU. They would have been possible with a multi core processor.

In fact, paper and cloth simulations would have been easily available to multi core users.

Leave the GPU for Graphics goodness and start using our multiple cores damn it!


*grin*
 
wow he complains about heavier processing loads causing lower performance, yet doesn't realize currently available multicore cpus would offer even exponentially lower performance doing the same thing. of course the effects are possible on a multicore cpu (and are actually available in physx for devs to implement in software if they wanted), if you want to play at <5 fps. i'd like to see a list of games that offer the same level of interactive physics simulations done on the cpu. sorry but gpus are providing a better solution for this type of parallel processing at this time and are moving more towards becoming gpgpus. just running a fluid simulation can illustrate this point.

http://downloads.guru3d.com/PhysX-FluidMark-v1.0.0-download-2022.html

running in software on my dual core at about 60-70% cpu usage, the benchmark drops down to 3fps, while in hardware on an 8800gt it only goes as low as 30fps. so hypothetically, maybe if someone had maxed out cpu usage at 100% on a quadcore, someone could possibly get 10fps in software. that still pales in comparison to the performance of what could be now considered to be a sub $100 low midrange card. since the majority of people probably have a dual core cpu, i don't see how enabling this level of physics in software will help at all (not saying cpu physics can't continue to be improved in general though). otherwise, devs would have implemented them in games a long time ago, especially with multicore processors being available in consoles for several years.

again if you don't have the hardware capable to play at the settings and resolution you want, then you lower them. i play at 1080p and i can still enable hardware physx with some aa on my setup in games. if someone plays at 2560x1600, they better have high end hardware just to enable aa in some games just to have playable frames, much less enabling heavy physics processing. or they could just have a dedicated physx accelerator to help offload the processing to improve performance, as shown in the h batman & physx review.
 
On my computer, main rig in signature, 1680X1050, 2x MSAA:
With physX on, running on 9800physX card w/GTX260 graphics: Min:27, max: 436, Avg: 62

PhysX on CPU: Min: 5, Max 234, Avg: 16

Yea... Not good.
 

On my i7 (sig) I had a min FPS of 7. CPU utilization? 14%. That PhysX FluidMark appears not to have the multithreading capabilities of PhysX enabled, meaning even on your dual core you were still only using one core. The other ~20% was probably coming from other apps (or was your 60-70% for just the PhysX FluidMark process?). Of course, that was also ~30,000 particles. At around ~5,000 particles my FPS was being reported as 80-100 fps.

My own testing has found, however, that even telling PhysX to use 8 threads, it still only loads a single thread of the 8 it creates (the other 7 are idle, even as the FPS plummets to single digits). CPU PhysX basically has zero scaling, which, of course, is absolutely pathetic.
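
For reference, the test loop itself is nothing exotic - roughly the below (again a sketch from memory of the 2.x SDK, assuming the scene was created with internalThreadCount = 8 and a pile of box actors already added), with Process Explorer open to watch the worker threads:

// Step the simulation repeatedly and watch how many PhysX threads actually do work.
for (int frame = 0; frame < 10000; ++frame)
{
    scene->simulate(1.0f / 60.0f);                      // kick off the step
    scene->flushStream();
    scene->fetchResults(NX_RIGID_BODY_FINISHED, true);  // block until the step completes
}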
 

well i tested again and i got the same results. cpu usage was 5% or lower before starting the app, meaning the 60-70% was pretty much the app. so if you have 15% (so maybe a quad core gets 30% and my dual gets 60-70%), it must be that it wasn't made to scale higher, unfortunately. i doubt hyperthreading would have been supported since this was made before the i7 was even out, and quad core users are still a small percentage compared to duals and singles. try the other program i listed from softpedia and see what kind of results you get. press s to toggle the scene.
 

There is only 1 active PhysX thread in the first app. I'm guessing you are getting an extra 10-15% from the render thread running along with the PhysX thread (on my system the render thread was using ~2-3% of the CPU, and the PhysX thread ~11-12%). I am therefore going on the assumption that they just didn't enable PhysX multithreading (which is literally as simple as setting a flag and telling it how many threads to use). However, as I said, CPU PhysX doesn't scale *at all*, its completely broken. And I can't run the second one you posted as I don't have an Nvidia card, so it just dies saying "this needs an 8800 series or better blah blah blah".
 

interesting. so how do you know if something has the physx multithreading enabled in software? like in a game for instance. i know my dual core cpu at peak usage is maxed out at 90-100% when i play mirrors edge or batman in software physx mode and stays pretty high throughout. so if the option is available for multithreading, yet this app and others don't scale "at all", then why isn't it just enabled all the time? do you know of any apps that do have it enabled and working properly? wouldn't the xbox360 (or ps3) make use of it since it can handle up to six threads? just curious. so software physx is supposed to scale, but even when you tell it to, it doesn't? i don't quite understand.

as far as the second app, yeah i guess it defaults to hw mode. i could probably send it to you with sw mode enabled, but i dunno if it's worth the hassle.

still, it is hard to say how much of a performance increase would be gained if a couple more threads were actually put to use in this app since the scaling probably wouldn't be remotely perfect.
 

Use Process Explorer ( http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx ) and look at the threads of the game. The PhysX ones will have a start address somewhere in PhysXCore.dll. If there are a couple of PhysX threads and only 1 is active, then multithreading is probably disabled. As far as Mirrors Edge and Batman are concerned, they probably have their own threads as well hence the higher CPU usage you see (in other words, the extra load isn't coming from PhysX being multithreaded, but the game). Both ATI and Nvidia have multithreaded drivers now, as well, which will also further load your dual core CPU. In my own testing with PhysX, enabling multithreading and telling it to use 8 threads results in 8 PhysX threads being created, but only 1 was really being utilized. The others were either idle or so close to idle as to make no difference (~0.5%). So the problem with not scaling is *internal* to PhysX. Meaning there aren't any games or apps that work properly as the library itself doesn't work properly.

as far as the second app, yeah i guess it defaults to hw mode. i could probably send it to you with sw mode enabled, but i dunno if it's worth the hassle.

still, it is hard to say how much of a performance increase would be gained if a couple more threads were actually put to use in this app since the scaling probably wouldn't be remotely linear.

No, the second app uses Nvidia's proprietary OpenGL extensions, which my ATI card of course doesn't have. Nvidia loves to pull that shit with their demos, even though the standard extension does the same damn thing. So it wasn't a problem with lacking hw PhysX, it was a problem initializing OpenGL.
 

if it's designed to be multithreaded, then what is wrong with the library that keeps the scaling from properly functioning? so even with ppu or gpu enabled physx, there is no multithreaded processing happening either? so if multithreading functioned properly in cpu physx, how much better do you think it would perform in terms of this kind of physics processing?

as for the second part - ah okay, gotcha.

thanks for the link - very useful app.
 
if it's designed to be multithreaded, then what is wrong with the library that keeps the scaling from properly functioning?

Huh? The *game* can be multithreaded without *physx* being multithreaded. Physics is only a small, small part of the work a game engine does. The problem is that CPU PhysX stubbornly refuses to scale across multiple threads (and thus multiple cores), meaning it is limited to, at best, a single core. Most of the time, though, it's going to be limited to a fraction of a single core.

so even with ppu or gpu enabled physx, there is no multithreaded processing happening either?

Yes and no. There isn't any multithreaded CPU processing (because that would serve no point since the CPU isn't doing the work). There is, however, PPU/GPU multithreading, as the PPU/GPU basically only work with multithreaded code (while you could run a "single thread" on a GPU, it would be slow as balls). Physics can be broken up into parallel tasks rather easily, hence why you can run it on a PPU/GPU and get massive speed increases. For whatever reason, CPU PhysX seems to have forgotten how.
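
If you want a feel for why it splits up so easily, here's a toy example (plain C++ with OpenMP, nothing to do with PhysX internals): integrating a big pile of particles is one line away from using every core, because no particle touches another particle's data:

struct Particle { float px, py, pz, vx, vy, vz; };

// Build with OpenMP enabled (/openmp or -fopenmp) and the loop iterations
// are divided across cores automatically - no locks, no shared writes.
void integrate(Particle* p, int count, float dt)
{
    #pragma omp parallel for
    for (int i = 0; i < count; ++i)
    {
        p[i].vy -= 9.81f * dt;   // gravity
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}

Real collision and constraint solving obviously isn't that trivial, but the bulk of the per-object work has the same independent structure, which is why it maps so well to a PPU or GPU.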

so if multithreading functioned properly in cpu physx, how much better do you think it would perform in terms of this kind of physics processing?

Look at what other CPU physics engines are capable of (eg, Havok, Velocity, etc...).
 

1st paragraph - i meant physx itself, not any particular game. sorry should have specified.

yeah i'm familiar with havok and others. i've seen what they've offered up to this point. i just meant specifically physx and what you think the performance boost would have been like in something like batman if multithreaded cpu processing (with your 8 threads example, say) was used to handle the hardware accelerated physx stuff in the game, hypothetically speaking of course.
 
While I admittedly skimmed the last part of the thread, I'd like to point out how it's hilarious that Silus completely avoided the topic of "PhysX can work on a CPU in Batman just as well or better" like it was the black goddamn plague.

No snazzy reply, no :rolleyes:, no nothing.

Not really surprised, but certainly entertained.
 
I just tried the game with hardware PhysX enabled, but to my surprise, when playing the level shown in this video, the water doesn't interact with the character at all. I thought that PhysX was about adding realism to the game; I guess you can't have them all.
 
While I admittedly skimmed the last part of the thread, I'd like to point out how it's hilarious that Silus completely avoided the topic of "PhysX can work on a CPU in Batman just as well or better" like it was the black goddamn plague.

No snazzy reply, no :rolleyes:, no nothing.

Not really surprised, but certainly entertained.


You are not one of those uninformed people that think the CPU can hold a candle to the GPU in physics?
Or that scaling on CPUs is linear with added numbers of cores?

In that case you really need to read up...like some other people in this thread.
 