Batman: Arkham Asylum and PhysX Gameplay @ [H]

Watching the video in the conclusion where Epic is running 3500 boxes and 200 rag dolls, it's hard to see why this has to be an Nvidia exclusive. :confused: And if that can be done with the built-in option on a CPU, you have to wonder how much they got paid off to restrict it to just Nvidia cards...
 
Just got this from NVIDIA:

Regarding the lack of scaling you saw on GTX 295—this is due to an issue in the current driver.

When PhysX is enabled in the driver, the second GPU is dedicated to PhysX. To get the regular ‘graphics SLI scaling’ in the game, disable PhysX in the control panel, but enable Multi-GPU.

In our upcoming driver, the driver will automatically resolve these differences.
 
Nvidia is not doing that because it doesn't benefit them. They sell video cards to do that, not CPUs.

ATI is working on (has?) that capability in their SDK, but nothing has been done with it yet (I think it just came out). And it's hard to tell where ATI is going with this; first they were going to support Havok (maybe they still are?), and now they are going to support an open-source physics project.

I wasn't thinking of just Nvidia, but physics middleware in general. Why didn't any of them pay attention to the GPU when we all knew how powerful they could be in specific situations? Hopefully, with Bullet and Havok jumping on the OpenCL bandwagon, we'll get a completely hardware-agnostic hybrid solution. One can dream, at least...
 
That assumes linear and gradual scaling. Both are false assumptions. Take a particle system of 10 elements for which you have to calculate the effect of every other particle on each particle. The number of interactions is then 10 x 9 = 90. Increase the number of particles 10x to 100, and the number of interactions is now 100 x 99 = 9,900.

So you only increase the number of particles 10x, but you increase your computational workload by roughly 100x. Increase the number by 10x again and the workload goes up by another ~100x. With that kind of quadratic increase you can see why the jump from simple rigid body physics up to the advanced stuff isn't as simple as just using one or two idle CPU cores.
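
As a rough illustration of that counting argument (a minimal sketch, not taken from any real engine), a brute-force all-pairs count shows the quadratic growth directly:

```cpp
#include <cstdio>

int main() {
    for (long long n = 10; n <= 10000; n *= 10) {
        // Brute force: every particle interacts with the other n-1 particles,
        // so a naive solver evaluates n*(n-1) ordered pairs per step
        // (half that if each unordered pair is evaluated only once).
        long long pairs = n * (n - 1);
        std::printf("%6lld particles -> %12lld pair interactions\n", n, pairs);
    }
    return 0;
}
```

Multiplying the particle count by 10 multiplies the pair count by roughly 100 each time, which is the whole problem in a nutshell.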

It doesn't work quite like that either, though. Particle collision is simpler than box collision, as particles are generally spheres (for smoke, fluid, etc.). 2,000 boxes colliding is a *larger* workload than 2,000 particles colliding, for example. Hell, rigid bodies often use bounding spheres or boxes to quickly check whether two bodies are close enough to even collide, and then use more advanced methods to determine whether they have actually collided. So the best case for rigid bodies (none are close) is the same number of calculations as it is for an identical number of particles.
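
For anyone curious, here's a hedged sketch of that broad-phase idea (the Body struct and the narrowPhaseCollide() stub are hypothetical, not from PhysX or any real middleware): a cheap bounding-sphere test rejects most pairs before any expensive narrow-phase work runs.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Body {
    float x, y, z;   // centre of the bounding sphere
    float radius;    // radius that fully encloses the rigid body
};

// Broad phase: two bounding spheres overlap if the distance between their
// centres is less than the sum of their radii (compared squared, no sqrt).
bool spheresOverlap(const Body& a, const Body& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz < r * r;
}

// Stand-in for the expensive narrow-phase test (e.g. testing the actual
// convex hulls); only called for pairs that survive the cheap test above.
bool narrowPhaseCollide(const Body&, const Body&) { return true; }

int main() {
    std::vector<Body> bodies = {
        {0.0f, 0.0f, 0.0f, 1.0f}, {1.5f, 0.0f, 0.0f, 1.0f}, {10.0f, 10.0f, 10.0f, 1.0f}
    };
    int candidates = 0, hits = 0;
    for (std::size_t i = 0; i < bodies.size(); ++i)
        for (std::size_t j = i + 1; j < bodies.size(); ++j)
            if (spheresOverlap(bodies[i], bodies[j])) {
                ++candidates;                                   // survived broad phase
                if (narrowPhaseCollide(bodies[i], bodies[j]))
                    ++hits;                                     // actually colliding
            }
    std::printf("pairs reaching narrow phase: %d, collisions: %d\n", candidates, hits);
    return 0;
}
```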

And that isn't even taking into account other optimizations and tricks.

Also keep in mind that you are going from physics getting a budget of, say, ~25% of 1 core to 100% of 3+ cores.
 
I wasn't thinking of just Nvidia, but physics middleware in general. Why didn't any of them pay attention to the GPU when we all knew how powerful they could be in specific situations? Hopefully, with Bullet and Havok jumping on the OpenCL bandwagon, we'll get a completely hardware-agnostic hybrid solution. One can dream, at least...

Because it's not really all that practical, especially in the past. The GPU really has plenty to do rendering the game; it's only recently that cards with that much overhead have come around, and even then it's arguable that the horsepower is better spent toward higher graphics. Look at Crysis. Even now the game looks good; can you imagine running PhysX with it? The other side of the coin is that GPUs aren't necessarily that good at processing certain types of problems, or at least they aren't very efficient at it. From what I can understand of the new OpenCL work AMD is doing, it's going to utilize both the CPU and the GPU.

Then again, with the GF100 it may be a moot point, at least as far as physics processing is concerned. I will be interested in how this works out.
 
Registered just to give my 2 cents.

I've been PC gaming for a good 15 years now, and don't consider myself an Nvidia fanboy. I recently upgraded my PC for the first time in 8 years (I was very busy for most of those 8 years!), and when it came time to choose a video card, I went for a GTX 275. I figured that, even though the Radeon line had better FPS numbers, Nvidia had better drivers and more clout with developers. After putting more research into it, it seems AMD has improved the drivers for the Radeon line, but I don't think I was wrong about Nvidia's developer support. The gist of my comment is that I want two things from my high-end gaming PC: I want it to look good and I want it to work easily, because my spare time is for gaming and not testing driver configurations. Many reviews I've seen show that the Radeon render path doesn't draw games the same way as the Nvidia render path, and that is probably down to developers using Nvidia as the standard (I admit the only specific game I can remember as an example is the new Wolfenstein, where the Radeon had much glossier lighting effects than an Nvidia card). But on to PhysX.

I agree PhysX doesn't have a "killer app". But it does enhance the visuals in some games, and that's the whole point of a fancy video card, right? Batman: AA came with my card, and after reading this I tried it without PhysX. It's still a good game, but it looks a hell of a lot better with PhysX enabled. I tried running PhysX on the CPU and the framerate stuttered badly. I think you'd need a really beefy CPU to be able to offload it there, but it seems to me that there isn't ANYTHING optimized for 3+ cores out there, and that's on Intel/AMD, not Nvidia.

As for the closed standard... I remember back when the first 3dfx card came out. Remember Glide? Just like PhysX, it was a proprietary API and you had to have a 3dfx chip to run it. Was it detrimental to gaming and consumers, or was it responsible for creating a market and forcing the competition to make an alternative? I remember being pretty skeptical at the time, and like everyone else I wanted OpenGL to win out. But how long did that take? 4, 5 years? My skepticism didn't last that long; I ended up getting a 3dfx chip, and looking back I don't regret it at all. And even if PhysX isn't as widely implemented as Glide was (or even EAX or A3D, closed standards gamers adopted because they improved the gaming experience), I'd rather have the option than not. My graphics card probably cost me $30-50 more than a performance-comparable Radeon, but I get prettier graphics and better compatibility from it. If that's not worth the extra money to you, fine, but understand that is why your card cost less.

And PS: it's a tiny bit naive to expect Nvidia to give away patented research so gamers can play when pharmaceutical companies don't give away patented research so people can live. Sure, it would be nice, but it's just not realistic to think like that. They're businesses, not charities. I agree they could leave in the unsupported option to pair a Radeon for graphics with an Nvidia card for PhysX, but to just give it away and run it on CPUs (the manufacturers of which happen to be direct rivals)... not going to happen.
 
As for the closed standard... I remember back when the first 3dfx card came out. Remember Glide? Just like PhysX, it was a proprietary API and you had to have a 3dfx chip to run it.
...
And PS: it's a tiny bit naive to expect Nvidia to give away patented research

Glide wasn't necessarily as much of an issue because there were other options out there. I don't recall any games that I owned back then that were Glide *only*. It may have run better with Glide (Unreal is a good example), but most of those games also had OpenGL at least, and some even had PowerVR options as well. There are other physics options out there, and in the next year or so I think we'll see devs start to favor the more open standards because they will be accessible to more systems than a locked-down PhysX implementation.

The problem isn't with NVidia giving this tech away; I don't think anyone is complaining that NVidia isn't giving away the PhysX source code, SDKs, etc. The problem is that NVidia isn't allowing PhysX to be run on its OWN hardware just because they find another GPU vendor's hardware in the system. This would be like Reebok disabling the use of its Pump shoes just because it found out you were wearing an Adidas shirt. It limits consumer choices and is a plain skeezy move all around. For them to try and hide behind excuses like "we can't possibly test compatibility with everyone's hardware, so we just decided to disable it" is a steaming pile of crap. PhysX was originally made to be run on an add-on card and to be used with whatever other hardware the user was running. There is no reason for what they are doing besides trying to lock out competition and screw the consumer over by forcing their hand to buy NV hardware.
 
Glide wasn't necessarily as much of an issue because there were other options out there. I don't recall any games that I owned back then that were Glide *only*. It may have run better with Glide (Unreal is a good example), but most of those games also had OpenGL at least, and some even had PowerVR options as well. There are other physics options out there, and in the next year or so I think we'll see devs start to favor the more open standards because they will be accessible to more systems than a locked-down PhysX implementation.

The problem isn't with NVidia giving this tech away; I don't think anyone is complaining that NVidia isn't giving away the PhysX source code, SDKs, etc. The problem is that NVidia isn't allowing PhysX to be run on its OWN hardware just because they find another GPU vendor's hardware in the system. This would be like Reebok disabling the use of its Pump shoes just because it found out you were wearing an Adidas shirt. It limits consumer choices and is a plain skeezy move all around. For them to try and hide behind excuses like "we can't possibly test compatibility with everyone's hardware, so we just decided to disable it" is a steaming pile of crap. PhysX was originally made to be run on an add-on card and to be used with whatever other hardware the user was running. There is no reason for what they are doing besides trying to lock out competition and screw the consumer over by forcing their hand to buy NV hardware.

AFAIK there are no PhysX-ONLY games either. Just like Glide games back then, there are PhysX games that run better on PhysX hardware. The difference between software rendering and hardware rendering is comparable to the difference between Batman/Mirror's Edge with and without PhysX, i.e. you can still play without it, but it looks better with it. I'm all for an open standard winning out, and I agree it's just a matter of time. But until then, why should I not use PhysX? If everyone decided to buy Radeon cards to "send Nvidia a message about closed standards", what incentive would AMD have to develop hardware physics standards?

As for allowing PhysX to run in a mixed configuration, I agree they should have left it as an unsupported option, and said so in my post. But the fact that it WAS designed as an add-on card has nothing to do with the current GPU-powered version. Different hardware, different drivers. Would you install a Xonar DX for movies and an X-Fi Gamer for EAX 5.0 sound in games? I suppose it should be possible, but to me it seems like overkill and would probably run into driver issues, so you choose one over the other. But like I said, they shouldn't have blocked it, especially since people already had it working, but I would have had no problem with them saying "do this at your own risk; we can't be changing our drivers just because AMD changed theirs".


And for the guy questioning my PC gamer credentials based on my rig... I have other things to spend money on besides my gaming PC (like, you know, education, housing, kids and other petty stuff like that). I'd rather have a lot of games to run in low quality than an up-to-date computer with nothing to play. I've probably played and beaten every major FPS to come out since Doom and have competed in Quake 1 and Quake 3 tournaments, even travelling outside of my country to get to them. I had an Athlon 2800 with 768 MB of RAM and a GeForce 5200 MX, and it did just fine for me (though I did receive a 6600 GT hand-me-down from a friend somewhere along the way). It's not a requirement to have a quad-core, 8 GB, multi-GPU setup to enjoy video games.
 
AFAIK there are no PhysX-ONLY games either. Just like Glide games back then, there are PhysX games that run better on PhysX hardware. The difference between software rendering and hardware rendering is comparable to the difference between Batman/Mirror's Edge with and without PhysX, i.e. you can still play without it, but it looks better with it. I'm all for an open standard winning out, and I agree it's just a matter of time. But until then, why should I not use PhysX? If everyone decided to buy Radeon cards to "send Nvidia a message about closed standards", what incentive would AMD have to develop hardware physics standards?

As for allowing PhysX to run in a mixed configuration, I agree they should have left it as an unsupported option, and said so in my post. But the fact that it WAS designed as an add-on card has nothing to do with the current GPU-powered version. Different hardware, different drivers. Would you install a Xonar DX for movies and an X-Fi Gamer for EAX 5.0 sound in games? I suppose it should be possible, but to me it seems like overkill and would probably run into driver issues, so you choose one over the other. But like I said, they shouldn't have blocked it, especially since people already had it working, but I would have had no problem with them saying "do this at your own risk; we can't be changing our drivers just because AMD changed theirs".

Absolutely, I think everyone who wants to run PhysX and wants to buy an NV GPU to run it on should be able to. I just feel that NVidia is overreaching in their control of where they allow GPU-accelerated PhysX to be run. This is extremely time-specific, given that AMD has the fastest single GPU core out there for the moment and their pricing is extremely attractive, but the message behind it stands in any time frame. People are not buying Radeons to "send NVidia a message"; they are buying Radeons because they kick ass and that is their choice for graphics hardware. That NVidia cares about what graphics hardware someone is running, when NV is in the physics hardware business and stands to make a profit regardless of GPU, rubs me the wrong way.

AMD has plenty of incentive to help in the development of open hardware-based physics utilizing their GPUs, because that is the game plan moving forward. To not support and foster its growth would be suicidal. They realize that NV supports OpenCL also and will fight to match them on that front just as they have with graphics APIs thus far. AMD's goal seems to be to make sure their hardware supports OpenCL standards and, if there are roadblocks during software development, to help out those devs (the same way IHV programs work with games). The community and other corporations will take care of creating the software, be it physics or otherwise; AMD has incentive to make sure their hardware runs that software, or they miss out on potential customers.

Also a little off topic, but I know plenty of self-proclaimed audiophiles who game and have X-Fis installed for the best EAX support but think the sound quality is shite or don't like the output support, so they have other cards for movies, music, etc. It might take a little bit of troubleshooting to get it to play nice, but at least the sound card makers don't forcibly block the use of other cards should the user choose to run such a setup.

PS- Welcome to the forums! Don't feel like I'm aiming any of this angst towards you directly, but stunts like what NV is pulling really Grind My Gears. I just like to debate about topics like this and stir the pot, that's what forums are all about.
 
AMD has plenty of incentive to help in the development of open hardware-based physics utilizing their GPUs, because that is the game plan moving forward. To not support and foster its growth would be suicidal. They realize that NV supports OpenCL also and will fight to match them on that front just as they have with graphics APIs thus far. AMD's goal seems to be to make sure their hardware supports OpenCL standards and, if there are roadblocks during software development, to help out those devs (the same way IHV programs work with games). The community and other corporations will take care of creating the software, be it physics or otherwise; AMD has incentive to make sure their hardware runs that software, or they miss out on potential customers.

http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

I hope AMD can get this physics support going as soon as possible.
 
No prob, not offended by the Nvidia/AMD talk, though I didn't like the "not much of a PC gamer" thing much. I play a lot of games, damn it!

Maybe I should have focused my original post a little better. A lot of posts I read gave me the impression people are against PhysX, either because A) it's a closed API, B) it's work that should be done by the CPU, or C) the mixed-configuration issue.

I agree wholeheartedly on C, and feel that Nvidia should have left it as an unsupported, experimental deal. But positions A and B just seem really silly to me. It's one thing to say "the visual improvement doesn't justify the price, and I can get better FPS from a cheaper card". But to imply that PhysX is wrong on some sort of moral/technological level, when there isn't another alternative from other manufacturers (read: Intel and AMD), is just absurd.

It's clear that right now Nvidia's main strength is their user base, not their raw performance. What I was trying to say is that for a gamer like me, who wants pretty games with minimal compatibility hassles, an Nvidia card makes a lot of sense.
 

I may have had an aneurysm trying to read and follow those articles. However, I think I got the gist of it: 1) AMD doesn't have a GPU-based SDK that has been approved by Khronos, and 2) that guy wasn't given any examples of DirectCompute or OpenCL in action on AMD hardware. I really couldn't understand what he was getting at other than those points.

To #1: I know AMD just released their Stream 2.0 SDK on Oct 13; that article was written in September when there were still NDAs in place, so there's that. For #2: Anandtech managed to get a DC demo (one of NVidia's) running just fine on the 58xx series, but his AMD interviewee probably wouldn't have suggested that he try to run that. I know that's not much, but OpenCL and DC are barely infants right now, and I think that is the sole DC demo out there for the time being.

This line seems a tad skewed here:
So, technically, there is nothing now. What we have is a great video card that has future potential. Not the here and now.
Very true. The same could be said for any NVidia card that supports OpenCL and DC, which also has little to no demonstrable software to show... so what was he getting at again? Hardware that supports GPGPU computing has been out for years now, and there are few useful applications for it besides PhysX, Folding@home, and a few encoding applications that often still don't compare favorably to CPU encoding options, although NVidia's is pretty good at this point. Why does he seem to only care that the 5xxx series has little to show? I'm really not trying to come off as an AMD fanboy here and come to their defense. Just trying to state that the interview seems to be poorly conducted, with questionable "conclusions".

Open physics platforms are coming, but it won't happen overnight. Hopefully the progress will be greatly accelerated now that DX11 is an official platform for developers to cater to. Anyway, I'll try not to derail this thread any more than I already have and leave it at that.
 
I think a GPU is more suited for physics calculations, much like how well they run Folding@home.

Some games do seem to tax a CPU just fine. Even World of Warcraft with its old engine can keep newer CPUs fed.

I think the problem is most games are not properly coded for multi-CPU setups.
WoW can push a single core pretty well (enter all of the posts from a long time ago where people would see their CPU utilization at 100% while WoW.exe was running).

At the same time, though, WoW is hardly optimized for multi-core gaming: from what I can recall, they would offload some tasks to the second core, but in general that was the extent of it. Things may have changed, but I still don't see it as a game that would push more than one core intensely.

For quad cores and the upcoming Gulftown, there's absolutely no reason why those additional cores can't be utilized for something else, such as PhysX. I don't think that CPU-accelerated physics would be as effective as GPU-accelerated physics, but as Kyle said, there's a lot of additional work that can be done to take greater advantage of idle cores for current software physics implementation.
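
Just to make the idea concrete, here's a minimal sketch (assuming nothing about how any actual engine schedules its work; the Particle struct and the integrate() step are purely illustrative) of splitting a physics update across whatever extra cores are sitting idle:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, vx, vy; };

// Trivial Euler integration over one slice of the particle array.
void integrate(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;     // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
    }
}

int main() {
    std::vector<Particle> particles(100000, {0.0f, 100.0f, 1.0f, 0.0f});
    const float dt = 1.0f / 60.0f;

    // Use however many hardware threads are available (at least one).
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = particles.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w == workers - 1) ? particles.size() : begin + chunk;
        pool.emplace_back(integrate, std::ref(particles), begin, end, dt);
    }
    for (auto& t : pool) t.join();   // wait for the physics step to finish

    std::printf("first particle y after one step: %f\n", particles[0].y);
    return 0;
}
```

Each worker gets its own disjoint slice, so there's no contention; the harder part in a real engine is the collision and constraint work that doesn't split this cleanly, which is why GPU physics still has the edge for the heavy effects.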
 
Quick question:

If I have a GTX 275 and want to use my old 8800 GTS 640MB for PhysX, after I install both cards in my rig, do I need to connect them using the SLI bridge or just leave them be without it?
 
Quick question:

If I have a GTX 275 and want to use my old 8800 GTS 640MB for PhysX, after I install both cards in my rig, do I need to connect them using the SLI bridge or just leave them be without it?

No SLI bridge required for a dedicated PhysX card.
 
Quick question:

If I have a GTX 275 and want to use my old 8800 GTS 640MB for PhysX, after I install both cards in my rig, do I need to connect them using the SLI bridge or just leave them be without it?

Plug 'em in, tell the driver which card to use; nothing more required other than a good enough PSU.
 
Thanks for the replies.

Yeah, got it figured out. I had to take off my Thermalright "Hot Rod Turbo" HR-03 Ultra cooler and reinstall the old EVGA stock cooling in order to fit it in there, but everything turned out mostly good. Had a hell of a time getting proper contact on all the points after switching back to the generic stock solution; it's like one big brick of suck :p

Found the drop-down selection in the Nvidia control panel for the PhysX option and away I go.

Funny thing, I benchmarked it a few times using the in-game option. The first time I did it with all settings very high, V-Sync enabled, AA off, and PhysX either Off or Normal (dammit, can't remember >.<), and the score came back at 105 FPS max. Weird stuff, since I vaguely remember the article mentioning an FPS cap, which I have not adjusted. It was insanely high, nice one; if only that had still been the case after I switched it to High PhysX.

Did have to use a 2x Molex-to-6-pin converter, taking up what I believe to be the very last open plugs on my Thermaltake 750 W Toughpower PSU, other than those little floppy-type ones.

It's certainly a nice little bonus to be able to still squeeze some more usage out of my "replaced" card, though :D
 
Hi,

Was wondering if PhysX breaks SLI mode in Batman? I have two 285s in SLI, and with PhysX the FPS average is about 33. With no SLI and the second card as a lone PhysX card, FPS goes up to 56 on the benchmark. Why is this? You would expect them to be about the same, wouldn't you?

Thanks.
 
Hi,

Was wondering if PhysX breaks SLI mode in Batman? I have two 285s in SLI, and with PhysX the FPS average is about 33. With no SLI and the second card as a lone PhysX card, FPS goes up to 56 on the benchmark. Why is this? You would expect them to be about the same, wouldn't you?

Thanks.

I don't believe SLI PhysX works properly in this game. So basically, in SLI your primary card is handling graphics + physics while your secondary is only handling graphics. In non-SLI, your primary card handles graphics while your secondary card handles only physics (dedicated PPU). So in the first scenario, the physics is causing the primary card to deal with a much heavier processing load on top of the graphics rendering than in the second scenario, where the physics is completely offloaded to the secondary card. That is why you get higher performance in non-SLI over SLI. At least that is what I hypothesize. Though in another game, Mirror's Edge, I think I got roughly the same performance in SLI PhysX as I did in non-SLI PhysX mode. In future games, I imagine it will be better optimized for this configuration in terms of balancing the physics workload across multiple GPUs.
 
NVidia made a statement about Batman's PhysX in SLI, and it will be fixed in the next driver.
There is a workaround, but I can't remember what it is.
FYI
 
Seems like that's what I thought. I had a third card in there, but I had to yank it because of a fan noise issue. So with SLI and a third card as a dedicated PPU, I was getting like 3 FPS more than with two cards (2x 285) in non-SLI with PhysX enabled. That's like a 5% gain, or a 5% loss now that I'm without that 3rd card. Pretty insignificant. Going to return it and get an EVGA 9800 GT at 600 MHz vs the BFG 9800 GT, which was at 550 MHz. Or maybe just forget about it.

Thanks.
 
Sorry for the bump, but with the truth behind Batman's AA enabling out, could you add a little something: benchmark a 5xxx series card faking an Nvidia hardware ID, or do it without any AA, as AMD advises because of the situation?
 