How PhysX Makes Batman: Arkham Asylum Better

When I replied to the other post, my point was that the implication that PhysX as a physics API is exclusive is wrong. PhysX can be used by everyone, and you do not need an NVIDIA card to use it. PhysX calls sent directly to a GPU (GPU physics) are, however, tied to NVIDIA hardware, since no one else can do it at this point (not because NVIDIA blocked them; they just didn't license the tech). If you don't have an NVIDIA GPU, then obviously GPU physics won't work with PhysX, and the physics processing will be done on whatever is the default hardware component (typically the CPU).
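To be concrete about what that fallback looks like: as far as I remember the 2.x SDK (so treat the exact names below as approximate - this is a sketch, not any game's actual code), the application asks for a hardware scene and simply drops to a software one if no capable hardware is there:

#include "NxPhysics.h"

// Create a scene, preferring hardware (GPU/PPU) simulation and falling back to the CPU.
NxScene* createSceneWithFallback(NxPhysicsSDK* sdk)
{
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = NX_SIMULATION_HW;        // ask for accelerated simulation
    NxScene* scene = sdk->createScene(sceneDesc);
    if (!scene) {
        sceneDesc.simType = NX_SIMULATION_SW;    // no NVIDIA hardware: same API, the CPU does the work
        scene = sdk->createScene(sceneDesc);
    }
    return scene;
}

Same calls either way; only where the simulation runs changes.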

And in the case of this game, physics effects that would work perfectly fine on most CPUs were outright removed. Not made into a physics option slider for those with more powerful CPUs to take advantage of, but straight up removed. THAT is bullshit. THAT is exclusive behavior. PhysX itself isn't, but what this game did IS.

As for what you linked to, it's pretty much the standard practice, and I'll explain it in a very simple manner. Having one video card render graphics and process physics is "simple". There is no context change, and the data is present in the same GPU, which can act/react to what was calculated in both graphics and physics.
Having different video cards, one for graphics and another for physics, is quite another thing. Even more so when they are from different brands. NVIDIA can't guarantee proper functionality with cards from other manufacturers if they (the NVIDIA cards) are only being used for physics.
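For reference, here is roughly what the dedicated-card setup looks like from the CUDA side. This is only a sketch of device selection with the standard CUDA runtime API, not PhysX internals, and it assumes device 0 is the card driving the display:

#include <cuda_runtime.h>
#include <cstdio>

// Pick a CUDA device to dedicate to physics work, preferring a secondary card.
int pickPhysicsDevice()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;                      // no CUDA-capable card at all: CPU fallback
    int dev = (count > 1) ? 1 : 0;      // use the second card if one is present
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    std::printf("Physics device: %s\n", prop.name);
    cudaSetDevice(dev);                 // later CUDA work in this thread targets that GPU
    return dev;
}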

People that complain either don't know how it works, or think that NVIDIA is in the business of charity or something and that they need to invest in a tech and still support other manufacturers that will reap the benefits of that tech, without effort. If other manufacturers want support, then they have to work with NVIDIA so that they get support.

That is a bullshit argument and you know it. Using two nvidia cards (1 for graphics, 1 for physics) vs. 1 ati card for graphics and 1 nvidia card for physics is the same damn process. The game makes graphics calls which go to the driver and to the card. The game makes PhysX calls which go to the driver which go to the card. The two are entirely independent flows that are in no way linked. At all. The nvidia card handling physics operates completely independently from the card rendering graphics. On the same card they sort of are linked, yes, but when there are two physical cards they aren't. Also, when using a single card there *IS* still a context change. The process is *MORE* complex for a single card. The graphics rendering also cannot directly access the physics results like you imply - it doesn't work like that. And afaik CUDA still works with an ATI card doing graphics. Since PhysX runs on CUDA, there isn't a single technical reason for this limitation. None. It is entirely a marketing move by Nvidia, nothing more.
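To put it concretely, a frame basically looks like this (PhysX 2.x-style names from memory, with the renderer side stubbed out as placeholders - a sketch, not Batman's actual code):

#include "NxPhysics.h"

void issueDrawCalls();                            // placeholder: D3D/OpenGL calls to the render card
void updateRenderablesFromPhysics(NxScene* s);    // placeholder: copy transforms back for drawing

void runFrame(NxScene* scene, float dt)
{
    scene->simulate(dt);                          // physics kicked off (CPU or CUDA card)
    scene->flushStream();
    issueDrawCalls();                             // graphics calls go down a completely separate path
    scene->fetchResults(NX_RIGID_BODY_FINISHED, true);   // block until the physics step is done
    updateRenderablesFromPhysics(scene);          // results come back through ordinary memory
}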
 

***Applause*** followed by ***Standing Ovation***

It gets worse when...

http://www.youtube.com/watch?v=AUOr4cFWY-s <--- 1920x1200, 4x Adaptive AA and 16xAF, all in-game settings maxed and PhysX turned on (with the workaround) to get multi-threaded CPU PhysX (as nVIDIA locks PhysX to a single CPU core in order to market their cards as being better than they actually are). HD version should be done processing soon.

Your CPU can handle the PhysX no problem, but it is limited by nVIDIA in order to sell their GPUs for PissX technology.
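For what it's worth, how many CPU worker threads PhysX gets is decided when the game (or a hacked config) creates the scene. If I'm remembering the 2.x scene descriptor right - treat the field and flag names here as from memory, not gospel - it is something like:

#include "NxPhysics.h"

// Sketch: ask for a software (CPU) scene with its own pool of worker threads.
NxScene* createCpuScene(NxPhysicsSDK* sdk)
{
    NxSceneDesc sceneDesc;
    sceneDesc.simType = NX_SIMULATION_SW;          // plain CPU PhysX
    sceneDesc.flags  |= NX_SF_ENABLE_MULTITHREAD;  // enable the internal thread pool
    sceneDesc.internalThreadCount = 3;             // e.g. three workers besides the main thread
    return sdk->createScene(sceneDesc);
}

Nothing about the hardware forces that count to one; it's set in software.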
 
cool!

you should add your system specs to "more info" on the video.

Also, I would love to know what your CPU usage was
 
Added the system specs :)

Yeah... I'll try it out later with Core Temp turned on for my G15. I'll record the CPU usage. I would assume it's maxed, given how fluid the game runs with that many physical interactions.
 
Wow, this is really amazing. This is exactly what I was stating earlier, but I am very impressed you were able to do it on your own. Any chance you'll write a guide of some sort? Excellent work! :cool:
 
After watching the first video linked in this thread, I strongly suspected the physics in Batman were intentionally gimped to pimp Nv's hardware-accelerated PhysX. After watching ElMoIsEviL's video I am almost convinced of it. I only say almost because I cannot really say whether it was just pure laziness on the part of the devs, an under-the-table thing between the devs and Nv, or whether the workaround was intentionally left out for stability reasons. Whatever the truth is, the finger pointing in the forums will be a fun read.
 
Not sure if you guys knew this, but nVidia GPUs don't have a special PPU. PhysX is just drivers, because you can modify the PhysX drivers to work with ATi GPUs.
 
Not sure if you guys knew this, but nVidia GPUs don't have a special PPU.

So far so good....

PhysX is just drivers, because you can modify the PhysX drivers to work with ATi GPUs.

Oooh, yet no good. PhysX is *not* just drivers. It is, in fact, a CUDA-accelerated library. It is really just bundled with the drivers; it isn't a driver itself. That said, CUDA simply does not work with ATI GPUs at the moment. Thus, no amount of wishing will convince PhysX to run on ATI GPUs. There is no technical reason that ATI GPUs can't have a PhysX or CUDA implementation, but as it stands right now one simply doesn't exist (ATI and Nvidia don't like to get along). Things like DX11's Compute Shader and OpenCL will hopefully fix the situation, but until then, you cannot run PhysX on ATI GPUs - at all.
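The "CUDA doesn't run on ATI" part isn't some hidden switch either - the CUDA runtime simply never enumerates a non-NVIDIA card. A trivial check (standard CUDA runtime API) shows what a CUDA-based library sees on an ATI-only box:

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        // This is all an ATI-only system ever reports, so a CUDA library
        // like PhysX has nothing to dispatch GPU work to.
        std::printf("No CUDA device available (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("%d CUDA device(s) available for GPU PhysX\n", count);
    return 0;
}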
 
There are two sides to every negotiation. NVIDIA might be asking an unreasonable price for the license to accelerate physx on ATI GPUs.

Maybe, but that's not what AMD said they did. They said they didn't "contact them through proper channels", which is quite funny really...
 
kllrnohj said:
That is a bullshit argument and you know it. Using two nvidia cards (1 for graphics, 1 for physics) vs. 1 ati card for graphics and 1 nvidia card for physics is the same damn process. The game makes graphics calls which go to the driver and to the card. The game makes PhysX calls which go to the driver which go to the card. The two are entirely independent flows that are in no way linked. At all. The nvidia card handling physics operates completely independently from the card rendering graphics. On the same card they sort of are linked, yes, but when there are two physical cards they aren't. Also, when using a single card there *IS* still a context change. The process is *MORE* complex for a single card. The graphics rendering also cannot directly access the physics results like you imply - it doesn't work like that. And afaik CUDA still works with an ATI card doing graphics. Since PhysX runs on CUDA, there isn't a single technical reason for this limitation. None. It is entirely a marketing move by Nvidia, nothing more.

I didn't imply that; however, I did say that you cannot send data computed by CUDA to a non-CUDA-ready GPU, which is a quite clear notion. Through CUDA you can access any given "connected" GPU, but it needs to "understand" CUDA, otherwise you can't do anything with it, which isn't the case for ATI GPUs. Two CUDA-ready GPUs make things SIMPLER, not more complex as you say.
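To spell out what that means in practice: CUDA results can't be handed card-to-card to a non-CUDA GPU, so they have to round-trip through system memory and be re-uploaded with the ordinary graphics API. The CUDA calls below are real; updateVertexBuffer() and g_rendererBuffer are placeholders for whatever the renderer actually uses:

#include <cuda_runtime.h>
#include <vector>

// Placeholders for the renderer side (D3D/OpenGL on whichever card draws the frame).
extern void* g_rendererBuffer;
void updateVertexBuffer(void* buffer, const void* data, size_t bytes);

void publishParticlePositions(const float4* d_positions, size_t count)
{
    std::vector<float4> host(count);
    // CUDA card -> system RAM over PCI-E
    cudaMemcpy(host.data(), d_positions, count * sizeof(float4),
               cudaMemcpyDeviceToHost);
    // system RAM -> the rendering GPU (NVIDIA or ATI alike), via the graphics API
    updateVertexBuffer(g_rendererBuffer, host.data(), count * sizeof(float4));
}

On a single CUDA-capable card that copy can be skipped entirely via CUDA/graphics interop (e.g. cudaGraphicsGLRegisterBuffer), which is why the one-card case is the simpler one.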

And NVIDIA's answer was pretty clear. They do have "business reasons" to block other hardware from GPU physics, which is tied to the fact that they won't support their own tech on competitors' hardware that doesn't even license it. But there are also technical reasons: it would make them spend more time in QA to provide support to someone that doesn't even license PhysX, and that won't happen.

Let's assume you work for a company and you're working on a product which shares common technology (that you are fluent in) with products from competing companies. Do you support their products too, without any kind of agreement that involves sharing tech or resources? You don't, and you know that too, but somehow you think that NVIDIA, even though it bought the tech and is pushing it as a feature for its products, must also support "other" hardware, just to please the "GPU physics crazed AMD fanboys".
 
And? The physics is very buggy: cloth papers get stuck in Batman, their vertices are jumping all over the screen - say thanks to the decreased MaxPhysicsSubsteps value. No magic.


No one said it was perfect, but it is pretty good for a half-assed hack. Especially when compared to CPU PhysX in the first video of this thread. Whether it was out of laziness or some under-the-table thing, the game is gimped for CPU physics.
 

Exactly. It's there to prove that NV is full of shit. PrincessFrosty brought up the CPU vs. GPU physics in a thread two weeks ago about dishonesty in the comparisons for Batman: Arkham Asylum. Nice to see him vindicated and proven correct.

I'm not saying simply having a quad means you can run all the PhysX effects off a CPU. Elmo is using a Core i7 @ 4.4GHz, so I doubt my C2Q @ 3.2GHz could do the same as him. ;) I'm saying that they should have put a slider in to allow the user to decide how much PhysX they want offloaded to the CPU.

Bottom line is NV has purposely crippled CPU PhysX in Batman: Arkham Asylum in order to make their GPUs look better, which is bullshit. It's also why I'm against using an API owned by a major video card manufacturer. NV, AMD and Intel should not have direct control over any physics API, because we get bullshit like this.
 
Excellent points. No offense to Elmo, and this could be incorrect because I'm not a programmer, but I imagine that a team of developers could optimize the hack to run even more smoothly than what is presented here. I think a slider (something like "off" -> "low CPU" -> "high CPU" -> "GPU", the last option only available to NVIDIA users) would have been the professional way to implement physics in the game. I still maintain that this was an underhanded marketing move. TWIMTBP is one thing; this totally crosses the line.
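Purely as an illustration of what that slider could mean under the hood (I'm not a programmer, so take this as hypothetical pseudo-settings - every name and number is made up, nothing like this exists in the game):

// Hypothetical mapping from a physics-detail slider to engine settings.
enum PhysicsDetail { PHYS_OFF, PHYS_LOW_CPU, PHYS_HIGH_CPU, PHYS_GPU };

struct PhysicsConfig {
    bool hardwareScene;   // true only on the NVIDIA-accelerated path
    int  cpuThreads;      // CPU worker threads for software PhysX
    int  maxParticles;    // cap on debris/paper/spark particles
    int  clothSubsteps;   // cloth solver quality
};

PhysicsConfig configFor(PhysicsDetail d)
{
    switch (d) {
        case PHYS_OFF:      return { false, 1, 0,     0 };  // scripted/static effects only
        case PHYS_LOW_CPU:  return { false, 2, 2000,  1 };  // sparks and paper, modest counts
        case PHYS_HIGH_CPU: return { false, 4, 8000,  2 };  // quad-core budget
        case PHYS_GPU:      return { true,  1, 30000, 4 };  // NVIDIA-only path
    }
    return { false, 1, 0, 0 };
}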
 
I look at it more as a cost-effective strategy: let's use our old GPU (instead of selling it for $50) and dedicate it to PhysX/CUDA.

This I think is targeted at the $600 budget gamer, not those of us that have $2k into our rigs.

I would take a e7600 3.06ghz C2d and a 9600GSO (my old card as the example) over a Q8400 an non
This is all assuming that there is a decent list of PhysX titles and that I don't OC.

The problem really lies in the fact that there are very few games that support it, BUT since NVIDIA dumps millions into devs every year it should start coming around.

And for those who think that NVIDIA is "buying" game devs, wake up. They don't have to give these devs money, and they did it before PhysX. WHY? To make sure that on launch day your new game will run great.

I think that this game did make the mistake of disabling things that should still be there regardless of PhysX. But that's not entirely to blame on NVIDIA.
 
I look at it more as a cost-effective strategy: let's use our old GPU (instead of selling it for $50) and dedicate it to PhysX/CUDA.

This I think is targeted at the $600 budget gamer, not those of us that have $2k into our rigs.
Proprietary, niche, gee-whiz gaming effects are never targeted at budget gamers. They'd still have to buy another card, never mind that PhysX works better on a single GTX GPU than on a mid-range (GTS) card paired with an older, dedicated card. PhysX is a marketing gimmick to sell new cards. It does NVIDIA no good if people keep their old cards or buy used ones for PhysX - I think very few people buy new GPUs solely to use as PhysX processors.

I would take a e7600 3.06ghz C2d and a 9600GSO (my old card as the example) over a Q8400 an non
This is all assuming that there is a decent list of PhysX titles and that I don't OC.

The problem really lies in the fact that there are very few games that support it, BUT since NVIDIA dumps millions into devs every year it should start coming around.
Then you're in a minority. PhysX was bought by NVIDIA over a year and a half ago and has yet to have a killer app. How deep do you think NVIDIA's pockets are?

And for those who think that NVIDIA is "buying" game devs, wake up. They don't have to give these devs money, and they did it before PhysX. WHY? To make sure that on launch day your new game will run great.

I think that this game did make the mistake of disabling things that should still be there regardless of PhysX. But that's not entirely to blame on NVIDIA.
What? Yes, they do have to give devs money. No developer is going to waste its resources coding and developing a proprietary physics implementation without some kind of incentive (e.g. money). Otherwise they can no longer advertise "awesome new physics" in their game, but only for NVIDIA GPUs. This kills their market, and they need monetary compensation for it. What games tout PhysX? Those from unheard of developers. Why? Because it doesn't cost as much for NVIDIA to "help" these companies, and these companies NEED the money. It's therefore a cheap(er) way to add titles to the "PhysX library." The first (semi) deviation from pattern is Batman from Eidos, but let's be honest - Eidos hasn't really released a decent game since Hitman: Blood Money (correct me if I'm wrong, and no, Conan doesn't count) in 2006. I imagine they're hurting and could use the (financial) support. Anyway, those extra effects were completely disabled without hardware acceleration to exacerbate the difference between PhysX and no PhysX. It's a marketing gimmick meant to trick the average gamer, which I see as underhanded. Therefore, NVIDIA is to blame.
 
Here's something interesting. All tests run with the specs listed. It seems SLI is inefficient when physx is involved

baastats.png
 
I thought the numbers spoke for themselves, but I'm seeing higher fps when the GPUs are working alone - one for video, one for physx - rather than loading both video and physx onto them in SLI mode. In other words, there's a performance penalty for running SLI.
 
To me that demonstrates that the GPU physics processing becomes the bottleneck, rather than SLI being inefficient.
By using one card for PhysX you relieve the bottleneck.
i.e. if you were to add another card for PhysX, or break SLI to use one card for PhysX, either will net higher framerates.

I can see where you are coming from though; it's as if both GPUs (or just one) are performing the same PhysX calculations rather than each performing different calculations.
It may be a side effect of needing to transfer data between GPUs for SLI PhysX when performing different calculations, rather than SLI being inefficient.
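A crude way to picture it (the numbers are made up and this deliberately ignores how well SLI actually scales the render load - it's only meant to show why letting the physics load overlap instead of stack matters):

#include <algorithm>
#include <cstdio>

int main()
{
    const double graphicsMs = 18.0;  // per-frame render cost (illustrative only)
    const double physicsMs  = 12.0;  // per-frame GPU physics cost (illustrative only)

    // SLI sharing both jobs: assume AFR roughly halves the render cost per frame,
    // but the physics work still lands on top of it.
    double sharedFrameMs    = graphicsMs / 2.0 + physicsMs;

    // One card renders, the other only runs PhysX: the slower job sets the pace.
    double dedicatedFrameMs = std::max(graphicsMs, physicsMs);

    std::printf("SLI sharing both loads: %.1f fps\n", 1000.0 / sharedFrameMs);
    std::printf("dedicated PhysX card:   %.1f fps\n", 1000.0 / dedicatedFrameMs);
    return 0;
}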
 
sorry, I don't think I follow.

Either way, it's two video cards running the same workload.
 

SLI means that the workload of both physics and graphics will be shared between both cards.

Having two cards (not in SLI) means that one will be dedicated solely to graphics and the other solely to physics.
 
Right, and it appears the sharing process leads to lower performance.

I just expected the driver to be able to properly balance the physx and video loads, rather than needing to manually isolate the loads to separate cards by disabling SLI.

I think I'm going to be able to pick up a third GTX 275 pretty cheaply. I'll be curious to know how SLI vs non SLI behaves in that setup.
 
It looks to me as though it's much easier to max out a graphics card with physics processing than it is to draw the game's graphics.
As each PhysX processor needs to pass data to each graphics card when in SLI, it creates a larger workload for the PCI-E bus and the cards, increasing latency.
If the latency is too large it will cause dropped frames; maybe that is what is actually happening.
Alternatively, if each card processes PhysX for itself, then both cards are doing the same calculations, which makes for inefficient use of the GPUs.

It does bring up an interesting point though.
Perhaps NVIDIA needs to check which is the better method before allocating GPU resources in SLI.
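If anyone wants a feel for what that PCI-E traffic costs, timing a device-to-host copy with CUDA events gives a ballpark for one hop (standard CUDA runtime API; the buffer size is just an example):

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main()
{
    const size_t n = 100000;                    // e.g. 100k particles
    const size_t bytes = n * 4 * sizeof(float); // xyzw per particle

    float* d_buf = 0;
    cudaMalloc((void**)&d_buf, bytes);
    std::vector<float> h_buf(n * 4);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(h_buf.data(), d_buf, bytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("copied %u KB in %.3f ms\n", (unsigned)(bytes / 1024), ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    return 0;
}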
 
And? The physics is very buggy: cloth papers get stuck in Batman, their vertices are jumping all over the screen - say thanks to the decreased MaxPhysicsSubsteps value. No magic.

To be quite honest, we don't all sit there watching the paper's every move. The paper could get stuck in Batman's butt crack for all I care and I wouldn't notice it... paper is moving, that's all I care about.

And it's being done by the CPU.

You see, I DO have a dedicated PhysX card in the system (a 9800GT), but nVIDIA has decided to disable it on me because my main graphics card is not an nVIDIA card (another marketing move on their part). It worked before... but with the new PhysX libraries nVIDIA has disabled it. They claim it's to "help" me, because apparently I was not getting the "PhysX experience I deserved". For most games, reverting to an older nVIDIA driver version works... except Batman, which demands the new libraries.

I don't understand how anyone can defend nVIDIA. It's quite clear what they're doing. They think that we're all NOT going to buy AMD's new cards (Radeon HD 5870) because we don't care about framerates, DX11, DirectCompute, tessellation, etc., and instead we all want PhysX. Don't take my word for it... take theirs:
http://xbitlabs.com/news/video/disp...ill_Not_Catalyze_Sales_of_Graphics_Cards.html
http://web.servicebureau.net/conf/meta?i=1113127409&c=2343&m=was&u=/w_ccbn.xsl&date_ticker=NVDA

I'm not happy, I am angry. I am angry because I am being told what I want and having decisions made for me (talk about big government). There are unreasonable controls being implemented in our drivers and games and unjustifiable roadblocks placed in front of us. Whether authoritarianism comes from Government or Private Enterprise, I won't have it. I welcome hackers finding ways to circumvent nVIDIA's BS. And if AMD starts down the same path I welcome the same happening to them.
 

umm... have you tried the hacked Batman AA physx libraries that have been posted with the ATI hack?
How to accelerate ATi graphics cards in Batman with PhysX on:

1/ install the D2D version

2/ download and install the perfect patch from TL


3/ download and install the PhysX pack; put all the files in the pack over the files in the game install dir, following the dir structure.

http://rs342.rapidshare.com/files/275402725/abt.rar

4/ download and install the newest PhysX driver from NVIDIA

http://www.nvidia.com/object/physx_9.09.0814.html

5/ go to X:\Users\Your username\Documents\Eidos\Batman Arkham Asylum\BmGame\Config and modify the UserEngine file: change "PhysXLevel=0" to "PhysXLevel=1", and change "bAllowMultiThreadedShaderCompile=FALSE" to "bAllowMultiThreadedShaderCompile=True" (the "True" value doesn't work well for me, though - it brings slower fps on my Q6600).

save the UserEngine file.

6/ go to X:\Program Files (x86)\Eidos\Batman Arkham Asylum\Engine\Config and modify BaseGame.ini; after modification, the "[Engine.WorldInfo]" section should look like below:

[Engine.WorldInfo]
DefaultGravityZ=-750.0
RBPhysicsGravityScaling=1.0
MaxPhysicsSubsteps=1
SquintModeKernelSize=128.0
EmitterPoolClassPath=Engine.EmitterPool
DecalManagerClassPath=Engine.DecalManager
FractureManagerClassPath=Engine.FractureManager
FracturedMeshWeaponDamage=1.0
ChanceOfPhysicsChunkOverride=1.0
bEnableChanceOfPhysicsChunkOverride=True
FractureExplosionVelScale=1.0

save the BaseGame.ini file.

7/ go to X:\Program Files (x86)\Eidos\Batman Arkham Asylum\Binaries and run BmLauncher.exe. Here you can set the 3D options, but you already enabled PhysX in step 5, so leave that alone. After setting things up, you can directly click "play" to start the game, or you can quit the settings window and double-click BmStartApp.exe to start the game.

Can you try it and let us know if that'll enable it to work?
 
umm... have you tried the hacked Batman AA physx libraries that have been posted with the ATI hack?


Can you try it and let us know if that'll enable it to work?

Anti Aliasing?

It works for me when I set it in the ATi Control Panel (I have Adaptive AA enabled as well).
 

Not anti-aliasing - by Batman AA I meant Batman Arkham Asylum. You said the new libraries stopped PhysX from working in this one game; I was wondering whether using that hacked PhysX library to replace the files in the game folder would let you use PhysX once again.
 
Proprietary, niche, gee-whiz gaming effects are never targeted at budget gamers. They'd still have to buy another card, never mind that PhysX works better on a single GTX GPU than on a mid-range (GTS) card paired with an older, dedicated card. PhysX is a marketing gimmick to sell new cards. It does NVIDIA no good if people keep their old cards or buy used ones for PhysX - I think very few people buy new GPUs solely to use as PhysX processors.

Then you're in a minority. PhysX was bought by NVIDIA over a year and a half ago and has yet to have a killer app. How deep do you think NVIDIA's pockets are?

What? Yes, they do have to give devs money. No developer is going to waste its resources coding and developing a proprietary physics implementation without some kind of incentive (e.g. money). Otherwise they can no longer advertise "awesome new physics" in their game, but only for NVIDIA GPUs. This kills their market, and they need monetary compensation for it. What games tout PhysX? Those from unheard of developers. Why? Because it doesn't cost as much for NVIDIA to "help" these companies, and these companies NEED the money. It's therefore a cheap(er) way to add titles to the "PhysX library." The first (semi) deviation from pattern is Batman from Eidos, but let's be honest - Eidos hasn't really released a decent game since Hitman: Blood Money (correct me if I'm wrong, and no, Conan doesn't count) in 2006. I imagine they're hurting and could use the (financial) support. Anyway, those extra effects were completely disabled without hardware acceleration to exacerbate the difference between PhysX and no PhysX. It's a marketing gimmick meant to trick the average gamer, which I see as underhanded. Therefore, NVIDIA is to blame.

how can you expect a dev to make a killer app pushing gpu physx if they have to limit their market to only compatible gpus, thus limiting sales potential? of course, it isn't in the realm of impossibility, but highly unlikely. hypothetically speaking, if there was a killer app that used gpu physx from the ground up and the game was interesting to you, would you buy the necessary hardware to get it then? physx isn't a marketing gimmick, it is a physics api just like havok. gpu physx, otoh, as far as implementation is concerned due to the userbase issue just mentioned, has been only effects thus far, and your opinion of it being a gimmick in that regard can be legitimate. eventually there will be accelerated physics that changes the way we play, but until then, this is what is deemed acceptable for now, given the limitations. the point is the enhanced effects add more visual immersion, but it is strictly optional. if you have capable hardware, it is merely a bonus. just like if you have dx10 hardware and vista, you can enable some dx10 graphic options in certain games. batman still runs physx fine on the cpu independent of the gpu accelerated physics effects, so everyone can still play the game, ati and nvidia user alike. i'm not saying the cpu can't handle some of the effects at all adequately, but a gpu will still perform better at it.

when a dev chooses to license and use the physx api, that includes the capability to use either the cpu or gpu to handle physics processing. why would any monetary incentive be involved? they could just use the base physics processing without acceleration in the game and be done with it. since there is hardware capable of accelerating certain physics effects, they have the option to take advantage of it. again you seem to be confusing the physx api with gpu accelerated physx. plenty of quality games use physx: gears of war, graw, mass effect, grid, batman, ut3, mirrors edge, nfs shift, and many other past, current, and upcoming games. it is also the number one used physics middleware by most devs slightly ahead of havok, and is used on multiple platforms - pc, consoles, and the iphone. otoh, plenty of quality games also use havok as well. as far as batman is concerned, the game has to run on consoles as well, so rocksteady may not have even considered adding "higher level" physics effects, even those capable on the cpu. gpu physx has only been around for a year or so, and i'm pretty sure it was something that rocksteady tacked on in a short period of time. i'm not saying nvidia as of now won't offer incentives to promote gpu physx, but to say that it is necessary for all devs to accept in order to use it is seems spurious to me.
 
SLI means that the workload of both physics and graphics will be shared between both cards.

Having two cards (not in SLI) means that one will be dedicated solely to graphics and the other solely to physics.

It looks to me as though it's much easier to max out a graphics card with physics processing than it is to draw the game's graphics.
As each PhysX processor needs to pass data to each graphics card when in SLI, it creates a larger workload for the PCI-E bus and the cards, increasing latency.
If the latency is too large it will cause dropped frames; maybe that is what is actually happening.
Alternatively, if each card processes PhysX for itself, then both cards are doing the same calculations, which makes for inefficient use of the GPUs.

It does bring up an interesting point though.
Perhaps NVIDIA needs to check which is the better method before allocating GPU resources in SLI.

i believe if you have an sli setup, only the primary card handles both the graphics load and physics processing, from what i've read. hence the lower performance from a non-sli setup with the secondary as a dedicated physics processing unit.

(it does indeed appear this is true since i compared sli 8800gt to non sli 8800gt (2nd dedicated for gpu physx) on the highest physx setting, and my avg. fps was about 50% higher on the non sli setup)
 


Good point. I believe PhysX is just an extra added to the game after it's fully developed, and an opportunity for Nvidia to experiment with and improve their PhysX engine. What PhysX offers right now is something very insignificant: a few extra effects. What Nvidia does is give their users a few extra effects, while ATI or console users play the vanilla game.
 
when a dev chooses to license and use the physx api, that includes the capability to use either the cpu or gpu to handle physics processing. why would any monetary incentive be involved?

Well, of course there is a monetary incentive: PhysX is free to license, Havok isn't. Havok is quite expensive, from what I've heard.

Good point. I believe PhysX is just an extra added to the game after it's fully developed, and an opportunity for Nvidia to experiment with and improve their PhysX engine. What PhysX offers right now is something very insignificant: a few extra effects. What Nvidia does is give their users a few extra effects, while ATI or console users play the vanilla game.

Well on that account you'd be wrong. You don't just bolt on a physics engine. It is a major component that is chosen very, very early in a game's development lifecycle. Changing the physics engine takes months of hard work. It isn't something they throw on at the end.
 
(it does indeed appear this is true since i compared sli 8800gt to non sli 8800gt (2nd dedicated for gpu physx) on the highest physx setting, and my avg. fps was about 50% higher on the non sli setup)
Thanks for backing up my results

I'm going to try adding a 3rd GTX275 next, and will test in various configs with 3 of those
 
how can you expect a dev to make a killer app pushing gpu physx if they have to limit their market to only compatible gpus, thus limiting sales potential?
... That was my point. By offering a proprietary solution, NVIDIA is being greedy and hoping to corner a market. Much like their SLI motherboard chipsets, until that blew up in their face with the X58 mobos. People go on and on about PhysX on the GPU being a great step in gaming - except that NVIDIA is going about it in all the wrong ways.
of course, it isn't in the realm of impossibility, but highly unlikely. hypothetically speaking, if there was a killer app that used gpu physx from the ground up and the game was interesting to you, would you buy the necessary hardware to get it then?
And that is the catch-22 :). In order to have successful market penetration of GPU physics, you need an API that is compatible with most of the hardware available. By trying to pimp their proprietary solution, NVIDIA is essentially shooting themselves in the foot. I don't know if it's arrogance or stupidity (or both), but to assume that any developer is going to waste their time coding for NVIDIA GPUs only is silly. Hence, there must be incentives (more on that later).
physx isn't a marketing gimmick, it is a physics api just like havok. gpu physx, otoh, as far as implementation is concerned due to the userbase issue just mentioned, has been only effects thus far, and your opinion of it being a gimmick in that regard can be legitimate.
The API itself isn't a gimmick, the way it's implemented is. And that has its roots in NVIDIA keeping it proprietary.
eventually there will be accelerated physics that changes the way we play, but until then, this is what is deemed acceptable for now, given the limitations.
Only if you accept it :).
the point is the enhanced effects add more visual immersion, but it is strictly optional. if you have capable hardware, it is merely a bonus. just like if you have dx10 hardware and vista, you can enable some dx10 graphic options in certain games. batman still runs physx fine on the cpu independent of the gpu accelerated physics effects, so everyone can still play the game, ati and nvidia user alike. i'm not saying the cpu can't handle some of the effects at all adequately, but a gpu will still perform better at it.
You missed the point. If Batman: Arkham Asylum had complete PhysX support for both CPU and GPU, I wouldn't be posting, and I'd probably buy the game. However, it doesn't. What NVIDIA did, in my opinion, was underhanded marketing and tried to hoodwink consumers. Like I said in other threads, if there was a slider for PhysX in the game - "off" -> "low CPU" -> "high CPU" -> "GPU," I'd think that'd be great. But that's not what happened. The difference between physics on the CPU and GPU, as implemented, wouldn't be much different in gameplay. Therefore, physics effects that are easily produced on the CPU were completely stripped from the game in order to give an illusion that PhysX added so much more. It's anti-competitive crap like this that stagnates overall progress.
when a dev chooses to license and use the physx api, that includes the capability to use either the cpu or gpu to handle physics processing. why would any monetary incentive be involved?
Why would you waste tons of hours coding effects that will only reach a small portion of your market? Devs aren't in this for glory and the benefit of the gaming experience. They're in it for cold, hard cash, just like any of the other businesses mentioned.
they could just use the base physics processing without acceleration in the game and be done with it. since there is hardware capable of accelerating certain physics effects, they have the option to take advantage of it.
Only for whoever pays them the most, right? Honestly, the only trouble a CPU would have with the physics effects in that game is the dynamic steam. Everything else can be done and already has been done for quite some time.
again you seem to be confusing the physx api with gpu accelerated physx. plenty of quality games use physx: gears of war, graw, mass effect, grid, batman, ut3, mirrors edge, nfs shift, and many other past, current, and upcoming games. it is also the number one used physics middleware by most devs slightly ahead of havok, and is used on multiple platforms - pc, consoles, and the iphone. otoh, plenty of quality games also use havok as well.
I figured it was assumed, but I added extra clarification in this post.
as far as batman is concerned, the game has to run on consoles as well, so rocksteady may not have even considered adding "higher level" physics effects, even those capable on the cpu.
Why? Why would they not consider adding effects that would be available to almost all PC customers (think "hey, this new Batman game has awesome graphics!"), and instead spend their time and effort on all that extra programming for NVIDIA-only hardware? Don't be naive.
gpu physx has only been around for a year or so, and i'm pretty sure it was something that rocksteady tacked on in a short period of time. i'm not saying nvidia as of now won't offer incentives to promote gpu physx, but to say that it is necessary for all devs to accept in order to use it is seems spurious to me.
It's the other way around. It's not the devs accepting incentives to use PhysX, it's that NVIDIA must GIVE incentives in order to persuade the devs to "waste" their time. Look at this from a bottom-dollar point-of-view, not an ideological one.

In conclusion, Christ almighty, use a capital letter once in a while. :D
 

okay, i think i do see your point and where you're coming from. and no thanks, i prefer not using capitalization since it suits me better :p. in response to you - i'm not saying nvidia isn't going to "compensate" devs to include gpu physx in their games. i just don't think that is necessarily the case 100% of the time. for example, what if a dev just wants to make a game appear more attractive to a potential buyer by adding some gpu physx effects, thereby helping to differentiate their game on an already crowded marketplace; similar to how a game might have a unique art style or maybe some kind of "gimmick" to draw attention to a potential buyer (the gee whiz flash factor). i see alot of games that do it all the time. yes, the target market is going to be smaller, but the r.o.i. could potentially be there for a dev. so yeah you may think me naive, but i don't feel your assumption is painting the most accurate picture to the whole situation.

as far as cpu "gimping", i believe i already offered a compelling argument for this earlier, but i will try to reiterate the best i can. batman was designed to be able to run at least on a single core cpu (meaning not a core i7) per the system requirements. this means that whatever cpu physics they had in mind had to be able to run comfortably on a single core cpu without bogging down the game. so let's continue along this train of thought and go to what you might find to be a feasible situation. let's say rocksteady was humming along with the game. and along comes nvidia knocking on the door. they drop off a sack of cash to eidos and tell them to add gpu physx into the game. eidos accepts. now to backtrack a bit, this game had already been in development for almost two years prior to release according to this link:

http://www.develop-online.net/features/496/EPIC-DIARIES-Batman-Arkham-Asylum

the devs already decided on using the ue3 engine (with physx integrated) to run their game. gpu physx wasn't even in existence for the devs to use, and not for quite some time to come. so in any case, they must have had a "maximum ceiling" of what cpu physics would work in the game acceptably on a single core cpu. it wasn't until late(r) in the development cycle that nvidia comes along and gpu physx is added. in fact, there wasn't a press release available until about six weeks before the game was to be released on the pc, almost a month before the consoles:

http://www.nvidia.com/object/io_1249628711830.html

that means we can reasonably say that there was no "stripping" down of cpu physx. it wasn't until nvidia dropped by that the devs were offered the "incentive" to add some additional effects in the game. the cpu based effects that they were implementing were already in place prior to this, and were meant to be able to run on the lowest common denominator (single core cpu). hypothetically speaking, let's say there were no gpu physx effects at all. then the effects already included in the game are what were meant to be there in the first place. all the gpu physx effects were added, not the other way around where there were similar cpu effects that were removed, since they wouldn't have run acceptably on a single core cpu in the first place. therefore, you can't remove what doesn't exist.

so if you want to complain about that, then you might as well ask why didn't they go back and delay the game even further than when they did just to add gpu physx effects, add cpu fallback effects (like static effects, less particles, scripted animations) that would look similar to the ones added by gpu physx? that goes back to the first part of the argument and also because they are a business and devs need incentive to "waste" time (your words, not mine). per your reasoning, why would they spend additional development time that would cost additional money/ labor that would not benefit them at all, since they have no "incentive" to do so (unlike [nvidia = $ incentive -> eidos / rocksteady -> gpu physx])? it's the exact same reason they wouldn't spend time in the first place to scale cpu physx over from their baseline requirement of a single core up to a quad, and perhaps even the consoles since they have multicore cpus as well. it already takes a lot of time and effort to optimize over a wide range of hardware just to make sure the game functions, so where is the incentive for them to do more than they have to?

even if it were possible for them to be "noble" and delay the game even further to do so for little to no gain, why would they want to? i say this because i imagine they had a target date to reach the market and gain the mindshare of the gaming consumer in a highly competitive environment - even moreso due to the time of the year. they would have taken a large risk going up against heavy juggernauts (cod mw2 etc.) that would likely lose them potential sales. their track record was unproven and licensed comic book games have historically been denigrated. they had a lot to prove, and in order to do so, they had a better chance of gaining visibility to the average gamer during a less crowded calendar period.

this may be similar to what we are seeing with many titles being delayed to the first quarter of next year - to try to maximize sales when people would have the funds to buy their games, versus just being crammed into one quarter where they won't have to fight so hard to gain attention. this is an even more important factor when it comes to an unproven ip. so there you have it.

as for nvidia, what would you have them do with the situation? if they want to make gpu physx widely acceptable, it has to be available on their competitor's hardware. well, someone already mentioned that it is highly unlikely ati would ever accept such an arrangement if they are at the "mercy" of their competitor and its technology. so ati has already made a decision to go their own route. and nvidia will continue along theirs. so we wait until the necessary competitive forces come along that may hopefully bring a better solution for all. in the meantime, they're not going to stop doing whatever it takes to push their hardware. just as ati will do whatever it takes to push theirs, like offering codemasters monetary "incentive" to integrate dx11 effects into dirt2 with tied marketing to their upcoming cards, and pushing eyefinity technology as a competitive advantage. they're just businesses competing however they can in a "dog eat dog" cutthroat marketplace - nothing more, nothing less. they're not in it to make "friends". it would be naive to think otherwise, imo. and as far as i'm concerned, capitalism = greed and these corporations are merely just participants in this "game".
 
Well, after buying it this weekend and putting a few hours into the game, I must say that the PhysX effects are well worth it. While they do not provide much in the way of gameplay advantage, there is a SIGNIFICANT increase in graphics/environment quality. Sparks are much more realistic, and the added cloth banners, crime scene tape, tornado effects (trying not to introduce any spoilers), falling, sweeping paper, added debris and smoke make the game much more organic. It looks very similar to Gears of War in some ways, but these bonuses make it much more inviting.
 
okay, i think i do see your point and where you're coming from. and no thanks, i prefer not using capitalization since it suits me better :p. in response to you - i'm not saying nvidia isn't going to "compensate" devs to include gpu physx in their games. i just don't think that is necessarily the case 100% of the time. for example, what if a dev just wants to make a game appear more attractive to a potential buyer by adding some gpu physx effects, thereby helping to differentiate their game on an already crowded marketplace; similar to how a game might have a unique art style or maybe some kind of "gimmick" to draw attention to a potential buyer (the gee whiz flash factor). i see alot of games that do it all the time. yes, the target market is going to be smaller, but the r.o.i. could potentially be there for a dev. so yeah you may think me naive, but i don't feel your assumption is painting the most accurate picture to the whole situation.
While this isn't a bad hypothesis, I don't think it's feasible. To fully understand what I'm saying, look at it through the eyes of a developer. You're making a game to release onto a market, with the goal of making the most money. How do you make the most money? You sell the most copies. What sells the most copies? Making a great product (obviously), but also making that product accessible to most people. PhysX on the GPU is an option available to only a select few. Despite how jaded we are in this enthusiast community, most gamers are playing with a cheap, entry-level GPU or onboard graphics. If you're a developer with a deadline to meet for a game, would you rather waste your time coding in GPU PhysX, which will only generate potential sales from a select few, or would you rather spend that time debugging and optimizing? Why do you think so many games featuring GPU PhysX receive such poor reviews? I think Batman: AA is the first one that didn't suck. Also, going off your hypothesis, why haven't more "big" developers like Valve, Blizzard, etc. decided to code for GPU PhysX? Again, it isn't worth the effort to tempt that small portion of the market. When given the choice, any developer is going to spend more time optimizing code, adding better gameplay features, and polishing their game over adding some superficial GPU PhysX effects. I'll also postulate that the only reason PhysX got into Batman: AA is because Eidos, despite being a formerly decent game dev, has had a string of shitty releases and probably needed some "support." That's a completely unfounded hypothesis, but something to think about.

as far as cpu "gimping", i believe i already offered a compelling argument for this earlier, but i will try to reiterate the best i can. batman was designed to be able to run at least on a single core cpu (meaning not a core i7) per the system requirements.
Are you sure about that? It was designed to run on consoles too, and none of them are single core :). The CPU requirements were "gimped" on purpose for two reasons. One is to lower the system requirements - understandable to open up the game to a larger audience. However, considering most CPUs sold within the last three years have been dual core, why didn't they code in some CPU PhysX? According to your argument above, that would make sense, as it would be a "feature" they could market the game with. Why, then, did they waste all that time coding PhysX on the GPU, a proprietary solution?

this means that whatever cpu physics they had in mind had to be able to run comfortably on a single core cpu without bogging down the game. so let's continue along this train of thought and go to what you might find to be a feasible situation. let's say rocksteady was humming along with the game. and along comes nvidia knocking on the door. they drop off a sack of cash to eidos and tell them to add gpu physx into the game. eidos accepts. now to backtrack a bit, this game had already been in development for almost two years prior to release according to this link:

http://www.develop-online.net/features/496/EPIC-DIARIES-Batman-Arkham-Asylum

the devs already decided on using the ue3 engine (with physx integrated) to run their game. gpu physx wasn't even in existence for the devs to use, and not for quite some time to come. so in any case, they must have had a "maximum ceiling" of what cpu physics would work in the game acceptably on a single core cpu. it wasn't until late(r) in the development cycle that nvidia comes along and gpu physx is added. in fact, there wasn't a press release available until about six weeks before the game was to be released on the pc, almost a month before the consoles:

http://www.nvidia.com/object/io_1249628711830.html

that means we can reasonably say that there was no "stripping" down of cpu physx. it wasn't until nvidia dropped by that the devs were offered the "incentive" to add some additional effects in the game. the cpu based effects that they were implementing were already in place prior to this, and were meant to be able to run on the lowest common denominator (single core cpu). hypothetically speaking, let's say there were no gpu physx effects at all. then the effects already included in the game are what were meant to be there in the first place. all the gpu physx effects were added, not the other way around where there were similar cpu effects that were removed, since they wouldn't have run acceptably on a single core cpu in the first place. therefore, you can't remove what doesn't exist.
Except that most of the physics effects in the game can easily be coded to run on even a single-core CPU with little effect on performance. Are you really saying that sparks coming out of damaged computers were too stressful to code for? Come on, that's a particle effect that's been in games for over ten years. Same with the steam. Instead of coding in some volumetric steam, they just eliminated it completely whenever dynamic steam wasn't enabled by PhysX. I'm not buying this argument.
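Just to put a number on how cheap that kind of effect is, here's a minimal sketch of the sort of CPU spark/steam particle update I mean. The struct names and constants are purely illustrative - nothing here is taken from the game or from PhysX:

[CODE]
// Minimal CPU particle update of the kind described above: a few hundred
// sparks or steam puffs integrated once per frame. Illustrative only.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
    float life;       // seconds remaining
};

static float frand(float lo, float hi) {
    return lo + (hi - lo) * (static_cast<float>(std::rand()) / RAND_MAX);
}

// Spawn a burst of sparks at an emitter position.
void emitSparks(std::vector<Particle>& ps, float ex, float ey, float ez, int count) {
    for (int i = 0; i < count; ++i) {
        Particle p;
        p.x = ex; p.y = ey; p.z = ez;
        p.vx = frand(-2.f, 2.f);
        p.vy = frand( 1.f, 4.f);
        p.vz = frand(-2.f, 2.f);
        p.life = frand(0.3f, 1.0f);
        ps.push_back(p);
    }
}

// One Euler-integration step: gravity, motion, expiry. O(n), no collision
// queries, so a budget of a few hundred particles is negligible per frame.
void updateParticles(std::vector<Particle>& ps, float dt) {
    const float gravity = -9.81f;
    for (std::size_t i = 0; i < ps.size(); ) {
        Particle& p = ps[i];
        p.vy += gravity * dt;
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        p.life -= dt;
        if (p.life <= 0.f) {
            ps[i] = ps.back();   // swap-and-pop dead particles
            ps.pop_back();
        } else {
            ++i;
        }
    }
}

int main() {
    std::vector<Particle> sparks;
    emitSparks(sparks, 0.f, 1.f, 0.f, 200);
    for (int frame = 0; frame < 60; ++frame)     // one second at 60 fps
        updateParticles(sparks, 1.f / 60.f);
    std::printf("%zu particles still alive\n", sparks.size());
    return 0;
}
[/CODE]

A loop like that over a few hundred particles is a rounding error per frame, even on a single-core CPU of that era.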

so if you want to complain about that, then you might as well ask why they didn't go back and delay the game even further than they already did just to add the gpu physx effects, and also add cpu fallback effects (like static effects, fewer particles, scripted animations) that would look similar to the ones added by gpu physx. that goes back to the first part of the argument, and also to the fact that they are a business and devs need incentive to "waste" time (your words, not mine). per your reasoning, why would they spend additional development time that would cost additional money/labor and not benefit them at all, since they have no "incentive" to do so (unlike [nvidia = $ incentive -> eidos / rocksteady -> gpu physx])? it's the exact same reason they wouldn't spend time in the first place scaling cpu physics up from their baseline requirement of a single core to a quad core, and perhaps even to the consoles, since they have multicore cpus as well. it already takes a lot of time and effort to optimize over a wide range of hardware just to make sure the game functions, so where is the incentive for them to do more than they have to?
The game might have been a bare piece of shit like most of Eidos's recent games. We don't know the back story. My point is that those effects (because of the lesser detail required) are easy to produce on a CPU (coding included). They simply add to the realism of the game with little extra effort, and contribute to the overall graphics quality. Call me cynical, but I just see the situation as questionable at best. In the same time they spent coding for NVIDIA's GPU PhysX, they could have coded most of the same effects (and then some) to run through CPU PhysX, reaching a larger audience. That's the point: why did they waste the time coding a proprietary solution that would mean anything only to a niche market?
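And to be clear about what I mean by "running it through CPU PhysX": as far as I know, the PhysX SDK of that era already let a developer pick the software solver when creating the scene, so anything authored against it would run on the CPU. A rough sketch, assuming the 2.x-style API - exact flags and defaults vary by SDK release, so treat it as illustrative rather than canonical:

[CODE]
// Sketch only: creating a software (CPU) PhysX scene with the 2.x-era SDK.
#include <NxPhysics.h>

NxScene* createCpuScene(NxPhysicsSDK*& sdk)
{
    // The SDK object itself always lives on the CPU.
    sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk)
        return 0;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

    // NX_SIMULATION_SW requests the software (CPU) solver;
    // NX_SIMULATION_HW would request hardware acceleration instead.
    sceneDesc.simType = NX_SIMULATION_SW;

    return sdk->createScene(sceneDesc);
}
[/CODE]

For basic rigid bodies and simple effects, that one flag is essentially the difference, which is what "coding it to run through CPU PhysX" would amount to here.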

even if it were possible for them to be "noble" and delay the game even further to do so for little to no gain, why would they want to? i say this because i imagine they had a target date to reach the market and gain mindshare with the gaming consumer in a highly competitive environment - even more so given the time of the year. they would have taken a large risk going up against heavy juggernauts (cod mw2 etc.), which would likely have lost them potential sales. their track record was unproven, and licensed comic book games have historically been denigrated. they had a lot to prove, and in order to do so, they had a better chance of gaining visibility with the average gamer during a less crowded calendar period.

this may be similar to what we are seeing with many titles being delayed to the first quarter of next year - to try to maximize sales when people will have the funds to buy their games, versus being crammed into one quarter, and so they won't have to fight as hard to gain attention. this is an even more important factor when it comes to an unproven ip. so there you have it.
No doubt. So again, why waste the time coding a proprietary solution?

as for nvidia, what would you have them do with the situation? if they want to make gpu physx widely adopted, it has to be available on their competitor's hardware. well, someone already mentioned that it is highly unlikely ati would ever accept such an arrangement if it leaves them at the "mercy" of their competitor and its technology. so ati has already made the decision to go their own route, and nvidia will continue along theirs. so we wait until the necessary competitive forces come along that may hopefully bring a better solution for all. in the meantime, they're not going to stop doing whatever it takes to push their hardware, just as ati will do whatever it takes to push theirs - like offering codemasters monetary "incentive" to integrate dx11 effects into dirt2 with marketing tied to their upcoming cards, and pushing eyefinity as a competitive advantage. they're just businesses competing however they can in a "dog eat dog" cutthroat marketplace - nothing more, nothing less. they're not in it to make "friends". it would be naive to think otherwise, imo. and as far as i'm concerned, capitalism = greed and these corporations are merely participants in this "game".
This is a reasonable outlook, and I share many of the same viewpoints. However, look at it this way: did AMD pay Codemasters to exclude "DX11 effects" that are easily done in DX10 and 9 (keyword being easily) from Dirt 2? If so, then they're just as guilty of hoodwinking customers. Regarding the PhysX in Batman: AA, NVIDIA did just that. Now you could argue that Batman: AA was a bare POS and NVIDIA only added things to the game, but I don't believe it, not when they highlight such "differences" in all their promotional videos. Call me cynical.
 
hehe, well there's always two sides to an argument. and yeah, i also see how a lot of what you say could possibly be true as well. though, considering how taxing a game engine like ue3 can be on older systems (i have an older intel single core and graphics card and it could barely run a ue3 based game), even adding some trivial effects might not be worth it. so why code for the proprietary solution? because they probably had monetary incentive to do so, as you suggested. like someone said before, it's just business and trying to make use of any competitive advantage you've got. all companies do this type of stuff. that may be the case with amd and dirt2 as well. as far as exclusion goes, we may never know, just like we don't know this to be 100% fact in this particular case either. and as for the devs, they might not have thought about adding a particular effect to the game at the time. let's say smoke, for example. maybe they didn't plan on having smoke in the game in the first place - it never crossed their mind in terms of art direction. so then nvidia gives them an incentive to go back into the game, and now they can brainstorm ideas for what effects could make the game more atmospheric. so they add in smoke. either way, we can never know completely for sure unless we were working on the game itself. i'm just offering my own possible take on it, and you have yours. it's fine if you are cynical about it. i'm cynical about a lot of things. that makes you okay in my book, lol. so i guess we can agree to disagree and just leave it at that. hopefully, a better situation will arise which you may deem acceptable.
 