Ghostbusters Gameplay and Testing - Infernal Engine VELOCITY Physics

That, sir, was what I was looking for (Core2 tech). Thx.
Very impressive too. I like how the proton packs' particle streams would gradually cut things in half. Plus, the way the 'book golem' assembled itself was very nice.

+1 purchase here...
 
And certainly the Infernal Engine could be much more heavily leveraged to take advantage of the VELOCITY physics engine's abilities. I think what we are seeing in Ghostbusters is only scratching the surface, to be sure. I am glad to see another player in the market using the CPU to simulate physics so well while letting my GPU do graphics. The video shown was at 1920x1200 (AKA 1200p). We did have one graphics option turned off, as Mark explained that option needed "Scan Line Interleaving" to run well. We also did NOT have any AA turned on, as DX9 does not support it with the type of rendering they are doing. We have seen this before with games; they are relying on the AMD and NVIDIA teams to turn it on in their drivers. So that said, there is a lot of room for our graphics cards to be doing jobs that increase our video fidelity instead of simulating object movement in the environment.
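
For anyone curious what "using the CPU to simulate physics while the GPU does graphics" can look like in code, here is a minimal, purely illustrative sketch; this is not Infernal Engine or VELOCITY source, and it uses modern C++ threads for brevity:

[code]
// Illustrative sketch only -- NOT Infernal Engine / VELOCITY code.
// Fans one physics step out across all CPU cores, leaving the main
// thread free to keep feeding draw calls to the GPU.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Body { float x, y, z, vx, vy, vz; };

// Integrate one slice of the rigid-body array for a single time step.
void integrateSlice(std::vector<Body>& bodies, std::size_t begin,
                    std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        bodies[i].vy += -9.81f * dt;      // gravity
        bodies[i].x  += bodies[i].vx * dt;
        bodies[i].y  += bodies[i].vy * dt;
        bodies[i].z  += bodies[i].vz * dt;
    }
}

// One physics step spread across all hardware threads.
void physicsStep(std::vector<Body>& bodies, float dt) {
    unsigned n = std::thread::hardware_concurrency(); // e.g. 4 on a quad core
    if (n == 0) n = 1;
    std::size_t chunk = bodies.size() / n + 1;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = begin + chunk < bodies.size() ? begin + chunk
                                                        : bodies.size();
        if (begin >= end) break;
        workers.emplace_back(integrateSlice, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join(); // sync before rendering reads positions
}

int main() {
    std::vector<Body> bodies(10000);   // hypothetical scene of 10,000 bodies
    physicsStep(bodies, 1.0f / 60.0f); // one 60 Hz frame's worth of physics
    std::printf("body 0 y after one step: %f\n", bodies[0].y);
    return 0;
}
[/code]

The point is just the shape of the work: each body's integration is independent, so a quad core can chew through the simulation while the GPU stays on rendering.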
 
Pretty damn incredible. Does anyone think Nvidia is watching this? Seems we've been duped into believing that only the GPU can do this, with people actually putting a second vid card in the system to do physics when it now appears that's totally unnecessary. Why isn't Nvidia getting with the program, returning their GPUs to doing what they should be doing (which also means the vid card could be doing more if it didn't have to deal with the physics), and letting our powerful CPUs get into the action and actually be used to their potential, and most importantly, give us smoother gameplay with these kick-ass physics?

Is this stuff just look-good demonstrations that will go by the wayside, or is this actually going to be a turning point for future game applications using physics?

I wonder now if Nvidia is going to rebut this with a bunch of "but", "but", "but" stuff.
 
Is this stuff just look-good demonstrations that will go by the wayside, or is this actually going to be a turning point for future game applications using physics?

The game will be launched in mid-June alongside the release of the Blu-ray version of Ghostbusters for the 25th anniversary of the movie.
 
Nvidia is pushing their other tech because if they don't they will die.
Ultimately, it boils down to innovate or stagnate.
Stagnation = death.
 
Nvidia is pushing their other tech because if they don't they will die.
Ultimately, it boils down to innovate or stagnate.
Stagnation = death.

Well, after these demonstrations showing the role the CPU can play with physics (and physics without any apparent effect on the game speed, I might add), what Nvidia is doing doesn't seem like innovation to me at all. It does in fact seem like "protect ass" stagnation to me if they are going to push a technology that is actually going to slow down games instead of speed them up. When I see people actually having to put a second vid card in just for physics, only because the game runs like crap without it, it frankly makes me sick. We shouldn't have to be doing that (and I ain't gonna do it). I have this i7 pretty much sitting there doing nothing while my game is being slowed down by physics because of the way Nvidia is implementing this, piling it all on the GPU. My i7 would love to get into the action, but it seems Nvidia doesn't want it to and is telling everyone it can't.
 
The game will be launched in mid-June alongside the release of the Blu-ray version of Ghostbusters for the 25th anniversary of the movie.

I know the game is coming out. What I meant was whether this might be a genuine turning point in how physics are implemented in the future. It would be great news if that took place and the GPU could go back to concentrating on rendering, without us having to install separate PhysX drivers (which can cause game issues of their own) along with video drivers. I would personally love to see something like this happen. It seems like a step forward: getting rid of this "Nvidia has it, ATI doesn't" kind of thing and allowing general coding instead of company-specific coding.
 
What, no .50 BMG reference this time? :p

Seriously though I love these videos. Please keep them coming.
 
Awesome-looking game - can't wait to play it! What the GPU makers need to do is start building dual- and quad-GPU chips and bring power consumption down (like they have done with CPUs). It is great to see physics moving to the CPU, because right now the video card is the bottleneck (after hard drives). Glad I went against all the naysayers and went with a quad-core CPU instead of a dual. :p
 
4th computer I've watched those videos on; they work in FF.

To be honest, I get issues in IE all the time. Opera is okay, but I've adapted to FF.

It just seems strange that Ghostbusters will have a good engine and all that...
Movie games tend to have crappy engines, crappy devs, and all that :p
 
Kyle,


Is the game more like S.T.A.L.K.E.R.: Clear Sky? Wait, let me get to the point.


Clear Sky runs almost purely on a single thread, if we take it that all of the non-physics work is on core 3, which seems to be loaded evenly during the whole scene, while the other cores are affected by the physics.

I really can't remember much physics in Clear Sky, though, but for this game they might have made it that way.
 
Well, it's more that the developers haven't given us anything from GPU physics. What's being shown in the Ghostbusters footage has been technically possible for a while with PhysX on GPUs; it's just that until now nobody really thought about how it could be used to meaningfully enhance the game.

Believe me, we game developers have been thinking a lot about how to use hardware accelerated physics in our games, it's just that with the complete mess of incompatible 'standards' out there, we'd cut off part of our target market in order to introduce the really cool, gameplay-affecting things. Ergo we are stuck using PhysX in a limited manner so that even a lame CPU can do the physics bits without dying on the spot.

This is why I loathe this attitude of 'nobody needs more than a CPU for physics'. It's people who think like that who are holding back the really cool changes. If you have ever seen a GPU handle 200,000+ particles whereas even the most powerful, 8-threaded CPU would vaporize from the shock and barely be capable of taking on 5,000 particles, if even that many, you wouldn't be so quick to discard the possibilities.

CPU physics is lame and as useless as a software renderer.
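
For a rough sense of why those particle counts diverge so fast once particles interact (fluids, cloth self-collision), here is a hypothetical back-of-the-envelope sketch. The 5,000 and 200,000 figures are the claim above, not measurements, and real engines prune most pairs with spatial hashing:

[code]
// Hypothetical illustration, not engine code: with naive pairwise forces,
// the work per physics step grows quadratically with particle count.
#include <cstdint>
#include <cstdio>

// Number of unordered particle pairs for n particles.
std::uint64_t pairCount(std::uint64_t n) { return n * (n - 1) / 2; }

int main() {
    // ~12.5 million pair tests per step at the claimed "CPU scale"...
    std::printf("5,000 particles   -> %llu pairs/step\n",
                static_cast<unsigned long long>(pairCount(5000)));
    // ...versus ~20 billion at the claimed "GPU scale".
    std::printf("200,000 particles -> %llu pairs/step\n",
                static_cast<unsigned long long>(pairCount(200000)));
    return 0;
}
[/code]

That quadratic blow-up, plus the fact that each pair test is independent, is exactly the shape of work that thousands of GPU lanes absorb well.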
 
Dan Aykroyd was right when he said this wouldn't just be another licensed game.

This will be THE game to pair up with a new i7 system later this year. I can't wait! :D
 
Pretty damn incredible. Does anyone think Nvidia is watching this? Seems we've been duped into believing that only the GPU can do this, with people actually putting a second vid card in the system to do physics when it now appears that's totally unnecessary. Why isn't Nvidia getting with the program, returning their GPUs to doing what they should be doing (which also means the vid card could be doing more if it didn't have to deal with the physics), and letting our powerful CPUs get into the action and actually be used to their potential, and most importantly, give us smoother gameplay with these kick-ass physics?

Is this stuff just look-good demonstrations that will go by the wayside, or is this actually going to be a turning point for future game applications using physics?

I wonder now if Nvidia is going to rebut this with a bunch of "but", "but", "but" stuff.
I'm pretty sure CPUs aren't fast enough to do things like cloth simulation and fluid dynamics.
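
For what it's worth, "cloth simulation" at its simplest is just mass-spring integration, so whether a CPU is fast enough mostly comes down to node count. A toy, hypothetical sketch of that workload (a single pinned strip, no collisions, not from any shipping engine):

[code]
// Toy mass-spring "cloth strip" -- hypothetical, just to show what the
// contested workload looks like. Real cloth adds 2D connectivity and
// collision handling, but the per-node cost is the same flavor.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Node { float y, vy; };

// Semi-implicit Euler step; node 0 stays pinned in place.
void step(std::vector<Node>& nodes, float k, float damping, float dt) {
    for (std::size_t i = 1; i < nodes.size(); ++i) {
        float stretch = nodes[i].y - nodes[i - 1].y;  // spring to neighbor
        float force = -k * stretch - damping * nodes[i].vy - 9.81f;
        nodes[i].vy += force * dt;  // unit mass: acceleration == force
        nodes[i].y  += nodes[i].vy * dt;
    }
}

int main() {
    std::vector<Node> strip(32, Node{0.0f, 0.0f});
    for (int frame = 0; frame < 60; ++frame)  // one second at 60 Hz
        step(strip, 50.0f, 0.5f, 1.0f / 60.0f);
    std::printf("free end sag after 1 s: %f\n", strip.back().y);
    return 0;
}
[/code]

A few thousand nodes of this per frame is easy for a modern CPU; the real argument is about the hundreds of thousands of interacting elements that GPU demos throw around.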
 
You can tell from the video that they're using Microsoft Visual Studio. I probably won't buy this game just for that reason. No one should ever use Microsoft tools; it encourages writing programs that only work on Windows.
 
You can tell from the video that they're using Microsoft Visual Studio. I probably won't buy this game just for that reason. No one should ever use Microsoft tools; it encourages writing programs that only work on Windows.

:rolleyes:
 
You can tell from the video that they're using Microsoft Visual Studio. I probably won't buy this game just for that reason. No one should ever use Microsoft tools; it encourages writing programs that only work on Windows.

What??? You want a DOS or Mac or *nix Ghostbusters game? :rolleyes:

I am using VS for my prime number generator... and it works on both Windows and *nix.

VS2008 does have a very nice IDE. That doesn't mean they are using it to make everything.
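
In the same spirit as that prime number generator, here is a hypothetical sketch of the point: plain standard C++ builds unchanged under Visual Studio's compiler on Windows and under g++ or clang on *nix, so using VS doesn't by itself tie a program to Windows. This is illustrative, not the poster's actual code:

[code]
// Hypothetical portable example -- standard C++ only, no Windows headers.
// Builds with cl.exe (Visual Studio) as well as g++/clang on *nix.
#include <cstdio>

// Trial-division primality test; fine for small n.
bool isPrime(unsigned n) {
    if (n < 2) return false;
    for (unsigned d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    for (unsigned n = 2; n < 50; ++n)
        if (isPrime(n)) std::printf("%u ", n);
    std::printf("\n");  // prints 2 3 5 7 ... 47 on any platform
    return 0;
}
[/code]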
 
Believe me, we game developers have been thinking a lot about how to use hardware accelerated physics in our games, it's just that with the complete mess of incompatible 'standards' out there, we'd cut off part of our target market in order to introduce the really cool, gameplay-affecting things. Ergo we are stuck using PhysX in a limited manner so that even a lame CPU can do the physics bits without dying on the spot.

This is why I loathe this attitude of 'nobody needs more than a CPU for physics'. It's people who think like that who are holding back the really cool changes. If you have ever seen a GPU handle 200,000+ particles whereas even the most powerful, 8-threaded CPU would vaporize from the shock and barely be capable of taking on 5,000 particles, if even that many, you wouldn't be so quick to discard the possibilities.

CPU physics is lame and as useless as a software renderer.

Uh... what I am seeing right now is that the GPU still can't handle that, while the CPU side catches up...

GPU physics only slows down the game...

Even if there is a spare GPU to go with it, it still doesn't make sense; essentially you are using a physics card rather than a GPU...

So then what is the point of PhysX on the GPU?

It just doesn't make sense there... and nVidia does fail on this part while other physics engines take it to the next level...
 
Wow, that was great. I haven't wanted to buy a PC game on launch day since BioShock, lol. Definitely on my list.
 
CPU physics is lame and as useless as a software renderer.

Uh yeah. I thought you had some sense till that statement.

And I guess this goes to show that "lame and useless" is still better than something no game dev will seriously touch, because it would produce a game with an install base that will not make them money.
 
Kyle, did you ask how it runs, or will run, on an AMD quad-core CPU?

I am also wondering how the AMD quad will do; it seems in everything you have done so far you have pointed out clearly that you're using an Intel CPU, whether it be a Core 2 Quad or an i7.

I am ready to do a CPU upgrade. Should I be looking at getting a new MB as well, so I can upgrade to Intel to play this game as well as others that are on the drawing board? I have been thinking of getting the AMD Phenom II X4 940 BE.
 
I am also wondering how the AMD quad will do; it seems in everything you have done so far you have pointed out clearly that you're using an Intel CPU, whether it be a Core 2 Quad or an i7.

I am ready to do a CPU upgrade. Should I be looking at getting a new MB as well, so I can upgrade to Intel to play this game as well as others that are on the drawing board? I have been thinking of getting the AMD Phenom II X4 940 BE.

An AMD quad will run all of this fine. I see no issues with that. Of course, CPU performance is CPU performance, but if a Core 2 will push it, there is no reason to think a Phenom II at similar GHz would not do the same.

Core 2 Quad, as was spelled out in big letters on the screen in the video.

I see no reason to build a new AMD system at this point in time over a Core i7. If you are going to do that you might as well build a Core 2 Quad as well.
 
Nice to see quad cores finally get some usage; there's barely a handful of games that will actually load all 4 cores up.
 
CPU physics is lame and as useless as a software renderer.

That statement is kind of contradictory to those videos, don't you think? Those videos using CPU physics didn't seem too lame to me. It seems that engine is making CPU physics work just fine. Perhaps you can expand on what you meant. If I can have my game use both my vid card and my CPU fully, I'm sure OK with that. I would feel more like I'm getting my money's worth for what I spend on my system, rather than having my GPU trying to do everything and my kick-ass CPU just sitting there saying "dude, I'm here too, give me something to do".
 
Wow, impressive gameplay running on a CPU; this makes PhysX look rather unoptimized.
 
lol And a "loss" for the department of totally unnecessary tech?

If you call features less than what was done in 2006 a "win", we certainly have different definitions.

I don't like this, because it will slow down the progress of physics in games... back to pre-2006 levels... but some will see it as a "victory" because they now think they have the "OMGZ supa physics!!!"... some people will look at pretty pictures and think the prettiest is the best without understanding the mechanics under the hood... the poster above you is a prime example of that.
 
It will slow down the progress of physics in games by allowing devs to program to a larger base rather than hoping against all odds that people buy an extra GPU for physics when the majority of the market won't even buy an extra GPU for graphics themselves? I'm not arguing one implementation's better than the other, but only one of 'em is likely to achieve a large audience.
 