CryEngine 3 to Use In-House Physics Engine, No Havok Physics or PhysX Middleware

AMD_Gamer

Good news. More companies should take this route. There's more than enough CPU power/threads to handle these types of features.
Also, this saves the consumer money by not having to buy another card to run PhysX. Win.
 
They did the same for CryEngine 2 and 1, IIRC. I see no reason why they would change.

It sounds like they have made no changes to the one they are currently using.
 
So they are using some random physics engine that probably just dumps everything onto the CPU? Hard to see how they can really spin that into an advantage. If they really want to impress in that regard maybe they could have their built-in physics use the GPU via OpenCL or similar.
 
While to me it sounds like good news that they aren't forcing customers to side with a certain card, it doesn't sound very efficient. The CPU already has enough work to do -- even if it is not fully utilized (the extra headroom acts as a buffer for "above and beyond" and unexpected loads).

There is always a way to do it right. In this case, they should be using an open standard that works on virtually any GPU (OpenCL?).
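For the curious, the vendor-neutral route being asked for would look roughly like the sketch below: a batch of particles stepped under gravity through OpenCL, so the same code runs on an NVIDIA or ATI GPU alike. This is purely illustrative and assumes nothing about CryEngine 3's internals; the `step` kernel, data layout, and particle count are all invented, and error checking is stripped for brevity.

```cpp
// Minimal sketch (not CryEngine code): 100k particles stepped under gravity
// through OpenCL 1.0, so the same binary runs on any vendor's GPU.
// Error checking is omitted for brevity.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// OpenCL C kernel: one work-item per particle, simple Euler integration.
static const char* kSrc =
    "__kernel void step(__global float4* pos, __global float4* vel, float dt) {\n"
    "    int i = get_global_id(0);\n"
    "    vel[i].y -= 9.81f * dt;            /* gravity                 */\n"
    "    pos[i] += vel[i] * dt;             /* integrate position      */\n"
    "    if (pos[i].y < 0.0f) {             /* bounce off ground plane */\n"
    "        pos[i].y = 0.0f;\n"
    "        vel[i].y *= -0.5f;\n"
    "    }\n"
    "}\n";

int main() {
    const size_t N = 100000;
    std::vector<cl_float4> pos(N), vel(N);
    for (size_t i = 0; i < N; ++i) {        // everything starts at y = 10, at rest
        pos[i] = {{0.0f, 10.0f, 0.0f, 0.0f}};
        vel[i] = {{0.0f, 0.0f, 0.0f, 0.0f}};
    }

    // Grab the first GPU we find -- NVIDIA or ATI, it makes no difference.
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id   dev;   clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context     ctx  = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q  = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "step", NULL);

    cl_mem dPos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_float4), pos.data(), NULL);
    cl_mem dVel = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_float4), vel.data(), NULL);
    cl_float dt = 1.0f / 60.0f;
    clSetKernelArg(k, 0, sizeof(dPos), &dPos);
    clSetKernelArg(k, 1, sizeof(dVel), &dVel);
    clSetKernelArg(k, 2, sizeof(dt),   &dt);

    size_t global = N;
    for (int frame = 0; frame < 120; ++frame)   // two seconds at 60 Hz
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    clEnqueueReadBuffer(q, dPos, CL_TRUE, 0, N * sizeof(cl_float4), pos.data(),
                        0, NULL, NULL);
    printf("particle 0 ended at y = %f\n", pos[0].s[1]);
    return 0;
}
```

The only point of the sketch is that nothing in it is vendor-specific: any GPU with an OpenCL driver runs it, which is exactly the lock-in-free property being asked for.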
 
Considering how good their CE2 physics were, I'm not surprised. Also exciting is the possibility that my 8-thread Nehalem may actually be utilized.
 
Good news. More companies should take this route. There's more than enough CPU power/threads to handle these types of features.
Also, this saves the consumer money by not having to buy another card to run PhysX. Win.

I believe there are still a lot of limitations with the CPU, as it is not as massively parallel as a GPU. I doubt you can do stuff like fluid dynamics or simulate an insane number of particles on the CPU without negatively impacting overall performance.

Otherwise we would have seen many games doing amazing physics on the CPU by now, if CPUs "have more than enough power" ;)


Anyway, this is nothing new or surprising; as others have said, they did their own physics in previous engines, and they aren't going for some insane physics simulation. I doubt we'll see much difference from what we've seen in Crysis.


That's right nvidia, fuck you.
Isn't ATI itself pushing for GPU physics too? It's an open standard IIRC, but still GPU physics :)
Nvidia isn't the only one here.
 
Isn't ATI itself pushing for GPU physics too? It's an open standard IIRC, but still GPU physics.
Nvidia isn't the only one here.
I think the point is that NVIDIA's monopoly won't be extending to CryEngine 3.
 
So they are using some random physics engine that probably just dumps everything onto the CPU? Hard to see how they can really spin that into an advantage. If they really want to impress in that regard maybe they could have their built-in physics use the GPU via OpenCL or similar.

How can you not see an advantage? If it can be done using your extra CPU cores (which won't be used by the game otherwise) without having to buy a specific company's GPU, how is that not progress? The GPU will be busy enough; let the unused CPU cores go to good use.
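To make the "spare cores" argument concrete, here is a toy sketch of farming a particle update out across hardware threads. It is not how Crytek's engine actually does it (every name is invented, and a real engine would keep a persistent job system rather than spawning threads per frame):

```cpp
// Toy sketch of the "let idle CPU cores do the physics" idea: split one big
// particle update across all hardware threads. Names are invented for the
// example; this is not engine code.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate one contiguous slice of the array. Slices are disjoint, so the
// workers need no locks at all.
static void integrate(Particle* p, size_t count, float dt) {
    for (size_t i = 0; i < count; ++i) {
        p[i].vy -= 9.81f * dt;   // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

int main() {
    std::vector<Particle> particles(200000, Particle{0.f, 10.f, 0.f, 1.f, 0.f, 0.f});
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const float dt = 1.0f / 60.0f;

    // One slice per hardware thread (8 on the 8-thread Nehalem mentioned above).
    std::vector<std::thread> pool;
    const size_t chunk = particles.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t count = (w == workers - 1) ? particles.size() - begin : chunk;
        pool.emplace_back(integrate, particles.data() + begin, count, dt);
    }
    for (auto& t : pool) t.join();

    printf("%u worker threads, particle 0 at y = %.3f\n", workers, particles[0].y);
    return 0;
}
```

Per-particle updates like this are embarrassingly parallel, so they scale almost linearly with core count, which is why effects physics is a natural fit for otherwise idle cores.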
 
I think the point is that NVIDIA's monopoly won't be extending to CryEngine 3.

A total of around a dozen games using GPU PhysX in the last four years, with maybe only four triple-A titles and only one in the last year, is hardly a monopoly imo. They are just one more of the vast majority not bothering with GPU PhysX.

It will be nice when we have a physics solution running on OpenCL that both AMD and Nvidia cards can use. We might actually see GPU physics go somewhere then.
 
There are about 5 people on this whole forum who actually know the meaning of the word monopoly. The rest just use it every time they hear the words Nvidia or Intel.
 
I loved that trailer, the CryEngine 3 - Beauty trailer linked in the article. A lot of the real-time world-destruction physics effects looked quite amazing, probably the best use of destructible environments I have seen in a while.

I just hope we'll be able to do things like destroy bridges while an enemy soldier is standing on one :D. To watch him fall and possibly get crushed by debris? That would make for one of the most interesting gameplay mechanics I've seen in a good while.
 
That would be good. Unfortunately, one of the problems in Crysis was that when you destroyed a building with an enemy inside, more often than not they would just get up again. If a sheet of corrugated metal were to fall on my head, I'm not sure I would just brush it off.
 
That would be good. Unfortunately, one of the problems in Crysis was that when you destroyed a building with an enemy inside, more often than not they would just get up again. If a sheet of corrugated metal were to fall on my head, I'm not sure I would just brush it off.

Yeah, I was kinda frustrated with that too... blow up one of those platforms they like to sit in and snipe you from, and they just get up and grab an automatic.
 
So they are using some random physics engine that probably just dumps everything onto the CPU? Hard to see how they can really spin that into an advantage. If they really want to impress in that regard maybe they could have their built-in physics use the GPU via OpenCL or similar.
How can you not see an advantage? If it can be done using your extra CPU cores (which won't be used by the game otherwise) without having to buy a specific company's GPU, how is that not progress? The GPU will be busy enough; let the unused CPU cores go to good use.

Not using Nvidia PhysX is good for not locking the consumer into a particular brand. Not supporting (optional) GPU physics is bad, as it's one less option for the consumer. If it's done via OpenCL, then it's not locking anyone into any brand.

I don't think my 920 would have problems with it, but it would be nice to be able to toss in a $100 GPU (or just have your uber HD GTX Eleventy Billion do it along with graphics) to boost physics on a marginal CPU, rather than being forced to upgrade the CPU (which usually seems to involve a new board and RAM too).
 
I believe there are still a lot of limitations with the CPU, as it is not as massively parallel as a GPU. I doubt you can do stuff like fluid dynamics or simulate an insane number of particles on the CPU without negatively impacting overall performance.

Otherwise we would have seen many games doing amazing physics on the CPU by now, if CPUs "have more than enough power" ;)

Go ahead and try to simulate fluid dynamics with a GPU. If you're talking about the relatively simple ~100k-point "fluid" simulations, it looks like total crap once you change the bounding box to something larger than a 3x2x2 coffer, and it hammers a high-end GPU (100k particles absolutely maxes out a GTX 285). Fluid dynamics is ridiculously expensive, and you won't be seeing it in games in this point-particle form for a decade at least, for anything more significant than some firehose action, some disappearing raindrops, or brief dynamic effects. Water will be fast-Fourier-transform synthesis for a long time to come (a toy sketch of the idea follows this post), and you don't need a GPU for that.

As far as cloth & soft-body physics go, once you take efficiency into account, a Nehalem/Lynnfield is probably about as powerful as Cell, considering how dogshit poor Cell is at other things that sap a lot of CPU time (AI, for instance). Soft-body stuff has been done on the PS2, and good quality has been achieved on the PS3. I don't see why offloading physics to the GPU is in any way a good idea unless we plan on continuing this trend of making console-port PC titles with 4-year-old graphics.
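To unpack the FFT-water claim above: the usual trick is to describe the water surface as a spectrum of waves and inverse-transform it into a height field each frame, which is cheap enough for a CPU. A toy 1D version, assuming nothing about any particular engine:

```cpp
// Toy 1D version of FFT-synthesized water (invented for illustration): pick a
// few wave amplitudes in frequency space, enforce Hermitian symmetry so the
// heights come out real, and inverse-transform to get a height profile.
// Real implementations (Tessendorf-style ocean) do this in 2D with a
// physically based spectrum, but the mechanics are the same.
#include <complex>
#include <cstdio>
#include <vector>

using cd = std::complex<double>;
const double PI = 3.14159265358979323846;

// Recursive radix-2 Cooley-Tukey FFT; the length must be a power of two.
static void fft(std::vector<cd>& a) {
    const size_t n = a.size();
    if (n <= 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (size_t i = 0; i < n / 2; ++i) { even[i] = a[2 * i]; odd[i] = a[2 * i + 1]; }
    fft(even);
    fft(odd);
    for (size_t k = 0; k < n / 2; ++k) {
        cd t = std::polar(1.0, -2.0 * PI * double(k) / double(n)) * odd[k];
        a[k]         = even[k] + t;
        a[k + n / 2] = even[k] - t;
    }
}

int main() {
    const size_t N = 64;                        // grid resolution
    std::vector<cd> spectrum(N, cd(0.0, 0.0));
    spectrum[1] = cd(8.0, 0.0);                 // long swell
    spectrum[5] = cd(2.0, 0.0);                 // shorter chop
    spectrum[N - 1] = std::conj(spectrum[1]);   // Hermitian symmetry =>
    spectrum[N - 5] = std::conj(spectrum[5]);   //   purely real heights

    // Inverse FFT computed as conj(FFT(conj(spectrum))) / N.
    for (cd& c : spectrum) c = std::conj(c);
    fft(spectrum);
    for (size_t x = 0; x < N; x += 8)
        printf("h(%2zu) = %+.3f\n", x, std::conj(spectrum[x]).real() / double(N));
    return 0;
}
```

Since the whole surface costs one inverse FFT per frame (O(N log N) for N samples), it is trivial next to an actual fluid solve, which is the post's point.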
 
I believe there are still a lot of limitations with the CPU, as it is not as massively parallel as a GPU. I doubt you can do stuff like fluid dynamics or simulate an insane number of particles on the CPU without negatively impacting overall performance.

Otherwise we would have seen many games doing amazing physics on the CPU by now, if CPUs "have more than enough power" ;)

Not when the performance is limited by how well the Xbox 360 is able to run them.
 
Point us to a better-looking engine.

Or a better-performing one with high-quality SSAA, subsurface scattering on all vegetation, 2-3M polygons per frame, and 100% real-time shadows (as well as that level of shader complexity).
 
My question is, can it handle making boobies jiggle?


But in all seriousness, I think this is a fantastic idea. I'd rather not have games that are made for one particular GPU. If offloading that work to the CPU helps, I am all for it. Besides, it might make the other cores of my CPU do something now.
 
There are engines that are not far off and perform a lot better.
Now you point me to another engine that, generation after generation of amazing new video cards, still performs this badly... ;)

What engine is not far off? I have yet to see one even close, so if you want to show us one, I would love to see it.
 
Which ones? There are plenty that perform better, but none I can think of that look nearly as good.
There are games with superior art direction that may seem close at a glance, but that is about it.

There is no other engine that gen after gen of cards has trouble with. But that's because Crysis's eye candy is not free. Far Cry was the same way. It took a couple of years for hardware to catch up to that game and for other games to start rivaling it in graphics.
 
Yes, I really suggest some people set the AI to ignore and just go for a wander around the game. Even though I would like the palm trees to be less jaggy, it is still a nice game to wander about in.
 
The ironic part about this topic is that people who obviously have some link with technology (they visit this forum, after all) are applauding the lack of a high-end option in a new game engine.

Nobody forces developers to use hardware-accelerated physics, but it does allow for a whole lot of interesting options in games. If OpenCL is used as the API, then virtually every GPU currently used to play modern games can do GPU physics. Now what's wrong with that?

My company's in-house game engine uses PhysX at the moment because it was the best option when we started programming the engine, but if Bullet proves to offer good GPU physics via OpenCL soon, we may switch to it for the next version of the engine.
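For context on what such a switch involves, the sketch below shows Bullet 2.x's stock CPU rigid-body setup, roughly the library's canonical minimal example rather than anything from the poster's engine:

```cpp
// Minimal Bullet 2.x rigid-body world: a 1 m box dropped onto a static ground
// plane, stepped on the CPU. This is the library's plain CPU path; Bullet's
// OpenCL-accelerated solvers were still experimental at the time.
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard boilerplate: collision config, dispatcher, broadphase, solver.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // Static ground plane at y = 0 (mass 0 makes a body immovable).
    btStaticPlaneShape groundShape(btVector3(0, 1, 0), 0);
    btDefaultMotionState groundMotion;
    btRigidBody ground(btRigidBody::btRigidBodyConstructionInfo(
        0, &groundMotion, &groundShape, btVector3(0, 0, 0)));
    world.addRigidBody(&ground);

    // Dynamic 1 m box starting at y = 10.
    btBoxShape boxShape(btVector3(0.5f, 0.5f, 0.5f));
    btScalar mass = 1.0f;
    btVector3 inertia(0, 0, 0);
    boxShape.calculateLocalInertia(mass, inertia);
    btDefaultMotionState boxMotion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody box(btRigidBody::btRigidBodyConstructionInfo(
        mass, &boxMotion, &boxShape, inertia));
    world.addRigidBody(&box);

    // Two seconds at 60 Hz: watch the box fall, hit the plane, and settle.
    for (int i = 0; i <= 120; ++i) {
        world.stepSimulation(1.0f / 60.0f, 10);
        if (i % 30 == 0) {
            btTransform t;
            box.getMotionState()->getWorldTransform(t);
            printf("t = %.1f s  box y = %.3f\n", i / 60.0f, t.getOrigin().getY());
        }
    }

    world.removeRigidBody(&box);      // detach bodies before they go out of scope
    world.removeRigidBody(&ground);
    return 0;
}
```

Part of Bullet's appeal in this comparison is that the same API stays brand-agnostic whether the solver runs on the CPU or, eventually, on OpenCL.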
 
I'd rather a developer use Havok or their own physics engine, because then they'll be able to create a game where physics can directly affect gameplay.

With PhysX-supported games, the developer is limited in creating a game where physics has a tangible effect on gameplay, due to needing to maintain the core gameplay for non-PhysX gamers (ATI and console users).
 
Which ones? There are plenty that perform better, but none I can think of that look nearly as good.
There are games with superior art direction that may seem close at a glance, but that is about it.

There is no other engine that gen after gen of cards has trouble with. But that's because Crysis's eye candy is not free. Far Cry was the same way. It took a couple of years for hardware to catch up to that game and for other games to start rivaling it in graphics.

Take a look at CPU utilization during Crysis, especially on a highly clocked Nehalem.
 
I'd rather a developer use Havok or their own physics engine, because then they'll be able to create a game where physics can directly affect gameplay.

With PhysX-supported games, the developer is limited in creating a game where physics has a tangible effect on gameplay, due to needing to maintain the core gameplay for non-PhysX gamers (ATI and console users).

What you say only applies to GPU physics.

PhysX, like any other physics package, is capable of gameplay physics; it's only on the GPU that it is limited to graphical effects, which don't need physics of any kind to simulate what they do.

Nvidia incentives + lazy programming = waste of resources.
 
Take a look at CPU utilization during Crysis, especially on a highly clocked Nehalem.

I would prolly see one core at 90 to 100% usage and another around 50%, with the other two under 20%. Is your point that it still looks good without CPU-crushing physics effects? If so, you are preaching to the choir. Or are you trying to point out that it makes only mediocre use of multi-core CPUs? Again, you would be preaching to the choir.

What does that have to do with what I typed in my post regarding it and GPUs?
 
I would prolly see one core at 90 to 100% usage and another around 50%, with the other two under 20%. Is your point that it still looks good without CPU-crushing physics effects? If so, you are preaching to the choir. Or are you trying to point out that it makes only mediocre use of multi-core CPUs? Again, you would be preaching to the choir.

What does that have to do with what I typed in my post regarding it and GPUs?

I think he's saying pretty much both of those things; there's plenty of room for CPU physics.
 