CRYENGINE 5.3 Now Available, Supports NVIDIA PhysX

HardOCP News

According to this blog post, Crytek has released CRYENGINE 5.3 today with NVIDIA’s PhysX built-in as an alternative to the company's proprietary CryPhysics. Hit the link for full release notes and a link to download CRYENGINE 5.3 for free.

NVIDIA’s PhysX technology is coming to CRYENGINE in the 5.3 release as a beta, making it the first time that users will have a built-in alternative to our proprietary CryPhysics suite available to use when building their games. This will give all Cryengineers more flexibility than ever before when it comes to selecting the right physics solution for their games’ unique needs. For those unfamiliar with it, NVIDIA PhysX is a scalable multi-platform game physics solution supporting a wide range of devices that has been used in many of today’s most popular games and platforms.
 
Probably just their way of moving away from their own physics engine so they do not have to worry about it :/
 
I guess this is a little pre-bankruptcy goody that they're releasing into the wild. What a nice bunch of fellahs... except they apparently don't pay their employees.
 

I guess you aren't aware it has been free for some time.
 
That C code makefile is a pretty big change. It means you can use a standard SVN repository to post code, have it build stubs as test code, verify each piece independently, and then move to more object-oriented code. A mystery box then only has to hold functions, not overhead. If you don't understand what some code is doing, you put it on a virtual machine and log everything it does while feeding it random variables (a rough harness for that is sketched below). When it throws stack overflow errors that consistently overflow in the same areas, you go through that code to see why. If it doesn't throw errors, the code you checked is mostly safe. Once someone has been assigned the code, documented what it does in pseudocode in the comments, and signed off, it is marked as safe. Each programmer writes what each function does to a database, and when people need that functionality they simply call the most efficient mystery box.
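
A minimal sketch of that kind of harness, assuming a hypothetical undocumented function named mystery_box() (the name, signature, and stub body are made up for illustration): feed it random variables, log every call, and watch where it blows up.

```c
/* Toy fuzz harness for an undocumented "mystery box" function.
 * Run it in a throwaway VM and keep the log; a crash or hang
 * points you at the code you need to go read. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for the real undocumented function under test;
 * swap in the actual code you inherited. */
static int mystery_box(int input)
{
    return input / 2 + 1;
}

int main(void)
{
    FILE *log = fopen("mystery_box.log", "w");
    if (!log)
        return 1;

    srand((unsigned)time(NULL));

    for (int i = 0; i < 100000; i++) {
        int in  = rand();             /* random variable in        */
        int out = mystery_box(in);    /* a crash here flags it     */
        fprintf(log, "call %d: in=%d out=%d\n", i, in, out);
    }

    fclose(log);
    return 0;
}
```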

With a makefile you don't need to know what is being called by default, only what you have tested with the code you have checked off. This was a lot of work in the past, because when you needed to do something it either worked in the engine or it didn't, and major systems might have to be gutted because you don't have time to dig through every undocumented dependency. Once you have gone through an engine and broken it down into a main code path and functions, instead of a huge block of text a million lines long you have a file structure: lighting and texturing, vert placement and movement, dependent systems, or whatever setup makes sense. That builds the game into the code rather than simply sitting it on top, where there's a bunch of redundant code and all the functions are inlined instead of referenced.

Instead of asking where the bottlenecks are, you run dependency checks and test how fast the computer can go through the iterations of the code. Coders build by what the functions do; artists build by either type of asset or where the asset is used. Being able to go through and overwrite the basic assumptions in a working engine means people can start by testing their assets with the basic engine, and when it doesn't do something they need, they write pseudocode that says: my textures need to be able to source fractal patterns, how do I get them into the engine? So the person looks at the steps the rendering engine uses and says, this is the code block that brings in textures... all the functions are inline. The lead coder shrugs, goes straight down the code, assigns each inline section to a different coder, and rewrites the variable declarations and how the functions pass data in and out. Then, once the code works the way it used to, the lead coder looks at what is brought in, each function is compared against its pseudocode, and the functions are optimized with fresh eyes on them.

Usually you end up with a tenth of the functions, replacing slow loops with break statements and switch statements. If you need to ask what file type something is, you set up a switch statement so that on the test value a one is a jpg, a two is a png, etc., and it only has to run the test statements instead of the whole code path (there's a rough sketch of this below). Then you write code based on an open shading language, or simply code a random function that references a table of randomly seeded ones and zeros, so that when it multiplies what is in the jpg or png file by the randomized zero/one or float values, it is storing fast math instead of huge texture files.

Every texture file, when you use it in a render engine, is at its simplest a map of red, green, blue, and off pixels, with float values for transparency. If you need to edit textures you need slightly more than that, like layers and so forth, but for rendering, a single layer can be written as a grid of R00|G00|B00|F00. You can speed up the processing by using four registers, storing a number in each, and when you pull them off, writing the blue, green, then red data to a grid, passing ASCII instead of float values to the rendering client. Depending on what the render engine does this can speed up or slow down the engine, but with a standard makefile you can have two builds running on standard machines, see which one the code runs faster on, and swap in various code until you have the functionality at runtime speed. If you have to target an engine that dictates where it expects everything to be, you can't move things around and add functionality as fast.
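
As a very rough illustration of two of the ideas above, the file-type switch and the R00|G00|B00|F00 packing, here is a C sketch. The type codes and the pack_rgbf() helper are invented for this example, not pulled from any real engine:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical file-type codes: on the test value, a one is a jpg,
 * a two is a png, etc., so the dispatch only runs the test
 * statements instead of walking a chain of checks. */
enum file_type { FT_JPG = 1, FT_PNG = 2, FT_TGA = 3 };

static const char *loader_for(int type)
{
    switch (type) {
    case FT_JPG: return "decode_jpg";
    case FT_PNG: return "decode_png";
    case FT_TGA: return "decode_tga";
    default:     return "unknown";
    }
}

/* Pack one pixel's red, green, blue, and a fourth "F" byte into a
 * single 32-bit word: the R00|G00|B00|F00 grid idea, one
 * register-sized value per pixel instead of four separate floats. */
static uint32_t pack_rgbf(uint8_t r, uint8_t g, uint8_t b, uint8_t f)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  | (uint32_t)f;
}

int main(void)
{
    printf("type 2 -> %s\n", loader_for(FT_PNG));
    printf("pixel  -> 0x%08X\n", (unsigned)pack_rgbf(255, 128, 0, 64));
    return 0;
}
```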
 
PhysX is crap because it's tied to a hardware vendor, and everyone else is moving away from it.
 

That is only for GPU-accelerated physics, which almost no game developer is interested in. Bullet Physics has been offering a vendor-neutral solution since around 2010/2011, and yet, despite more and more games using that engine (including triple-A titles like GTA V), none of them ever use the GPU-accelerated features. CPU-based PhysX, especially since major version 3, is quite good. Actually, good enough to force Havok to make some noise that their solution is still the best when it comes to physics simulation for game engines.
 
Too bad it doesn't support its employees :(
 

Oh good, more proprietary GameWorks-like stuff. But hey, it's not like NVIDIA would use that to gain an advantage and lock someone else out, right? /s
 
GameWorks is used for things like hair and tearing cloth... a lot of game studios and film productions use it. Most people that use it keep screaming for an open-source version, but many studios stick with it because when it comes to eye candy, making hair and fur look good on the CPU is really expensive in cycles. On the GPU it is relatively cheap. Pretty sure there are reviews on here showing that off.
 
It's actually hard to find games that use hardware GameWorks or PhysX. Most games do it on the CPU now. I was excited that Far Cry would use hardware PhysX for the fur and such... nope, all processor.
 
I would guess The Witcher 3 is one of those games you either like or you don't. The Division has some issues, and they simply ran out of time converting heads from body scans, but in most of the games that use it for hair or grass it is very obvious, since it goes from sprites and modeled geometry to hair that moves in clusters and bounces. I have a feeling most games don't want to admit using it so their AMD users don't feel left out or shortchanged. I have seen the effect in some games that don't mention it. Just look at the hair in The Witcher 3 and you begin to understand why they simply take clusters, assign them to individual shader cores, and then render them back to front: the hair's movement is calculated individually, the layers are written to a 3D grid, and each cluster's movement is compared to the layers in front of it. Then the hair is rendered so that anything in the topmost grid is seen, anything visible past that is rendered, and then each layer after (a toy sketch of that ordering is below). AMD could pretty much do it, but they would have to reinvent the wheel, since they use a ring bus and their memory is addressed in a totally different way on the current cards.
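
For what it's worth, the back-to-front ordering described there can be sketched as a plain depth sort over hair clusters. This is a generic painter's-algorithm toy under those assumptions, not how HairWorks actually implements it:

```c
#include <stdio.h>
#include <stdlib.h>

/* A hair cluster with a view-space depth; larger = farther away. */
struct cluster {
    int   id;
    float depth;
};

/* Sort farthest-first so clusters draw back to front and nearer
 * layers end up on top of the ones behind them. */
static int by_depth_desc(const void *a, const void *b)
{
    float da = ((const struct cluster *)a)->depth;
    float db = ((const struct cluster *)b)->depth;
    return (da < db) - (da > db);
}

int main(void)
{
    struct cluster clusters[] = {
        { 0, 1.5f }, { 1, 4.2f }, { 2, 2.8f }
    };
    size_t n = sizeof clusters / sizeof clusters[0];

    qsort(clusters, n, sizeof clusters[0], by_depth_desc);

    for (size_t i = 0; i < n; i++)
        printf("draw cluster %d (depth %.1f)\n",
               clusters[i].id, clusters[i].depth);
    return 0;
}
```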
 

Because people complained relentlessly about it being nVidia-only, even though most of it was optional extra fluff that didn't affect gameplay at all.
 
It wasn't only AMD people complaining, it was everyone that used it. GPU-based physics ran horribly, causing instability, stutters, and massive FPS drops in essentially every game it was featured in, unless you owned a Titan X.
 