While many reviews have already dug into the technical aspects of 4A Games' newest Metro title, Digital Foundry recently posted a video showing how all that fancy tech actually impacts the in-game experience. Like many other games and benchmarks, Metro Exodus is pretty and demanding, but DF points out that those visuals do an excellent job of making the game feel immersive rather than coming off as gimmicky features. The particle effects, for example, really contribute to Metro's moody atmosphere, while little touches like a remarkably detailed first-person body view and the conspicuously detailed shadow it casts make the game feel realistic.
Check out the video on Metro Exodus's immersiveness here, or read Eurogamer's lengthy interview with 4A programmer Ben Archard and CTO Oles Shishkovstov if you're more interested in the technical aspects.
The open world maps are completely different to the enclosed tunnel maps of the previous games. Environments are larger and have way more objects in them, visible out to a much greater distance. It is therefore a lot harder to cull objects from both update and render. Objects much further away still need to update and animate. In the tunnels you could mostly cull an object in the next room so that only its AI was active, and then start updating animations and effects when it became visible, but the open world makes that a lot trickier... So, a good chunk of that extra time is spent updating more AIs, more particles and more physics objects, but a good chunk is also spent feeding the GPU the extra stuff it is going to render.

We do parallelise it where we can. The engine is built around a multithreaded task system. Entities such as AIs or vehicles update in their own tasks. Each shadowed light, for example, performs its own frustum-clipped gather for the objects it needs to render in a separate task. This gather is very much akin to the gathering process for the main camera, only repeated many times throughout the scene for each light. All of that needs to be completed before the respective deferred and shadow map passes can begin (at the start of the frame).