Nvidia DLSS Analysis

PhysX is the physics engine in UE4, the entire physics engine. If you have ever played a UE4 game, you have used PhysX.

PhysX is probably one of the more confusing technologies out there, because people don't understand the SDK. There is a huge difference in the ways it gets used:

1) PhysX GPU/CPU compute - the plain SDK, plugins that even Havok 2.0 uses; it's practically a standard at this point, and multiple engines, studios, and publishers actively use it. These use cases typically don't carry the green Nvidia logo; it's just labeled PhysX, but the SDK is still being leveraged (just not GPU-accelerated).

2) GPU PhysX enhancement SDK - this is what people tend to associate PhysX with: GameWorks titles with a giant Nvidia logo slapped on. It does exactly what it says: enhancements, and in most cases over-exaggeration, of PhysX effects, usually marketed as "PhysX accelerated on CUDA." A rough sketch of how that split looks at the SDK level follows below.
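
For anyone who hasn't poked at the SDK side, here's a rough sketch of what that split looks like in code. Treat it as illustrative only: the names follow the PhysX 3.4/4.x SDK as I remember them, and exact members and flags do move between releases, so check the headers for your version. The point is that both cases above go through the same SDK; "GPU PhysX" is an opt-in CUDA context plus a couple of scene flags, not a separate engine.

```cpp
// Illustrative only -- PhysX 3.4/4.x-style names from memory; verify against
// the headers of the SDK version you actually ship with.
#include <PxPhysicsAPI.h>
using namespace physx;

// Case (1) vs case (2) from the list above: same SDK, same scene setup.
// Pass a null CUDA context and the simulation stays on the CPU; pass a real
// one and rigid-body dynamics and broadphase get offloaded to the GPU.
PxScene* createGameScene(PxPhysics& physics, PxCudaContextManager* cuda)
{
    PxSceneDesc desc(physics.getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);      // CPU worker threads
    desc.filterShader  = PxDefaultSimulationFilterShader;

    if (cuda)  // the "accelerated PhysX on CUDA" path
    {
        desc.cudaContextManager = cuda;
        desc.flags             |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
        desc.broadPhaseType     = PxBroadPhaseType::eGPU;
    }
    return physics.createScene(desc);
}
```

Either way the game calls the same simulate()/fetchResults() loop every frame, which is why "uses PhysX" and "GPU PhysX title" are not the same claim.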

Either way, people can't deny that companies have always pushed the envelope and introduced new technologies, and that they have locked those experiences down to their hardware. Until years go by and things become "standard," they should be able to profit on those tech portfolios to make their GPUs look attractive to customers. AMD supposedly makes open-sourced improvements; they can label it however they choose, but in some cases that's far from the actual truth. IF Nvidia and AMD used the same open-sourced technologies, there would be zero difference between the platforms. There are only so many ways to dice something, and silicon has limits on what you can achieve in any given generation at the physical level.

7nm, and the failure to push it out, has caused problems for the industry, which has relied on Moore's law to provide its performance boosts. Architecturally, one could argue that while the two are different they are the same in many ways, since the principles by which a GPU works can only be altered so much. Double- and triple-pass "printing" (for the sake of my sanity I'll use this term) will be used at 7nm to save cost while EUV isn't ready yet; the main issue there is yields, and the consequence is that performance gains won't be as great or as efficient as they should be. AdoredTV did a video explaining this.

DLSS will probably be used to a great degree in the industry. Near-full-quality AA with a massive performance boost is exactly what engine makers and studios need to keep pushing the envelope on what can be achieved.

Remember (and people never really get this): "optimization" is a trash-can word in this industry, with different meanings depending on who is talking.
Engine developers - mean hardware/OS/API/driver optimization: tuning the software to run efficiently and leverage the hardware.
Gamers - typically mean the same thing as engine developers.
Game studios - mean tuning graphical fidelity: getting the game to run smoothly at a target resolution and frame rate/frame time, usually by scaling back features like draw distance or dialing down reflections, etc.
Publishers - typically mean both, or speak in terms so generic you simply can't infer what they mean.

DLSS will give game developers more wiggle room to provide more advanced features in the future.
 
The fact that it requires Nvidia-created profiles is going to bottleneck the technology. Such prerequisites have hobbled pretty much every other technology to date, from every vendor that has ever created such a scheme.
For example...

It would be great if you could just toggle DLSS on in the control panel, and maybe eventually that will be the case. As the AI matures, it may not have to handle things on a case-by-case basis. But even if it's only applied to the most popular games (as seems to be the case for now), who cares, as long as it works?

Again, it's in its infancy; just give it time.
 
Wow, there's almost no point to this at all. It looks like the worst of both worlds to me:

Compared to normal SSAA: flickering on single-pixel content, and it can't antialias very thin lines properly. It definitely doesn't have the same visual quality as real SSAA.
Compared to FXAA: a blurrier image, and it's substantially slower.

It is a little bit better than FXAA visually (except for the blur), and it strikes me as very similar to TXAA back in the day. I won't be missing this at all.
 
Those of you calling this blurry...

Are you looking at full-screen, uncompressed 4K images and switching between them while sitting at gaming distance? Or are you going off heavily blown-up crops of a 4K screenshot?

Also, remember what was said in the video showing the Infiltrator demo: during certain portions of the demo, effects ended up crushing the frame rate at true 4K no matter the hardware used, because of outright bandwidth limitations. The reasons why aren't relevant in this context. There's the demo, it's what we have, and at certain points the frame rate crashes during certain effects. That's just like any game that has flaws, overdone effects, or a worst-case combination of effects and assets on screen at once.

By using DLSS instead, which renders the scene at 1440p and does its wizardry to bring it up to 4K, those portions just zoomed right by at high frame rates. That is more than just interesting.
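
For perspective, the raw pixel math alone explains a lot of that headroom. A back-of-the-envelope sketch (pixel counts only; real frame cost obviously depends on much more than resolution):

```cpp
#include <cstdio>

int main()
{
    // Pixels shaded per frame at the internal DLSS resolution (1440p in the
    // Infiltrator demo) versus native 4K. Illustrative arithmetic only.
    const long long px1440p = 2560LL * 1440;   // 3,686,400 pixels
    const long long px4k    = 3840LL * 2160;   // 8,294,400 pixels
    std::printf("native 4K shades %.2fx the pixels of 1440p\n",
                static_cast<double>(px4k) / static_cast<double>(px1440p));  // ~2.25x
    return 0;
}
```

Shading roughly 2.25x fewer pixels per frame is where most of the speedup comes from; the upscaling pass then just has to cost less than the pixels it replaces.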

I'm not saying we don't WANT real brute-force native 4K or 8K scenes, but maybe that's just not realistic from a processing standpoint. I was grudgingly impressed that it could fly through sections that would otherwise be a slideshow while maintaining the level of quality it did.

Sometimes, actual use case does matter.

Put the whole "Nvidia monopoly" thing down for a second. AMD doesn't have a cheap monster 4K card either. At the current state of technology, how much video card, or how many video cards, would it take to get you the 4K 144 Hz you want at ultra settings? More than currently exists, and more money than any average gamer can spend even if it were possible to build. This is going to be true for the short to mid term (up to 10 years at the current pace of development).

On top of that, we have the specter of perhaps needing to render 4K per eye at very high refresh rates to make VR and AR really good, with the wide FOV that actually immerses you.

And dog-pile on top of that the need to start ray tracing light sources in scenes, which needs even more processing power (more silicon, bigger dies, more cost to brute-force it).

How much processing power IS that, exactly, at 4K or higher? More importantly, what exactly is the path to get there from HERE (from the 1080 Ti, which is already quite a monster)? This has to get done on reasonably sized GPU dies that consumers can afford to buy. As we can all see, the 1080 Ti and now especially the Turing cards are pushing the limits on price.
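
To put very rough numbers on that, here's pixel throughput alone, measured against a 1080p/60 baseline. It's a crude proxy (it ignores shading complexity, bandwidth, and everything else), and the VR line assumes a hypothetical dual-4K, 90 Hz headset purely for illustration:

```cpp
#include <cstdio>

int main()
{
    // Crude pixel-rate comparison; treat every number as back-of-the-envelope.
    const double base1080p60 = 1920.0 * 1080.0 * 60.0;        // ~124 Mpixels/s
    const double uhd144      = 3840.0 * 2160.0 * 144.0;       // ~1.19 Gpixels/s
    const double vrDual4k90  = 2.0 * 3840.0 * 2160.0 * 90.0;  // ~1.49 Gpixels/s

    std::printf("4K @ 144 Hz       : ~%.1fx the pixel rate of 1080p @ 60 Hz\n",
                uhd144 / base1080p60);        // ~9.6x
    std::printf("dual-4K VR @ 90 Hz: ~%.1fx\n",
                vrDual4k90 / base1080p60);    // ~12.0x
    return 0;
}
```

And that is before ray tracing, which adds cost per pixel on top of the pixel count.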

Sure, Nvidia probably could have made a die the same size as Turing's and packed as many CUDA cores onto it as humanly possible. Then what's left? At that point we're at the endgame of Pascal. Die shrinks would be the only way forward, and looking at Intel's woes it's apparent that 7nm tech and smaller has real, major difficulties in production.

Based on the available options, tech like this, or other similarly creative solutions, is the only way we are getting there inside the next decade. Let's develop it and see where it leads.
 