Nvidia DLSS Analysis

AlphaAtlas
Eurogamer posted an analysis of Nvidia's DLSS tech found in their Turing graphics cards, and the results are interesting. DLSS does boost game performance significantly in some cases, while still providing good image quality. The author points out that Final Fantasy 15 ships with a particularly blurry temporal AA implementation, and running DLSS instead provides an image quality boost in some areas. Check out the screenshot comparisons on Eurogamer's website, or watch the video here.

The Infiltrator demo also serves to highlight that the performance boost offered by DLSS is not always uniform - it's not a straight 35-40 per cent uplift throughout. The demo features a number of close-up scenes that stress the GPU via an insanely expensive depth of field effect that almost certainly causes extreme bandwidth issues for the hardware. However, because the base resolution is so much lower, the bandwidth 'crash' via DLSS is far less pronounced. On a particular close-up, the DLSS result sees the scene play out over three times faster than the standard, native resolution TAA version of the same content. But has Nvidia truly managed to equal native quality? For the most part, it passes muster and inaccuracies and detail issues are only truly noticeable when conducting direct side-by-side comparisons - but we did note that in the demo's climactic zoom-out to show the high detail city, the lower base resolution does have an impact on the quality of the final image. Is it likely to distract a user playing the game and not conducting detailed side-by-side comparisons? Highly unlikely.
 
Some of the spots in those screenshots look terrible with DLSS; the truck in particular is an absolute blur-fest even when compared to the TAA screenshot.
 
While I respect what they are up to, this seems more like a solution in search of a problem. At 4K, turning on FXAA smooths out edges on foliage and jaggies and... honestly I think it's fine. *shrug*. I turn on 2xAA and everything still looks fine... and I lose 5 frames... again, this feels more like "and look what ELSE you get..." like those old '70s infomercials... NOW HOW MUCH WOULD YOU PAY?
 
All this tech is great, but we have to start somewhere. DLSS and ray tracing are both things that have to come out at some point, and with time the changes and benefits they bring to the table will be more noticeable and vastly improved. The 1st gen of every new tech can't be expected to be perfect.
 
All this tech is great, but we have to start somewhere. DLSS and ray tracing are both things that have to come out at some point, and with time the changes and benefits they bring to the table will be more noticeable and vastly improved. The 1st gen of every new tech can't be expected to be perfect.

Well it shouldn't be a cluster of nonsense either.
 
This one feels to me like a... "look gamers, all this AI compute stuff we've been doing, it's for you too."

The idea of a GPU driver needing to get AI-crunched numbers for each and every game makes this a silly feature imo. I don't see this being realistic. Waiting for new GPU drivers to support X or Y game. Are they going to crunch numbers for year-old games with low user numbers when a developer releases a low-volume DLC, or worse, when users create content? I can only imagine the smear show DLSS will be when some DLC gets released with completely different textures and environments. Then I guess you're turning it off till Nvidia does a driver update.

Cool tech... but I can see this being used in a handful of games and then disappearing as developers and NV lose interest.
 
I think we may need to take these two techs together to understand what Nvidia is trying to do.

The DLSS is likely part of the master plan to get ray tracing working quickly enough over the long run.

Render at a lower res = massively easier on CPU and GPU cycles to render.
Ray Tracing using accelerated RT cores also happens at this lower res at a decent frame rate where it's massively easier to do.
Apply DLSS, using AI and deep learning trained on game images, to bring it back up to snuff on high resolution displays.

How else can they avoid having to brute-force ray tracing an entire 4k or 8k scene? Looking 3, 5 or 10 years down the line, when 4k and 8k screens and VR headsets are all anyone uses, there has to be a way to ray trace scenes without casting rays on that many pixels.

So yeah, I think I get it. This might be annoying as hell right now but this might just be crazy like a fox over the long haul.

I'm thinking DLSS is just a happy accident for rasterized-only games. It was really only developed to make RT happen.
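To put rough numbers on the "casting rays on that many pixels" point, here's a quick back-of-the-envelope sketch in Python. It assumes one primary ray per pixel and a hypothetical 1440p internal resolution for a 4K output; the internal resolutions Nvidia actually uses aren't confirmed here, so treat the figures as illustrative only.

```python
# Back-of-the-envelope sketch of the pixel/ray budget argument above.
# Assumes 1 primary ray per pixel; the 1440p internal resolution for a
# 4K DLSS target is an assumption, not a confirmed figure.

def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)   # what brute-force RT at 4K would have to trace/shade
internal  = pixels(2560, 1440)   # lower-res frame that DLSS-style upscaling starts from

print(f"4K pixels:          {native_4k:,}")   # 8,294,400
print(f"Internal pixels:    {internal:,}")    # 3,686,400
print(f"Rays/shading saved: {1 - internal / native_4k:.0%}")  # ~56% fewer, i.e. 2.25x
```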
 
I think we may need to take these two techs together to understand what Nvidia is trying to do.

The DLSS is likely part of the master plan to get ray tracing working quickly enough over the long run.

Render at a lower res = massively easier on CPU and GPU cycles to render.
Ray Tracing using accelerated RT cores also happens at this lower res at a decent frame rate where it's massively easier to do.
Apply DLSS to it to bring it back up to snuff on high resolution displays.

How else can they avoid having to brute-force ray tracing an entire 4k or 8k scene? Looking 3, 5 or 10 years down the line, when 4k and 8k screens and VR headsets are all anyone uses, there has to be a way to ray trace scenes without casting rays on that many pixels.

So yeah, I think I get it. This might be annoying as hell right now but this might just be crazy like a fox over the long haul.

The rays can be calculated at lower resolution while the rasterized elements (textures) are still computed at the target resolution.

What that means is DLSS will have no impact on the rays, whether they're calculated at half res, full res, or not at all.

Developers have decided they can get away with calculating the rays at a lower resolution because they don't need to be calculated at a higher one. The data is laid onto the scene. For texturing and AA purposes the scene is still full res. The lower-res ray calculation idea is actually quite ingenious and has no impact on final render quality.
 
I'm all for ray tracing if they can come up with a gpu with enough horsepower to do it natively in real time. But from what I've seen of DLSS it cuts down resolution then runs AI to get a form of AA and that's besides needing custom coding to work on a per application basis.

I'd rather keep a rasterization GPU that runs native resolution and AA at hellishly fast speeds, maybe with ray tracing thrown in.

DLSS is just AI looking for a job to do.
 
While I respect what they are up to, this seems more like a solution in search of a problem. At 4K, turning on FXAA smooths out edges on foliage and jaggies and... honestly I think it's fine. *shrug*. I turn on 2xAA and everything still looks fine... and I lose 5 frames... again, this feels more like "and look what ELSE you get..." like those old '70s infomercials... NOW HOW MUCH WOULD YOU PAY?

I think you are missing the point.

DLSS gives a HEFTY performance boost. Turn on DLSS and get a 30% increase in FPS at essentially no real loss of image quality.

Who doesn't want a free 30% performance boost?
 
I'm all for ray tracing if they can come up with a gpu with enough horsepower to do it natively in real time. But from what I've seen of DLSS it cuts down resolution then runs AI to get a form of AA and that's besides needing custom coding to work on a per application basis.

I'd rather keep a rasterization GPU that runs native resolution and AA at hellishly fast speeds, maybe with ray tracing thrown in.

DLSS is just AI looking for a job to do.

There’s a DLSS 2X version that takes the native res instead of a lower res image. They compare it to SSAA at 1.83x.

So DLSS is for FPS; DLSS 2X keeps FPS the same but boosts image quality. Good for me with a 3440x1440 screen...
 
I go for big hardware so I can run high res at high fps. No compromises. I'm not a competitive FPS player that needs max fps at the cost of everything else, and I think a lot of [H]'ers are like me in that regard.

The issue I see with DLSS is that it relies upon Nvidia doing software optimizations via their AI. So if they don't, you don't get any benefit. It's not a standard, and it's not hardware agnostic/neutral. At least ray tracing is in the next DirectX from MS for Windows, IIRC.
 
All this tech is great, but we have to start somewhere. DLSS and ray tracing are both things that have to come out at some point, and with time the changes and benefits they bring to the table will be more noticeable and vastly improved. The 1st gen of every new tech can't be expected to be perfect.

Then they shouldn't charge 40% more money for half-baked features either....
 
Then they shouldn't charge 40% more money for half-baked features either....

Hasn't that always been the case with Founder's Editions? If you want cheaper 2080s then wait for other board makers to offer their cards...
 
Hasn't that always been the case with Founder's Editions? If you want cheaper 2080s then wait for other board makers to offer their cards...

Lol...the "starting price" is supposedly $999, and I haven't seen a card yet that's priced less than the FE cards. Even at $999 that's 30% more than before.

Let's not kid ourselves. If the 2080 series came in anywhere near the price points of the 1080 series cards no one would complain. When you start boosting prices astronomically, people are going to balk at the price/performance.

Personally, I cancelled my $850 2080 pre-order and picked up a $500 used 1080Ti. I saved myself $350 and get virtually the same performance.
 
Apparently DLSS has a shimmering problem. And details like thin lines can be blurred out compared to TAA.

So this feature is not fully baked.
 
While I respect what they are up to, this seems more like a solution in search of a problem. At 4K, turning on FXAA smooths out edges on foliage and jaggies and... honestly I think it's fine. *shrug*. I turn on 2xAA and everything still looks fine... and I lose 5 frames... again, this feels more like "and look what ELSE you get..." like those old '70s infomercials... NOW HOW MUCH WOULD YOU PAY?
FXAA doesn't reduce the framerate much but MSAA is a big performance hit. Many of us prefer MSAA so this isn't a solution in search of a problem.
 
FXAA doesn't reduce the framerate much but MSAA is a big performance hit. Many of us prefer MSAA so this isn't a solution in search of a problem.
I totally get what you're saying, but so far it seems the visual benefit from DLSS is closer to FXAA than MSAA. DLSS seems to still blur everything, much like FXAA and TAA. It doesn't keep it crisp and sharp the way MSAA does.
 
I go for big hardware so I can run high res at high fps. No compromises. I'm not a competitive FPS player that needs max fps at the cost of everything else, and I think a lot of [H]'ers are like me in that regard.

The issue I see with DLSS is that it relies upon Nvidia doing software optimizations via their AI. So if they don't, you don't get any benefit. It's not a standard, and it's not hardware agnostic/neutral. At least ray tracing is in the next DirectX from MS for Windows, IIRC.

Yeah, but if they do it for every game moving forward, it's not a problem, and it's on Nvidia to do, not the game devs; they just have to submit samples ahead of time if they want a profile at launch (or near launch).

I could envision a (near) future where you can opt in to DLSS training on your PC, which captures a bunch of frames, sends them to Nvidia via GeForce Experience, and then uses your data along with thousands of others playing the same game to mine the source material to build DLSS profiles. So for games that don't have a profile, you just get a bunch of people that play it to enable training and Nvidia can build a profile. As all of the RTX cards have tensor cores that can be used to build models, they could also have your own GPU build a profile in idle time, but it'd likely take a lot longer than an Nvidia render farm that has 10 GPUs per server that can work on it. It also may take too much memory for a single card to handle, but it raises a lot of interesting possibilities.

Honestly, the raw frame rate performance of my 1080 is enough, so I was hoping more for stuff like this - enhanced features that take graphics to the next level, support for advanced VR rendering (which RTX cards also have), etc. I don't care much about 4k, and most games that don't get 90fps in VR are due to shit coding on the dev's part that hammers a single CPU thread - and these advanced techniques can definitely help alleviate some of those bottlenecks and enable the 8k-horizontal headsets that are coming.
 
Doing techniques like MSAA isn't going to be viable going forward. We have to REDUCE overhead by any means necessary. It's the only way we will ever get to 4k 144Hz or 8k anything.

When we start talking about image "sharpness" at resolutions like 4k+, removing shimmering pixel artifacts and jagged edges becomes the principal issue. Look at how zoomed into the scenes these guys are getting to find fault with it.

Yes, I agree there's always room to tweak but the point in all this is how many people can either see or care about the level of blurring that's happening? In a blind test Nvidia probably has it down to just a few people out of a hundred being able to pick out any issues at all. And they're rendering the scene at a lower base res!!! That's just... amazing.

They're onto something here. Maybe this is the destination, maybe not, but honestly I just found one good reason for the 2080. If I want to run 4k going forward, I can use DLSS to do it with better overall results.

Sure native is nice but this makes running a 4k monitor for gaming way more sane.

- I'd like to have a 32"+ monitor at 4k for desktop work. Absolutely.
- I like gaming at 1440p and don't care if the game is 4k or not, but I don't like 1080p. That's too low a res for my taste.
- I don't want to spend $1200-1300 on a graphics card.
- The 1080Ti doesn't really handle many games at 4k well at high graphics settings.
- 4k screens don't upscale 1440p as cleanly as they upscale 1080p, but I don't WANT 1080p.

So yeah, DLSS and a 2080 (IF MORE GAMES SUPPORT IT, I KNOW) makes 4k far more usable overall.
 
I think you are missing the point.

DLSS gives a HEFTY performance boost. Turn on DLSS and get a 30% increase in FPS at essentially no real loss of image quality.

Who doesn't want a free 30% performance boost?

No loss of image quality? What video did you watch? Because blurring an image is a huge loss in image quality.
 
While I'm all about performance + quality/fidelity, I do not want things to go in a proprietary direction as Nvidia seems to be pushing with DLSS. If DLSS v2 or v3 can give +30% fps for "free", great... but that's not here yet AFAIK (lowering resolution to game, or putting up with "bad" blur, isn't an option).

For me personally, my upgrade plan isn't going to involve a GPU upgrade until maybe one more gen down the road.
 
No loss of image quality? What video did you watch? Because blurring an image is a huge loss in image quality.

I have examined detailed comparisons in several videos, several articles, and a dozen or so still image comparisons. I have a very good idea what is going on.

TAA and DLSS trade blows. If you have an axe to grind with either method, you can fixate on some small area in some particular still crop and claim that method is worse. I can easily point out many areas where TAA is worse than DLSS (and vice-versa).

But in motion, they are essentially indistinguishable.

One of the guys in the video describes how he compared it. He watched the comparisons standing close to his 65" 4K TV, which means relatively low PPI from close up, and he said he couldn't tell the difference in motion and had to look at stills to pick out differences.
 
Apparently DLSS has a shimmering problem. And details like thin lines can be blurred out compared to TAA.

So this feature is not fully baked.

Well no, that's just the nature of NN upscaling -- it's not perfect by any means. It can't concretely upscale since the information is truly missing. And it doesn't add any desirable new "information" (mathematically) to an image. It just tries to look good, sort of like how lossy audio compression tries to avoid any artifacts that the human brain is prone to complaining about. The audio reproduction isn't the original, but it sounds good. The DLSS upscaled image won't be the same as a 4K image down-sampled from an 8K image, but it should "look good". Additional baking can't fix the issue; it's a mathematical limitation of the contained information.

I can explain "shimmering". The NN is interpreting a lower res image, and as you move about, the interpretation changes slightly. It's a tech where screenshots won't tell the whole story, nor will you spot it in a typical YT video. Someone will have to capture a subsection of the screen and magnify it.
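To illustrate the "information is truly missing" point, here's a toy NumPy sketch (not anything DLSS actually does): a one-pixel checker pattern, the finest detail a frame can hold, gets averaged away by 2x downsampling, and no upscaler can recover it exactly afterwards; a NN can only guess something plausible.

```python
import numpy as np

# Toy illustration of the "missing information" point above: a 1-pixel
# checker pattern (the finest detail a render can hold) is destroyed by
# 2x downsampling, and no upscaler -- learned or not -- can bring it back
# exactly, only hallucinate something that looks reasonable.

full = np.tile([1.0, 0.0], 8)            # 16-texel row of alternating bright/dark texels
low  = full.reshape(-1, 2).mean(axis=1)  # 2x downsample by averaging pairs -> all 0.5
up   = np.repeat(low, 2)                 # naive 2x upscale (nearest neighbour)

print(full)  # [1. 0. 1. 0. ...]  the original detail
print(low)   # [0.5 0.5 ...]      detail averaged away at the lower resolution
print(up)    # [0.5 0.5 ...]      upscaling can't recover it from the low-res data alone
```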
 
So the skinny is that the game goes back to NVIDIA, they run it through their AI and apply 64x supersampling at 8K, then hand it back off to the dev. The AI then uses that information to know what it should do for your camera in the game. So yeah, I am very much looking forward to checking DLSS out.
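For anyone curious what that workflow could look like in code, here's a minimal, purely illustrative training sketch in PyTorch. The tiny conv net, the crop sizes, and the L1 loss are my own assumptions for illustration; the only thing taken from the above is the idea of training against 64x supersampled reference frames, and this is not Nvidia's actual DLSS architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the training setup described above: a network learns to map
# lower-res frames to supersampled "ground truth" frames rendered offline.
# The architecture and sizes here are illustrative assumptions only.

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        # Upsample first, then let the conv layers restore plausible detail.
        x = F.interpolate(low_res, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return self.net(x)

model = ToyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for one (low-res frame, supersampled target) training pair.
low_res_frame = torch.rand(1, 3, 270, 480)   # e.g. a crop of the lower-res render
ground_truth  = torch.rand(1, 3, 540, 960)   # matching crop of the supersampled target

pred = model(low_res_frame)
loss = F.l1_loss(pred, ground_truth)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```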
 
Cough....Nvidia PhysX.....Cough
Exactly! Rather than opening up the tech as a development environment they are basically gatekeeping with this profile system.

So, when they inevitably move on to the Next Big Thing, we get Yet Another Unmaintained Legacy Feature...
 
Exactly! Rather than opening up the tech as a development environment they are basically gatekeeping with this profile system.

So, when they inevitably move on to the Next Big Thing, we get Yet Another Unmaintained Legacy Feature...

Open it up to what exactly? How many game developers do you know of that have deep learning supercomputers able to actually do this? And the tech relies on the tensor cores in the RTX cards, so it would be useless to anyone else. Nvidia pulls a lot of BS with their proprietary tech, but this one being more closed off seems to actually make sense.
 
So basically, wait a year and buy the next gen card when they work out all the bugs...

Why is everyone talking about next year? It took over two years before these came out, and Pascal was 20 months after Maxwell. The only, loose, estimations I'm seeing about Ampere are pointing to 2020. At most, we might see a 7nm refresh of Turing next year. Unless AMD can pull a Zen out of their ass with Navi, I doubt Nvidia is going to bother trying to rush anything new out next year.
 
Why is everyone talking about next year? It took over two years before these came out, and Pascal was 20 months after Maxwell. The only, loose, estimations I'm seeing about Ampere are pointing to 2020. At most, we might see a 7nm refresh of Turing next year. Unless AMD can pull a Zen out of their ass with Navi, I doubt Nvidia is going to bother trying to rush anything new out next year.

In theory, this technology is damn near useless right now, when Nvidia is asking top dollar for it. Coupled with the unprecedented release of the Ti card immediately and the impending move to 7nm, it sure seems like Nvidia doesn't plan on this gen being around 20 months...

If it is, and the price comes down to a reasonable level, maybe more people would be excited. I've said this since the pricing was revealed. If the 2080 series was at or near the same price as the outgoing 1080 series, nobody would complain. It's the 40% markup for marginal performance gains that has people poo-pooing Nvidia.
 
Well no, that's just the nature of NN upscaling -- it's not perfect by any means. It can't concretely upscale since the information is truly missing. And it doesn't add any desirable new "information" (mathematically) to an image. It just tries to look good, sort of like how lossy audio compression tries to avoid any artifacts that the human brain is prone to complaining about. The audio reproduction isn't the original, but it sounds good. The DLSS upscaled image won't be the same as a 4K image down-sampled from an 8K image, but it should "look good". Additional baking can't fix the issue; it's a mathematical limitation of the contained information.

I can explain "shimmering". The NN is interpreting a lower res image, and as you move about, the interpretation changes slightly. It's a tech where screenshots won't tell the whole story, nor will you spot it in a typical YT video. Someone will have to capture a subsection of the screen and magnify it.

Yes, but the shimmering effect is non-existent with other AA, including traditional supersampling, TXAA and FSAA; that is because they are rendering more.

What DLSS appears to be doing is checking to see if there is noise on movement (similar to temporal AA) and then using prediction analysis based on a noise vector to predict if it needs anti-aliasing. A moving wire that appears in and out of a pixel has a certain repeated frequency which AI would be good at recognizing.

Well, artificial intelligence doesn't always get it right, or I would not be yelling at Google when I ask for Chinese food and it Googles "Chinese fool".

I could be way off, but that's my initial hunch. It's a half-baked approach. Initially it's all noisy till there is a bit of movement for the AI to form test vectors for noise. Or that's what it appears to be from my video inspections.
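Here's a toy sketch of that "wire popping in and out of a pixel" idea, just to show where the shimmer comes from. It's plain aliasing arithmetic in Python, not a claim about how DLSS actually detects it: a 1-unit-wide wire drifting by sub-pixel amounts lands on a coarse sample point in some frames and misses it in others, so its brightness flickers frame to frame.

```python
import numpy as np

# Toy illustration of why thin features shimmer at a lower internal resolution:
# each coarse pixel is point-sampled at its center, so a thin wire drifting by
# sub-pixel amounts covers the sample on some frames and misses it on others.

def coarse_sample(wire_pos, n_pixels=8, pixel_size=2.0):
    """Point-sample each coarse pixel center; 1.0 if the 1-unit-wide wire covers it."""
    centers = (np.arange(n_pixels) + 0.5) * pixel_size
    return (np.abs(centers - wire_pos) < 0.5).astype(float)

for frame, pos in enumerate(np.arange(4.0, 6.0, 0.25)):  # wire drifts 0.25 units/frame
    print(f"frame {frame}: wire at {pos:4.2f} ->", coarse_sample(pos))
# The wire pops in and out between frames -- the "shimmer" a temporal/AI filter
# then has to detect and smooth over.
```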
 
Open it up to what exactly? How many game developers do you know of that have deep learning supercomputers able to actually do this? And the tech relies on the tensor cores in the RTX cards, so it would be useless to anyone else. Nvidia pulls a lot of BS with their proprietary tech, but this one being more closed off seems to actually make sense.

Then the feature is essentially DOOMED.

Because EVERY time NVIDIA acts as a gatekeeper on a function like this, leaving the industry reliant on them for "make it work" game profiles, they eventually run out of personnel "bandwidth" to get this done.
Because they wind up becoming, essentially, unpaid devs on these games and spending tons of time trying to dial it in and unfucking things as driver patches break stuff.
 
Then the feature is essentially DOOMED.

Because EVERY time NVIDIA acts as a gatekeeper on a function like this, leaving the industry reliant on them for "make it work" game profiles, they eventually run out of personnel "bandwidth" to get this done.
Because they wind up becoming, essentially, unpaid devs on these games and spending tons of time trying to dial it in and unfucking things as driver patches break stuff.

If it was AMD, I would concur. But Nvidia has plenty of resources to make it work. Sure, there are only a few upcoming games that support it, but the list is already growing, and you can bet every upcoming TWIMTBP game will come with DLSS.

DLSS is in its infancy and hopefully will get better in time. As it stands now, it's just a faster version of TAA. Which is pretty much what Nvidia promised.
 
If it was AMD, I would concur. But Nvidia has plenty of resources to make it work. Sure, there are only a few upcoming games that support it, but the list is already growing, and you can bet every upcoming TWIMTBP game will come with DLSS.

DLSS is in its infancy and hopefully will get better in time. As it stands now, it's just a faster version of TAA. Which is pretty much what Nvidia promised.

Exactly. TAA is one of the more popular AA modes currently, as it covers a wide range of aliasing artifacts and has a small performance loss.

Deliver similar quality and a 30%+ performance boost, and that is going to make DLSS extremely compelling.

They were already up to 25 games with announced support before the cards were even shipping.
 
If it was AMD, I would concur. But Nvidia has plenty of resources to make it work. Sure, there are only a few upcoming games that support it, but the list is already growing, and you can bet every upcoming TWIMTBP game will come with DLSS.

DLSS is in its infancy and hopefully will get better in time. As it stands now, it's just a faster version of TAA. Which is pretty much what Nvidia promised.


The fact that it requires NVIDIA-created profiles is going to bottleneck the technology. Such prerequisites have hobbled pretty much every other technology to date, from every vendor who has ever created such a scheme.
 