DLSS vs resolution scaling?

jhatfie

From what I have read about DLSS it sounds promising in theory, but in all the screenshots and videos I have seen it lacks detail and the images look soft compared to native resolution. Has anyone done a comparison of how performance and image quality stack up against regular resolution scaling?

For instance, in AC: Odyssey on my 4K monitor I ran tests at 100%, 80% and 70% scaling. In the same scene the average frame rate was 44, 60 and 67 fps respectively. I took a few screenshots so image detail could be compared. At each step down in scaling the detail certainly gets softer, but I wonder how DLSS would look in comparison, along with how the performance would compare. Yes, I know DLSS is not available in AC: Odyssey, but it is one of the few games I know of with built-in resolution scaling that I can quickly test with.
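For reference, here's a quick Python sketch of the effective render resolutions at those settings, assuming the slider scales each axis rather than the total pixel count (which is my understanding of how the AC: Odyssey slider works):

# Effective render resolution at each scale step on a 3840x2160 display,
# assuming the in-game slider scales each axis (not the total pixel count).
BASE_W, BASE_H = 3840, 2160

for scale in (1.00, 0.80, 0.70):
    w, h = round(BASE_W * scale), round(BASE_H * scale)
    print(f"{scale:.0%}: {w}x{h} ({w * h / (BASE_W * BASE_H):.0%} of native pixels)")

# 100%: 3840x2160 (100% of native pixels)
# 80%: 3072x1728 (64% of native pixels)
# 70%: 2688x1512 (49% of native pixels)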

70%: 20190216100736.jpg
80%: 20190216100939.jpg
100%: 20190216100900.jpg
 
FYI those images show up as 2000x1152 jpgs for me. I usually save screenshots as PNGs and upload them to a host that doesn't re-compress them (like lensdump) when making image comparisons.

Anyway, here's your 100% screenshot at 720p, zoomed in 2x:


https://lensdump.com/i/W8AJIM

Here's how the graphics driver might scale it to 1440p (Bilinear):


https://lensdump.com/i/W8A6UA
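If anyone wants to reproduce this kind of comparison themselves, here's a minimal Pillow sketch of the bilinear step (the filenames are placeholders, not the actual files above):

# Bilinear upscale, roughly what a basic driver/monitor scaler might do.
from PIL import Image

src = Image.open("screenshot_720p.png")              # placeholder: the 1280x720 crop
bilinear = src.resize((2560, 1440), Image.BILINEAR)  # upscale to 1440p
bilinear.save("upscaled_bilinear_1440p.png")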

Here's how a good "traditional" GPU upscaling algorithm might scale it, which could be done via an in-game shader or maybe ReShade (Spline36 + Finesharp):


https://lensdump.com/i/W8AQmQ
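Pillow doesn't have Spline36 or FineSharp, but Lanczos plus an unsharp mask is a rough stand-in for a sharp "traditional" resize (again, placeholder filenames and guessed sharpening numbers):

# Sharp "traditional" upscale: Lanczos resize + mild unsharp mask.
from PIL import Image, ImageFilter

src = Image.open("screenshot_720p.png")
up = src.resize((2560, 1440), Image.LANCZOS)
# The sharpening parameters here are just a guess, not a FineSharp equivalent.
sharp = up.filter(ImageFilter.UnsharpMask(radius=1.5, percent=80, threshold=2))
sharp.save("upscaled_lanczos_sharpened_1440p.png")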

Here's how an NN algorithm might upscale it. This isn't actually DLSS, but it's something similar (MSRN 2x):


https://lensdump.com/i/W8AVoa
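For the curious, something in the same spirit as the MSRN result can be done with OpenCV's dnn_superres module and a pretrained 2x model. This sketch uses FSRCNN rather than MSRN, needs opencv-contrib-python, and the model file has to be downloaded separately:

# Neural-network 2x upscale (FSRCNN as a stand-in, not DLSS or MSRN).
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()   # from opencv-contrib-python
sr.readModel("FSRCNN_x2.pb")                     # pretrained 2x model, downloaded separately
sr.setModel("fsrcnn", 2)                         # algorithm name + scale factor

img = cv2.imread("screenshot_720p.png")          # placeholder filename
out = sr.upsample(img)                           # 1280x720 -> 2560x1440
cv2.imwrite("upscaled_nn_1440p.png", out)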

EDIT: Fixed broken images, sorry. Follow the links and click the image for full res.
 
Hardly any difference between those, yet DLSS looks crap.
 
Hardly any difference between those, yet DLSS looks crap.

Well, DLSS renders at 1440p for 4K output, for example, which works out to about 44% of the pixels, or roughly a 67% scale per axis. But you generally don't get that much of a performance gain... 70-80% resolution scale is probably the fairer comparison in practice, since that gives about the same performance uplift as DLSS.

Overall I was more excited for DLSS than RT, and it's been a complete bust so far. It introduces shimmering, which is the graphical defect I hate most. DLSS 2X is specifically what I care about and shouldn't have shimmering: it renders at your native resolution and uses the network to approximate a higher-quality, supersampled image. I won't hold my breath on it being released anytime soon. And I'm not fond of how non-generic DLSS is. Certain video cards and only certain resolutions... just give me more CUDA cores instead.

Anywho, the closest comparison might be DLSS vs 1800p.

I think it was this article. https://www.techspot.com/article/1712-nvidia-dlss/
 
It introduces shimmering, which is the graphical defect I hate most. DLSS 2X is specifically what I care about and shouldn't have shimmering.

So by DLSS 2X, you mean render at native res, upscale, then downscale it?

Unfortunately, if DLSS already has shimmering, DLSS 2X wouldn't solve that unless Nvidia overhauls the algorithm.

Also, upscaling then downscaling to the same res doesn't work as well as you'd think, and running a neural network upscaler on a full frame takes a lot of horsepower. You'd almost always be better off rendering the game natively at a slightly higher res, and then downscaling it a little.

Here's that car from the Techspot article:

https://lensdump.com/i/W8ID2P

Upscaled, then downscaled, using bicubic:

https://lensdump.com/i/W8IooZ

Upscaled using a neural network, downscaled using bicubic:

https://lensdump.com/i/W8IYBK
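For anyone who wants to replicate the up-then-down test, a minimal Pillow sketch of the bicubic-both-ways version (placeholder filename for the source crop):

# Upscale 2x, then scale back down to the original size.
from PIL import Image

src = Image.open("techspot_car_crop.png")        # placeholder for the source crop
w, h = src.size
up = src.resize((w * 2, h * 2), Image.BICUBIC)   # upscale 2x
down = up.resize((w, h), Image.BICUBIC)          # back down to the original size
down.save("bicubic_up_then_down.png")            # round trip adds no real detail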
 
DLSS is a non-starter for me unless Nvidia can support arbitrary resolutions (or at least train their model on a wider range).

I'm on 1080p ultrawide and that doesn't even seem like an option. I might even be willing to run at vanilla 1080p (with black bars) but the 2080 Ti seems to only work at 4K, which is not ideal.

The tech sounds decent, though, so I hope they can expand in the future.
 
DLSS is a non-starter for me unless Nvidia can support arbitrary resolutions (or at least train their model on a wider range).

I'm on 1080p ultrawide and that doesn't even seem like an option. I might even be willing to run at vanilla 1080p (with black bars) but the 2080 Ti seems to only work at 4K, which is not ideal.

The tech sounds decent, though, so I hope they can expand in the future.

The reason why DLSS is not present at lower resolutions on the 2080 Ti is that the Tensor cores cannot keep up with the framerates...simple physics.
 
DLSS is a non-starter for me unless Nvidia can support arbitrary resolutions (or at least train their model on a wider range).

I'm on 1080p ultrawide and that doesn't even seem like an option. I might even be willing to run at vanilla 1080p (with black bars) but the 2080 Ti seems to only work at 4K, which is not ideal.

The tech sounds decent, though, so I hope they can expand in the future.
The reason why DLSS is not present at lower resolutions on the 2080 Ti is that the Tensor cores cannot keep up with the framerates...simple physics.

The whole idea behind DLSS is to make rendering games at non-native resolutions look better than just letting the driver/monitor upscale the image, hence it's called "deep learning super sampling" and not "deep learning anti-aliasing."

An RTX 2080 Ti has no problem running any DLSS-enabled game in existence on a 1080p ultrawide at its native resolution. So Nvidia didn't disable DLSS because an RTX 2080 Ti can't keep up; they disabled it because there's no reason to run the game at 720p or whatever the base resolution would be.

If you want anti-aliasing at native resolution that's powered by the Tensor cores, developers will have to cook up something different.
 
The whole idea behind DLSS is to make rendering games at non-native resolutions look better than just letting the driver/monitor upscale the image, hence it's called "deep learning super sampling" and not "deep learning anti-aliasing."

An RTX 2080 Ti has no problem running any DLSS-enabled game in existence on a 1080p ultrawide at its native resolution. So Nvidia didn't disable DLSS because an RTX 2080 Ti can't keep up; they disabled it because there's no reason to run the game at 720p or whatever the base resolution would be.

If you want anti-aliasing at native resolution that's powered by the Tensor cores, developers will have to cook up something different.
DLSS 2x ...
 
An RTX 2080 Ti has no problem running any DLSS-enabled game in existence on a 1080p ultrawide at its native resolution.
Mostly true, but for BF5 ray-tracing is really intensive and I'm trying for high refresh (my monitor is 166Hz).
 
Ah, so I missed Nvidia's actual DLSS 2x talk: https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

My apologies.

Yeah, DLSS 2x would be nice on an ultrawide, or at any res. Why Nvidia wouldn't include it, I don't know...

My guess is they are focusing all their resources on using DLSS to increase ray-tracing fps. I'm also thinking it's harder than they originally imagined...

Hopefully they figure it out and make it automated so it's more universal with regard to resolutions, etc.
 
The reason why DLSS is not present at lower resolutions on the 2080 Ti is that the Tensor cores cannot keep up with the framerates...simple physics.
I thought about it more, and I think I understand.

If the cost for running DLSS is more than the brute force approach (rendering at full resolution), then it could potentially cause a *drop* in performance and Nvidia was probably trying to avoid that PR backlash.
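A toy break-even model of that idea (all numbers are made up for illustration, not measured DLSS timings):

# DLSS only wins when the pixel-shading time it saves exceeds its fixed per-frame cost.
def dlss_wins(native_frame_ms, shading_fraction, pixel_ratio, dlss_cost_ms):
    # Frame time when only the pixel-bound portion shrinks with resolution,
    # plus a fixed cost for running the network each frame.
    scaled = (native_frame_ms * (1 - shading_fraction)
              + native_frame_ms * shading_fraction * pixel_ratio)
    return scaled + dlss_cost_ms < native_frame_ms

# ~60 fps at 4K (16.7 ms/frame), 1440p input (~44% of the pixels): the fixed cost is absorbed.
print(dlss_wins(16.7, shading_fraction=0.6, pixel_ratio=0.44, dlss_cost_ms=3.0))  # True
# ~144 fps (6.9 ms/frame): the same fixed cost eats the savings, so forcing DLSS would lose fps.
print(dlss_wins(6.9, shading_fraction=0.6, pixel_ratio=0.44, dlss_cost_ms=3.0))   # False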
 
Well, DLSS renders at 1440p for 4K output, for example, which works out to about 44% of the pixels, or roughly a 67% scale per axis. But you generally don't get that much of a performance gain... 70-80% resolution scale is probably the fairer comparison in practice, since that gives about the same performance uplift as DLSS.

Overall I was more excited for DLSS than RT, and it's been a complete bust so far. It introduces shimmering, which is the graphical defect I hate most. DLSS 2X is specifically what I care about and shouldn't have shimmering: it renders at your native resolution and uses the network to approximate a higher-quality, supersampled image. I won't hold my breath on it being released anytime soon. And I'm not fond of how non-generic DLSS is. Certain video cards and only certain resolutions... just give me more CUDA cores instead.

Anywho, the closest comparison might be DLSS vs 1800p.

I think it was this article. https://www.techspot.com/article/1712-nvidia-dlss/


I've been saying this all along. I get the RT cores and feel they are the right way to go and a good first step. Why Nvidia didn't just replace the Tensor cores with more CUDA cores, given how efficient their architecture is, is mind-boggling. Had they gotten another 15-20% more performance from CUDA cores instead of those Tensor cores, it would have been much better for everyone. Or just use that die space for even more RT cores. I would have honestly taken more CUDA cores alongside the RT cores. They wouldn't even have needed DLSS to increase performance had they dedicated that die space to more CUDA cores.
 
I've been saying this all along. I get the RT cores and feel they are the right way to go and a good first step. Why Nvidia didn't just replace the Tensor cores with more CUDA cores, given how efficient their architecture is, is mind-boggling. Had they gotten another 15-20% more performance from CUDA cores instead of those Tensor cores, it would have been much better for everyone. Or just use that die space for even more RT cores. I would have honestly taken more CUDA cores alongside the RT cores. They wouldn't even have needed DLSS to increase performance had they dedicated that die space to more CUDA cores.
Tensor cores are required for DXR to clean up the image from all the noise that shooting only one ray per pixel (or fewer) produces. The idea is also that games will actually use the Tensor cores for AI and graphical effects which would otherwise be impossible using CUDA cores alone.

For ray tracing the biggest bottleneck is not the ray intersection calculations (as it would be with no RT cores present) but the shading work done for each ray intersection. RT cores by themselves do not take that much space, but together with the Tensor cores they do add up to a lot of die area. Given both are an absolute must for real-time ray tracing, there was no other way... except maybe to wait for 7 nm with all these features.

What is funniest is that games already run stupidly fast on modern high-end GPUs, even with ray tracing, and people complain as if going without this 20-30% of extra performance were some tragedy. It is as funny as it is sad...
 
An RTX 2080 Ti has no problem running any DLSS-enabled game in existence on a 1080p ultrawide at its native resolution. So Nvidia didn't disable DLSS because an RTX 2080 Ti can't keep up; they disabled it because there's no reason to run the game at 720p or whatever the base resolution would be.

From Nvidia's recent article https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/

DLSS requires a fixed amount of GPU time per frame to run the deep neural network. Thus, games that run at lower frame rates (proportionally less fixed workload) or higher resolutions (greater pixel shading savings), benefit more from DLSS. For games running at high frame rates or low resolutions, DLSS may not boost performance. When your GPU’s frame rendering time is shorter than what it takes to execute the DLSS model, we don’t enable DLSS. We only enable DLSS for cases where you will receive a performance gain. DLSS availability is game-specific, and depends on your GPU and selected display resolution.

Seems like Nvidia's aim at the moment is to use DLSS to give higher framerates, but the tech still needs work because the end results are not really better than just running at a lower resolution, or at a lower resolution scale if a game supports it.
 
Tensor cores are required for DXR to clean up the image from all the noise that shooting only one ray per pixel (or fewer) produces. The idea is also that games will actually use the Tensor cores for AI and graphical effects which would otherwise be impossible using CUDA cores alone.

For ray tracing the biggest bottleneck is not the ray intersection calculations (as it would be with no RT cores present) but the shading work done for each ray intersection. RT cores by themselves do not take that much space, but together with the Tensor cores they do add up to a lot of die area. Given both are an absolute must for real-time ray tracing, there was no other way... except maybe to wait for 7 nm with all these features.

What is funniest is that games already run stupidly fast on modern high-end GPUs, even with ray tracing, and people complain as if going without this 20-30% of extra performance were some tragedy. It is as funny as it is sad...
The Tensor cores effectively double the size of the SM, and as you said NVIDIA's implementation of hardware ray tracing can't work without them. It isn't as simple as replacing the Tensor cores with more CUDA cores.
 
Tensor cores are required for DXR to clean up the image from all the noise that shooting only one ray per pixel (or fewer) produces. The idea is also that games will actually use the Tensor cores for AI and graphical effects which would otherwise be impossible using CUDA cores alone.

For ray tracing the biggest bottleneck is not the ray intersection calculations (as it would be with no RT cores present) but the shading work done for each ray intersection. RT cores by themselves do not take that much space, but together with the Tensor cores they do add up to a lot of die area. Given both are an absolute must for real-time ray tracing, there was no other way... except maybe to wait for 7 nm with all these features.

What is funniest is that games already run stupidly fast on modern high-end GPUs, even with ray tracing, and people complain as if going without this 20-30% of extra performance were some tragedy. It is as funny as it is sad...

I get you about games running stupid fast. But let's relax about ray tracing being stupid fast. It's not! My 2080 Ti says otherwise. It's not sad to ask for extra raw performance at the prices they are charging. I happened to get a good deal on my 2080 Ti, but at $1,300 plus tax for aftermarket cards it's never too much to ask for 20-30% more performance.
 