RTX doesn't necessarily mean "Ray Tracing" support in games according to this video

Brent_Justice

Moderator
Joined
Apr 17, 2000
Messages
17,755
Might be a re-post, not sure, but RTX doesn't necessarily mean "Ray Tracing" support in games.

RTX can mean either Ray Tracing or DLSS (Deep Learning Super Sampling) AI support in games. Only one of those has to be true for a game to be called "RTX enabled." That means games advertised with RTX support may only support DLSS and not actual Ray Tracing. According to this video:



Just thought I'd pass that along
 
is DLSS a form of anti-aliasing?...or does it only deal with AI in games?...confusing naming scheme
 
is DLSS a form of anti-aliasing?...or does it only deal with AI in games?...confusing naming scheme

From what I understand, DLSS is a new form of AA.

It’s a form of smart supersampling using AI. So yeah, AA.

From the Jensen presentation: they take lesser-quality images from a game along with "perfect" reference images. They keep iterating until the AI can guess the perfect image from only the lesser-quality one, then ship that iteration in the drivers (I am assuming it’s driver-side). Jensen used the example of turning 1440p into 4K.
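Here's a rough sketch of the kind of training loop that description implies, assuming a simple supervised setup in PyTorch. The model shape, loss, and data are all stand-ins I made up for illustration, not anything Nvidia has confirmed:

```python
# Toy sketch of "iterate until the AI can guess the perfect image from
# the lesser-quality one." All shapes and sizes are made-up stand-ins.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Tiny 2x upscaler standing in for the real (unknown) DLSS network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into a 2x larger image
        )

    def forward(self, x):
        return self.net(x)

model = ToyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):                          # "keep iterating"
    lesser = torch.rand(4, 3, 180, 320)          # stand-in for lower-quality frames
    perfect = torch.rand(4, 3, 360, 640)         # stand-in for the "perfect" frames
    guess = model(lesser)
    loss = loss_fn(guess, perfect)               # how far the guess is from perfect
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# Per the post above, the resulting weights would then ship driver-side.
```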
 
Then they should have just called it DLAA to match the other AA names (FXAA, SMAA, etc.)
 
All I know is I hope it's cleaner-looking than TAA and FXAA. But so far it seems like it will be used for upsampling from a lower resolution, which makes me think it can be blurry, and that is why the performance increase appears so significant in their slides. I could be completely misunderstanding this tech, though.

e.g. You have your resolution set to 4K in-game; DLSS sets the render resolution to 1800p, then upscales that image to 4K and fills in the blanks based on whatever the deep-learning algorithm they ran it through came up with.

I can see DLSS having some types of artifacts where the deep-learning algorithm failed. Hopefully it's better than what I'm expecting. I bet it gets used side by side with DSR when going for a native-resolution image with supersampled-like quality. Could be nice: running at 4K native with an 8K-supersampled image at the performance hit of 5K or 6K resolution.
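If that guess is right, the runtime flow might look something like this sketch. render_frame() is a hypothetical stand-in for the game's renderer, the upscaler could be any trained network like the toy one above, and the final resize is my own assumption for when the model's fixed scale factor doesn't land exactly on 4K:

```python
# Speculative 1800p -> 4K flow: render low, let the learned model
# "fill in the blanks," and present at the display resolution.
import torch
import torch.nn.functional as F

TARGET = (2160, 3840)   # 4K output the user selected in-game
RENDER = (1800, 3200)   # lower internal render resolution ("1800p")

def render_frame(height, width):
    """Hypothetical stand-in for the game's renderer."""
    return torch.rand(1, 3, height, width)

@torch.no_grad()
def present_frame(model):
    frame = render_frame(*RENDER)        # game renders below the target res
    upscaled = model(frame)              # learned model fills in the detail
    # Assumption: a plain resize covers the remainder if the model's
    # fixed scale factor overshoots or undershoots 4K.
    return F.interpolate(upscaled, size=TARGET, mode="bilinear", align_corners=False)
```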
 
Good question. Looks like an "RTX" game title doesn't necessarily include ray tracing, despite all the hype about how important ray tracing is.

Did Nvidia go into as much detail on DLSS?

Does this mean we have to turn off anti-aliasing and enable DLSS in RTX games to see the benefits of DLSS? Does DLSS only work on upscaling?

Or will we see benefits at the same resolution while using DLSS?

How will DLSS with anti-aliasing off compare to low-end anti-aliasing such as FXAA or 2x MSAA?

A lot of unanswered questions. I hope [H] benchmarking will do a deep dive on this less-hyped feature, one Nvidia touts nonetheless, in particular on its slide claiming the RTX 2080 doubles the performance of the GTX 1080 with DLSS on.
 
All I know is I hope it's cleaner-looking than TAA and FXAA. But so far it seems like it will be used for upsampling from a lower resolution, which makes me think it can be blurry, and that is why the performance increase appears so significant in their slides. I could be completely misunderstanding this tech, though.

e.g. You have your resolution set to 4K in-game; DLSS sets the render resolution to 1800p, then upscales that image to 4K and fills in the blanks based on whatever the deep-learning algorithm they ran it through came up with.

I can see DLSS having some types of artifacts where the deep-learning algorithm failed. Hopefully it's better than what I'm expecting. I bet it gets used side by side with DSR when going for a native-resolution image with supersampled-like quality. Could be nice: running at 4K native with an 8K-supersampled image at the performance hit of 5K or 6K resolution.
From what I took from the keynote:

DLSS will determine where the edges are
DLSS will determine the sampling rate
DLSS should blur much of the image since the entire image won’t be sampled

It sounds a lot like temporal AA, but he also said this replaces that. It also has its own dedicated hardware to run on, which explains some of the posted numbers.
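Here's a speculative sketch of those first two bullets, if they turn out to be right: find the edges, then spend more samples there and fewer on flat regions. This is my guess at the idea, not how DLSS is confirmed to work:

```python
# Guess at "determine where the edges are" + "determine the sampling
# rate": a Sobel edge map drives samples-per-pixel.
import torch
import torch.nn.functional as F

def edge_map(frame):
    """Rough edge strength via a Sobel filter on a grayscale frame."""
    gray = frame.mean(dim=1, keepdim=True)               # (N, 1, H, W)
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    gx = F.conv2d(gray, sobel_x, padding=1)
    gy = F.conv2d(gray, sobel_x.transpose(2, 3), padding=1)
    return (gx ** 2 + gy ** 2).sqrt()

def sampling_rate(frame, base=1, extra=7):
    """More samples per pixel near edges, fewer in flat areas."""
    edges = edge_map(frame)
    edges = edges / (edges.max() + 1e-8)                 # normalize to [0, 1]
    return base + (extra * edges).round()                # samples per pixel

frame = torch.rand(1, 3, 180, 320)
print(sampling_rate(frame).unique())                     # roughly 1-8 samples/pixel
```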
 
Tainted posted a pic in a different thread. It looked really good. Did a good job of not blurring the image. Like, probably better than a human could with a bunch of time on their hands...

I go back to the point that they could have used CUDA cores instead of tensor cores. They must believe there’s a huge benefit to DLSS to do it this way.
 
Tainted posted a pic in a different thread. It looked really good. Did a good job of not blurring the image. Like, probably better than a human could with a bunch of time on their hands...

I go back to the point that they could have used CUDA cores instead of tensor cores. They must believe there’s a huge benefit to DLSS to do it this way.
Dedicated hardware will be better than general-use compute cores. If the older generations can use DLSS, their CUDA cores are probably going to be used.
 
From what I took from the keynote:

DLSS will determine where the edges are
DLSS will determine the sampling rate
DLSS should blur much of the image since the entire image won’t be sampled

It sounds a lot like temporal AA, but he also said this replaces that. It also has its own dedicated hardware to run on, which explains some of the posted numbers.

It uses AI based on trillions of iterations done at nVidia for a particular game. It doesn’t use any traditional AA.

I put the DLSS keynote below if anyone is interested:

https://www.twitch.tv/videos/299680425?t=9816s
 
I didn’t say it used AA, but it does sample the screen.

It takes the image and converts it based on what the supercomputers at nVidia determined most closely approximated the perfect image. I guess it does “sample” the original image. I think it converts the entire image, though, not just pieces. Kind of like a smart image filter. There is no set “way”: generally with AI they start with a blank sheet, and through the billions/trillions of iterations it finds the best path. It writes its own code, in a sense...
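As a sketch of that "smart image filter" framing, assuming something like the toy model from earlier in the thread: the whole frame goes in, a converted whole frame comes out, and the behavior lives entirely in the learned weights rather than hand-written rules:

```python
# "Smart image filter" framing: no explicit rules here; whatever "code"
# the training found is baked into the model's weights.
import torch

@torch.no_grad()
def smart_filter(model, frame):
    return model(frame)   # entire image in, converted entire image out

# e.g. filtered = smart_filter(ToyUpscaler(), torch.rand(1, 3, 180, 320))
```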


I’d love to read more about it if anyone finds something good.

If you want to see something really cool, look up the DOTA 2 AI vs. professional players. They let it train from a blank sheet over two weeks; by the end it was mauling the best humans on the planet, and it taught itself.
 