RTX - Does DLSS really use checkerboard rendering? DLSS 2X is the real thing.

Snowdog

Until now NVidia has been very circumspect about how DLSS is supposed to improve performance.

But more beans were spilled today, and it looks like the real answer is that they render at a lower resolution and upscale:
https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/
"Whereas TAA renders at the final target resolution and then combines frames, subtracting detail, DLSS allows faster rendering at a lower input resolution, and then infers a result that at target resolution is similar quality to the TAA result, but with half the shading work. "

Lower input resolution, and half the shading work. That implies checkerboard rendering to me.

So it's no surprise it is much faster, when it is likely cutting rendering to half size, like some PS4 games.
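Just to put rough numbers on "half the shading work" (this is my own back-of-envelope, assuming a 4K output target; none of these figures come from NVIDIA):

```python
# Back-of-envelope shading cost vs. native 4K. The candidate input resolutions
# are my own guesses for illustration, not anything NVIDIA has confirmed.
target = 3840 * 2160  # native 4K: ~8.3M shaded pixels

candidates = {
    "native 4K": 3840 * 2160,
    "checkerboard (half of 4K)": 3840 * 2160 // 2,
    "1440p input, upscaled": 2560 * 1440,
    "1800p input, upscaled": 3200 * 1800,
}

for name, pixels in candidates.items():
    print(f"{name:28s} {pixels / 1e6:5.2f}M px  ({pixels / target:.0%} of 4K shading)")
```

Half the shading work lines up with shading roughly half as many pixels as native 4K, which is what a checkerboard pattern gives you, or an internal resolution somewhere between 1440p and 1800p.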

Also, we now find out there is a DLSS 2X mode, which seems to render at native resolution and then run a more complex DL network for much higher quality:

"DLSS 2X. In this case, DLSS input is rendered at the final target resolution and then combined by a larger DLSS network to produce an output image that approaches the level of the 64x super sample rendering – a result that would be impossible to achieve in real time by any traditional means. Figure 21 shows DLSS 2X mode in operation, providing image quality very close to the reference 64x super-sampled image."

It kind of looks like we were marketed the performance benefits of regular DLSS (without being told it runs at a lower resolution) and the quality benefits of DLSS 2X, which was unknown until today and comes with an unknown performance hit.

I am a little less impressed with DLSS than I was before.
 
It was explained during the keynote at Gamescom that it takes a lower-resolution rendering and upscales it using AI algorithms created by supercomputers at nVidia.

It is absolutely nothing like what a PS4 would do (besides starting with a lower res picture).

DLSS 2X is where it’s at for me. Holy shit can’t wait.
 
Upscaling any image or video will affect quality. It remains to be seen how the image quality will turn out with this tech. DLSS 2X will be interesting to see.
 
DLSS 2X is also what I am interested in. But until now, I thought that is what we were getting at low overhead; now there is no mention of the performance hit.
 
Not surprised that images and graphs from nvidia have ambiguous and dissimilar settings between old and new. Pretty much anything they have "leaked" or shown for RTX has had an element of suspicion. If you aren't being specific, then there's always a reason.
 
It was explained during the keynote at Gamescom that it takes a lower-resolution rendering and upscales it using AI algorithms created by supercomputers at nVidia.

It is absolutely nothing like what a PS4 would do (besides starting with a lower res picture).

DLSS 2X is where it’s at for me. Holy shit can’t wait.

I thought the AI was to determine which part of the screen got DLSS?

Not surprised that images and graphs from nvidia have ambiguous and dissimilar settings between old and new. Pretty much anything they have "leaked" or shown for RTX has had an element of suspicion. If you aren't being specific, then there's always a reason.

They only showed the recent UE demo running it in real time, you can determine what you want based off of that. I thought it looked pretty damn good.
 
I thought the AI was to determine which part of the screen got DLSS?



They only showed the recent UE demo running it in real time, you can determine what you want based off of that. I thought it looked pretty damn good.

AI performs the “DLSS”. It’s kinda like facial recognition... but takes it a step further: it takes your face (which might be blurry from your friend’s shitty phone) and makes it look like it would on your best day. I am terrible with analogies but you might know where I am going.

If you look at nVidia’s work with enhancing still pictures it’s very similar. Or AI in general. It doesn’t decide for another process to be applied; it IS the process.

Maybe think of it as a dynamic, hugely complicated filter designed for that specific game by nVidia’s supercomputers? It keeps iterating until it has the least amount of error when comparing the blurry pictures (maybe half the resolution) to the perfect “ground truth”, and then shoves that code into the drivers.
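Something like this toy sketch, if it helps (purely illustrative; the network, the sizes, and the random tensors standing in for game frames are all my own placeholders, not what nVidia actually runs):

```python
# Toy version of the idea: train a small upscaler so that its output from a
# half-resolution frame matches a 64x-supersampled "ground truth" frame.
# Everything here (model, sizes, random data) is a placeholder, not NVIDIA's pipeline.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # predict a 2x2 block of sub-pixels per input pixel
            nn.PixelShuffle(2),                  # rearrange those channels into a 2x-larger image
        )

    def forward(self, x):
        return self.net(x)

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    low_res = torch.rand(4, 3, 270, 480)       # stand-in for aliased half-res frames
    ground_truth = torch.rand(4, 3, 540, 960)  # stand-in for the 64xSSAA reference frames
    opt.zero_grad()
    loss = loss_fn(model(low_res), ground_truth)  # "least amount of error" vs. ground truth
    loss.backward()
    opt.step()
```

Once the error stops improving, the trained weights are (as I understand it) what gets shipped down to us per game.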

I am interested in DLSS 2x since I run 3440x1440. It takes the native and upscales it then back down. I don’t need more speed but “free” good AA would be huge.
 
I'm pretty excited that the whole "let me enhance this image" bullshit in all the CSI shows and sci-fi films taking 320 x 240 video camera images and turning them into 4K is actually happening. XD

[GIF: Super Troopers "enhance" scene]
 
Lol yeah that’s the first thing that came to mind when I saw DLSS.

That's interesting because every time I read about DLSS and NVIDIA's supercomputers I just think marketing BS. If they had real 4K performance in the bag, then none of this would be necessary.
 
That's interesting because every time I read about DLSS and NVIDIA's supercomputers I just think marketing BS. If they had real 4K performance in the bag, then none of this would be necessary.

True. Just wait for real 4K then.
 
If you've ever played a PS4 Pro game at 4K, checkerboarding actually works wonders. Sure it's not "real" 4K, but it looks great and with great performance (which is the end goal).

I'm glad to see this coming to PC, and we don't yet know how well the AI upscaling works. It's too bad they couldn't do it in a generic way that would support existing games without updates.
 
If you've ever played a PS4 Pro game at 4K, checkerboarding actually works wonders. Sure it's not "real" 4K, but it looks great and with great performance (which is the end goal).

I'm glad to see this coming to PC, and we don't yet know how well the AI upscaling works. It's too bad they couldn't do it in a generic way that would support existing games without updates.

True. I read a technical PDF on how they did the checkerboard rendering in Horizon ZD. It's a pretty amazing bit of technology that goes way beyond scaling. The end result is really very close to native 4K.

I have no issue with NVidia offering something similar, but they should be more clear about what is going on.
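For anyone curious, the basic idea is something like this deliberately naive sketch (the real Horizon ZD technique adds temporal reprojection, ID buffers, etc., which I'm leaving out):

```python
# Naive checkerboard illustration: shade only half the pixels each frame,
# then fill in the holes from neighbours. Real implementations reuse data
# from the previous frame instead of plain spatial interpolation.
import numpy as np

full = np.random.rand(8, 8)  # stand-in for a fully rendered frame

# Shade only half the pixels, in a checkerboard pattern.
mask = (np.indices(full.shape).sum(axis=0) % 2) == 0
rendered = np.where(mask, full, 0.0)

# Fill each missing pixel from the average of its rendered left/right neighbours.
left = np.roll(rendered, 1, axis=1)
right = np.roll(rendered, -1, axis=1)
recon = rendered.copy()
recon[~mask] = ((left + right) / 2)[~mask]

print("mean abs error vs. full render:", np.abs(recon - full)[~mask].mean())
```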
 
True. I read a technical PDF on how they did the checkerboard rendering in Horizon ZD. It's a pretty amazing bit of technology that goes way beyond scaling. The end result is really very close to native 4K.

I have no issue with NVidia offering something similar, but they should be more clear about what is going on.

https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

It’s near the end. They compare DLSS 2X to 64x SSAA in a figure, which is perfect and what I want...
 
I'm pretty excited that the whole "let me enhance this image" bullshit in all the CSI shows and sci-fi films taking 320 x 240 video camera images and turning them into 4K is actually happening. XD
You might be able to turn a 320x200 image into 4K with deep learning, but you won't be able to run facial recognition on it, and it will be inadmissible (we should already start educating judges on this issue well ahead of time).
However mighty deep learning might be, it can't fill in information that is not there in the first place, so it makes up the details. Enhancing a 320x200 face to 4K might look like a 4K face, but it certainly won't look exactly like the face of the person who was in the original.
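A tiny example of what I mean by "the information is not there" (just arithmetic, nothing to do with any particular upscaler):

```python
# Two completely different 2x2 high-res patches that average down to the exact
# same low-res pixel. After the downscale they are indistinguishable, so any
# upscaler has to guess which one (if either) was the original.
import numpy as np

patch_a = np.array([[1.0, 0.0],
                    [0.0, 1.0]])  # a diagonal edge
patch_b = np.array([[0.5, 0.5],
                    [0.5, 0.5]])  # flat grey

print(patch_a.mean(), patch_b.mean())  # both 0.5 -> the same downscaled pixel
```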
 
You might be able to turn a 320x200 image into 4K with deep learning, but you won't be able to run facial recognition on it, and it will be inadmissible (we should already start educating judges on this issue well ahead of time).
However mighty deep learning might be, it can't fill in information that is not there in the first place, so it makes up the details. Enhancing a 320x200 face to 4K might look like a 4K face, but it certainly won't look exactly like the face of the person who was in the original.

In photography we call this 'false detail' or something similar, and it is not favorable; pushed too far, it goes from enhancing the image to being destructive.

I don't see Nvidia taking DLSS that far except perhaps as an extreme performance enhancer for lower end hardware. No different than how you can bury settings in some games (fewer these days) and make them look terrible to the point of being almost unrecognizable.
 
Although for the sake of GAMES, where the absolute truth of the image isn't necessary, being able to fill in good detail on anything... is a heck of a good piece of technology.
 
Although for the sake of GAMES, where the absolute truth of the image isn't necessary, being able to fill in good detail on anything... is a heck of a good piece of technology.

That's most photography too ;)

[surprisingly, getting the 'truth' in photography is actually extremely hard...]
 
This explains why the TAA vs DLSS (non-2X) image actually looked WORSE. The texture fidelity went out the window, and it all makes sense since it's a lower-res upscale. Correct me if I'm wrong, but this really isn't all that different than those shady tactics back in the day of dropping some quality settings and claiming a performance boost. I don't remember specifically what it was that nV was guilty of, but I do remember ATi being guilty of it with their "Catalyst AI" setting, which would "imperceptibly" lower texture quality to improve performance. Problem was, it was noticeable, which is why they were called out on it... lol Which is a bit ironic given it wasn't really AI doing it back then, but nV has taken the ball and run with it, accomplishing that exact thing! :p

DLSS has seemed shady to me from the start and the more I'm reading, the more skeptical I am.

The biggest thing I'm interested to find out is how large these "Deep Neural Network Models" are to download per game. From how it sounds to me, these aren't just a small shader-like file, but will end up being a rather sizable (multi-GB) file. The reason I feel that'll be the case is that it's the product of running the game through their DNN supercomputers at 64xSSAA, and this is the calculated result. So my two guesses are that it's either: A) a compilation of every game scene viewed at every possible angle in order to produce all potential foreground/background views and their AA patterns; or B) having passed all the models and textures through their computers and determined a form of transparent AA effect for them all, so that whatever's in the background is able to seamlessly blend but not distort the calculated AA, but being that it's how every model/texture with AA will look, it again ends up as a sizable file. In either case, it's the Tensor Cores' job to determine how it's applied to the scene on the consumer's side.

Even if both guesses are incorrect in the end, I will still be really surprised if the result isn't a large download. While I personally have no plans to ever buy nV components, I still dislike it if that's the case, since we're at a time where the internet has actually never been worse. Either people have data caps (be it hundreds of GBs, or cellular like myself at 24GB/mo) and another download to get the most from your game is just silly... Or a game receives an update and, due to how games are distributed, a 4KB update requires you to download the entire goddamn 1.6GB file again! I sure do miss the days when an update to a game was a simple 5MB to 100MB EXE file that patched in the updated content --and don't get me started on how back then we'd receive new content and features for free in these patches, whereas now they are made into DLC-- but with the way Steam operates, we are force fed entire game archives full of unchanged files. Or a game with multiple voiced languages, which is great and a commendable choice by the developer, but we're forced to download that additional 4GB of language files that we'll never use... Because it's not like it's on physical media that we can re-sell to someone who actually may want to play it in that other language :meh: [/ramble]

Time will tell!
 
The biggest thing I'm interested to find out is how large these "Deep Neural Network Models" are to download per game. From how it sounds to me, these aren't just a small shader-like file, but will end up being a rather sizable (multi-GB) file. The reason I feel that'll be the case is that it's the product of running the game through their DNN supercomputers at 64xSSAA, and this is the calculated result.

I seriously doubt Nvidia is going to run every individual game through their supercomputer array as they come along. Instead they have used AI-driven modeling to generate an algorithm that is capable of taking any scene in any game and processing that to generate the DLSS upscaled image. Similar to how you could train an AI to recognize if a picture contains a cat, you only need to give it enough examples of a cat for it to be able to tell if any image fed to the trained algorithm contains a cat. There are probably a bunch of parameters that developers can use to fine tune the DLSS results for better accuracy.
 
In photography we call this 'false detail' or something similar, and it is not favorable; pushed too far, it goes from enhancing the image to being destructive.

I don't see Nvidia taking DLSS that far except perhaps as an extreme performance enhancer for lower end hardware. No different than how you can bury settings in some games (fewer these days) and make them look terrible to the point of being almost unrecognizable.
I don't mean DLSS specifically. That is an entirely different application. But deep learning in general can achieve this in a few years, imo.
 
I seriously doubt Nvidia is going to run every individual game through their supercomputer array as they come along. Instead they have used AI-driven modeling to generate an algorithm that is capable of taking any scene in any game and processing that to generate the DLSS upscaled image. Similar to how you could train an AI to recognize if a picture contains a cat, you only need to give it enough examples of a cat for it to be able to tell if any image fed to the trained algorithm contains a cat. There are probably a bunch of parameters that developers can use to fine tune the DLSS results for better accuracy.
I can definitely agree with you to a point: it is a time-consuming endeavor, so the scenario I've cooked up doesn't seem that likely. This is just how I've interpreted what has been released, is all.

As far as comparing it to training AI to identify something in a photo... well I'll admit my ignorance when it comes to Deep Neural Networks and AI, but that seems like a totally different situation given it's a still photo.
However, considering that the Deep Fakes are fairly believable, I suppose it's not out of the question. Anyone have a clue how long one of those videos takes to generate? I can't imagine it's in real time like what a game would be requiring...
 
As far as comparing it to training AI to identify something in a photo... well I'll admit my ignorance when it comes to Deep Neural Networks and AI, but that seems like a totally different situation given it's a still photo.

Think face detection on phones and cameras. Battery-powered devices that do it for hours of videos and/or hundreds of high-resolution photographs. Now you have a whole GPU?

It ain't no problem.
 
I seriously doubt Nvidia is going to run every individual game through their supercomputer array as they come along. Instead they have used AI-driven modeling to generate an algorithm that is capable of taking any scene in any game and processing that to generate the DLSS upscaled image. Similar to how you could train an AI to recognize if a picture contains a cat, you only need to give it enough examples of a cat for it to be able to tell if any image fed to the trained algorithm contains a cat. There are probably a bunch of parameters that developers can use to fine tune the DLSS results for better accuracy.

They make an algorithm for each specific game.

https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf
 
Can you quote where it says that? To me the whitepaper doesn't explicitly say that the algorithm would need images from each specific game to handle DLSS, just that they used the UE4 Infiltrator demo as an example for results.

Good point. I definitely heard it in “The Full Nerd” podcast from Sept 14 where they talked about Turing the whole time. They said it’s per game and nVidia is doing it for free to try and push it into the marketplace.

I wish a global generic algorithm would work. Then this could just be a toggle in the control panel and work on everything. It would make me feel a lot better about spending $1,400.

I’ll have to see where I read it later. Sorry, I thought it was in the white paper.
 
I seriously doubt Nvidia is going to run every individual game through their supercomputer array as they come along.

I think they have to. At least that's how they explained it in the interviews. If you want your game to use DLSS, you have to send the game early in its development for them to put it through their supercomputer. It outputs algorithms specific to that game. For example, I'm not sure they could make a ground truth image for Grand Theft Auto that would be in any way applicable to The Witcher 3.
 
Good point. I definitely heard it in “The Full Nerd” podcast from Sept 14 where they talked about Turing the whole time. They said it’s per game and nVidia is doing it for free to try and push it into the marketplace.

I wish a global generic algorithm would work. Then this could just be a toggle in the control panel and work on everything. It would make me feel a lot better about spending $1,400.

I’ll have to see where I read it later. Sorry, I thought it was in the white paper.
This is how I understood it too: the AI generates an algorithm per game based on many different factors and gets updated in the driver. If this is indeed how it works, this is a GREAT way to not only save on performance but also be completely neutral to the 3D engine used and its lighting, which is something that has plagued AA methods in the past.
 
I seriously doubt Nvidia is going to run every individual game through their supercomputer array as they come along. Instead they have used AI-driven modeling to generate an algorithm that is capable of taking any scene in any game and processing that to generate the DLSS upscaled image. Similar to how you could train an AI to recognize if a picture contains a cat, you only need to give it enough examples of a cat for it to be able to tell if any image fed to the trained algorithm contains a cat. There are probably a bunch of parameters that developers can use to fine tune the DLSS results for better accuracy.
The developers have to support it, and it is specific to the game. NVIDIA gives access to their supercomputing cloud servers if the developer wants to add DLSS to their game.
 
By the sounds of it, I'm not the only one who has come away from the information with the understanding that DLSS will be the byproduct of the game being run through nV's supercomputer at 64x supersampling to output the downloadable model.

Though, I didn't come away thinking that the dev had to do it. I took it as nV doing that on a game-to-game basis. Devs being required to support DLSS sounds familiar, but as I have no intention to buy nV, it's been tough to remember exactly which mentions I'm remembering as being for ray tracing or for DLSS.

and gets updated in the driver.
The DLSS "model" will be downloaded at the user's choice, via the GeForce Experience thinger. That's what gave me the impression that the download will have potential size to it, otherwise if it were akin to an SLi or CrossFire profile, they would just bundle it into the driver's. Then again, they may opt not to simply for the ease of download, not due to download size. As this way the end user wouldn't have to download the driver package every time. However, lol I also can't see why they wouldn't just make an all-inclusive package like how the early CrossFire Profiles were, and let people grab them as needed. It's incredibly hard for me to speculate on what nV's logic is behind how they do things, these days... :p
 
By the sounds of it, I'm not the only one who has come away from the information with the understanding that DLSS will be the byproduct of the game being run through nV's supercomputer at 64x supersampling to output the downloadable model.

Though, I didn't come away thinking that the dev had to do it. I took it as nV doing that on a game-to-game basis. Devs being required to support DLSS sounds familiar, but as I have no intention to buy nV, it's been tough to remember exactly which mentions I'm remembering as being for ray tracing or for DLSS.


The DLSS "model" will be downloaded at the user's choice, via the GeForce Experience thinger. That's what gave me the impression that the download will have potential size to it, otherwise if it were akin to an SLi or CrossFire profile, they would just bundle it into the driver's. Then again, they may opt not to simply for the ease of download, not due to download size. As this way the end user wouldn't have to download the driver package every time. However, lol I also can't see why they wouldn't just make an all-inclusive package like how the early CrossFire Profiles were, and let people grab them as needed. It's incredibly hard for me to speculate on what nV's logic is behind how they do things, these days... :p

I highly doubt the DLSS models are large. It’s just an algorithm. It sounds like it automatically keeps the whole package up to date:

“ The software will look for a Turing GPU and, upon finding it in the system, proceeds to download the NVIDIA NGX Core package as well as the deep neural network models available for the installed games and applications.”
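Rough napkin math on why I doubt it (the parameter counts are pure assumption on my part; NVIDIA hasn't published model sizes):

```python
# Back-of-envelope model size. Parameter counts here are guesses for illustration;
# the point is that even a fairly large conv net is megabytes, not gigabytes.
def model_size_mb(parameters, bytes_per_param=2):  # assume FP16 weights
    return parameters * bytes_per_param / 1e6

print(model_size_mb(5_000_000))    # 5M-parameter net  -> ~10 MB
print(model_size_mb(50_000_000))   # 50M-parameter net -> ~100 MB
```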
 
I highly doubt the DLSS models are large. It’s just an algorithm. It sounds like it automatically keeps the whole package up to date:

“ The software will look for a Turing GPU and, upon finding it in the system, proceeds to download the NVIDIA NGX Core package as well as the deep neural network models available for the installed games and applications.”
Yea, I looked at the article again and concluded I misread that part. I had read it as the GeForce Experience software detecting all of that, with the end user then determining which games they want to download the models for. That's my bad.

Yet, in fairness, that doesn't exactly mean that they can't be a large download. Referring back to my rant about how Steam updates work, it appears to be completely acceptable in the industry to require the consumer to download a large chunk of data whether they want to or not, and whether it's merited or not (merited, in the sense that the same small file could easily be delivered in a self-injecting package to update what's already on the computer).
As such, it's not far-fetched to think that nVidia finds 1GB acceptable. Whether that is per game or for a dozen... *shrug* I'm just speculating right now anyways.

My point is that it's seeming more and more like they are only after what improves the company's "quality of life". You can't even download drivers like you used to, because it's insisted that they come packaged with every damn contingency (redistributable software), for every language, and every in-use version of Windows. Granted, I'm speaking from "Rural World Problems" and the majority of people aren't in this same boat, but yea... If left unchecked, we'll end up in the same place the gaming industry is with charging $5+ for new player/weapon skins or $20 launch-day DLC packs. Which, if you consider how Windows 10 operates, we're well on the way!

Alright, I need to shut up before I start to ramble even further and eventually get off topic. lol
 
Original [H] logo, and same logo with 4x neural net upscaling.

[Image: original [H] logo]

[Image: [H] logo with 4x neural net upscaling]
 
Original [H] logo with 4x neural net upscaling. DLSS x2
[Image: [H] logo with 4x neural net upscaling]

[H] logo with some time in Photoshop DLSS Performance.
[Image: [H] logo cleaned up by hand in Photoshop]


And for the low-low-price of $200, I will personally upsample every frame of your gameplay, too!
Don't delay! Place your order now!
Operators are standing by...
No assembly required. Batteries not included. Slide-show framerate guaranteed. Choking hazard for infants and those with trouble swallowing sarcasm.
 

Does Photoshop have NN upscale that runs with cuDNN lib? I don't know actually.
 
[H] logo with some time in Photoshop DLSS Performance.
[Image: [H] logo cleaned up by hand in Photoshop]

And for the low-low-price of $200, I will personally upsample every frame of your gameplay, too!
Don't delay! Place your order now!
Operators are standing by...
No assembly required. Batteries not included. Slide-show framerate guaranteed. Choking hazard for infants and those with trouble swallowing sarcasm.

Wow only 200? Here I was ready to pay you 500 for that! WHAT A DEAL!!!
 
Does Photoshop have NN upscale that runs with cuDNN lib? I don't know actually.
*shrug* I just did it by hand in CS3. Resized the image 4X (aka 400%). Magic-wand the black with a tolerance of 68 with antialiasing. Inverted the selection. Refined Selection to smooth it out, and shrunk it around the letters more (since inverted, I 'increased' the selection size). Then just pressed delete to clear selection. From there I painted a background layer black and presto.

That was sort of the idea behind the "fine print". As well as the "offer" to do each of your frames while gaming, since it'd be by hand, and... well, that's just an absurd task; thus, the slide-show framerate guarantee. :shame:
Even then, the only nV hardware I own are two kaput S939 nForce boards lol (Why I haven't thrown them away is beyond me...)
 
*shrug* I just did it by hand in CS3. Resized the image 4X (aka 400%). Magic-wand the black with a tolerance of 68 with antialiasing. Inverted the selection. Refined Selection to smooth it out, and shrunk it around the letters more (since inverted, I 'increased' the selection size). Then just pressed delete to clear selection. From there I painted a background layer black and presto.

Oh I see. Notice how yours is fuzzy, and mine is sharp yet has rounded corners. Particularly the TM. That's what a trained NN will get you.
 