...people playing games today are much more savvy than they were 10-20 years ago.

I'm just going to go out on a limb here and guess you weren't playing games 20 years ago, or at least weren't concerned with the features or the hardware you were using to do so. Back when ATi introduced Truform (an early implementation of a tessellation engine) with the Radeon 8500, you didn't see a bunch of users or tech websites touting it as the next big thing that was going to revolutionize the industry and leave nVidia completely in the dust. Because it was a technology that required developer support and had some... interesting quirks (see below), it was just seen as an intriguing new feature that you could care about or not; it didn't completely define the product.

[attached screenshot]
 
Well how do NVIDIA engineers determine that something looks good? Do they just "trust" the AI or do they actually inspect the work that it performs and then adjust the algorithm to correct for errors?

I'm sure that's exactly how it works, actually. You make a tweak to the algorithm, take the "residual" of the two frames (basically a quantification of the differences between them), and from that determine whether or not the image is "better" (i.e., there are fewer differences between the DLSS frame and the "perfect" frame). That's not even remotely controversial.
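Concretely, that comparison is easy to sketch. Here's a toy numpy version of the idea (random arrays standing in for real frames; nothing here is NVIDIA's actual pipeline or metric):

import numpy as np

# Random stand-ins for a supersampled "perfect frame" and a DLSS output frame,
# as H x W x 3 float arrays in [0, 1]. A real comparison would use captured frames.
rng = np.random.default_rng(0)
perfect = rng.random((540, 960, 3))
dlss = np.clip(perfect + rng.normal(0.0, 0.02, perfect.shape), 0.0, 1.0)

residual = dlss - perfect                # per-pixel difference between the two frames
mse = float(np.mean(residual ** 2))      # one-number summary of the differences
psnr = 10 * np.log10(1.0 / mse)          # higher PSNR = closer to the reference frame

print(f"MSE: {mse:.6f}, PSNR: {psnr:.2f} dB")
# A tweak to the algorithm "helps" if it raises PSNR (shrinks the residual) across many test frames.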

I don't think the issue here is that NVIDIA is having trouble finding the "ideal" settings for DLSS in each game; machine learning is VERY effective at these sorts of problems and NVIDIA knows very well what they're doing in both the areas of machine learning and graphics processing. I think the issue is simply the underlying technology. The idea behind it is solid, but the implementation needs work.
 
Thankfully, we did not see tremendously inflated video card prices from AMD at that time; we did not have to pay for "extra" silicon on those GPUs that did nothing but support Truform.

And yeah, Truform in 2001 had about as good a start as RTX...

[attached screenshots]
 
I'm sure I'll get called an Intel shill soon enough because I'm looking forward to them shaking up the GPU market.
It'll probably be 2022 before we see anything on the high end, but the expected mid-level GPUs are going to be exciting for sure. There's a whole lot of money to be made in the middle, even if it's not glamorous.
 
This is what Digital Foundry had to say... Linked specifically to their take on DLSS

 
Basically how it was being sold (rough numbers after the list):
  • Take a lower-resolution source image (non-static) and upscale it to a higher resolution (e.g. 1440p > 4K)
  • The upscaled image will look the same as or better than a native-resolution image (e.g. 4K) with TAA applied
  • This is going to be done 50-80 times per second
  • All done on an end-user system, with a single GPU handling the entire workload through the magic of AI / machine learning
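To put rough numbers on that pitch (just arithmetic from the bullet points, using the 1440p-to-4K example; Python for convenience):

# Back-of-the-envelope numbers for the claims above.
render_w, render_h = 2560, 1440      # resolution the GPU actually shades
target_w, target_h = 3840, 2160      # resolution DLSS has to hand to the display

rendered_px = render_w * render_h
target_px = target_w * target_h

print(f"Pixels shaded per frame:     {rendered_px:,}")
print(f"Pixels the network fills in: {target_px - rendered_px:,} "
      f"({1 - rendered_px / target_px:.0%} of the output)")
print(f"Per-frame budget at 50-80 fps: {1000 / 80:.1f}-{1000 / 50:.1f} ms, "
      f"and the upscale is only a slice of that.")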

Directly from their FAQ

Q: How does DLSS work?

A: The DLSS team first extracts many aliased frames from the target game, and then for each one we generate a matching “perfect frame” using either super-sampling or accumulation rendering. These paired frames are fed to NVIDIA’s supercomputer. The supercomputer trains the DLSS model to recognize aliased inputs and generate high quality anti-aliased images that match the “perfect frame” as closely as possible. We then repeat the process, but this time we train the model to generate additional pixels rather than applying AA. This has the effect of increasing the resolution of the input. Combining both techniques enables the GPU to render the full monitor resolution at higher frame rates.
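Strip away the supercomputer and what that FAQ describes is ordinary supervised training on (aliased, "perfect") image pairs. A toy PyTorch sketch of the idea, with a made-up tiny network and random tensors standing in for the extracted frames (this is emphatically not NVIDIA's model, data, or loss):

import torch
import torch.nn as nn

# Tiny stand-in upscaler: takes a low-res aliased frame and outputs a 2x larger frame.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor squared
    nn.PixelShuffle(2),                  # rearrange channels into a 2x larger image
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(100):
    aliased = torch.rand(8, 3, 64, 64)    # stand-in for extracted aliased frames
    perfect = torch.rand(8, 3, 128, 128)  # stand-in for the matching "perfect frames"
    output = model(aliased)
    loss = loss_fn(output, perfect)       # how far the output is from the reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The "supercomputer" part of the story is just doing this kind of loop at enormous scale, per game, with real captured frames instead of random tensors.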


Did they really mention the "applying TAA" portion initially? I do not recall that... maybe it was in the fine print? (Marketing at its best: take 1080p, apply DLSS & RTX, and say it's as good and as fast as 4K TAA, while the truth is that 4K TAA is probably the worst combo you could choose?)
 

I don't know if it was Nvidia marketing directly or a presentation, but pretty much everyone compared it to TAA as a selling point.

https://www.tomshardware.com/reviews/dlss-upscaling-nvidia-rtx,5870.html

https://www.techspot.com/article/1712-nvidia-dlss/

Then there is this screenshot that is directly from Nvidia where they use TAA as a comparison.

http://images.nvidia.com/geforce-co...-rt-and-dlss-on-vs-rt-on-dlss-off-taa-on.html

Also from their FAQ

Why don’t I just use upscaled TAA instead?

The game industry has used TAA for many years and we know that it can fall down in certain ways. TAA is generated from multiple frames and can suffer from high-motion ghosting and flickering that DLSS tends to handle better.
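For anyone wondering why TAA ghosts in the first place: it blends each new frame into an accumulated history of previous frames, so anything that moved leaves a fading copy of itself behind. A crude numpy sketch of that accumulation (real TAA also jitters the camera and reprojects the history with motion vectors, which this ignores entirely):

import numpy as np

alpha = 0.1                   # weight of the current frame; the rest comes from history
history = np.zeros((64, 64))  # the accumulated image
for t in range(30):
    current = np.zeros((64, 64))
    current[20:30, t:t + 10] = 1.0                     # bright square moving 1 px per frame
    history = alpha * current + (1 - alpha) * history  # exponential accumulation

# Where the square was a dozen-plus frames ago, a faint trail is still baked in.
print("Leftover intensity behind the square:", round(float(history[25, 15]), 3))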
 
Well how do NVIDIA engineers determine that something looks good? Do they just "trust" the AI or do they actually inspect the work that it performs and then adjust the algorithm to correct for errors?

Have you seen how Steam Controller profiles work? All of them are submitted to the same repository. Everyone can look through them and choose the one they want to use. The profile with the highest actual play time is the first one I see when I look through them, so in that instance the metric for "best" is simply which one people use the most.

So what's so hard about making various DLSS profiles? Allow people to choose which they want to try. Then vote if they like it or not. They can even go by hours logged like Steam if they want.

It's not hard at all.
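Just to illustrate the mechanics of the idea (purely hypothetical; no such profile system exists for DLSS, and the names and numbers are made up), the ranking-by-logged-hours part is the trivial bit:

# Hypothetical community DLSS profiles for one game, ranked the way Steam Controller
# configs are: by how much people actually play with them.
profiles = [
    {"name": "Default",     "hours_logged": 48210},
    {"name": "Sharpened",   "hours_logged": 12450},
    {"name": "Soft/filmic", "hours_logged": 3140},
]
for p in sorted(profiles, key=lambda p: p["hours_logged"], reverse=True):
    print(f"{p['name']:<12} {p['hours_logged']:>7} h")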

Also, I didn't suggest that they send in screenshots. I said pause the game like it is done with Ansel. So you play through a scene, hit the pause button, and rate the scene. Check boxes that say "too fuzzy", "picture perfect", "too sharp", etc. Ever seen an overlay in a game? Used MSI Afterburner? Free player feedback and free labor. It's just Big Data; I thought NVIDIA was all about collecting and processing data? I vote on items I purchase from Amazon all the time.

What's so hard about having multiple DLSS profiles for a game and voting on them? Maybe I'm as "daft" as you say, but I think user feedback could have saved them time and money with DLSS if they had shown the community "real DLSS" in action instead of "best case scenarios."

Wow, I got no notification about this post at all? Sorry for the late reply. Also, just to make one thing clear: I wasn't calling you daft, I was calling the idea daft. And the more you talk about it, the more daft it gets.

DLSS profiles? Real DLSS? lol WTF? You keep comparing it to Steam Controller profiles and rating things you buy on Amazon. Sorry, but... seriously?

For DLSS, the AI compares a low-res version of the game with a super-high-res version and learns how to upscale the low-res version to look as close as possible to the high-res one. There are no different versions, and there aren't going to be multiple profiles: it either looks like the high-res version or it doesn't. The problem at the moment is that it hasn't been done right. Maybe they didn't use enough nodes when doing the learning, maybe there are tweaks to be made to the algorithm, or maybe, for certain games, DLSS is never going to be exactly right at this point, because the way humans interact with games is impossible to predict compared to a benchmark run or demo. It will never be exactly like the native resolution, and there will always be places where it doesn't look quite right, but it will look much better than it does now as they tweak both the learning process and the algorithm's output to suit each game.
 
I'm not really having much hope for DLSS.
 
Just a heads up, they are a known biased anti-nVidia website. Not a place you want to get your information or news from.
Known anti-nVidia website? Seriously? Are you some kind of nVidia fanboy? nVidia really screwed up with the launch of the RTX cards, there's no doubt about it. Hardware Unboxed laid out the facts. Just because you don't like the facts doesn't make them anti-nVidia; it makes them pro-consumer for telling people the negatives of a product, of which there are a great many in the RTX line.
 


Even more anti-Nvidia content! Can you believe these guys? How can they say such terrible things about the 1660 Ti! I mean, they are lying calling it an RX 590 killer.
 