NVIDIA DLSS 3 “Frame Generation” Lock Reportedly Bypassed

I want to see side by side output quality comparisons with it disabled and enabled.

If it turns the output quality to crap to boost the frame rate, then it is still crap.
HUB does a good analysis on it. If you've used video interpolation before, it exhibits exactly the same artifacts:
 
I want to see side by side output quality comparisons with it disabled and enabled.

If it turns the output quality to crap to boost the frame rate, then it is still crap.
That is exactly what it does. It's not terrible on something like a 4090... because the real FPS is so high, the generated frames are on screen for a microsecond.
Having said that, if your card can do 120 FPS native without it, WTF would you turn it on for anyway? The only reason to want FPS over that is esports-type titles... and DLSS 3 introduces a ton of lag.
On lower-end cards, when they come out, those crap frames will be on screen longer... and the lag will start getting into the zone where people notice. Who wants to use a feature that gets you to 120 FPS but the response still feels like you're getting 60?
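For a rough back-of-the-envelope illustration of that point (numbers are made up, not measured): with interpolation, only the real rendered frames reflect new input, so responsiveness roughly tracks the base frame rate even though the displayed frame rate doubles, and the interpolation itself adds a bit of queuing delay on top.

```python
# Illustrative sketch only: assumes frame generation doubles the displayed frame
# rate and that new input is only reflected in the real (rendered) frames.
def perceived_numbers(base_fps: float) -> dict:
    displayed_fps = base_fps * 2
    return {
        "displayed_fps": displayed_fps,
        "displayed_frame_time_ms": round(1000 / displayed_fps, 1),
        # input is only sampled for real frames, so responsiveness tracks base_fps
        "input_update_interval_ms": round(1000 / base_fps, 1),
    }

print(perceived_numbers(60))  # shows ~120 FPS, responds roughly like 60 FPS
print(perceived_numbers(30))  # shows ~60 FPS, responds roughly like 30 FPS or worse
```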
I have yet to see it in operation, but I assume it's much like frame generation on TVs. No thanks. Motion compensation is the first thing I turn off... I don't need the soap opera effect on my movies, and pans that make me want to be sick. :)
 
HUB does a good analysis on it. If you've used video interpolation before, it exhibits exactly the same artifacts:

So it is pretty much crap.
I remember way back when nVidia had the cheaty drivers where they would skip rendering some stuff in order to inflate the benchmark numbers.

But now it is ok to do so because it is a "feature" and they aren't trying to hide it?
 
My takeaway from frame generation is: if you own a 240 or 360 Hz display and you play mostly single-player games, then give it a try.

What I am most interested in personally are questions related to the new DLSS SDK version and Nvidia Streamline SDK toolsets.

Frame generation is good and all, but the frame rate increase still seems too large compared to what the currently released DLSS 2 SDKs deliver.

I am wondering if the newer kits also have improvements that boost DLSS performance for the non-frame-generation feature set, aka DLSS 2. Nvidia hasn't updated the SDKs available on their website with the newer versions yet, and I am wondering if that is so we can't test a wider array of titles than their current cherry-picked ones, which would make the gains from DLSS 3's frame generation seem more impressive in comparison.
 
HUB does a good analysis on it. If you've used video interpolation before, it exhibits exactly the same artifacts:

Fairly sure HUB did a very poor analysis this time. There are inherent difficulties in producing a video analysis of DLSS 3, as described in the analysis done by Digital Foundry.

 
So it is pretty much crap.
I remember way back when nVidia had the cheaty drivers where they would skip rendering some stuff in order to inflate the benchmark numbers.

Everyone did it. I certainly remember reading reviews here about image quality and the bullshit rendering techniques one vendor would use over another. It was definitely a factor that was pointed out here; it sticks out in my mind from the early to mid 2000s.

But now it is ok to do so because it is a "feature" and they aren't trying to hide it?
So, to play devil's advocate, I certainly appreciate DLSS 2.0 in my first-person shooters to keep my frame rates up, given my resolution and my hardware. My appreciation for the technology comes from running a 2070 Super at 1440p in Warzone. It keeps me around the mid-80s frame rate, with what I perceive as low-latency performance. I'm not looking for 100% visual fidelity, and I'm certainly okay with some occasional "issues" with it.


This isn't too bad. I'd never notice it while dropping in, but if I had to pixel-peep, everything outside my immediate focal point in the game looks a bit "off". Textures at the periphery of my view are muddy.

IDGAF-1.png

When I swing around violently, this is sometimes the result of a captured NVENC frame:
WTF-is-thsi.png


It disappears and the game goes on. Honestly, I don't ever recall seeing this level of "Shittacular" rendering in my game. It felt like maybe 1/16th of a second, though per the FPS counter it was a 0.011 sec event. Was this an NVENC artifact, or was this the worst DLSS has to offer, for one frame?

Is what HUB reports accurate? I'd have to say "yes." Is this a feature that would turn me off to the product? I'd probably say "no."

DLSS is not a magic solution, but if it keeps me in the game with sub-standard hardware at a performance level I feel is needed, I'm gonna use it.
 
Seeing as most games use either TAA or FXAA, DLSS Quality is always a better option for getting rid of aliasing, as it disables the other forms of AA. The only time I would say "native" is better is if MSAA is an option, but I rarely see that in games anymore. The last game I played that had it was Forza Horizon 5, and it does look damn good and play smooth.
I have to echo this sentiment. DLSS 2.0 looks pretty damn good, but it varies. In Spiderman Remastered, it looks better than the optional AA options (to my eyes). In Destroy All Humans 2, I do notice some softness, so I leave it off. On a 77" OLED, I would notice if DLSS had a "smeared vaseline" effect. It's never that bad.

There are plenty of caveats and I don't think DLSS is a substitute for developers properly optimizing their games, but it's not crap at all. It's actually pretty impressive tech in many cases.

In regards to it looking better than native - sometimes DLSS cleans up the image better than whatever AA solutions are offered otherwise. At least in motion.
 
Fairly sure HUB did a very poor analysis this time. There are inherent difficulties in producing a video analysis of DLSS 3, as described in the analysis done by Digital Foundry.


Isn't he saying the exact same thing as HUB?
The only thing I see different is that he is bullet-pointing the issues...
1. Cycling animations... animations that repeat can get you to perceive the AI-generated minor F-ups as they repeat.
2. Scene transitions... same thing HUB said: if you pan the camera fast or change camera angles, you're going to get noticeably garbled frames.
3. HUD or thin transparency... same issue HUB described.
4. Rapid flashes... things like guns firing and smoky tailpipes. To me this sounds sort of like #1: rapidly cycling animations get F'd up.
5. Erratic mouse movements (sounds like an issue for first-person shooters to me).

As for lag, he finds DLSS 3 has about 1/3 more lag than DLSS 2.

I think they both come to the same bottom line... with a 4090, where you're starting north of 100 FPS, all of the above are not super noticeable (outside of the cycling-animation stuff). The issue, IMO, is that DLSS 3 is not a technology for flagships. Who is paying almost 2k for a GPU just to turn on DLSS? I hope no one in their right mind. When they finally do release a 4070 and a 4060... people are going to notice all of the above x2, because the AI frames will be on screen twice as long.
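To put rough numbers on that last point (purely illustrative; the latency line just applies the ~1/3 figure above to a made-up DLSS 2 baseline):

```python
# Illustrative sketch: assumes every other displayed frame is AI-generated and
# persists for one displayed frame time; the latency baselines are invented.
def fg_sketch(base_fps: float, dlss2_latency_ms: float) -> dict:
    displayed_fps = base_fps * 2
    return {
        "generated_frame_on_screen_ms": round(1000 / displayed_fps, 1),
        "rough_dlss3_latency_ms": round(dlss2_latency_ms * (1 + 1 / 3), 1),
    }

print(fg_sketch(base_fps=110, dlss2_latency_ms=30))  # flagship-ish: fake frames ~4.5 ms each
print(fg_sketch(base_fps=55, dlss2_latency_ms=60))   # midrange-ish: fake frames persist ~2x as long
```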
 
Isn't he saying the exact same thing as HUB?
The only thing I see different is that he is bullet-pointing the issues...
1. Cycling animations... animations that repeat can get you to perceive the AI-generated minor F-ups as they repeat.
2. Scene transitions... same thing HUB said: if you pan the camera fast or change camera angles, you're going to get noticeably garbled frames.
3. HUD or thin transparency... same issue HUB described.
4. Rapid flashes... things like guns firing and smoky tailpipes. To me this sounds sort of like #1: rapidly cycling animations get F'd up.
5. Erratic mouse movements (sounds like an issue for first-person shooters to me).

As for lag, he finds DLSS 3 has about 1/3 more lag than DLSS 2.

I think they both come to the same bottom line... with a 4090, where you're starting north of 100 FPS, all of the above are not super noticeable (outside of the cycling-animation stuff). The issue, IMO, is that DLSS 3 is not a technology for flagships. Who is paying almost 2k for a GPU just to turn on DLSS? I hope no one in their right mind. When they finally do release a 4070 and a 4060... people are going to notice all of the above x2, because the AI frames will be on screen twice as long.
I'm talking specifically about the fact that he says certain recordings can show artifacts that aren't really there locally, due to the lack of hardware capture devices that support 4K 120 Hz and beyond.

I have no doubt that DLSS 3 will have issues in its first foray into image interpolation, but I wonder if a lot of these, maybe even the vast majority of these images and slowed-down video examples, aren't just the result of flawed analysis methodology due to those capture limitations.
 
I remember way back when nVidia had the cheaty drivers where they would skip rendering some stuff in order to inflate the benchmark numbers.
You mean when both ATI and Nvidia had cheat drivers? Let's not kid ourselves here, corporations will do anything and everything under the sun to make themselves look better. Why so many people on the web pretend like one company is more altruistic than the other is beyond me.
 
My first problem with DLSS was the image quality in motion. That was with DLSS 2; later, Nvidia did a good job cleaning up the motion artifacts to something one can ignore.

With DLSS 3, you have half of the frames rendered, the good frames, and the other half made by taking two rendered frames and then guessing what each pixel in between should be, a crap frame.

The best way to show the two is to separate the good rendered frames from the generated frames during gameplay. Use a lower resolution if needed, and Vsync (currently not supported, but DF was able to implement it at some level). Then play the rendered-frame video next to the generated-frame video, side by side.
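Something like the sketch below would be enough to do that split, assuming the capture runs at the full frame-generation output rate and that rendered and generated frames strictly alternate (which you can't verify from the capture alone, so the two outputs are just labeled A and B, and the file names are placeholders):

```python
# Rough sketch: split a captured gameplay video into two half-rate streams on the
# assumption that rendered and AI-generated frames strictly alternate.
import cv2

def split_alternating_frames(src_path: str, out_a: str, out_b: str) -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    # Each output gets every other frame, so it runs at half the capture rate.
    writer_a = cv2.VideoWriter(out_a, fourcc, fps / 2, size)
    writer_b = cv2.VideoWriter(out_b, fourcc, fps / 2, size)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        (writer_a if index % 2 == 0 else writer_b).write(frame)
        index += 1
    cap.release()
    writer_a.release()
    writer_b.release()

# split_alternating_frames("capture_dlss3.mp4", "stream_a.mp4", "stream_b.mp4")
```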

What I see in DLSS 3: half good frames, half crap frames. Some folks will adjust and see more of the good frames, while others will see more of the crap.
 
You mean when both ATI and Nvidia had cheat drivers? Let's not kid ourselves here, corporations will do anything and everything under the sun to make themselves look better. Why so many people on the web pretend like one company is more altruistic than the other is beyond me.
Yep. But Nvidia has shown time and time again that they are more slimy than ATI/AMD.
 
I imagine ATI/AMD really loves that you and others think that way. Wise old proverb, "There's a sucker born every minute". :whistle:
No no, they are definitely slimier than their competition. They've had the opportunity; if AMD's next offerings start to outperform nVidia's, we can watch them become the villain.

AMD has to produce a few generations of great cards to build the kind of mindshare nVidia has. Once that happens, they can go full tech industry and fuck their customers and partners, because that's how a tech company do.
 
No no, they are definitely slimier than their competition. They've had the opportunity; if AMD's next offerings start to outperform nVidia's, we can watch them become the villain.

AMD has to produce a few generations of great cards to build the kind of mindshare nVidia has. Once that happens, they can go full tech industry and fuck their customers and partners, because that's how a tech company do.
So far, in 53 years as a company, the only thing you can say they have ever done that was anti-customer was not having the funds to invest in some of the things other companies have had the cash to do. The software side could have been better. On the CPU side, AMD has never had the cash or size to get MS to do everything required to max out their hardware. They have been behind on manpower in datacenter support, and until fairly recently their in-house GPU driver teams have been smaller than the other guys'. They are also still not the guys that pay streaming-software companies directly to implement their best tech. (AMD's open source support is first a $ decision... it's pro-consumer, but it's also a way to try and get open source people and developers themselves to do some of the work.) They do work with some game developers more directly these last few years, now that they are making some more money, but they still don't have the bankroll to do it as hard as NV does (and presumably as Intel will, if Intel decides to do the same). Lisa has been doing a MUCH better job at juggling their smaller pool of resources than anyone previously in charge at AMD. Still, she has (or had, anyway) to make choices other companies haven't had to: she chose to keep all that consumer-facing support light so they could R&D CPUs, and she kept the GPU teams light for a few years so they could focus on one thing and compete.

Do they have the potential to be a villain... I guess. It would be a first in over 50 years, but anything is possible. And before anyone mentions one-off cheating OpenGL drivers or something... to my knowledge, anything like that that happened came down to either one stupid employee or an incompetent one. There were a few times where they (and Nvidia, for that matter) were accused of cheating in a driver where, if you explore what was going on, you realize it was just a shit implementation of a feature that was skipping steps on something, almost for sure by accident. There were a few times when both companies fixed issues that reduced performance without being called out. I remember both AMD and Nvidia at one point doing some iffy filtering, and they both corrected it; I don't think either was "cheating" specifically.
 
There were a few times where they (and Nvidia, for that matter) were accused of cheating in a driver where, if you explore what was going on, you realize it was just a shit implementation of a feature that was skipping steps on something, almost for sure by accident. There were a few times when both companies fixed issues that reduced performance without being called out. I remember both AMD and Nvidia at one point doing some iffy filtering, and they both corrected it; I don't think either was "cheating" specifically.

quack3.exe
 
The best way to show the two is to separate the good rendered frames from the generated frames during gameplay. Use a lower resolution if needed, and Vsync (currently not supported, but DF was able to implement it at some level). Then play the rendered-frame video next to the generated-frame video, side by side.
More and more, YouTube's 60 fps limitation will be somewhat of a big deal. Maybe people will actually put together a realistic version for people to look at, but on YouTube, outside of a "30 fps with DLSS 2 pushed to 60 with DLSS 3.0" type of example, it will never be like it is in real life.
 
I imagine ATI/AMD really loves that you and others think that way. Wise old proverb, "There's a sucker born every minute". :whistle:
I don't see anyone claiming that AMD loves them, simply that they're much less shitty than Nvidia, who will screw over anyone if given the chance. AMD is a company that looks out for its own interests first and its shareholders' interests second, but it has done far fewer shitty things than Nvidia and in many cases pushes for solutions that benefit everyone. You can argue that they'd do the same thing if they were in Nvidia's position, but the fact of the matter is that AMD has been much less shitty than Nvidia and should be judged on its actual track record.

I'm not sure why some people can't seem to grasp that comparing companies and pointing out that one is less shitty than the other doesn't mean that they think the less shitty one is some wonderful and benevolent entity.
 
How easily people seem to forget. Thank you for the reminder.
Remember it, yes... blame the entire company for it, no. It was a stupid time when both companies had some bad actors. Still, most of the "cheating" blamed on them was incompetence.
The official story on the Quake cheating controversy was simple: AMD wasn't cheating, they were doing what every company does to this day, optimizing for specific games. When a driver release comes out and says X or Y game gets 20% better performance, what do people think is happening? All those games use the same frameworks... the drivers accelerate DX/Vulkan. Game developers are not doing anything specific outside the standard APIs.
So where do the optimized driver gains come from?
In 9 out of 10 cases, the driver engineers notice a specific title loads specific-size textures, or makes use of X or Y more than other things, so they optimize for it... (If I remember right, back with Quake, AMD was compressing mipmaps regardless of the quality setting, which they said was in error, and that is all they changed; afterwards their performance was basically unchanged in Quake, with the exception of the highest quality setting, where disabling the compression was part of that setting.) Frankly, the uproar at the time was stupid.

The upside of that incident was, I think, that going forward both AMD and Nvidia were a little more careful with their driver optimizations: optimize when it's really that... but don't go so far that it reduces IQ even a little.
 
*ahem*

https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/

Where can I get me an RTX 2000 card with 4th-generation Tensor cores?

*ahem*

Direct from Brian Catanzaro, VP of Applied Deep Learning Research @ NVIDIA

https://twitter.com/ctnzr/status/1572334396145012738

More specifically, this quote, right here:

"The OFA has existed in GPUs since Turing. However it is significantly faster and higher quality in Ada and we rely on it for DLSS3."

If you read the entire Twitter thread, he basically explains that the OFA has been there since Turing (RTX 2xxx cards). Yes, DLSS3.0 relies on this, and it's probably not the ONLY thing it relies on to WORK EFFECTIVELY, which is why he goes on to also say that it would be possible for Turing and Ampere cards, but that they didn't 'unlock' the feature, as it would mean instability, frame drops, and overall poor performance for the majority of people.

You quoted a PR document from NVIDIA's website. OF COURSE they're going to claim it's only possible on the RTX 4xxx cards! It's the SELLING POINT for the card. "Oh, we're offering new and improved DLSS running on the newest hardware.... buuuuuut... we could have possibly provided this to you 4 years ago! You're welcome!"
 
*ahem*

Direct from Brian Catanzaro, VP of Applied Deep Learning Research @ NVIDIA

https://twitter.com/ctnzr/status/1572334396145012738

More specifically, this quote, right here:

"The OFA has existed in GPUs since Turing. However it is significantly faster and higher quality in Ada and we rely on it for DLSS3."

If you read the entire Twitter thread, he basically explains that the OFA has been there since Turing (RTX 2xxx cards). Yes, DLSS3.0 relies on this, and it's probably not the ONLY thing it relies on to WORK EFFECTIVELY, which is why he goes on to also say that it would be possible for Turing and Ampere cards, but that they didn't 'unlock' the feature, as it would mean instability, frame drops, and overall poor performance for the majority of people.

You quoted a PR document from NVIDIA's website. OF COURSE they're going to claim it's only possible on the RTX 4xxx cards! It's the SELLING POINT for the card. "Oh, we're offering new and improved DLSS running on the newest hardware.... buuuuuut... we could have possibly provided this to you 4 years ago! You're welcome!"
It's a legitimate explanation. Just like ray tracing on Pascal and older cards. Sure, it's possible, but would you really want to run a game at single digit framerates?
 
I really like DLSS3 so far. A graphics whore like me can crank up the settings to extreme levels. For example, Spiderman at 5760×3240 via DL-DSR + DLSS2 Quality with all other settings maxed out. Looks absolutely insane. The average base framerate is roughly 45-65 FPS, and 70-90 FPS with frame generation enabled (the game is extremely CPU-heavy with raytracing, though). As long as you don't hit the Vsync limit, the input lag actually feels very good. An FPS cap via Afterburner/RTSS does not work, though, and will lead to higher input lag and stuttering. You have to crank up the settings to stay below your max refresh rate.
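For anyone curious what that combo actually renders at, here is the rough math, assuming a 3840x2160 panel (5760x3240 is the 2.25x DL-DSR factor for 4K) and the commonly cited ~67% per-axis render scale for DLSS Quality:

```python
# Back-of-the-envelope resolution math; the display resolution and scale factors
# are assumptions, not values read out of the game.
native = (3840, 2160)
dldsr_axis_scale = 1.5        # 2.25x DL-DSR = 1.5x per axis
dlss_quality_scale = 2 / 3    # DLSS Quality renders at ~66.7% per axis

dldsr_target = tuple(round(d * dldsr_axis_scale) for d in native)
internal_render = tuple(round(d * dlss_quality_scale) for d in dldsr_target)

print("DL-DSR target:", dldsr_target)            # (5760, 3240)
print("DLSS internal render:", internal_render)  # (3840, 2160), roughly native 4K
```

If those assumptions hold, the card is still rendering at roughly native 4K internally, with the AI upscale and the downsample on top, which would help explain why it looks so clean.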

I also don't see any obvious artifacts (except maybe some flickering of the mission marker), even at such a low FPS baseline of 45-65 FPS.

I respect Hardware Unboxed's opinion, but my summary is way more positive so far. A pretty amazing first step for frame generation, IMO.
 
I really like DLSS3 so far. A graphics whore like me can crank up the settings to extreme levels. For example, Spiderman at 5760×3240 via DL-DSR + DLSS2 Quality with all other settings maxed out. Looks absolutely insane. The average base framerate is roughly 45-65 FPS, and 70-90 FPS with frame generation enabled (the game is extremely CPU-heavy with raytracing, though). As long as you don't hit the Vsync limit, the input lag actually feels very good. An FPS cap via Afterburner/RTSS does not work, though, and will lead to higher input lag and stuttering. You have to crank up the settings to stay below your max refresh rate.

I also don't see any obvious artifacts (except maybe some flickering of the mission marker), even at such a low FPS baseline of 45-65 FPS.

I respect Hardware Unboxed's opinion, but my summary is way more positive so far. A pretty amazing first step for frame generation, IMO.
Single-player, graphics-heavy games are where DLSS 3 plays best. Latency is less of an issue in games of that nature, but graphics and detail will make or break an open world, and noticeable FPS dips are immersion-breaking.
 
Single-player, graphics-heavy games are where DLSS 3 plays best. Latency is less of an issue in games of that nature, but graphics and detail will make or break an open world, and noticeable FPS dips are immersion-breaking.
Definitely going to be some ups and downs as the technology improves. Gotta start somewhere.
 
Makes sense that it can be usable in single-player stuff with eye candy worth bogging the card down for. Interesting take from a real customer.
 
Single-player, graphics-heavy games are where DLSS 3 plays best. Latency is less of an issue in games of that nature, but graphics and detail will make or break an open world, and noticeable FPS dips are immersion-breaking.

Also, while there is a latency increase, 120-130 FPS will be smoother than 60-70 or so. I'm assuming the increase in frame rate is more beneficial than the increase in latency is harmful.

Of course, if you can get 120+ FPS without any type of DLSS or similar technology, that would be best. Use it in games where the frame rate might drop low; these are typically single-player games, and often ones where extreme reaction times aren't necessary. In an online FPS I wouldn't use it, though.
 
Also, while there is a latency increase, 120-130 FPS will be smoother than 60-70 or so. I'm assuming the increase in frame rate is more beneficial than the increase in latency is harmful.

Of course, if you can get 120+ FPS without any type of DLSS or similar technology, that would be best. Use it in games where the frame rate might drop low; these are typically single-player games, and often ones where extreme reaction times aren't necessary. In an online FPS I wouldn't use it, though.
Depending on your internet connection, you may be able to reasonably use it with no “noticeable” impact, but I agree: any fast-paced competitive title would not be well suited to the tech.
 
Depending on your internet connection, you may be able to reasonably use it with no “noticeable” impact, but I agree: any fast-paced competitive title would not be well suited to the tech.
Has anybody actually played a "competitive" game using the tech yet? It's fun to speculate, but I want real world examples.
 