More DLSS...

Can you imagine the confusion if they had kept the same major version number (say, DLSS 2.2 or 2.3) but not made it available to the previous generation of cards?

It seems like a massive no-brainer that something as big as a new in-between frame generator, plus new tracking of elements in the scene that have no motion vectors, gets a new number. That change seems no smaller, if at all smaller, than going from 1 to 2, which was justified by switching from specialised to generalised machine learning.

I would even go so far as to say that if you decide not to make it available on the previous generation of cards, you have almost no choice but to change the name, or at least the number, so that games with the little sticker don't create confusion for the buyer.

I think the reason(s) is(are) a little simpler than all that: there are a lot of Nvidia 30 series GPUs floating around for bottom-of-the-bin prices (if you include secondhand, it's even worse). One of their most purchased (and, from a warranty/longevity standpoint, trusted) AIB partners also just left the game entirely (and we don't know what the others are thinking). Nvidia seems to be scrambling to make this generation seem like a worthy upgrade for gamers, so how are they going to do that with, "Uh, this is DLSS...2.5..ish"? I know it's standard fare to hype up every generation, but it will be a bit interesting to see how Nvidia tries to actually make upgrading seem worthwhile when current-gen cards can be bought for what seems to be lower and lower pricing, and can play basically anything on the gaming market at way more than sufficient framerates and image quality, especially at 1440p ultrawide (or similar) resolutions. It's entertaining to me, actually; I'd enjoy seeing them scramble a bit more.
 
This also begs the question... are the 4080 12GB and 16GB truly faster than the previous gen? We know the 4090 will be faster, as it will just brute-force its way to higher framerates, but the 4080 16GB and 12GB both seem somewhat inferior to the previous-gen 3080 10GB and 12GB versions.
4060 ti is rumored/leaked to match 3080 in raster performance.
 
4060 ti is rumored/leaked to match 3080 in raster performance.
And that would be almost impossible, considering the 4080 12GB is only around 15% faster than a 3080 in non-DLSS 3 games in Nvidia's own slides.
 
It's great as long as it doesn't add input latency. My fear, though, is that it does, and Nvidia will tell competitive gamers to disable DLSS 3.
DLSS nearly always results in a framerate increase, which results in overall lower latency.

And since DLSS 3 has taken steps to lower even CPU-dependent frametimes, I think the latency should only be as good as or better than DLSS 2.X.

There is nothing here making me think it could increase latency, especially since Nvidia has been pretty bullish recently about lower latency and marketing that fact.
 
DLSS nearly always results in a framerate increase, which results in overall lower latency.

And since DLSS 3 has taken steps to lower even CPU-dependent frametimes, I think the latency should only be as good as or better than DLSS 2.X.

There is nothing here making me think it could increase latency, especially since Nvidia has been pretty bullish recently about lower latency and marketing that fact.
Except that in order to do Frame Interpolation, you have to have a beginning and ending frame rendered before you can fill in the gaps. That means you're at a minimum of 1 frame behind real-time, which is not good for competitive gaming. At 60 FPS, that's 16.7ms additional latency at least (obviously competitive games would have much higher framerates).
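
To put rough numbers on that (simple arithmetic, not Nvidia's figures), the added latency from holding rendered frames before display is just the frame time at the rendered rate:

```python
# Back-of-the-envelope sketch: extra latency from buffering rendered frames
# before an interpolated frame can be shown. Values are simple arithmetic,
# not measurements of DLSS 3.

def added_latency_ms(rendered_fps: float, frames_held: int = 1) -> float:
    """Extra latency (ms) from holding `frames_held` rendered frames back."""
    return frames_held * 1000.0 / rendered_fps

for fps in (60, 120, 240):
    print(f"{fps:>3} rendered fps -> ~{added_latency_ms(fps):.1f} ms added by a 1-frame hold")
# 60 fps -> ~16.7 ms, 120 fps -> ~8.3 ms, 240 fps -> ~4.2 ms
```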
 
Except that in order to do Frame Interpolation, you have to have a beginning and ending frame rendered before you can fill in the gaps. That means you're at a minimum of 1 frame behind real-time, which is not good for competitive gaming. At 60 FPS, that's 16.7ms additional latency at least (obviously competitive games would have much higher framerates).
Was about to post the same.
There are other interpolation types, but the best quality needs at least 1 frame rendered ahead.

Forgot to add...
I saw mention that only 1/3rd of the frames are rendered, implying 2 of 3 frames are interpolated.
This would require rendering at least 2 frames ahead, with the associated lag.
 
Except that in order to do Frame Interpolation, you have to have a beginning and ending frame rendered before you can fill in the gaps. That means you're at a minimum of 1 frame behind real-time, which is not good for competitive gaming. At 60 FPS, that's 16.7ms additional latency at least (obviously competitive games would have much higher framerates).
That's not my understanding (in a game, if you already have the next frame, just show it; FPS is high enough that holding it back isn't worth it imo, and it would require zero intelligence to do that type of interpolation as well, that's something a TV can try to do).

From my understanding, they use the game's motion vectors, AI-generated motion vectors for the elements that lack them, and a couple of the previous frames as a way to generate a very plausible frame that sits between the last rendered one and the next one, which isn't known yet.
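
As a toy illustration of that idea (this shows the general concept of motion-compensated frame generation, not Nvidia's actual pipeline; the function name and the nearest-neighbour sampling are my own simplifications):

```python
# Toy sketch of motion-compensated frame generation: warp the last known frame
# halfway along per-pixel motion vectors to fabricate an in-between image.
# Illustrates the general idea only, not Nvidia's actual algorithm.
import numpy as np

def warp_half_step(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """prev_frame: (H, W, 3) image; motion: (H, W, 2) per-pixel (dy, dx) toward the next frame."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward-sample the previous frame half a motion step back
    # (nearest-neighbour lookup, to keep the sketch short).
    src_y = np.clip(np.round(ys - 0.5 * motion[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - 0.5 * motion[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

# A bright column moving 2 px to the right shows up shifted by 1 px in the fake frame.
frame = np.zeros((4, 4, 3)); frame[:, 1] = 1.0
motion = np.zeros((4, 4, 2)); motion[..., 1] = 2.0
print(warp_half_step(frame, motion)[..., 0])
```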

I am not sure if it will add lag, but it seems impossible that it will reduce it; you still need a real new frame from the game to arrive before anything reacts to your input.

------------
Edit: So that was all wrong; they lied about it not adding significant lag (only true if rendering becomes nearly irrelevant in the input lag chain).
 
confirmed by Nvidia's AndyBNV on the ResetEra forums:

"When a DLSS 3 game comes out, and you own a GeForce RTX 30 Series or 20 Series graphics card or laptop, you can use DLSS 2 Super Resolution in that title...So if you own a GeForce RTX 30 or 20 Series, you would get DLSS 2 + Reflex. Even if a game is labeled as a "DLSS 3 Game"

https://www.resetera.com/threads/nv...-will-get-support.634256/page-3#post-93607910
Yes, because DLSS 3 is DLSS 2 with the Frame Generation tech bolted on.

Honestly, DLSS 3 with the Frame Generation might be the most dishonest bullshit tech we have seen in years. Making fake interpolated frames and calling it a frame rate boost is pretty damn high on the bullshit meter.
 
Yes, because DLSS 3 is DLSS 2 with the Frame Generation tech bolted on.
And, at a minimum, new motion vector creation that does not depend on the game engine, which should reduce artifacts on elements that have no motion vectors (like shadows or particles), no? That certainly would not break compatibility with DLSS 2.0, because the game does not change or need to do anything in that regard.

I am not sure how Nvidia Reflex, mentioned in the marketing, got involved versus DLSS 2.0; it could just be there to help the timing of the made-up frames versus the real ones (maybe Reflex is used to make sure a not-yet-finished made-up frame never delays a real frame, for example).
 
Nvidia Engineer Says DLSS 3 on Older RTX GPUs Could Theoretically Happen, Teases RTX I/O News

Nvidia Vice President of Applied Deep Learning Research Bryan Catanzaro left the door open to DLSS 3 potentially becoming compatible with GeForce RTX 2000 and 3000 series in the future, although he stressed it wouldn't yield the same benefits seen with the new graphics cards...

 
Catanzaro also commented on the latency issue potentially introduced by the new DLSS 3 method...Nvidia plans to circumvent the problem by bundling their Reflex latency-reduction technology...

https://twitter.com/more_fps/status/1572299524839968768

Interesting, so I didn't have this take on them getting the motion vectors from games. It sounds like their dlls or injectable codebase can just pull game object data and derive motion vectors from draw call data. Devs won't have to do much work at all to integrate this stuff. The drawback is that it'll only work on their most recent cards. You could probably have another codebase devs could hook up manually to feed motion vectors to the GPU, but that would take a lot of work for bigger games. My guess is the cost of integrating this as opposed to regular old DLSS is 0 though. That's pretty crazy tech. Good on them for advancing image processing technology, just wish the cards were a little more affordable. It's a good gen of tech and cards, and rough pricing.
 
It's not interpolation, it's extrapolation, and will not add any latency, it's displaying frames immediately as they are rendered. It's taking the last two frames and generating a future frame that is displayed before the next frame is even generated.

It's less accurate than interpolation because it's guessing the future, but there is no added latency. But it does not decrease latency either.

If you walk in a straight line it will work very well at predicting the next frame. It cannot predict that you are going to change direction, so there is no reduction in latency; latency is simply the same with it on or off. But it's not going to be accurate when things change direction, so it may not work well for certain things. There are a lot of AI things they're doing to account for that and make it a lot better than a simple extrapolation calculation.

If you had 60 fps of real frames and 120 fps of DLSS 3 frames, they would have the same latency. 120 fps of real frames would have half the latency of 120 fps of DLSS 3 frames.

This is actually very exciting. If it works well it will make a huge difference, and doubling frame rate is just the beginning, they could triple, quadruple, 10x, etc. And dynamically change how many frames are generated so you get a solid 120 fps, or 1000 fps.
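
Here is a minimal sketch of the "guess the future from the last two frames" model described above, reduced to naive per-pixel linear prediction (later posts conclude DLSS 3 actually interpolates, so treat this purely as an illustration of this post's mental model):

```python
# Naive per-pixel linear extrapolation: continue whatever changed between the
# last two real frames. Illustrative only; real optical-flow-based generation
# is far more sophisticated.
import numpy as np

def extrapolate_next(frame_prev: np.ndarray, frame_curr: np.ndarray) -> np.ndarray:
    """Predict the next frame by continuing the prev -> curr change."""
    return np.clip(frame_curr + (frame_curr - frame_prev), 0.0, 1.0)

# Steady change (here, steady brightening) is predicted perfectly...
a = np.full((2, 2, 3), 0.25)
b = np.full((2, 2, 3), 0.50)
print(extrapolate_next(a, b)[0, 0, 0])   # 0.75
# ...but a sudden change of direction between the real frames is invisible to
# the predictor, which is exactly the caveat in the post above.
```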
 
It's not interpolation, it's extrapolation, and will not add any latency, it's displaying frames immediately as they are rendered. It's taking the last two frames and generating a future frame that is displayed before the next frame is even generated.

It's less accurate than interpolation because it's guessing the future, but there is no added latency. But it does not decrease latency either.

If you walk in a straight line it will work very well at predicting the next frame. It cannot predict that you are going to change direction, so there is no reduction in latency; latency is simply the same with it on or off. But it's not going to be accurate when things change direction, so it may not work well for certain things. There are a lot of AI things they're doing to account for that and make it a lot better than a simple extrapolation calculation.

If you had 60 fps of real frames and 120 fps of DLSS 3 frames, they would have the same latency. 120 fps of real frames would have half the latency of 120 fps of DLSS 3 frames.

This is actually very exciting. If it works well it will make a huge difference, and doubling frame rate is just the beginning, they could triple, quadruple, 10x, etc. And dynamically change how many frames are generated so you get a solid 120 fps, or 1000 fps.
Predictive modeling like that can and will cause issues if you extrapolate too far; even one or two frames makes me think there are going to be issues. Inserting frames between two frames in the frame buffer makes more sense. There's no additional latency cost, but there's none gained either; it'll just look far smoother. Is there a paper on their extrapolation? I understood it as an interpolation function using draw call data, and that they were also able to get motion vector data and interpolate frames from there. Engineering isn't magic, but it is cool. Kind of reminds me of working on multiplayer FPS games a bit.
 
Predictive modeling like that can and will cause issues if you extrapolate too far; even one or two frames makes me think there are going to be issues. Inserting frames between two frames in the frame buffer makes more sense. There's no additional latency cost, but there's none gained either; it'll just look far smoother. Is there a paper on their extrapolation? I understood it as an interpolation function using draw call data, and that they were also able to get motion vector data and interpolate frames from there. Engineering isn't magic, but it is cool. Kind of reminds me of working on multiplayer FPS games a bit.

I haven't seen any papers on it or anything; I could only see what they posted on their website. But there has been talk about frame techniques like this for a long time. It is actually very similar to what they do in multiplayer FPS games to deal with lag. The higher the native frame rate you can render, the less noticeable any issues with extrapolating more frames will be, because they'll be so quick. But if you get super high FPS you'll be able to get to a point where you have zero motion blur without having to resort to strobing. It would be huge for VR, where low latency and low blur are extremely important, but strobing sacrifices a ton of brightness.
 
I haven't seen any papers on it or anything; I could only see what they posted on their website. But there has been talk about frame techniques like this for a long time. It is actually very similar to what they do in multiplayer FPS games to deal with lag. The higher the native frame rate you can render, the less noticeable any issues with extrapolating more frames will be, because they'll be so quick. But if you get super high FPS you'll be able to get to a point where you have zero motion blur without having to resort to strobing. It would be huge for VR, where low latency and low blur are extremely important, but strobing sacrifices a ton of brightness.
They already do something like this in VR. Valve calls it "Motion Smoothing" for SteamVR, and it's more generally referred to as "Asynchronous Space Warp".
 
And it's pretty damn good in VR already. I think of DLSS 3 as (hopefully) a new and better iteration of that, which also works for flat screen gaming. The potential is absolutely huge, it could literally be a revolution... IF it works well enough.

We will see, I'm looking forward to independent, in-depth testing on it. I don't mind upgrading to a 4090 or 4080 if it's good enough and it works on more than a couple of games I own. But for now a lot of unknowns so I guess I'm not gonna pre-order (if that's even an option) this gen or bother chasing a card on launch day. For all I know the tech might not be quite ready - and it'd be smarter to skip one gen.

For sure it is exciting and I can't wait to know more about it.
 
Predictive modeling like that can and will cause issues if you extrapolate too far; even one or two frames makes me think there are going to be issues. Inserting frames between two frames in the frame buffer makes more sense. There's no additional latency cost, but there's none gained either; it'll just look far smoother. Is there a paper on their extrapolation? I understood it as an interpolation function using draw call data, and that they were also able to get motion vector data and interpolate frames from there. Engineering isn't magic, but it is cool. Kind of reminds me of working on multiplayer FPS games a bit.
No, there'd be additional latency. You can't just insert the frame; you'd have to present the tweened frame "now", and delay the "now" frame by half-a-frame. So, total additional latency would be half-a-frame plus a couple milliseconds for image generation. Call it an extra 10-20ms.

That said, I still think it's going to be tweening and not predictive generation. VR systems can get away with reprojection because the rate of viewport motion is fairly low, and the content is already basically made of compromises. On PC games with high action and low framerate, the disocclusion artifacts with predictive generation would be hellish.

That's likely why they're coupling this with Nvidia's Reflex thing -- to try to steal back enough input latency to make tweened smoothing a wash on non-esports titles.
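
To put rough numbers on the half-a-frame argument above (the ~3 ms generation cost below is my own placeholder, not a measured figure):

```python
# Sketch of the half-frame-delay argument: to slot a tweened frame between real
# frames N and N+1, the real frame is pushed back roughly half a frame time,
# plus however long the generated image takes to produce. Numbers illustrative.

def tween_delay_ms(rendered_fps: float, generation_cost_ms: float = 3.0) -> float:
    half_frame_ms = 0.5 * (1000.0 / rendered_fps)
    return half_frame_ms + generation_cost_ms

for fps in (30, 60, 120):
    print(f"{fps:>3} rendered fps -> ~{tween_delay_ms(fps):.1f} ms extra (half frame + ~3 ms gen)")
# 30 fps -> ~19.7 ms, 60 fps -> ~11.3 ms, 120 fps -> ~7.2 ms
```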
 
And it's pretty damn good in VR already. I think of DLSS 3 as (hopefully) a new and better iteration of that, which also works for flat screen gaming. The potential is absolutely huge, it could literally be a revolution... IF it works well enough.

We will see, I'm looking forward to independent, in-depth testing on it. I don't mind upgrading to a 4090 or 4080 if it's good enough and it works on more than a couple of games I own. But for now a lot of unknowns so I guess I'm not gonna pre-order (if that's even an option) this gen or bother chasing a card on launch day. For all I know the tech might not be quite ready - and it'd be smarter to skip one gen.

For sure it is exciting and I can't wait to know more about it.
But there is a notable difference. We accept the artifacting in VR because, without this feature, the likelihood of physically puking from low frame rate drops makes it acceptable, and that's why the feature is there. Whereas with traditional gaming... it's a visual compromise for speed on top of the already present compromise that DLSS 2.x can be to start with.

So I really, really hope that with DLSS 3 you can turn this off if you don't want it... and yet I'm excited to see what it looks like. Fun times :D

*edited, think I was too harsh in the initial post*
 
During a behind closed doors presentation with the press, Nvidia shared the first concrete performance figures for the GeForce RTX 4090 GPU...

[Image: DLSS 3 performance chart]
 
I think the potential is really huge (think how much we already do for sound versus images).

In the future, AI will maybe predict where it is important to have quality and when frame rate is important versus not. We only see in "high resolution, high color" for a very small portion of our field of view (3-4 degrees), so rendering everything at 8K 120Hz all the time on a giant TV-sized monitor (and on 2 screens at the same time in VR) would be a giant amount of waste. Apple will make giant steps here, and NVIDIA one day as well.
 
If DLSS 3 has insane artifacting or whatever, it's going to blow up in their face. At least the 4090 has a good base performance uplift over the 3090 Ti.

I really doubt it's going to look like hot garbage; I expect it's going to look decent. Because you're right, if this didn't at least look OK they likely wouldn't do it. That's why I'm kinda excited to see how it looks.
 
I really doubt it's going to look like hot garbage; I expect it's going to look decent. Because you're right, if this didn't at least look OK they likely wouldn't do it. That's why I'm kinda excited to see how it looks.
Without a doubt I'm curious, but I find it hard to believe there won't be drawbacks compared to native frame rates. I think it's a great idea and I hope it works well. I think we're all on the same page: it's a great generation of GPUs, but the pricing just stinks for the 80 series.
 
So given that we are onto the 4000 series now, I guess that NVidia completely skipped the lower power options (eg. 6-pin power connector video cards) for midrange users in their 3000 series? No 6-pin video cards at all with RT capabilities, I guess I will be holding my breath for a 6-pin RTX 4050 but in all honesty, given the trend, I think that it is probably more likely that it will be 12-pin than 6-pin. I was planning to eventually grab a 6-pin RTX card as an upgrade path for my current PC once it becomes cheap in a few years, but I guess it's possible that NVidia will force anybody who wants a 100W TDP card to stay with GTX indefinitely.
 
But there is a notable difference. We accept the artifacting in VR because, without this feature, the likelihood of physically puking from low frame rate drops makes it acceptable, and that's why the feature is there. Whereas with traditional gaming... it's a visual compromise for speed on top of the already present compromise that DLSS 2.x can be to start with.

So I really, really hope that with DLSS 3 you can turn this off if you don't want it... and yet I'm excited to see what it looks like. Fun times :D

*edited, think I was too harsh in the initial post*
Well sure but I'm pretty certain we are WAY more sensitive to any glitches and visual flaws when using VR vs flat screen since we are so close to the screens and literally see individual pixels. And in VR there is virtually no motion blur even with LCD panels due to strobing and other clever things - so really any flaws become that much more noticeable and distracting even to the average joe.

ASW/motion smoothing etc. are "lightweight" solutions that work even on low end hardware so they have serious constraints compared to DLSS 3 on a dedicated last gen GPU. And yet they are still pretty solid. So I'm cautiously optimistic about DLSS 3. But of course it's a company out to make money so I have no idea how much time and effort they truly put into it.
 
Good analysis, but we will have to wait and see what Nvidia is selling with real break-downs. Can't wait!
As far as that goes, I have a sinking feeling, since Nvidia only charted their new "tech"! DLSS 3 only, so the raw uplift is not great?
Reminds me of "Turding", where it was embarrassing next to the previous gen but had undeveloped/useless RT as THE selling point.
Sold like shit and allowed AMD back in the game, so to speak. Back to Fermi it is, then. Hope the second round has a champion lightweight, but I'm not putting $$ on it this time.
 
According to DF (we'll see if they are right), DLSS 3.0 would use the actual next frame and do interpolation:


If true, all the talk of little latency added would be a play on words, in the sense that DLSS 3.0 with Reflex adds not much latency versus playing the game without Reflex, rather than comparing against frame generation off with Reflex on.
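
To make the "which comparison?" point concrete, here is a toy illustration with entirely made-up numbers (none of these are measurements of any game):

```python
# Toy numbers only, to show why "adds no latency" depends on the baseline.
# None of these values are measurements.
no_reflex_no_fg_ms = 60.0   # hypothetical end-to-end latency: Reflex off, no frame gen
reflex_no_fg_ms    = 35.0   # hypothetical latency: Reflex on, no frame gen
reflex_with_fg_ms  = 55.0   # hypothetical latency: Reflex on + frame generation

print(reflex_with_fg_ms - no_reflex_no_fg_ms)  # -5.0 -> "no added latency" vs the Reflex-off baseline
print(reflex_with_fg_ms - reflex_no_fg_ms)     # 20.0 -> clearly worse vs the Reflex-on baseline
```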
 
So it is interpolation. It's actually _really_ good and outperforms every product on the market. And yes, there's still some artifacts.


Latency is basically flat to slightly behind not using frame generation.
 
So it is interpolation. It's actually _really_ good and outperforms every product on the market. And yes, there's still some artifacts.
Latency is basically flat to slightly behind not using frame generation.
Those 2 things seem impossible to me, and make me wonder about 1 of 2 things:
1) They are simply wrong about how it works
2) There is a marketing campaign that says (nope):
frame interpolation + Reflex has about the same latency as not using frame interpolation
which could be true, but that is still different from saying
frame interpolation + Reflex has about twice the latency of not using frame interpolation with Reflex on

Or there is some voodoo magic going on that my brain does not comprehend, and it makes the light-spot artifact we saw in the Spider-Man demo strange: why would interpolating between 2 existing frames that don't show it make the AI frame pop one up?

In Cyberpunk the latency is indeed almost doubled, i.e. Reflex only gets you to ~0 added latency if the frame rate is really high and your baseline latency is already much higher than the amount added by rendering. The "no added latency" claim is, as expected, a marketing lie.

One interesting thing is how much a 12900K seems to limit performance at 4K in some games; that will need a redo with a 13900K/7900X.
 
Seems really cool. I am a little concerned with the latency issue, but we'll see what happens once these things are independently tested. My guess is that will be the biggest hurdle for them to clear. Well, not that concerned; there's no way I can afford a 4090 right now haha. Seems like great tech though, I'm sure it will only get better.
 
The higher the frame rate gets, the better the tech will be, I imagine: latency rapidly stops being an issue at high frame rates, and the smaller the change between frames, the easier it is to create the made-up ones. For people wanting to reach 240 fps on the next generation of TVs, or 500-1000 Hz on monitors, it could become a tool with very little downside.

For the "this game would only run at 30 fps without it" scenario, things become less obvious. If latency isn't an issue at all, like in a flight simulator or a Detroit: Become Human type of game, it could be nice; otherwise it could be a better-but-still-SLI-like scenario: nice average numbers for a not-that-much-better experience.
 
I don't mind a small bit of latency increase if it wasn't good enough before to get fluid motion. But if a game can be done at 4K 120+ without DLSS 3, then there's no point for me.
 
DLSS 3 is probably helpful in scenarios where it can boost 60 fps to 120 fps.
Yea, now my question is: will there be a toggle for DLSS 2 vs 3 in games? DLSS 2 for multiplayer modes and DLSS 3 for single-player maxed-out settings at 4K.
 
DLSS 3 is probably helpful in scenarios where it can boost 60 fps to 120 fps.

...but why? I mean, yes, 120fps is better than 60fps, but half the reason is the responsiveness offered by 120fps. With frame generation you're not getting that; you're just getting the visual part.

I'm curious though. It's going to be really interesting. Because for the right type of games, it could be noice :)

Yea, now my question is: will there be a toggle for DLSS 2 vs 3 in games? DLSS 2 for multiplayer modes and DLSS 3 for single-player maxed-out settings at 4K.

Yes. It's looking like Frame Generation will be a separate toggle.
 
But there is a notable difference. We accept the artifacting in VR because, without this feature, the likelihood of physically puking from low frame rate drops makes it acceptable, and that's why the feature is there. Whereas with traditional gaming... it's a visual compromise for speed on top of the already present compromise that DLSS 2.x can be to start with.

So I really, really hope that with DLSS 3 you can turn this off if you don't want it... and yet I'm excited to see what it looks like. Fun times :D

*edited, think I was too harsh in the initial post*
This!
Fallout VR with the FSR mod comes to mind, and No Man's Sky with DLSS. Upscaling is a godsend in those titles, and if DLSS 3 works well (and is well supported) in VR titles, it will be gold for those of us who use VR!
 