DLSS 3 frame generation - yes or no?

What are your thoughts on frame generation? The debate seems to revolve around input latency.

I suppose this is something that should be examined on a game by game basis: if you’ve maxed out a game at 4K, say, and you’re getting roughly 50 fps with frame generation off, but 70 with it on, then you might want to leave it on.

If, on the other hand, you’re already well above 60 fps then you might decide to leave frame gen off due to increased input latency.

Also, wouldn’t it depend on the nature of the game itself? A slow RPG might benefit from frame generation, whereas a first-person shooter might suffer.

All of this is theoretical, of course - so what are your real-world experiences?

I’m playing Cyberpunk at 4K (max settings with path tracing enabled) on my new 4090 right now, and I have frame gen enabled in order to stay above 60 fps. Alternatively, I might drop the path tracing, and the frame gen, and play at near 100 fps.

Thoughts on this?
 
I like it. I wouldn't use it for some kind of twitchy competition shooter, but I find myself using it in several games that offer it. The visual smoothness is tough to beat, and most of those games also support the Nvidia input reduction tech.
 
The only game I have with DLSS 3 is Portal RTX. I usually dislike the soap opera effect on videos and I'm sensitive to input latency when game streaming on my network, so I thought I'd hate it - but I couldn't really even notice it.

But it's only one title, and an Nvidia-authored showcase one at that, so I'm still withholding judgment.

I also had a bug in Portal RTX where I couldn't adjust settings and had to leave everything as the presets apply them - which meant motion blur was on and could have been masking/messing with my ability to notice anything. Another reason I'm withholding judgment until I experience it in the wild in multiple titles/implementations.

I'd leave it on to keep things prettier @ """60 FPS""" if you can't notice it / it doesn't bother you.
 
With Cyberpunk I decided to switch off path tracing - in which case, frame gen is not necessary with DLSS enabled.

It’s fascinating how my brain works: two years ago I couldn’t come close to playing this game maxed out with ray tracing set to psycho. But now that I can… at almost 100 FPS… I feel like I’m ‘compromising’ because I’m disabling path tracing.

LOL.
 
I'm not against frame gen as a tech, but as noted in the OP, the issue currently is around input latency.
I haven't had a chance to use it first hand, but if it affects input latency in any noticeable way, then I'm basically not for it.

I'm also not expecting the next gen of FSR to change my mind either.
 
When DLSS3 FG first came out, NO. My first experience with frame generation was laggy and stuttery with Gsync and NVCP vsync on my 120Hz ultrawide.

DLSS3 FG has improved since the FG + Gsync + V-sync bug was fixed in the drivers recently, and it makes motion look smoother and latency more manageable, but whether I use it or not is still decided on a game-by-game basis. I turn it on when I find that it works well and is beneficial; on the other hand, I don't use it if it's detrimental to the gaming experience. All subjective on my part.

I have it ON so far for Portal RTX, Witcher 3, FH5 and MSFS 2020. For the last one, I had to enable motion blur to get rid of the night-time screen flickering when FG is ON.
 
The consensus is single player yes, multiplayer no. I'm guessing because of the latency sensitivity of certain types of online gaming?

Question if anyone knows: is DLSS3 a thing in VR? I haven't heard, and am wondering because of things like motion reprojection, asynchronous timewarp, and motion smoothing, and how they would all play together.
 
A few things I try to keep in mind with DLSS and frame amplification tech like frame generation, and even VRR:

The base rates matter. The higher the starting frame rate and resolution that AI upscaling, frame gen, or even VRR has to work from, the better the results will be, including against input lag. So one person's experience might differ from another's depending on their hardware, the game's demands, and what parameters they personally used for the technologies and the game settings.

. . . . . . . . .

- the higher the base frame rate, the smaller the frame times, so the less time it takes to buffer/compare frames and the less time is being inserted between frames as a generated frame. So theoretically there should be less input lag compared to lower base rates.

https://hothardware.com/news/dlss-3-frame-generation-digital-foundry-testing
For starters, there is latency added because the generated frame appears between the two source frames used. This necessarily means the second rendered frame is effectively delayed slightly to maintain consistent frame pacing, though it should not be noticeable in general. In theory, this is taking a rendered frame that might be displayed for 6ms on-screen, but not showing it until the last 3ms of the window while the “free” generated frame is shown for the first 3ms. This may not be ideal for latency-sensitive games like competitive shooters, but could be a welcome trade-off for immersive visuals-first titles.

- I'm not certain exactly how Nvidia's version of the tech currently buffers and compares frames, but just for fun let's take for example:
60fpsHz, which is around 16.7ms per frame; 120fpsHz solid, which is around 8.3ms per frame; 200fpsHz solid, which is around 5ms per frame.
If buffering were theoretically to double that up in order to look ahead and compare to the next frame, then:
adding another 200fpsHz frame to compare would only add +5ms, so up to 10ms between two frames;
adding another 120fpsHz frame to compare would be +8.3ms, so up to ~17ms between two frames;
adding another 60fpsHz frame to compare is another +16.7ms, so up to ~33ms between two frames.

As an end result, those times would all be cut in half by the manufactured inserted frame though, since the overall frame rate (as far as motion articulation and blur reduction go) would be doubled - so I'm not sure how that hashes out exactly. According to the quote above, the added latency comes from the 2nd rendered frame being held so it isn't shown until the second half of the window, but then again, the in-between frame is being forecast "ahead of" the frame you'd otherwise see. The base rate could potentially have a huge effect (10ms is not that bad compared to 33ms).

Comparing the increase instead as half frame times added, as "wait times" (rough sketch just below), that would be:
8.3ms of "wait" generated between 60fpsHz frames
4.1ms of "wait" generated between 120fpsHz frames
2.5ms of "wait" generated between 200fpsHz frames
Also, the smaller the differences between frames, the more accurate the "tween" frame should be. DLSS3/frame amplification tech requires Nvidia Reflex to reduce the latency too, I believe (probably more important because people are often using it in the lower frame rate ranges/graphs). You also have to remember that when people say "their frame rate" they usually mean an average, which varies +/- up to 15 to 30fpsHz from that - most importantly, how it varies lower. So someone with a 90fps average could be getting 60fpsHz at times with greater latency (away from VRR's lowest latency and picking up larger "half frame times" from any frame amplification tech) - which could feel a lot worse overall.



- the higher the starting resolution plus frame rate, the smaller the differences between frames will be to work from. E.g. more unique frames in an animation flip book with a ton more pages, so the difference between two pages won't be as great when manufacturing a "tween" cell between them, as less has changed. Also, the higher the resolution, the more detail there is to upscale from, and the tinier the pixels will be relative to any occasional fringe artifacts, so those will be less noticeable than occasional fringing on larger pixels/PPD grids.

- Nvidia's frame generation tech, unlike VR's form of frame amplification, is just comparing two frames and guessing the vectors. Theoretically, the OS, drivers/peripherals, and game development could mature the tech someday so that some actual vector information (which is how VR does it with spacewarp and timewarp tech) could be passed from the peripherals, in-game entities and in-game physics, in addition to the more uninformed comparison between the two frames. That would get a lot more accurate results and would help the AI not get as confused by orbiting 3rd-person virtual cameras in games, where the scene or character doesn't look like they are moving relative to the camera's orbit, for example. There is also the potential in the future to have varying fpsHz rates for different entities and ranges in a scene (e.g. background vs middle vs foreground) to save processing power and increase the fpsHz of the more important things in the scene (kind of like foveated rendering does for resolution, in a way). There is a lot of room for the tech to mature.



The longer-term picture: AI upscaling a healthy 4k rez to 8k, even supersampled in comparison to "regular" non-upscaled 8k (or in comparison to integer scaling). Also frame amplification from a fairly high base frame rate of 100fps, maybe 200fps, in future more powerful GPU gens at x2 (x3 to x5 in the long run), and hopefully with vector information translated from game physics, game entities/pathing and peripherals in future development, instead of using AI to compare two frames and guess the vectors without being informed of any up front. Not only for 8k, but for the path to much higher Hz overall.

. . . . . .

TLDR:

What I'm trying to say is (especially as we get into the next several GPU gens and, hopefully, the maturation of AI upscaling and frame amplification techs): you'll likely get much better results AI upscaling and frame amplifying from higher performance rates and resolutions than you will trying to get blood from a rock, amplifying from a low ppi/PPD 1080p or 1440p base and VRR frame rate graphs that start at relatively low averages or sink down low (= larger frame times/draws in milliseconds to work from). 4k to 8k, 120~200fps, then frame amplification applied for 240Hz+ and higher screens, should look and perform way better in the long run.
 
If you have the hardware that supports it, I'd suggest giving it a shot. I was pleasantly surprised. In some games (not just slower ones, either), you'd never know there was a downside. Well, outside of menus occasionally being oddly laggy and oddball (but minor) things like that.
 
The consensus is single player yes, multiplayer no. I'm guessing because of the latency sensitivity of certain types of online gaming?

Question if anyone knows: is DLSS3 a thing in VR? I haven't heard, and am wondering because of things like motion reprojection, asynchronous timewarp, and motion smoothing, and how they would all play together.

I think that's the consensus.

In most competitive multiplayer games you already get high framerates. If you're already getting 100+ FPS and close to or already maxing out your refresh rate, you would definitely not want frame generation on.
But if you're struggling to get a decent frame rate and can't lower the settings, I guess you would turn it on just to make the game playable, despite the slight latency hit.

As for VR, yes, spacewarp is already doing something similar by generating frames; spacewarp itself could probably be enhanced with some sort of DLSS AI frame generation tech, or maybe you could simply use DLSS 3 instead and it would be doing the same thing. I'm not sure.
 
Even outside of VR, asynchronous time warp, which is just a better implementation of frame generation, would be a great augmentation of or replacement for DLSS/FSR frame gen techniques, as some proofs of concept have shown. I'm not sure if there's a patent issue with using it, but it solves the latency issue much better than other techniques. In a way this is what is needed to better utilize 240, 480Hz and up OLED capabilities while we wait for GPUs to get faster.
 
Even outside of VR, asynchronous time warp, which is just a better implementation of frame generation,
How true is that?

In VR, the cost of a missed frame is so much higher for the user experience, and the cost of artifacts at the edge of a giant screen is lower. The game engine could render into a buffer significantly larger than the output resolution at the cost of a lower FPS, I imagine. It would be hard to imagine why a Valve game could not use it in the non-VR version because of some patent.
 
Even outside of VR, asynchronous time warp, which is just a better implementation of frame generation, would be a great augmentation of or replacement for DLSS/FSR frame gen techniques, as some proofs of concept have shown. I'm not sure if there's a patent issue with using it, but it solves the latency issue much better than other techniques. In a way this is what is needed to better utilize 240, 480Hz and up OLED capabilities while we wait for GPUs to get faster.

https://www.youtube.com/watch?v=IvqrlgKuowE
 
It works okay in my experience but you will see some ghosting. The other problem is that it seems to not work with frame rate caps. So it can push above the refresh rate and make G Sync not work when frame rates go too high.
 
It works pretty well, particularly at higher frame rates. It isn't perfect, but you really don't notice things looking off too much. Again the higher the source FPS the less you'll notice. However it does have two big downsides:

1) The game doesn't "feel" fast. This is more of an issue at low source FPS, but one of the reasons you want high FPS is how the input feels. A 40fps game feels choppy and unresponsive. DLSS FG may well be able to get that 40fps up to 80... but it'll still feel like 40 even though it looks like 80. So it helps with FPS issues to an extent, but it doesn't just magically fix them. This is probably less of a big deal at something like 120 to 240fps (I don't have a 240Hz monitor to test with) because the game is already going to feel pretty smooth.

2) It doesn't respect fps caps, and there's no way to toggle it on and off. So say you have a 144Hz monitor and a game that runs at about 50-70fps normally. You figure this is a good choice for DLSS FG, it'll get your FPS up nicely... except the game runs at up to 90fps in some areas. In those areas where it goes above 72fps, which is 144 when doubled, you start to get tearing since it won't limit itself to the display's refresh (rough numbers below).
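
Rough numbers for point 2, as a quick sketch (my own toy assumption that FG simply doubles whatever the source frame rate is, with nothing capping it):

```python
# When does doubled FG output overrun the display refresh?
# Toy assumption: output fps = 2x source fps, no cap applied.

REFRESH_HZ = 144

for source_fps in (50, 60, 72, 80, 90):
    output_fps = 2 * source_fps
    status = "tearing risk" if output_fps > REFRESH_HZ else "inside refresh"
    print(f"{source_fps} fps source -> {output_fps} fps output ({status})")
# On a 144 Hz panel, anything above a 72 fps source pushes the doubled
# output past the refresh rate.
```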


Personally, I generally don't use it much, because I basically need a game to consistently run slow, otherwise it tears, and I don't like tearing. So far the only game I've used it in is, like others here, Portal RTX. That runs so slow that it was always under 120fps and thus no tearing on my system.

If you have one of those new 240Hz OLEDs, I'd certainly give it a try. If you have a 120/144Hz monitor but with a GPU that struggles to get 60fps at settings you like, maybe give it a try. However if you have a chonk GPU that can usually get 90+ fps, I wouldn't do it as to me it feels better to have higher native framerates.
 
From what I understand, part of what VR systems do is prediction based on the system being aware of some of the hardware/peripheral vectors - primarily head movement. PC gaming would probably, optimally, need to be reinvented with the OS, peripheral drivers, game engines/development and graphics card AI all working together, transmitting and informing the system of actual vectors: your keyboard+mouse/controller broadcasting vectors, every entity in the game broadcasting vectors. That rather than just guessing what the vectors might logically be, using machine learning applied to looking at a few buffered frames "visually" in a much more uninformed system. VR still guesses/predicts, but the more actual vectors the system is informed of (e.g. headset movement, but maybe hand controllers and potentially eye-tracking too, along with the vectors of all the virtual objects in the game/simulation), the better and more accurate the predicted/interpolated frame(s) would be as a result. A system that is better informed vector-wise might even be able to generate more than one frame accurately between "actual" frames. It would probably also help to avoid some problems Nvidia frame gen can have with 3rd-person orbiting camera games, where it can be fooled into thinking the main character or other things are moving or not moving. Poor generation guesses can result in artifacts.
 
I think that's the consensus.

In most competitive multiplayer games you already get high framerates. If you're already getting 100+ FPS and close to or already maxing out your refresh rate, you would definitely not want frame generation on.
But if you're struggling to get a decent frame rate and can't lower the settings, I guess you would turn it on just to make the game playable, despite the slight latency hit.

From what I read about how game servers work:

The lowest "rubberband" gap you can get on a 128tick valorant server for example - is to exceed that 128tick with your local fpsHZ and get 72ms peek/rubberband as compared to someone at 60fpsHz on that same 128tick server getting 100ms. Your frame rate minimum would have to exceed the 128hz of the server's ticks. (Having a 1000fpsHz capable screen isn't going to change that 72ms). The movement data (for Valorant in the excerpts below since it's a 128 tick , optimized online gaming server system) is buffered at tick granularity, not at your client side frame rate. Each tick of 128 is 7.8ms. Valorant servers buffer 2 frames, the client 3 frames.

There is a lot more to it than that, but my main point in replying to you is that you have to keep over a 128fpsHz minimum in order to have a chance to get every tick on a 128-tick server (rough sketch below).
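
A rough way to picture that, as a sketch (my own simplification; real netcode buffering is messier than this, as the quotes below show). The assumption here is that only rendered frames carry fresh input/state, so the rendered rate, not the frame-gen output rate, is what matters against the tick rate:

```python
# Toy model: roughly what fraction of a server's ticks can be fed a freshly
# rendered client frame. Generated frames are excluded on the assumption
# that they don't carry new input or game state.

TICK_RATE = 128
TICK_MS = 1000.0 / TICK_RATE  # ~7.8 ms per tick

def tick_coverage(rendered_fps: float) -> float:
    """Rough fraction of ticks that can get a newly rendered frame."""
    return min(1.0, rendered_fps / TICK_RATE)

for fps in (60, 100, 128, 144):
    print(f"{fps:>3} rendered fps -> ~{tick_coverage(fps):.0%} of {TICK_RATE} ticks "
          f"(tick interval ~{TICK_MS:.1f} ms)")
# 60 rendered fps covers roughly half the ticks; the rendered-frame minimum
# has to stay above 128 before every tick even has a chance at a fresh frame.
```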

. . . .

Even exceeding 128tick, 128fpsHz as your local frame rate minimum, you can run into delays in that chain which result in delivering less than 128 ticks per second to the server, or receiving less than 128ticks back depending on timing:

"Frames of movement data are buffered at tick-granularity. Moves may arrive mid-frame and need to wait up to a full tick to be queued or processed."

"Processed moves may take an additional frame to render on the client."

. . . . .

That said, frame gen isn't generating a new game state from the online game server; it's throwing a placeholder frame in there, guessing what your local simulation would be. What the server determines is happening can differ from what you see locally, so the generated frame rate isn't going to help you see more delivered ticks in the 128-tick chain from the authority of the server. You would also miss ticks if you were running a 100fpsHz average without frame gen: on a graph that is something like (70) 85 <<< 100 >>> 115 (130), you are going to miss a lot of ticks throughout and end up with somewhat higher ms of peeker's/rubberband than the 72ms in the Valorant example.

You always see yourself ahead of where you actually are on the server, and you always see your opponent behind where they actually are on the server. The server goes back in time using the buffered-frames system in an attempt to grant successful shot timing and other actions like player movement, compared to (what your machine simulates to the server based on) what you saw locally. However, different games' server code uses its own biased design choices to resolve which player/action is successful, usually with regard to whose ping is higher or lower than the other's - it's an interpolated/simulated result. The client also uses predicted frames in the online gaming system.

>"Smooth, predictable movement is essential for players to be able to find and track enemies in combat. The server is the authority on how everyone moves, but we can’t just send your inputs to the server and wait around for it to tell you where you ended up. Instead, your client is always locally predicting the results of your inputs and showing you the likely outcome. "

Online gaming is "fuzzy" to put it mildly, but in order to get the lowest rubberband/peeker's advantage in online games you'd have to exceed the tick rate as your frame rate minimum - without using framegen to hit that minimum. Frame gen just generates filler, it doesn't open new fpsHz rate "slots" for a tick to fill so to speak. At least as I understand it from how the tech is now.
 
That said, frame gen isn't generating a new game state from the online game server; it's throwing a placeholder frame in there, guessing what your local simulation would be. What the server determines is happening can differ from what you see locally, so the generated frame rate isn't going to help you see more delivered ticks in the 128-tick chain from the authority of the server, e.g. if you are running a 100fpsHz average on a graph that is something like (70) 85 <<< 100 >>> 115 (130).
It's literally just using optical flow data to say "objects moved from here to there in these two frames so stuff should be... here-ish for an intermediate frame." It can actually, potentially, increase latency a bit as it needs to have a "previous frame" and "next frame" before it can display its generated frame, which means it doesn't display the "next frame" as soon as it is done.

Also, as I noted, the felt input latency is still at whatever the non-generated framerate is.
 
It's literally just using optical flow data to say "objects moved from here to there in these two frames so stuff should be... here-ish for an intermediate frame."

Yep. Like I said, it's uninformed of actual vectors, whereas VR does the same guessing between frames but is also informed of the headset hardware's movement vector, so it can probably be a little more accurate in its prediction. If everything broadcast vectors (peripherals and in-game entities), a system could be a lot more accurate.

I wanted to outline that online aspect specifically since it was brought up along with "100fps" which I assumed meant 100fps average, which isn't enough to minimize your peeker's-advantage/server rubberband completely on a 128 tick server. That, and the fact that boosting your frame rate with frame gen ~ interpolation to where your fpsHz minimum is above 128fpsHz probably isn't going to allow you to receive every tick, authoritative game world action state, in a 128tick chain from the server because part of your frame rate is "imaginary" so to speak.

. . .

It can actually, potentially, increase latency a bit as it needs to have a "previous frame" and "next frame" before it can display its generated frame, which means it doesn't display the "next frame" as soon as it is done.

Right. Online gaming servers are also buffering a few frames server side, and several client side, and the client does prediction too. It's fuzzy. Local gaming or LAN gaming would be a more focused example.

Also, as I noted, the felt input latency is still at whatever the non-generated framerate is.


Low latency mode in gaming TVs duplicates frames from 60fpsHz (consoles, for example) up to 120fpsHz. That cuts input lag because, even though you are seeing the same frame twice, your input is given double the number of places - 120Hz, every 8.3ms - to be applied, instead of waiting every 16.6ms. Similarly, the higher the fpsHz tested on gaming TVs, the lower the measured input lag.

Maybe frame gen input lag increase + high fpsHz gained input lag decrease cancel each other out, more or less?
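
Toy numbers for that question (my own very hand-wavy model, not a measurement): compare the roughly-half-a-frame delay that frame gen adds against the smaller average wait for a refresh slot that the doubled presentation rate gives you.

```python
# Hand-wavy comparison, my toy model only:
#  - frame gen adds roughly half a rendered frame time of delay
#  - presenting twice as often roughly halves the average wait for the
#    next refresh slot on the display side

def fg_latency_tradeoff(base_fps: float) -> dict:
    frame_ms = 1000.0 / base_fps
    added_by_fg_ms = frame_ms / 2            # extra hold on the real frame
    refresh_wait_before_ms = frame_ms / 2    # avg wait at the base presentation rate
    refresh_wait_after_ms = frame_ms / 4     # avg wait when presenting 2x as often
    return {
        "base_fps": base_fps,
        "added_by_fg_ms": round(added_by_fg_ms, 1),
        "saved_on_display_ms": round(refresh_wait_before_ms - refresh_wait_after_ms, 1),
    }

for fps in (60, 120):
    print(fg_latency_tradeoff(fps))
# At a 60 fps base: ~8.3 ms added vs ~4.2 ms saved in this toy model - they
# don't fully cancel, but the gap shrinks as the base rate goes up.
```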
 
In my recent personal experience, DLSS FG is quite useful for maximizing the headroom of my high refresh rate monitor and overcoming CPU bottlenecks. I'm currently using the DLSS SR 3.7 (Preset E) and FG 3.7 versions of the DLL files in my games, and ghosting is minimal to none.

DLSS FG works fine with G-Sync as long as V-Sync is enabled in the NVCP. Reflex, which is automatically enabled when DLSS FG is on, caps the frame rate a few frames below the max refresh rate. The smoothness of the frame pacing is pretty good with this setup.

I also noticed that as long as the base frame rate is at least 60fps, the latency is generally within acceptable territory. The higher the base frame rate, though, the less noticeable the latency becomes, which is why I usually use DLSS FG in combination with DLSS SR.
 
Head movement vectors in VR, when decoupled from the main game engine frame rendering pipeline, are used to calculate camera and geometry translation and rotation. This is outside of the game engine's physics, AI and networking recalculating the next frame. Frame reconstruction like DLSS 3 is taking the latter into account, while async timewarp is more about the former.
Outside of VR, mouse movement + directional inputs are the equivalent of what repoints the camera instead of headset movement. With 1000Hz+ mouse sampling (keyboards too) it's not inherently missing something magic or less accurate than VR; it's more about developers implementing this into non-VR games, IMO.
 
With 1000Hz+ mouse sampling (keyboards too) it's not inherently missing something magic or less accurate than VR; it's more about developers implementing this into non-VR games, IMO.

It is missing something from a development standpoint, even in PC VR - at least compared to Application Spacewarp on the standalone VR systems that use it rather than the PC VR version. But yeah, there's no reason they couldn't do that in PC games in the future if they wanted to devote the money and energy to doing it.

From an older oculus quest article (2019) :

https://www.reddit.com/r/oculus/comments/ah1bzg/timewarp_spacewarp_reprojection_and_motion/

https://www.uploadvr.com/reprojection-explained/

"Differences Between Application Spacewarp (Quest) and Asynchronous Spacewarp (PC)

While a similar technique has been employed previously on Oculus PC called Asynchronous Spacewarp, Meta Tech Lead Neel Bedekar says that the Quest version (Application Spacewarp) can produce “significantly” better results because applications generate their own highly-accurate motion vectors which inform the creation of synthetic frames. In the Oculus PC version, motion vectors were estimated based on finished frames which makes for less accurate results."

. . . .

They could theoretically develop a system with vector-informed AI frame generation - perhaps informed by vectors of virtual objects (and even forces) in the game simulation plus vectors broadcast by the peripherals - which could be a lot more accurate than an uninformed system comparing buffered frames that have no application-generated vectors involved in creating the inserted frames. Hopefully PC game development gets there someday.
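
As a toy illustration of the difference (my own sketch in Python/NumPy; it has nothing to do with Nvidia's or Meta's actual implementations): warping the last frame with vectors the application already knows, versus having to estimate those vectors by comparing two finished frames first.

```python
# Toy contrast (my sketch, not any vendor's implementation):
#  (a) application-informed: the engine hands us a per-pixel motion vector
#      field, and we just warp the last frame forward with it.
#  (b) uninformed: we only have two finished frames and must estimate the
#      motion first (here with a crude global shift search), then warp.
import numpy as np

def warp(frame: np.ndarray, vectors: np.ndarray) -> np.ndarray:
    """Forward-warp a HxW frame by per-pixel (dy, dx) vectors (nearest-neighbor splat)."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(ys + vectors[..., 0].round().astype(int), 0, h - 1)
    nx = np.clip(xs + vectors[..., 1].round().astype(int), 0, w - 1)
    out[ny, nx] = frame
    return out

def estimate_global_shift(prev: np.ndarray, curr: np.ndarray, search: int = 3) -> tuple:
    """Crude stand-in for optical flow: find the single (dy, dx) shift that best explains curr."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)) - curr).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Tiny example: a bright dot moving one pixel right per frame.
prev = np.zeros((8, 8)); prev[4, 2] = 1.0
curr = np.zeros((8, 8)); curr[4, 3] = 1.0

# (a) informed: the "engine" tells us the dot moves (0, +1) per frame.
informed_vectors = np.zeros((8, 8, 2)); informed_vectors[..., 1] = 1
predicted_informed = warp(curr, informed_vectors)

# (b) uninformed: estimate the motion from the two frames, then extrapolate.
dy, dx = estimate_global_shift(prev, curr)
estimated_vectors = np.zeros((8, 8, 2))
estimated_vectors[..., 0] = dy; estimated_vectors[..., 1] = dx
predicted_uninformed = warp(curr, estimated_vectors)

print("informed prediction puts the dot at", np.argwhere(predicted_informed == 1.0)[0])
print("estimated prediction puts the dot at", np.argwhere(predicted_uninformed == 1.0)[0])
# With a clean signal both agree; the point is that (b) had to spend work
# guessing the vectors and can be fooled in messier scenes, while (a) gets
# them for free from the application.
```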
 
With 1000Hz+ mouse sampling (keyboards too) it's not inherently missing something magic or less accurate than VR,
Maybe it's more the other way around (not missing anything, but the input often being way too much). Can't mouse and keyboard movement be much faster and less predictable than a head? It's possible to do a 180-degree turn between fast frames with a mouse, but not with your whole body, leaving the system needing to make a frame without any info for it - which raises the question of how much bigger than the monitor resolution your frame buffer would need to be. (Is that what they do - the game renders, say, a 1600x1600 image instead of 1200x1200, so if you move your head it has something to show using those extra pixels?)
 