RUMOR: AMD FSR3 may generate multiple frames per original frame, be enabled driverside.

I must be missing something here. Does AMD really need 4 interpolated frames for every single frame to match Nvidia or something? Why interpolate that much? And won't this make the AMD faithful depressed to know their lord and savior is producing more fake frames than the competition?
 
That would probably be the maximum supported?

Maybe a better question is why not, if the latency change between 2-3-4 is quite low (they all have to wait for only the one next frame) and the algorithm is very simple.

That said, it would probably only make sense for the very high refresh rate monitors, the 240-360Hz-and-up type. Very high fps support via (54Gbps?) DisplayPort was part of their previous launch marketing (480Hz 4K support was on a slide, 900Hz at 1440p).
 
Dunno, but if they do it driver-side they break the "FSR works on everything!!!!" argument those people love to exclaim, ZeroBarrier. As if anyone with DLSS hardware would care. I also imagine it will look worse than DLSS3, just as FSR2 does compared to DLSS2.
 
https://wccftech.com/amd-fsr-3-might-generate-up-to-4-interpolated-frames-be-enabled-on-driver-side/

Still no release date despite being promised for the first half of 2023 back when they announced their 7000 series, and we're almost to June.

They would have until the end of June before you could say they broke their promise. However, I barely care at all about their version of DLSS3; I'd rather not degrade my image quality for a few more frames, especially when my 6900XT has zero issues rendering my games at 1440p.
 
I must be missing something here. Does AMD really need 4 interpolated frames for every single frame to match Nvidia or something? Why interpolate that much? And won't this make the AMD faithful depressed to know their lord and savior is producing more fake frames than the competition?
Maybe you read too fast? :)

It doesn't say AMD needs 4 interpolated frames, but that it's not limited to 1 like DLSS 3:

The code states that AMD's FSR 3 will have a Frame Generation Ratio of up to 4 interpolated frames, which means for each real frame, the technology can generate up to 4 interpolated frames. NVIDIA DLSS 3 also has frame generation technology, which will generate one interpolated frame for every single frame when enabled. However, NVIDIA's DLSS 3 doesn't allow users to modify the frame gen value to anything beyond 1, nor is it part of the suite yet.
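For illustration, the ratio is just a multiplier on presented frames; a rough sketch of the arithmetic (even pacing is my own assumption, nothing AMD has confirmed):

Code:
def presented_fps(base_fps, ratio):
    # ratio = number of interpolated frames per real frame,
    # so the presented rate is base * (ratio + 1)
    return base_fps * (ratio + 1)

for n in range(1, 5):
    print(f"60 fps base, ratio {n}: {presented_fps(60, n):.0f} fps presented")
# ratio 1 -> 120 fps (DLSS 3 style), ratio 4 -> 300 fps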
 
Moving beyond one interpolated frame seems like a poor idea, even if possible.
 
Moving beyond one interpolated frame seems like a poor idea, even if possible.
I am curious why. Is it that you do not want to lose actual frames because you were busy buffering the interpolated ones?

One obviously needs to try it; it will be a 100% feel-it-for-yourself experience that you can't judge from YouTube. But say the non-linear interpolation algorithm takes 3ms instead of 1ms to generate 4 extra frames instead of one, and it runs well in parallel with other workloads: with so many more CPU-bound scenarios popping up and monitor/connector Hz going up, it could be a lot of frame smoothing for very little negative.

My brain does not understand the timeline/render-queue pacing of the generated frames well enough to see a significant difference between 1 or 2 extra frames, or how it could be a bad idea. Is it just a gut feeling?
 
I am curious why. Is it that you do not want to lose actual frames because you were busy buffering the interpolated ones?

One obviously needs to try it; it will be a 100% feel-it-for-yourself experience that you can't judge from YouTube. But say the non-linear interpolation algorithm takes 3ms instead of 1ms to generate 4 extra frames instead of one, and it runs well in parallel with other workloads: with so many more CPU-bound scenarios popping up and monitor/connector Hz going up, it could be a lot of frame smoothing for very little negative.

My brain does not understand the timeline/render-queue pacing of the generated frames well enough to see a significant difference between 1 or 2 extra frames, or how it could be a bad idea. Is it just a gut feeling?
100% gut feeling on this, happy to be proven wrong.

It seems like in all the textbook examples of wanting max FPS above all else (the cases where you'd add even more interpolated frames), you want that because it confers some advantage in play. DLSS 3 is already criticised wrt input latency. Adding more interpolation seems like chasing a number for a number's sake and might actually work counter to the original goal - more responsive play.

And maybe this can be dialed in per title, for that title's needs - probably a good thing.

Interesting topic.
 
might actually work counter to the original goal - more responsive play.
Frame generation, as of now, does not have more responsive play as a goal, I think.

There are 3 variables of a subjectively pleasant gaming experience being balanced (the priority between them will change depending on the player and the game): smoothness (what frame generation tries to address), responsiveness (what frame generation will always hurt as long as it uses the not-yet-rendered future frame to interpolate), and image quality.

The goal is never the most responsive play possible (e-sports players get close, playing in a very small square in the middle of a small monitor with very low detail and no textures to help contrast, etc.). I mean the goal of the devs and hardware makers, or it would be arcade-style all-hardware, no-software games.

Smoothness and image quality are always part of the equation; generating frames gets you either more smoothness at the same image quality, or higher image quality at the same smoothness.

A lot of the added latency (waiting for the next frame) will be there whether you generate 2 extra frames or just one, so maybe the cost of the extra frames past the first one is really small. Especially if a lot of the non-linear work is already done and you just change the weights for each in-between frame.
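To make the "just change the weights" idea concrete, here is a minimal sketch assuming plain evenly spaced blends between real frames A and B (real algorithms are obviously fancier):

Code:
def blend_weights(n_interpolated):
    # N in-betweens sit at t = k / (N + 1); the wait for frame B is the
    # same whether N is 1 or 4, only the generation work grows.
    return [k / (n_interpolated + 1) for k in range(1, n_interpolated + 1)]

print(blend_weights(1))  # [0.5]
print(blend_weights(4))  # [0.2, 0.4, 0.6, 0.8]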
 
Frame generation, as of now, does not have more responsive play as a goal, I think.

There are 3 variables of a subjectively pleasant gaming experience being balanced (the priority between them will change depending on the player and the game): smoothness (what frame generation tries to address), responsiveness (what frame generation will always hurt as long as it uses the not-yet-rendered future frame to interpolate), and image quality.

The goal is never the most responsive play possible (e-sports players get close, playing in a very small square in the middle of a small monitor with very low detail and no textures to help contrast, etc.). I mean the goal of the devs and hardware makers, or it would be arcade-style all-hardware, no-software games.

Smoothness and image quality are always part of the equation; generating frames gets you either more smoothness at the same image quality, or higher image quality at the same smoothness.

A lot of the added latency (waiting for the next frame) will be there whether you generate 2 extra frames or just one, so maybe the cost of the extra frames past the first one is really small. Especially if a lot of the non-linear work is already done and you just change the weights for each in-between frame.
Let me rephrase. I think there are incredibly diminishing returns on more interpolated frames, for any conceivable use case.
 
Let me rephrase. I think there are incredibly diminishing returns on more interpolated frames, for any conceivable use case.
It remains to be seen, but it's very interesting at least. :)

One use case that I can think of is fixed hardware like consoles. FSR made it possible for Nintendo to create the new Zelda game with improved visuals. They could push and tweak it more, since it's fixed hardware.

FSR 3 might make it possible to create better visuals on other consoles, and having the option to interpolate more frames might let developers tweak and push it even further in some games - especially if the latency cost of the second frame is small.

Same goes for the Steam Deck, which already has global FSR. It could extend the life of the console.
 
Just wait for AI to predict what you will do then all frames are extrapolated.. intropolated(??) And you're basically directed where to move and shoot!!

That said, I'm still confused about how it all works. Like, I get point A and point B and it squeezes one in between, but if you already did B then aren't you already beyond the inserted frame? And if it's guessing what you'll do to get to B, isn't that a real frame it could render instead of a made-up one? Are we going to be at a point of "real fps" vs "guessed fps"?
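On the "aren't you already beyond the inserted frame" part: current frame gen interpolates, so the newer real frame has to exist (and get held back) before the in-between can be shown. A rough timeline, with purely illustrative numbers:

Code:
# 60 fps real frames, one interpolated frame in between (2x output).
frame_time_ms = 1000 / 60              # ~16.7 ms between real frames A and B
b_done = frame_time_ms                 # B finishes rendering here
show_interpolated = b_done             # A/B blend can't be shown before B exists
show_b = b_done + frame_time_ms / 2    # B is held back one presented interval (~8.3 ms)
print(f"A at 0 ms, A/B blend at ~{show_interpolated:.1f} ms, B at ~{show_b:.1f} ms")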
 
FSR 3 will likely be fine, and I think I understand why they are doing it driver-side.
I believe AMD intends this to be used on TVs; most TVs do not do adaptive refresh rates, and most newer games do very weird things when you enable VSync.

AMD is pretty clear that this only works in situations where you can maintain 60fps then frame generation adds from there.

So assuming you set some kind of adaptive frame quality with 60 or 61fps as your target and you enable multiple frame generation, you can natively match the refresh rate of the TV without needing adaptive sync or VSync, landing on the 120-240Hz rates of most TVs out there just by moving a slider.
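Roughly, the slider would just be picking whatever multiplier lands on the panel's refresh; my own sketch of that arithmetic, not anything AMD has shown:

Code:
def ratio_for_refresh(base_fps, refresh_hz, max_ratio=4):
    # smallest number of generated frames per real frame such that
    # base_fps * (ratio + 1) hits the TV's fixed refresh exactly
    for ratio in range(0, max_ratio + 1):
        if base_fps * (ratio + 1) == refresh_hz:
            return ratio
    return None  # no exact match; fall back to VRR / VSync

print(ratio_for_refresh(60, 120))  # 1
print(ratio_for_refresh(60, 240))  # 3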

That's how I see it as a practical thing, and it could be a pretty big deal for consoles going forward, but less so for more modern PCs. A console could detect the native refresh rate, adjust the render quality to what it needs to maintain 60fps, and then automatically adjust the frame generation to match. It might get added to the existing lineup retroactively, but I can see it being a feature of the rumored upcoming mid-cycle refresh.
 
AMD is pretty clear that this only works in situations where you can maintain 60fps then frame generation adds from there.
This is a very good point. Even with added latency, it will be less of an issue if it doesn't fluctuate and is predictable for the end user.

I can see this also being used on desktop for high refresh screens. When LCDs started to become mainstream, 2-3 frames of latency were common and some even had more. People enjoyed gaming on those. It's not that people didn't know better, since many came from CRTs, which have much lower latency.

Latency isn't a game breaker in every case. Witcher 3, which is still a popular game, could easily take added latency if it meant more fluid animation for mainstream users, IMHO. There is a cutoff point where it becomes too obvious, but some games are less sensitive to added latency.

It remains to be seen if it's good or not, but it's not without potential at least. :)

DLSS 3, and perhaps the upcoming FSR 3 too, are not magic bullets for every game, but they can make a big difference in some use cases. Same goes for DLSS 2 and FSR 2.2. Not something you want to use everywhere, but injecting FSR into Fallout VR made a huge impact for me when I played it with a 2080 Ti, especially since that game is so poorly optimized for VR to begin with.
 
As an addition to my comment above about FSR3 being a console-focused feature, most consoles are connected via wifi, using a wireless controller, on a TV with god knows what for actual timings.
The latency induced by the frame generation, in this case, would be insignificant compared to the other factors in that sort of setup, and not really significant for online gaming against other console players.

Given that AMD makes far more from their console sales than desktop GPU sales, I figure any and all features they introduce while this continues to be true are intended first for the console, and any trickle-down to the desktop market is a happy side effect.
 
As an addition to my comment above about FSR3 being a console-focused feature, most consoles are connected via wifi, using a wireless controller, on a TV with god knows what for actual timings.
The latency induced by the frame generation, in this case, would be insignificant compared to the other factors in that sort of setup, and not really significant for online gaming against other console players.

Given that AMD makes far more from their console sales than desktop GPU sales, I figure any and all features they introduce while this continues to be true are intended first for the console, and any trickle-down to the desktop market is a happy side effect.
Well if that's true wouldn't we see a lot more use of FSR2 in console games?
 
Well if that's true wouldn't we see a lot more use of FSR2 in console games?
We already do; just about every PS5 game allows you to choose between game modes - Performance, Game Default, and Native - where many Game Default options will dynamically upscale from a source between 768p and 1080p to the target TV resolution to maintain frame rates.
 
Latency isn't a game breaker in every case. Witcher 3, which is still a popular game, could easily take added latency if it meant more fluid animation for mainstream users, IMHO. There is a cutoff point where it becomes too obvious, but some games are less sensitive to added latency.
Exactly. And consider more extreme examples of "not caring about latency" - something like Baldur's Gate 3. A turn based RPG, but one with good visuals.

I love the expanding tool-chest, especially if they expose controls for the user to dial-in their preference. If I feel something isn't right - let me change it, and I'm good.
 
I must be missing something here. Does AMD really need 4 interpolated frames for every single frame to match Nvidia or something? Why interpolate that much? And won't this make the AMD faithful depressed to know their lord and savior is producing more fake frames than the competition?
They have never said it would provide any more than a max of 2x better performance, so no, it's not that.
My guess would be that instead of using hardware only available on the current gen, they may interpolate 4 frames, then use a simple mathematical A-B test to toss the 3 frames that look least correct and display the one interpolated frame that looks right. This may allow them to solve for X on all AMD cards rather than just the latest. It seems to me like a logical way to achieve equal or better visuals without a ton of overhead.
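If that guess were right, the selection step could be as dumb as scoring each candidate against some cheap reference and keeping the winner. Purely hypothetical sketch; the metric here is mine, not anything AMD has described:

Code:
import numpy as np

def pick_best(candidates, frame_a, frame_b):
    # Hypothetical: score each candidate in-between frame against a cheap
    # reference (here just the average of the two real frames) and keep
    # the one with the lowest mean absolute error.
    reference = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    errors = [np.abs(c.astype(np.float32) - reference).mean() for c in candidates]
    return candidates[int(np.argmin(errors))]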

We'll see though. For a tech AMD claims to have been working on prior to the 7900s... this is taking a while to see the light of day.
 
They have never said it would provide any more than a max of 2x better performance, so no, it's not that.
My guess would be that instead of using hardware only available on the current gen, they may interpolate 4 frames, then use a simple mathematical A-B test to toss the 3 frames that look least correct and display the one interpolated frame that looks right. This may allow them to solve for X on all AMD cards rather than just the latest. It seems to me like a logical way to achieve equal or better visuals without a ton of overhead.

We'll see though. For a tech AMD claims to have been working on prior to the 7900s... this is taking a while to see the light of day.
I expect that AMD is applying the work done by various TV manufacturers on their TruMotion/MotionSmoothing/MotionFlow.... and tweaking it from there.
If a TV can do it on a bit of ARM silicon running at 15W, I expect that AMD can find a way to create their version that can manage it at 65W.
 
I must be missing something here. Does AMD really need 4 interpolated frames for every single frame to match Nvidia or something? Why interpolate that much? And won't this make the AMD faithful depressed to know their lord and savior is producing more fake frames than the competition?
Probably to have more than nVidia. nVidia does X, AMD does 4*X! That kind of thing.

That said, if it can be made to work it isn't a bad thing. There's no reason not to want more generated frames to allow for higher FPS. There are 500Hz monitors out there, and hopefully it'll just keep increasing. If we can generate frames and make it look good, that's just a win. Say you have a 240Hz monitor, which isn't super uncommon these days, even OLEDs are doing that, but a game can only render 60fps. If you can get a 3:1 frame generation (3 generated per 1 real) then you could get that silky smooth 240Hz output.

Now, it has to look good for that to be something we want. If it looks crappy then it sucks, who cares about numbers. But if it can be made to work this is probably the path forward to higher frame rates. It just doesn't seem likely that we are going to have graphics cards that can do all the fancy visuals AND high resolutions AND high framerates. They are already massive chonkers and they can't, and it isn't like fabrication nodes are progressing that fast. So if we want it, we are going to have to get it via tricks like upsampling and frame generation. So long as they look good, there's no reason they aren't a good solution. I don't care what rez and fps my game actually renders at if the output looks great and smooth. If they discover some magic process to render at 720p 15fps and make it look as good as native 4k 240fps, I'm completely ok with that. It is about the experience, not what makes it happen.
 
For single player games where everything revolves around the only human (you), that's fine.

For MP games, it's harder to implement trickery.
 
For single player games where everything revolves around the only human (you), that's fine.

For MP games, it's harder to implement trickery.
Yes and no; after a point your internet connection will have more of an impact than the frame gen. It's not uncommon with DLSS3 frame gen to have local latency in the low teens, where anything online is going to be multiple times that at least. As long as you are keeping something playable as your minimum baseline (60fps), the actual impact is relatively minor. But if you are bumping up from 30fps then yeah, you are in for a bad time.

Frame generation is just the opposite side of adaptive refresh rate monitors: why slow the screen down when you can just add frames? And why only do one when you can do both?
 
I'm still waiting for AMD and Nvidia to deliver selective frame gen. Why have the GPU bother with constantly rendering things like sky, mountains, far-off buildings, wildlife and such? Have the game work on rendering the immediate surroundings, then have the periphery rendered at half or quarter speed with frame generation making up the difference. That way you are lightening the load generated by otherwise static elements.
 
I'm still waiting for AMD and Nvidia to deliver selective frame gen. Why have the GPU bother with constantly rendering things like sky, mountains, far-off buildings, wildlife and such? Have the game work on rendering the immediate surroundings, then have the periphery rendered at half or quarter speed with frame generation making up the difference. That way you are lightening the load generated by otherwise static elements.
(Good) games have been doing this and similar techniques for a very long time. Sometimes done well, sometimes not (obvious pop-in).

It's not an IHV issue, it's game design.
 
I'm still waiting for AMD and Nvidia to deliver selective frame gen. Why have the GPU bother with constantly rendering things like sky, mountains, far-off buildings, wildlife and such? Have the game work on rendering the immediate surroundings, then have the periphery rendered at half or quarter speed with frame generation making up the difference. That way you are lightening the load generated by otherwise static elements.
Isn't that effectively what motion blur does?
 
AMD should add this as an option within FSR3 capabilities.

1000Hz displays aren't too far off -- they will possibly be out (very approximately) by the time of the next GPU generation. 10:1 frame gen, 100fps -> 1000fps, in a lagless manner.

I welcome AMD and NVIDIA to race each other to this. Even this will benefit today's 240Hz+ displays too.

(Even Intel can jump in this pool too, the water's fine!)



Since FSR / DLSS / XeSS all require game engine integration, this is an opportunity to reduce latency of frame gen by incorporating between-frame input reads. Design the API properly for that!
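Something as simple as handing the frame-gen pass a fresh input/camera sample per generated frame would do it. Hypothetical API shape, names made up purely to show the idea:

Code:
def present_generated_frames(rendered_frame, depth, render_pose,
                             n_generated, sample_input, reproject, present):
    # Instead of blending two stale frames, each generated frame re-samples
    # input and reprojects the latest rendered frame to the new camera pose.
    for _ in range(n_generated):
        current_pose = sample_input()      # fresh mouse/controller read
        present(reproject(rendered_frame, depth, render_pose, current_pose))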
 
I hope they'll listen, Chief :D
That's why William (new writer) and I co-wrote an article.
There are hundreds of AMD & NVIDIA employees who are fans/followers, so hopefully I (and other advocates like LTT helping prod things along) can help tip a few dominoes.

The question is now probably simply "when?" (which GPU generation).

I'm actually very tired of the frame rate incrementalism -- the worthlessness (to most outside esports) of small refresh rate differences such as 240-vs-360 as an example. 240Hz vs 360Hz is only a 1.1x motion blur difference due to slow GtG and high-frequency jitter adding extra motion blur (like fast-vibrating music string).

In this diminishing curve of returns, we need dramatic geometric upgrades in frame rates (60 -> 144 -> 360 -> 1000) since geometrics + "0ms GtG" are more gigantically visible to casual gamers, far beyond just esports. 240-vs-1000 is more visible to grandma than 144-vs-240, it's all about geometric multiples (as long as GtG=0) and fast motion content. We need to VHS-vs-8K this bleep, not 720p-vs-1080p, but in the temporal direction (framerate=Hz).

Adding reprojection to FSR/XeSS/DLSS/whatever/etc will be able to unlock massively more rapid frame rate progress, to allow us to replace eye-straining (to many) PWM-based strobe backlights, with ergonomic strobeless framerate-based motion blur reduction. But doubling frame rate and Hz only halves display motion blur, and only if GtG=0.
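The "doubling only halves blur" bit is just sample-and-hold persistence math (assuming GtG is effectively zero so persistence dominates):

Code:
# Sample-and-hold persistence: each frame is held for 1000/Hz milliseconds,
# which the eye tracks across as motion blur when GtG ~ 0.
for hz in (60, 144, 360, 1000):
    print(f"{hz:>4} Hz -> {1000 / hz:.2f} ms of persistence blur per frame")
# 60 Hz -> 16.67 ms, 1000 Hz -> 1.00 ms: geometric jumps, not increments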
 
This will probably be a required thing in the PS6, just so they can advertise all games at 60fps. Maybe even the PS5 Pro.
 
AMD's documentation on frame generation says it only functions when the base render rate is at or exceeds 60fps.
That's probably very fair, since frame generation artifacts are really nasty for sub-60fps material. Most frame generation artifacts majorly diminish (depending on the quality of algorithm) when frame rates are above the flicker fusion threshold.

Certain kinds of "distort-undistort artifacts" like parallax reveals that flicker at 30Hz (from converting 30fps to 60fps) are rather nasty -- so sometimes it's best to enforce a minimum frame rate for framegen. This hides a TON of parallax-reveal glitches in any framegen tech, including in my tests of the 1000fps-capable reprojection demo linked from the article!

Improved AI-based realtime infilling for between-frame parallax reveals (like a simple version of realtime Stable Diffusion on those few specific pixels) can lower the frame rate threshold where you see artifacts. But, surprisingly, on fast-bandwidth GPUs (like the latest AMD and NVIDIA GPUs), it is processing-cheaper to do simple reprojection of 100fps -> 500fps than to use expensive interpolation algorithms to convert 50fps -> 100fps.

The compute overhead of the reprojection demo is absurdly, shockingly low per reprojected frame! Brute-framerating those artifacts out is sometimes.... cheaper processing. Surprisingly. But fixing artifacts with cheap framegen requires extremely high starting frame rates, such as >100fps.

But I have a suggestion to AMD. Framegen should still temporarily work for a few moments during sub-100fps or even sub-60fps to "cover the brief dips" (e.g. texture streaming disk access stutters and shader compiler stutters). Advanced users should be able to override and permit somewhat lower (e.g. 48 or 50fps) thresholds, with a warning of much more visible artifacts. Artifacts might briefly flicker in-and-out, but that can be preferable to many over a stutter. So there probably should be a deadreckoning coast in the framegen for one-time stutters.
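The coast-through-dips behaviour could be as simple as a threshold with a short grace window; a sketch with placeholder numbers, not anything from AMD's docs:

Code:
def framegen_enabled(base_fps, now, state, threshold=100.0, grace_s=0.25):
    # Stay enabled while the base frame rate is above the threshold, and
    # keep coasting briefly through a dip (shader-compile or texture
    # streaming stutter) instead of visibly toggling framegen off.
    if base_fps >= threshold:
        state["last_good"] = now
        return True
    return (now - state.get("last_good", float("-inf"))) < grace_s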

This is why the chapter "Developer Best Practices" for lagless reprojection-based frame generation (halfway down the new article, I'll let someone else link it) recommends a minimum starting frame rate of 100fps for frame generation. Thus, good-looking 1000fps will require a base frame rate of at least 100fps (ish).

This will probably be a required thing in the PS6, just so they can advertise all games at 60fps. Maybe even the PS5 Pro.
Funnily enough, true-240Hz-capable OLED living room televisions are expected to be introduced before the PS6, for the improved AI-based interpolation that many TV watchers still want when watching sports. While some of it may only be for video interpolation, it is also (eventually) expected that they'll add the ability to connect a PC at 240Hz too, as another selling point.

BOE demo'd a 576Hz-capable TV panel at DisplayWeek (large size, about 75") for future television manufacturers, and a few budget (sub-$500) TVs now support 120Hz console ops, at least during this Prime sale.

Would be sad if 60fps was still the console standard then. Boo.
 
That's probably very fair, since frame generation artifacts are really nasty for sub-60fps material. Most frame generation artifacts majorly diminish (depending on the quality of algorithm) when frame rates are above the flicker fusion threshold.

Certain kinds of "distort-undistort artifacts" like parallax reveals that flicker at 30Hz (from converting 30fps to 60fps) are rather nasty -- so sometimes it's best to enforce a minimum frame rate for framegen. This hides a TON of parallax-reveal glitches in any framegen tech, including in my tests of the 1000fps-capable reprojection demo linked from the article!

Improved AI-based realtime infilling for between-frame parallax reveals (like a simple version of realtime Stable Diffusion on those few specific pixels) can lower the frame rate threshold where you see artifacts. But, surprisingly, on fast-bandwidth GPUs (like the latest AMD and NVIDIA GPUs), it is processing-cheaper to do simple reprojection of 100fps -> 500fps than to use expensive interpolation algorithms to convert 50fps -> 100fps.

The compute overhead of the reprojection demo is absurdly, shockingly low per reprojected frame! Brute-framerating those artifacts out is sometimes.... cheaper processing. Surprisingly. But only if the starting frame rate is >100fps.

But I have a suggestion to AMD. Framegen should still temporarily work for a few moments during sub-100fps or even sub-60fps to "cover the brief dips" (e.g. texture streaming disk access stutters and shader compiler stutters). Advanced users should be able to override and permit somewhat lower (e.g. 48 or 50fps) thresholds, with a warning of much more visible artifacts. Artifacts might briefly flicker in-and-out, but that can be preferable to many over a stutter. So there probably should be a deadreckoning coast in the framegen for one-time stutters.

This is why the chapter "Developer Best Practices" for lagless reprojection-based frame generation (halfway down the new article, I'll let someone else link it) recommends a minimum starting frame rate of 100fps for frame generation. Thus, good-looking 1000fps will require a base frame rate of at least 100fps (ish).


Funnily enough, true-240Hz-capable OLED living room televisions are expected to be introduced before the PS6, for the improved AI-based interpolation that many TV watchers still want when watching sports. While some of it may only be for video interpolation, it is also (eventually) expected that they'll add the ability to connect a PC at 240Hz too, as another selling point.

BOE demo'd a 576Hz-capable TV panel at DisplayWeek (large size, about 75") for future television manufacturers, and a few budget (sub-$500) TVs now support 120Hz console ops, at least during this Prime sale.

Would be sad if 60fps was still the console standard then. Boo.
This sort of technology, in my opinion, is a great alternative to Adaptive Sync where Adaptive Sync is not supported. Have a 240Hz TV but your game is running at 60? Why turn on VSync and deal with that mess as it multiplies input latency, when you can instead turn on frame generation? Input latency would be worse than 60 native, but way better than 60 + VSync, and then you don't need to worry about screen tearing. Have a situation where you can find a happy middle ground and enable both frame generation and adaptive sync? All the better to keep things in a happy range that stays tear and blur free.
 
I'm still waiting for AMD and Nvidia to deliver selective frame gen. Why have the GPU bother with constantly rendering things like sky, mountains, far-off buildings, wildlife and such? Have the game work on rendering the immediate surroundings, then have the periphery rendered at half or quarter speed with frame generation making up the difference. That way you are lightening the load generated by otherwise static elements.
We already have geometry instancing, which was created precisely to alleviate the stress of rendering static parts of the frame. We also already have a method whereby objects in the distance are updating at a slower rate than the immediate surroundings. These objects can be interpolated in between updates to smooth out their animations. How would frame generation on those elements improve what we already have?
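For reference, the lower-rate-update-plus-interpolation being described is basically this (a bare-bones sketch; details vary per engine):

Code:
def far_object_position(prev_pos, next_pos, ticks_since_update, update_interval):
    # Distant objects are simulated every Nth tick; in between, their
    # transforms are linearly interpolated so motion still looks smooth.
    alpha = min(ticks_since_update / update_interval, 1.0)
    return tuple(p + (n - p) * alpha for p, n in zip(prev_pos, next_pos))

print(far_object_position((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 2, 4))  # (5.0, 0.0, 0.0)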
 
We already have geometry instancing, which was created precisely to alleviate the stress of rendering static parts of the frame. We also already have a method whereby objects in the distance are updating at a slower rate than the immediate surroundings. These objects can be interpolated in between updates to smooth out their animations. How would frame generation on those elements improve what we already have?
I feel this is more about saving CPU work and CPU->GPU calls than about easier rasterizing.

The ability to detect what would need to be refreshed, without starting from a clean buffer, would not be that far from actually rendering it to find out; any camera change (which usually occurs every frame) means that every pixel on a 2D screen could need a different value. Things could also have changed a lot since back in the day.
 
We already have geometry instancing, which was created precisely to alleviate the stress of rendering static parts of the frame. We also already have a method whereby objects in the distance are updating at a slower rate than the immediate surroundings. These objects can be interpolated in between updates to smooth out their animations. How would frame generation on those elements improve what we already have?
I don't actually know; I just remember it being one of their talking points about how they could essentially offload the rendering of static units and free up rendering resources for the objects that are changing constantly.
 
This sort of technology, in my opinion, is a great alternative to Adaptive Sync where Adaptive Sync is not supported. Have a 240Hz TV but your game is running at 60? Why turn on VSync and deal with that mess as it multiplies input latency, when you can instead turn on frame generation? Input latency would be worse than 60 native, but way better than 60 + VSync, and then you don't need to worry about screen tearing. Have a situation where you can find a happy middle ground and enable both frame generation and adaptive sync? All the better to keep things in a happy range that stays tear and blur free.
Framegen can go lagless, if integrated into engine -- far less latency than 60 native.

(...Spoiler: The truth is that it is actually lag-REDUCING!...)

[attached infographic from the article]


FSR / XeSS / DLSS performs best if integrated into the engine, and this is just an "engine integration" improvement to include between-frame positionals.

(Above infographic is taken from the article published yesterday)

This can simultaneously perform the duty of say (DLSS + GSYNC + ULMB + PWM-Free + Lag-Free) combined.

Acts like DLSS, because reprojection warping is also frame generation.
Acts like GSYNC, because some reprojection algorithms can convert varying framerate to stutterless framerate=Hz
Acts like ULMB, because large-multiplier reprojection can eliminate motion blur strobelessly on brute-Hz displays.
Acts like PWM-Free, because reprojection works on sample and hold, no need for strobe backlight.
Acts like Lag-Free, because reprojection can undo rendering latency, as per above diagram.

Even object-aggregates (a hitbox worth of 3D geometry), such as enemy positionals, are reprojectable in some more advanced reprojection algorithms, so you can fix the key lag, both local and remote. It's not just mouselook lag that is fixable. AI/neurals can help with the parallax-reveal infilling, but keeping the originating frame rate (pre-reprojection) above the flicker fusion threshold hides a hell of a lot of framegen artifacts, and there are no reprojection double-images (unlike VR) when reprojecting on sample and hold.
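For anyone unfamiliar, the core of reprojection is just re-projecting the last frame's pixels under the newest camera transform using the depth buffer. Toy sketch only; production versions handle disocclusion infilling, filtering, moving objects, etc.:

Code:
import numpy as np

def reproject(color, depth, old_vp_inv, new_vp):
    # Unproject each pixel of the last rendered frame using its depth,
    # then re-project with the newest view-projection matrix and splat.
    # Holes left behind are the parallax reveals discussed above.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ndc = np.stack([2 * xs / w - 1, 1 - 2 * ys / h, depth, np.ones_like(depth)], axis=-1)
    world = ndc @ old_vp_inv.T
    world /= world[..., 3:4]
    clip = world @ new_vp.T
    clip /= clip[..., 3:4]
    u = ((clip[..., 0] + 1) * 0.5 * w).astype(int).clip(0, w - 1)
    v = ((1 - clip[..., 1]) * 0.5 * h).astype(int).clip(0, h - 1)
    out = np.zeros_like(color)
    out[v, u] = color[ys, xs]
    return out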

This esports framegen is a potential Holy Grail, if it can be done artifactlessly enough.

Bender would do a Fry "Take My Money!" and say "Bite My Shiny Metal Holy Grail!".

[H] members, please help do a favour. We're lighting fires under the GPU/engine vendors this year. Every single one of them -- Intel, AMD, NVIDIA. Please share the new article widely with your favourite game developer friends who falsely say "frame generation can't be used in esports". FALSE! This is a latency-reducing framegen algorithm that reduces input lag while increasing frame rates. Drop the microphones on your dev friends. Stop them laughing, with real science. Today.
 
Framegen can go lagless, if integrated into engine -- far less latency than 60 native.

(...Spoiler: The truth is that it is actually lag-REDUCING!...)

[attached infographic from the article]

FSR / XeSS / DLSS performs best if integrated into the engine, and this is just an "engine integration" improvement to include between-frame positionals.

(Above infographic is taken from the article published yesterday)

This can simultaneously perform the duty of say (DLSS + GSYNC + ULMB + PWM-Free + Lag-Free) combined.

Acts like DLSS, because reprojection warping is also frame generation.
Acts like GSYNC, because some reprojection algorithms can convert varying framerate to stutterless framerate=Hz
Acts like ULMB, because large-multiplier reprojection can eliminate motion blur strobelessly on brute-Hz displays.
Acts like PWM-Free, because reprojection works on sample and hold, no need for strobe backlight.
Acts like Lag-Free, because reprojection can undo rendering latency, as per above diagram.

Even object-aggregates (a hitbox worth of 3D geometry), such as enemy positionals, are reprojectable in some more advanced reprojection algorithms, so you can fix the key lag, both local and remote. It's not just mouselook lag that is fixable. AI/neurals can help with the parallax-reveal infilling, but keeping the originating frame rate (pre-reprojection) above the flicker fusion threshold hides a hell of a lot of framegen artifacts, and there are no reprojection double-images (unlike VR) when reprojecting on sample and hold.

This esports framegen is a potential Holy Grail, if it can be done artifactlessly enough.

Bender would do a Fry "Take My Money!" and say "Bite My Shiny Metal Holy Grail!".
I have no doubt it could be a very good approach if done at a low level and not as a bolt-on aftermarket mod.
It’s painfully obvious that rasterization speeds aren’t keeping pace with the demands placed on them so something needs to change.
FSR3, though, is going to be driver-level, as is DLSS3; DLSS might be hardware accelerated but it's still a CUDA library.

I'm sure that next-generation engines will integrate it at a low level, but we're not even at the current generation of engines yet. So it's probably going to be a while before the industry can make good use of it; once Microsoft and Khronos have it built in as a core function of their respective graphics APIs, developers can start to make a move, but until then we wait.
 
Yes, it will take time.

It's a long-term goal, and now a new prime goal for my business this decade.

It's fantastically important to the point that I'm eagerly wanting to spend Blur Busters earnings on this.

Making "ergonomic flicker-free display motion blur reduction" a reality has now become a Blur Busters "2020-2029" Prime Directive, to make it happen by year 2030 (e.g. tipping dominoes + advocacy + engine tweaks and/or Vulkan API tweaks and/or convincing GPU vendors, etc). I'm open to ideas on how to incubate certain enabler technologies, but you'll have to contact me off [H].

With 240Hz OLEDs being particularly effective in showing blur-reducing frame generation benefits, the early era has arrived today. The latest RTX can do 3x-4x framegen ratios, enough to reduce display motion blur by ~3x-4x on 240Hz OLEDs, not bottlenecked by LCD GtG. In reality, it's more like 2x-2.5x due to slight extra DLSS "blur" added (some people hate this), but you get the idea -- brute multipliers is the cat's beans.

Personally, after what I've seen in labs, I'm tired of the frame rate and refresh rate incrementalism. Diminishing curve requires 4x-8x differentials now to wow properly.

I also really want to put the industry on notice that I expect it to target 8x on UE5-type content by mid-decade. An 8x multiplier with only extremely faint artifacts would mean fewer artifacts than reducing detail to PONG levels (to achieve the same frame rate via detail-reducing means), or the flicker/stroboscopic/darkening artifacts of a strobe backlight. We can say goodbye to those with massive-ratio framegen on ultra-high-Hz displays. The motion quality can finally be made to massively outweigh the visual costs of the frame generation tech -- and do so laglessly.

You, as a reader, can help by sharing the article with your game developer contacts (if you have any), whether via Twitter DM, Discord, email, Facebook Messenger, or other. Just at least let them know better framegen options are starting to exist. Users like you spreading the word is one famous part of Blur Busters -- that's how (still ad-free) TestUFO gets well over a million unique views a month.

One dream is to help (or be subcontracted by) a vendor put up a 1000fps 1000Hz "convince-the-world" demo at an event such as SGI 2025 or NAB 2025 or DisplayWeek 2025 -- but who knows. Door is open to collaborations, and I put up an earlybird pitch on my LinkedIn.

4K 1000Hz can be done today by pointing 8 different 4K 120Hz strobed projectors at the same screen (projector stacking was common to increase brightness, but can be used as a refresh rate combining trick). Refresh rate combining is a technique that can bring tomorrow's refresh rates to today (at least for an exhibition demo, simulator, amusement ride, or enterprise purposes). It's existing tech finally, and mostly an integration/software matter -- finally. Just high cost for a single realtime demo. I've been pitching this concept to a few vendors as we speak. Showing the world a demo would go a long way to making many people stop laughing -- and start realizing this is now existing, on-the-market technology (at sufficient software and hardware budgets). Framegen-based reprojection warping was successfully tested at 1000fps on the latest GPUs. So the tech is finally here today, and just needs to be integrated into some kind of demo (preferably open source) to "wow the world" and convince more people to do things this way.
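The round-robin trick is just phase-offsetting each projector's strobe so the combined screen refreshes N times per single-projector refresh; a sketch of the timing, assuming perfectly genlocked sources:

Code:
# Eight 120 Hz strobed projectors on one screen, each offset by 1/(8*120) s,
# give the combined surface a new frame roughly every ~1.04 ms,
# i.e. an effective ~960 Hz refresh out of today's hardware.
n_projectors, per_projector_hz = 8, 120
combined_hz = n_projectors * per_projector_hz
offsets_ms = [1000 / combined_hz * i for i in range(n_projectors)]
print(combined_hz, "Hz combined;", [round(o, 2) for o in offsets_ms], "ms offsets")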

I still love strobing, and have lots of projects in the fire on that, but I really want 50%+ of what I do, to be related to flickerless motion blur reduction.

There are personal preferences, yes.
I prefer my retro content (emulators) strobed without framegen, and I love 24p Hollywood Filmmaker Mode.
But things that need to be motion blur reduced (VR should not have more motion blur than real life) still need flicker. Ouchie for some people.
And we badly need a flickerless method of blur busting.

Finally, all the technological pieces are here today -- it just needs some expensive rigs and expensive software development time. It's still difficult/expensive, like doing one of those 1980s Japanese experimental HDTV demos, though. But all the tech pieces have finally arrived and just need to be integrated. A vendor doing a 4K 1000fps 1000Hz UE5 path-traced demo, via a modified UE demo + reprojection-based framegen, to a low-Hz projector array round-robin strobing onto the same projector screen. Put a LOT of eyeballs onto it at a convention, to convince all the GPU/driver/gamedev people there. That sorta thing is kind of a dream of mine this decade, to help them out on.
 