AMD shows off FSR3 with Frame Interpolation

Here comes the crowing about how this rocks, when yesterday they called it a gimmick because only Nvidia did it. No ETA either...
idk, if it's trash, it's trash. doesn't matter who's pushing it. heck, it was bad for movies and obviously worse for games. kind of disappointed with amd for even coming out with it after the media backlash. but if nvidia's gonna keep pushing it, and you know how braindead some of these zoomers are, i guess they've got to compete? sux tho, it's like everything just keeps getting worse.

on a side note: on my sony bravia 4k tv, changing one setting (it's the interpolation setting, but i can't remember what it's called) made it so regular dvds played through a sony 1080p upscaling dvd player look almost as good as blu-ray. when i first got the tv my old regular dvd player looked REAL bad, then i bought the upscaling one - looked better, but still kind of off, and had the "soap opera effect". but now that i've got it working right, sometimes i'll stop at the thrift store close to my house, grab dvds for a dollar, and they look at least as good as they did before the whole "flatscreen evolution", if not better. if you're from the VHS generation you know what i'm talking about: no pixelization, artifacts or motion issues to take you out of the experience. i mean, blu-ray's better, but it's actually pretty impressive how good a dvd can look with the right setup.
 
I like frame interpolation/soap opera effect for traditional animation + CGI animation stuff

It's not that interpolation makes live video look bad - live video just looks weird to (most of) us even at natively higher frame rates without frame interpolation. check these (both filmed/shown at high frame rate natively) - it looks like a video game or workplace training video lol

 
I like frame interpolation/soap opera effect for traditional animation + CGI animation stuff

It's not that interpolation makes live video look bad - live video just looks weird to (most of) us even at natively higher frame rates without frame interpolation. check these (both filmed/shown at high frame rate natively) - it looks like a video game or workplace training video lol

just think of the kind of interpolation that needs to be done to take 24fps to 120fps... it doesn't even work out right mathematically, it's gonna have to drop frames and make it look janky

but some tvs work differently - what looks good on one may look different on another? i mean, the video from the satellite box looks fine (for the most part) but dvds looked like ass with the out-of-box settings
 


just think of the kind of interpolation that needs to be done to take 24fps to 120fps... it doesn't even work out right mathematically, it's gonna have to drop frames and make it look janky

but some tvs work differently - what looks good on one may look different on another? i mean, the video from the satellite box looks fine (for the most part) but dvds looked like ass with the out-of-box settings


This is the part where we make a video instructing Tom Cruise on how to transfer money from his bank account to our bank account.

just think of the kind of interpolation that needs to be done to take 24fps to 120fps... it doesn't even work out right mathematically, it's gonna have to drop frames and make it look janky

but some tvs work differently - what looks good on one may look different on another? i mean, the video from the satellite box looks fine (for the most part) but dvds looked like ass with the out-of-box settings

On my two Vizios (2018-2019 models, so I can't speak for other ones) it does what I call 'slight interpolation' - where either it's not trying to generate as many frames, or the implementation/algo/whatever itself is just better. There's only the slightest bit of noticeable soap opera effect, and it doesn't look bad even for live video. That's the only implementation I've ever seen where I like it on live video. Basically the motion is smoother so you notice it, but without being able to pinpoint it as 'soap opera effect'.
 
I am interested in what Nvidia is proposing for this with targeted rendering: using frame generation for static HUD elements and the periphery, freeing up performance for the areas where you are actively looking.

Using it for the whole screen only works in single player, and only when paired with FreeSync Premium or GSync monitors. I can say that when paired with a GSync screen it feels better than without one. I suppose you could use it for something like WoW, where button mashing and GCDs are a thing.
 
This is the part where we make a video instructing Tom Cruise on how to transfer money from his bank account to our bank account.

On my two Vizios (2018-2019 models, so I can't speak for other ones) it does what I call 'slight interpolation' - where either it's not trying to generate as many frames, or the implementation/algo/whatever itself is just better. There's only the slightest bit of noticeable soap opera effect, and it doesn't look bad even for live video. That's the only implementation I've ever seen where I like it on live video. Basically the motion is smoother so you notice it, but without being able to pinpoint it as 'soap opera effect'.
The tech has gotten much better since its debut, that's for sure.
 
So motion interpolation and real-time frame generation are two different things. RTFG usually incorporates real motion vectors, user inputs and game logic to generate the 'fake' frame, similar to how time warping is done in VR/HMDs. That is MUCH more information than typical SOE motion smoothing has, where 2D motion vectors derived from pixel data are estimated solely from the previous frame and the future frame in order to generate an 'in-between' frame. FSR/DLSS 3 are not trying to 'smooth' motion, but instead trying to predict motion that does not yet exist, and have no 'future frame' to draw from.

The two accomplish a similar thing (generating additional frames without traditionally rendered source data) but are completely different in terms of how they do it and the purpose of the output.
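
For illustration, here's a minimal sketch of the TV-style side of that comparison - 2D motion vectors derived purely from pixel data, then a warp/blend to make the in-between frame. This is my own toy example (using OpenCV's Farneback optical flow as a stand-in motion estimator), not any vendor's actual smoother:

```python
# Toy TV-style motion-compensated interpolation: estimate 2D motion purely
# from pixel data, then warp halfway toward the next frame. Illustrative
# only - real TV smoothers are proprietary; Farneback flow is a stand-in.
import cv2
import numpy as np

def interpolate_midframe(prev_bgr, next_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel motion vectors derived only from the two frames -
    # this is all a TV has to work with (no game state, no user inputs).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Approximate backward warp: sample the previous frame half a motion
    # vector back, landing pixels at their estimated midpoint positions.
    map_x = grid_x - 0.5 * flow[..., 0]
    map_y = grid_y - 0.5 * flow[..., 1]
    warped = cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)
    # Naive blend with the next frame - exactly the 'averaging' that turns
    # into a blurry mess once on-screen motion gets fast enough.
    return cv2.addWeighted(warped, 0.5, next_bgr, 0.5, 0)
```
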
 
Trolling will get you Reply Banned from this thread on the first infraction.
 
On my two Vizios (2018-2019 models, so I can't speak for other ones) it does what I call 'slight interpolation'
The tech has gotten much better since its debut, that's for sure.
i'm talking about a Sony Bravia, def not low end and not that old - i got it after the pandemic, i know that because it was after one of those stimulus checks went out. but if you like interpolation then that's perfectly fine, i'm just going by what it looks like at the theater and all the video equipment and trinitron tech that came before it. i mean, my pc monitor is actually a BenQ 120hz nvidia 3D Vision 2 monitor and i play all the games that support it running at 120hz, and while that does look way better than running the panel at 60hz, as far as film on televisions goes, i just can't get behind interpolation. it's gotten on my nerves ever since it first came out. but i guess we're getting off topic, so coming back around: it looks like once again interpolation is being used to swindle uneducated consumers with a technology that doesn't just do nothing, but makes things worse. and shame on both companies for pushing that crap, but only one of them started it.
 
i'm talking about a Sony Bravia, def not low end and not that old - i got it after the pandemic, i know that because it was after one of those stimulus checks went out. but if you like interpolation then that's perfectly fine, i'm just going by what it looks like at the theater and all the video equipment and trinitron tech that came before it. i mean, my pc monitor is actually a BenQ 120hz nvidia 3D Vision 2 monitor and i play all the games that support it running at 120hz, and while that does look way better than running the panel at 60hz, as far as film on televisions goes, i just can't get behind interpolation. it's gotten on my nerves ever since it first came out. but i guess we're getting off topic, so coming back around: it looks like once again interpolation is being used to swindle uneducated consumers with a technology that doesn't just do nothing, but makes things worse. and shame on both companies for pushing that crap, but only one of them started it.
Yeah, the new hardware is great - it looks decent enough that more often than not you don't notice it's on, until you load up a DVD from the 90s.

But you look at the implementations from the early 20-teens and sweet Jesus…
 
just think of the kind of interpolation that needs to be done to take 24fps to 120fps... it doesn't even work out right mathematically,
Frame interpolation can't do that. The best it could do is 24 real frames + 24 interpolated frames. Frame interpolation is a form of frame generation (used in TVs), but it works completely differently from how DLSS 3 frames are generated, and so likely completely differently from how AMD's frame generation would work too. We've only heard about the latter so far, but both are probably superior to what frame interpolation does, which looks more like an averaging of the 2 frames - a blurry mess once the on-screen movement speed gets past a certain point.

Frame interpolation: watch the side-by-side video for about 20 seconds to see how jank that kind of frame generation is: https://en.wikipedia.org/wiki/Motion_interpolation
It's only good for slow motion scenes.

So either the article title using the word 'Interpolation' is inaccurate, or AMD's frame generation is going to be pretty bad.

Reading the article, it appears AMD themselves are using the term 'interpolation'. Kind of surprising - I wouldn't have called it that. It says it uses motion vectors to assist, which is something DLSS 3 uses. Since the game engine knows, at each instant, the speed and direction of each pixel, that can be used to calculate the intermediate frame much more precisely than typical interpolation. It could even generate more than 1 intermediate frame. Hmm, reading the article a little more, they mention a doubling of frames, which does sound like interpolation. Might be that they are limiting the generated frame count to just 1 generated frame - sounds like interpolation, but maybe they are just limiting it to the number of frames that can be most reliably generated from the source frames... If it's some sort of blending of interpolation + motion vector analysis, it might work as a 'cheaper' DLSS style of frame generation, i.e. one that costs less compute to achieve. Since AMD GPUs don't have the tensor cores specifically designed for these types of tasks, this might be the best compromise.
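
To make that concrete, here's a toy sketch (my own illustration, not AMD's or Nvidia's actual algorithm - the function name and array shapes are made up for the example) of how engine-supplied motion vectors could place an in-between frame with no pixel-based motion guessing at all:

```python
# Hypothetical sketch (not any vendor's real algorithm): when the engine
# hands you exact per-pixel motion vectors, an 'in-between' frame at time
# t can be placed by warping the rendered frame along those vectors.
import numpy as np

def generate_inbetween(frame, motion_vectors, t):
    """frame: (H, W, 3) rendered image.
    motion_vectors: (H, W, 2) engine-reported displacement, in pixels,
    from this frame to the next. t in (0, 1) picks the intermediate time."""
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale the known displacement by t - e.g. one generated frame uses
    # t = 0.5 (the 'doubling' the article describes); generating 4 frames
    # per real frame would use t = 0.2, 0.4, 0.6, 0.8.
    xt = np.clip((xs + t * motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    yt = np.clip((ys + t * motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    out[yt, xt] = frame[ys, xs]  # forward splat; real code must fill holes
    return out
```
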
 
Frame interpolation can't do that. The best it could do is 24 real frames + 24 interpolated frames. Frame interpolation is a form of frame generation (used in TVs), but it works completely differently from how DLSS 3 frames are generated, and so likely completely differently from how AMD's frame generation would work too. We've only heard about the latter so far, but both are probably superior to what frame interpolation does, which looks more like an averaging of the 2 frames - a blurry mess once the on-screen movement speed gets past a certain point.

Frame interpolation: watch the side-by-side video for about 20 seconds to see how jank that kind of frame generation is: https://en.wikipedia.org/wiki/Motion_interpolation
It's only good for slow motion scenes.

So either the article title using the word 'Interpolation' is inaccurate, or AMD's frame generation is going to be pretty bad.

Reading the article, it appears AMD themselves are using the term 'interpolation'. Kind of surprising - I wouldn't have called it that. It says it uses motion vectors to assist, which is something DLSS 3 uses. Since the game engine knows, at each instant, the speed and direction of each pixel, that can be used to calculate the intermediate frame much more precisely than typical interpolation. It could even generate more than 1 intermediate frame. Hmm, reading the article a little more, they mention a doubling of frames, which does sound like interpolation. Might be that they are limiting the generated frame count to just 1 generated frame - sounds like interpolation, but maybe they are just limiting it to the number of frames that can be most reliably generated from the source frames... If it's some sort of blending of interpolation + motion vector analysis, it might work as a 'cheaper' DLSS style of frame generation, i.e. one that costs less compute to achieve. Since AMD GPUs don't have the tensor cores specifically designed for these types of tasks, this might be the best compromise.
The question is, do we want that? Nvidia's Optical Flow SDK has been available for use for years; it was only made viable with hardware acceleration. It's been something game developers could have been adding since Maxwell, and there was a good reason why they didn't. I would argue that DLSS 3's implementation is the bare minimum of acceptable performance we should demand from a feature like this; a non-accelerated knockoff sounds like a leap backwards.

I get the "yay open source!" angle, but still, I expect it to take years for the community to make it work for AMD, by which time AMD will have hardware with accelerators for these tasks, and making it usable on the existing hardware will be moot - like ray tracing on non-RTX cards. You could do it, the API will let you, you just get 2fps.
 
Oh I'm sure. It would be nice if this feature also gave a good uplift to older cards. I guess it's less of a "will it work" and more of a "how well will it work".
For how well it would work, just find examples of the Nvidia Optical Flow SDK in use on non-accelerated hardware. They have had it available for use since Maxwell; the answer is it works great, but the result is nothing you want to play on.
Besides, it is a tech best used when you are already getting greater than 60fps (as per the AMD documentation).
From personal experience with DLSS 3, I can say it's good, but only when using GSync; if I disable that it feels janky. I would expect AMD to have the same issues and likewise require a FreeSync display to clean that up.
 
Is it possible to interpolate regions of the frame, such as the periphery, i.e. where the viewer/player is less likely to be actively viewing and artifacts less likely to be noticed? For example, we'd alternate between a full traditionally rasterized frame, then a hybrid frame with periphery frame generation and a 1280x800 (or whatever according to scale/resolution) traditionally rasterized region in the center, and so on. I know we use something similar in VR of course.

We've already seen frame generation artifacts, particularly in sub-100 fps scenarios, especially when they're right smack in the middle of the player's screen. Could this potentially be a way to eat our cake, and have it too...? I recall Chief Blur Buster had a fantastic post on frame generation a while back, maybe he can comment on this.
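
If it helps, here's roughly what I mean as a toy compositing sketch - entirely hypothetical, every name here is made up for illustration, and no shipping implementation that I know of does this:

```python
# Hypothetical hybrid frame: frame-generated periphery with a traditionally
# rasterized region pasted into the center, where the player is looking.
import numpy as np

def hybrid_frame(generated_full: np.ndarray, rendered_center: np.ndarray) -> np.ndarray:
    """generated_full: (H, W, 3) fully frame-generated image.
    rendered_center: (h, w, 3) traditionally rasterized crop, e.g. 800x1280,
    assumed centered (a real system would track gaze or the crosshair)."""
    H, W, _ = generated_full.shape
    h, w, _ = rendered_center.shape
    y0, x0 = (H - h) // 2, (W - w) // 2
    out = generated_full.copy()
    # Trusted rasterized pixels override generated ones in the center,
    # pushing any generation artifacts out to the periphery.
    out[y0:y0 + h, x0:x0 + w] = rendered_center
    return out
```
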
 
Is it possible to interpolate regions of the frame, such as the periphery, i.e. where the viewer/player is less likely to be actively viewing and artifacts less likely to be noticed? For example, we'd alternate between a full traditionally rasterized frame, then a hybrid frame with periphery frame generation and a 1280x800 (or whatever according to scale/resolution) traditionally rasterized region in the center, and so on. I know we use something similar in VR of course.

We've already seen frame generation artifacts, particularly in sub-100 fps scenarios, especially when they're right smack in the middle of the player's screen. Could this potentially be a way to eat our cake, and have it too...? I recall Chief Blur Buster had a fantastic post on frame generation a while back, maybe he can comment on this.
Nvidia has released papers on the topic for DLSS3, but I have yet to see any actual implementations of it.
The problem with that feature isn't technical, it's legal: there are craploads of "patents" for using "technology" to do a partial render to improve performance.
 
Is it possible to interpolate regions of the frame, such as the periphery, i.e. where the viewer/player is less likely to be actively viewing and artifacts less likely to be noticed? For example, we'd alternate between a full traditionally rasterized frame, then a hybrid frame with periphery frame generation and a 1280x800 (or whatever according to scale/resolution) traditionally rasterized region in the center, and so on. I know we use something similar in VR of course.

We've already seen frame generation artifacts, particularly in sub-100 fps scenarios, especially when they're right smack in the middle of the player's screen. Could this potentially be a way to eat our cake, and have it too...? I recall Chief Blur Buster had a fantastic post on frame generation a while back, maybe he can comment on this.

That would be combining interpolation with some type of foveated rendering. Possible? probably. Will it get done? erhm.... not likely imho. But you never know :)
 
Frame interpolation can't do that. The best it could do is 24 real frames 24 interpolated frames. Frame interpolation is a form of frame generation (used in TV's), but it works completely different compared to how DLSS 3 frames would be generated, so also likely completely differently than how AMD's frame generation would work. This is something we have heard about but (both probably) are superior to what frame interpolation does.. which looks like more of an averaging of the 2 frames. A blurry mess once the onscreen movement speed gets to a certain point.

Frame interpolation: watch the side by side video for about 20 seconds, to see how jank that kind of frame generation is: https://en.wikipedia.org/wiki/Motion_interpolation
It's only good for slow motion scenes.

So either the article title using the word 'Interpolation' is inaccurate, or AMD's frame generation is going to be pretty bad.

Reading the article, it appears AMD themselves is using the term 'interpolation'. Kind of surprising, I wouldn't have called it that. It says it uses motion vectors to assist, which is something DLSS3 uses. Since the game engine knows at each instant, the speed and direction of each pixel, that can be used to help calculate the intermediate frame, much more precisely than typical interpolation. It could even generate more than 1 intermediate frame. Hmm, reading the article a little more, they mention a doubling of frames, which does sound like interpolation. Might be that they are limiting the generated frame count to just 1 generated frame. Sounds like interpolation but maybe they will just be limiting it to the number of frames that can be most reliably generated from the source frames... If its some sort of blending of interpolation+motion vector analysis, might work as a 'cheaper' DLSS style of frame generation. i.e. one that costs less compute to achieve. Since the AMD gpu's don't have the tensor cores specifically designed for these types of tasks, this might be the best compromise.
Does what they're doing amount to "frame dithering"?
 
Is it possible to interpolate regions of the frame, such as the periphery, i.e. where the viewer/player is less likely to be actively viewing and artifacts less likely to be noticed? For example, we'd alternate between a full traditionally rasterized frame, then a hybrid frame with periphery frame generation and a 1280x800 (or whatever according to scale/resolution) traditionally rasterized region in the center, and so on. I know we use something similar in VR of course.

We've already seen frame generation artifacts, particularly in sub-100 fps scenarios, especially when they're right smack in the middle of the player's screen. Could this potentially be a way to eat our cake, and have it too...? I recall Chief Blur Buster had a fantastic post on frame generation a while back, maybe he can comment on this.

VR does reprojection, if that's what you're talking about - and yes, you can do that in flat games too.

There are a lot of things that can be done with the algorithm, and the algorithms have improved a lot over time. It also doesn't need to be interpolation - it can be extrapolation, so there isn't additional latency.
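
As a back-of-the-envelope illustration of that latency point (my own arithmetic, not measured numbers from either vendor):

```python
# Why extrapolation avoids the latency penalty: an interpolated frame
# can't be displayed until the *next* real frame exists, so the newest
# real frame gets held back one interval; extrapolation predicts forward
# from past frames only. Illustrative arithmetic, not measurements.
def added_latency_ms(base_fps: float, mode: str) -> float:
    frame_time = 1000.0 / base_fps
    if mode == "interpolate":
        return frame_time   # must wait for the future frame
    if mode == "extrapolate":
        return 0.0          # only past data needed
    raise ValueError(mode)

print(added_latency_ms(60, "interpolate"))  # ~16.7 ms extra at 60 fps base
print(added_latency_ms(60, "extrapolate"))  # 0.0
```
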
 
I like frame interpolation/soap opera effect for traditional animation + CGI animation stuff

It's not that interpolation makes live video look bad - live video just looks weird to (most of) us even at natively higher frame rates without frame interpolation. check these (both filmed/shown at high frame rate natively) - it looks like a video game or workplace training video lol

So hard to explain why, but it really looks terrible to me :D
 
I've been using DLSS in Hitman 3 and Hogwarts and haven't noticed any artifacts, but I have no interest in fake frames from either Nvidia or AMD
 
So hard to explain why, but it really looks terrible to me :D

The motorcycle "fight" in Gemini Man looks bad because the physics are clearly fake. The real live action stuff looks great to me, because you can actually see what's happening during action instead of it being a blurry mess.
At least it did when I watched the full uncompressed 60 FPS 4k HDR movie on my 77" OLED. The youtube version looks a little off.
 
So hard to explain why, but it really looks terrible to me :D
I do wonder whether a human who never saw 24-30 fps in their lifetime would have that impression as well, or if it's purely a product of being used to the low fps.

I watched the full uncompressed 60 FPS 4k HDR movie
It was nice of the studio to give you a many-terabyte (tens of terabytes?) hard drive with a raw uncompressed movie file ;)
 
I do wonder whether a human who never saw 24-30 fps in their lifetime would have that impression as well, or if it's purely a product of being used to the low fps.


It was nice of the studio to give you a many-terabyte (tens of terabytes?) hard drive with a raw uncompressed movie file ;)
Haha, yeah - just not compressed any more than it comes on the disc.
 
The motorcycle "fight" in Gemini Man looks bad because the physics are clearly fake. The real live action stuff looks great to me, because you can actually see what's happening during action instead of it being a blurry mess.
At least it did when I watched the full uncompressed 60 FPS 4k HDR movie on my 77" OLED. The youtube version looks a little off.
I saw it on a 120 Hz theater screen and it is the best looking movie I've ever seen. High FPS needs to be embraced more in movies because it was such a game changer being able to clearly see what was going on the whole time.
 
does this mean it will be 4x the performance? 4x as much latency? How important is this config variable?
If this is true, I would think it would mean you could have a frame slider - generate anywhere from one frame up to four. I wouldn't think it would look fantastic, but I guess we'll find out.
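
For what the slider math would look like (just arithmetic on the rumor, nothing official):

```python
# A frame slider is just a multiplier: N generated frames per real frame
# gives (N + 1) x the base frame rate - e.g. 4 generated frames turn a
# 30 fps base into 150 fps shown. Says nothing about how good they look.
def output_fps(base_fps: float, generated_per_real: int) -> float:
    return base_fps * (generated_per_real + 1)

for n in range(1, 5):
    print(f"{n} generated/frame: {output_fps(30, n):.0f} fps from a 30 fps base")
```
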
What I find more interesting is the rumors that this will be driver-side, meaning FSR frame generation will be AMD only. I think AMD could do it driver-side while still having the code itself be "open". If it doesn't suck, it would be interesting to see if Nvidia or Intel implemented it into their own drivers. I mean, Intel maybe... perhaps with their B or C generations they just give up on XeSS and embrace FSR (with Intel driver-side tweaks). Nvidia would never admit defeat, so probably no free frame generation for older Nvidia GPU owners if the driver-side rumor is true.
 
If this is true, I would think it would mean you could have a frame slider - generate anywhere from one frame up to four. I wouldn't think it would look fantastic, but I guess we'll find out.
What I find more interesting is the rumors that this will be driver-side, meaning FSR frame generation will be AMD only. I think AMD could do it driver-side while still having the code itself be "open". If it doesn't suck, it would be interesting to see if Nvidia or Intel implemented it into their own drivers. I mean, Intel maybe... perhaps with their B or C generations they just give up on XeSS and embrace FSR (with Intel driver-side tweaks). Nvidia would never admit defeat, so probably no free frame generation for older Nvidia GPU owners if the driver-side rumor is true.
It can also be open but require dedicated hardware units too. I am mostly interested in knowing how they do the job without dedicated accelerators. DLSS 3's frame generation is, at this stage, what I would consider the bare minimum of acceptable, and only when paired with a GSync display. So for FSR's frame gen to be acceptable it's going to need to do better; simply being more accessible isn't going to be enough.
 
Frame interpolation can't do that. The best it could do is 24 real frames + 24 interpolated frames. Frame interpolation is a form of frame generation (used in TVs), but it works completely differently from how DLSS 3 frames are generated, and so likely completely differently from how AMD's frame generation would work too. We've only heard about the latter so far, but both are probably superior to what frame interpolation does, which looks more like an averaging of the 2 frames - a blurry mess once the on-screen movement speed gets past a certain point.
FSR/DLSS 3 are not trying to 'smooth' motion, but instead trying to predict motion that does not yet exist, and have no 'future frame' to draw from.
For both statements I am really not sure I understand. Why would frame interpolation only be able to do 24 interpolated frames from 24 real frames - why would it have any limit? You can generate 3-4 (2000, why not) in-between frames. And DLSS3 does also use the future frame to create the intermediate one, I am 90% confident - thus the significant added latency that cannot be completely removed in some scenarios (the sequential frames are an input; the motion vectors were more part of the DLSS2-generated image, according to some nvidia panel):
[attached image: Nvidia presentation slide]

That's why the frame generated on a camera change was so strange at launch - because it blended 2 completely different images.
 
It can also be open but require dedicated hardware units too. I am mostly interested in knowing how they do the job without dedicated accelerators. DLSS 3's frame generation is, at this stage, what I would consider the bare minimum of acceptable, and only when paired with a GSync display. So for FSR's frame gen to be acceptable it's going to need to do better; simply being more accessible isn't going to be enough.
We'll find out soon, I guess. Seems like a long time ago that AMD said it was coming, doesn't it? lol
IMO Nvidia has, like with their RT implementation, really really overengineered something that isn't that complicated. Like Linus (or his script writers anyway) pointed out, we have had tech like Async Reprojection for a long time... and that seems like a far easier, and likely much better, solution than trying to use on-the-fly AI. At the very least it seems to me the AI could be used much more efficiently in combination with something like reprojection, limiting what the AI needs to consider by an order of magnitude.
 