AMD Radeon Super Resolution is FSR but for all games

I use e-sports just to describe it. If you want, you can see the noise in every game; if you don't care about it, never mind :)
But I can't accept buying an expensive video card that lowers quality in the games I play just to get a few more FPS -- it's pointless for me.
So, it's a personal choice in the end :)
At sufficient frame rates, DLSS noise can be made to mostly disappear.

While we're nitpicking artifacts caused by frame rate amplification technologies such as DLSS/FSR/etc, I should comment in some technical depth on the "noise" issue...

When you run 240fps at 240Hz or 360fps at 360Hz, the noise cycling of DLSS at ultra-high frame rates can become so rapid that it's fainter than even DLP temporal dithering (binary on-off pixel cycling combined with simple RGB color switching via a colorwheel or color LED/laser cycling). Obviously, DLSS is often mainly enabled in ray-traced games like Cyberpunk 2077, so you're jumping from 30fps to 60fps and still seeing those DLSS artifacts. But try DLSS at 300fps, and it's a completely different ballgame.

Also keep in mind, some games inject GPU shader noise into the screen (like artificial analog film grain noise), which can degrade DLSS quality somewhat, so that feature needs to be disabled in-game to greatly reduce the DLSS quality degradation.

Games that use DLSS need to put postprocess filters (e.g. noise) after the DLSS processing stage to fix this degradation -- much like how Netflix filters the noise out of the original movies, compresses them, then re-adds algorithmically-generated noise back to the films. It's a pretty impressive trick Netflix does for compressing film grain noise better: exclude the noise from compression. Likewise, engines sometimes need to move some filters, such as film grain and other effects (like chromatic aberration filters or JJ Abrams cake-frosting filters), downstream, *after* the DLSS processing. Not all games do this properly, so sometimes you get more DLSS degradation in some games than others.
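
Here's a minimal sketch of that ordering point (the "upscale" below is a naive nearest-neighbour stand-in for the DLSS/FSR pass, not the real algorithm -- only the pipeline order matters here):

```python
# Hedged sketch of the pipeline-ordering point only (not any engine's real API):
# "upscale" is a naive nearest-neighbour stand-in for the DLSS/FSR pass.
import numpy as np

rng = np.random.default_rng(0)

def upscale(img, factor=2):
    # placeholder reconstruction pass (NOT the real DLSS/FSR algorithm)
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def film_grain(img, strength=0.05):
    # per-pixel grain, intended as a final post-process effect
    return img + rng.normal(0.0, strength, img.shape)

low_res_render = rng.random((540, 960))

# Wrong order: grain is injected before reconstruction, so the upscaler treats
# the grain as scene detail (here it just gets blown up into chunky 2x2 blocks).
wrong_order = upscale(film_grain(low_res_render))

# Right order: reconstruct first, then apply grain (and chromatic aberration,
# vignette, etc.) as a full-resolution post pass, leaving the upscaler a clean input.
right_order = film_grain(upscale(low_res_render))
```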

Spatial method of removing DLSS noise
Or, unconventionally, you can even use the brute-force scaling trick to help filter DLSS noise, by DLSS-ing to a supersampled image (e.g. 4K) and downconverting it to 1080p for your 1080p screen -- the downscaling blends the noise out. In some cases, this actually still increases frame rate above-and-beyond native 1080p rendering while looking better than native 1080p rendering. Perhaps this is not what you do for esports, but you'd do it when you want beautiful DLSS-accelerated supersampled AA (rendering graphics at a higher resolution than the monitor and downscaling to the monitor) as a method of compensating for and blending out the DLSS noise. Given sufficient frame rate amplification factors, it can be a technique for improving image quality at the same or a slightly higher frame rate, with even less noise than the non-DLSS image, simply because you're redirecting DLSS horsepower to brute supersampling instead -- supersampled AA is the best AA you can do, but a real performance killer if you don't use DLSS to accelerate it. It's a blunt hammer, but some gamers do that to improve visuals in games like Cyberpunk 2077 by using DLSS+Supersampling as the AA method. So that option exists -- a hybrid technique of using DLSS to improve AA instead of frame rate.
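
To make the downscaling math concrete, here's a minimal numpy sketch, assuming roughly independent per-pixel noise (a 2x2 box downscale averages 4 samples, so the noise standard deviation drops by about half):

```python
# Hedged sketch: why DLSS-to-4K-then-downscale-to-1080p blends noise out.
# Assumes roughly independent per-pixel noise; a 2x2 box downscale averages
# 4 samples, cutting noise standard deviation by about 1/sqrt(4) = 2x.
import numpy as np

rng = np.random.default_rng(1)
clean_4k = np.full((2160, 3840), 0.5)
noisy_4k = clean_4k + rng.normal(0.0, 0.02, clean_4k.shape)   # fake "DLSS noise"

# 2x2 box-filter downscale 2160p -> 1080p
down_1080p = noisy_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print("4K noise std:    ", (noisy_4k - 0.5).std())    # ~0.020
print("1080p noise std: ", (down_1080p - 0.5).std())  # ~0.010
```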

Temporal method of removing DLSS noise
Or the other approach: attack the noise temporally -- speed up the noise cycling to make it invisible. Pushing the noise cycling to far faster speeds can push the flicker above human flicker-detection thresholds. Noise that was visible at 30fps or 60fps can become invisible at 480fps at 480Hz. Just like it's hard to see DLP noise in an E-Cinema projector at the movie theater, especially with ultra-high-resolution 4K DLP projectors from at least middle-of-theater seating, as the DLP pixels binary-cycle about 1440 to 1920 times a second to generate the color spectrum.

So DLSS noise (that bothers some people) can become invisible when frame rates go up by a massive amount (e.g. 10x). And noise of DLSS isn't as "binary" as DLP noise or other forms of noise (e.g. bad 35mm film noise or weak-analog-TV noise).

Given bad enough noise, the noise will indeed become visible again. But noise becomes more invisible (for a given noise floor) the higher the frame rate is -- DLSS noise is 1/10th as visible at 200fps as it was at 20fps, for example (assuming the exact same game, the exact same DLSS neural training set, the exact same frame detail level, and the exact same noise intensity, e.g. pixel color error relative to adjacent pixels).
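
A hedged toy model of why this happens, assuming independent per-frame noise and a fixed eye-integration window (real visibility also involves flicker fusion, persistence, and eye tracking, so the exact ratios are illustrative only):

```python
# Hedged toy model: more noise frames per eye-integration window = fainter noise.
# Assumes independent per-frame noise and a fixed ~25 ms integration window;
# real perception (flicker fusion, persistence, eye tracking) is more complex.
import numpy as np

rng = np.random.default_rng(2)
integration_window_s = 0.025
noise_std_per_frame = 0.05

for fps in (20, 60, 200, 1000):
    n = max(1, int(round(fps * integration_window_s)))       # frames averaged by the eye
    frames = rng.normal(0.0, noise_std_per_frame, (n, 100_000))
    perceived = frames.mean(axis=0).std()                     # ~ noise_std / sqrt(n)
    print(f"{fps:5d} fps -> ~{n:3d} frames integrated -> perceived noise std ~ {perceived:.4f}")
```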

Some people glance at DLSS, see the DLSS noise, and just discard DLSS. But DLSS noise can be 10x fainter under the right circumstances -- neural training set, sufficient frame rate increases, and quality of game integration -- so your mileage will vary. Others see only faint noise and don't really care. While for others it's super distracting, especially in esports, where reduced detail levels and large areas of solid color make the noise easier to see.

Obviously you can fiddle with DLSS settings to make the DLSS noise much more blatant. But at the end of the day, when DLSS 4.0 or 5.0 (on a future RTX 6000 or 7000 series) amplifies frame rates by 5x-10x instead of just 2x, the compromise of DLSS noise falls quite a bit once we're at triple or quadruple digit frame rates for the 1000Hz displays of the 2030s doing 8K 1000Hz UE5 graphics (yes, there's actually an engineering path to get there).

The RTX 4090 is rumored to be able to do 4K 400fps in some games, after all.

NVIDIA also credited me on Page 2 of the Temporally Dense Ray Tracing science paper. The paper also illustrates how more temporal samples per second ("temporally dense" = high Hz) greatly reduce temporal-noise issues. While DLSS noise is a very different topic, the noise visibility drops fairly dramatically when refresh rates & frame rates go up 5-10x. Cycle the noise faster (aka higher frame rates), and the noise becomes less and less visible. The flickering noise pixels flicker faster than a human's flicker detection threshold, and the noise looks more solid / fainter. Experimental displays already exist that show how this continues to scale.

If you make it 10x noisier (like really bad photon shot noise), it will require quite dramatic framerate increases to eliminate that noisiness, but for the current amounts of DLSS noise, it's possible to brute-force the noise visibility away via large frame rate amplification factors (i.e. distant-future DLSS increasing frame rates by 5-10x instead of just the 2x of DLSS 2.0 -- for example, RTX 4090 DLSS "3.0" is rumored elsewhere to be 3x-4x, but don't quote me on that).

In other words, they're apparently finding ways to make DLSS "suck less" for an ever-expanding set of use cases, including non-esports use cases and esports use cases. Lag-wise, artifacts-wise, even better frame rates, maybe even the ability to activate it in non-DLSS games -- like the DLSS 3.0 that's probably part of the RTX 4080, etc.

At the diminishing curve of returns of this refresh rate race, refresh rate increases need to be 3x-4x bigger to remain humanly visible (e.g. 360Hz-vs-1000Hz at 0ms GtG is much more visible than the faint 240Hz-vs-360Hz at current GtG pixel response). Retina refresh rates are not until well into the quadruple digits (or quintuple digits for sample-and-hold VR, due to the way wide-FOV retina-resolution displays really amplify the Hz limitations of sample-and-hold displays). So there is a long road of humankind progress needed for continued refresh rate improvement and GPU improvement.

Current DLSS 1.x / DLSS 2.x / FSR are metaphorically merely early-bird Wright Brothers 2x frame rate amplification technologies -- precursors to upcoming algorithms capable of increasing frame rates by 5x-10x (e.g. 1000fps UE5 quality within one human generation) -- and there is already research going on towards this.

It is great that AMD is leapfrogging NVIDIA (somewhat), at least until the hypothetical next-generation DLSS 3.0 assumed to be part of the RTX 4090, but they must keep leapfrogging and one-upping. We need 5x-10x frame rate amplification ratios in future smart FSR/DLSS algorithms for the 1000fps 1000Hz displays of the 2030s, and there are multiple algorithmic engineering paths to get there via long-term progress of frame rate amplification technologies.

Oh, and a semantic correction -- official Associated Press guidelines define it as "esports" with no capital (not "eSports") or hyphen (not "e-sports"), unless it's the first word of a sentence, where it becomes "Esports". Most businesses in the industry, including Blur Busters, have now followed suit. It's a nitpick nuance that I typically don't care much about and "just roll with", but -- umm -- friendly heads up.
 
Denoising of Ray Tracing. Had me confused for a lot of your post :LOL:
Not classical spatial denoising algorithms that are commonly used.

The TL;DR:

Many kinds of temporal noise decrease at more samples per second (aka more framerate & more Hz).

This includes film grain noise, pixel jitter noise, temporal AA noise (edge flickering), DLSS noise, the new temporal denoising of ray tracing (which may be done in addition to classical spatial denoising), etc -- all of them.

Cycling noise faster (of the same noise intensity) = the noise becomes fainter and fainter as the frame rate of the noise increases (assuming the magnitude of per-frame noise is unchanged).

Note: Self-exercise on a 144Hz monitor -- you can also witness this yourself by going to YouTube and playing a very filmgrain-noisy 24fps YouTube video at 6x speed (via a Chrome extension that allows playback speeds faster than 2x). You'll notice the filmgrain noise fades quite a lot at ultra high frame rates -- you're cycling the filmgrain noise 6x faster, and the noise becomes much more invisible, especially if the noise wasn't very blatant to begin with. Likewise, the same is true for any temporal noise artifacts generated by GPUs.
 
I agree with everything you said, Chief Blur Buster, but I fear it's going to go over most people's heads, and it's really not relevant unless we are talking about 10 years in the future.

To put it simply, the problem is e-peen (I believe Associated Press says the hyphen is the correct spelling). PC gamers want native resolution at "max settings" even if it doesn't actually look any better and tanks performance to the point of unplayability.

Just look at how PC gamers talk about consoles. "It's not 'real' 4k". No it's not, but it looks nearly as good and the game is smooth, and honestly there are PS4 games that look better than anything that has ever come out on the PC (Until Dawn, The Order 1886, etc.).

PS5 is great. The games look great and are smooth, mostly at 60 fps and 4K (or close to it). If you could buy one at $500 (or even at scalper prices), it would be a way better value than spending $3,000 or $4,000 on a PC to get a slightly better experience.

Look at the Unreal Engine 5 Matrix Awakens demo. It's running on a $500 console and it looks better than anything ever. Better than movie special effects from 15 or 20 years ago. On a $500 console.

These "upscaling" technologies just offend people. They think it's "cheating" or not "real" 4K, whatever that means. You know what, they look great. DLSS and FSR (on full quality) actually look better than native resolution and you gain performance.

On lower settings we could be talking about a 200% performance increase while still having an acceptable, decent-looking image (even if a little blurry), and that is still better because you can add additional effects like ray tracing and global illumination, which enhance the experience more than additional pixels.

But that doesn't make your e-peen grow. That's the heart of the issue.
 
Yep. It's a culture problem, not a technical one.
Depends on community, actually --

The culture of high-Hz VR e-peens, for example, doesn't seem to care as much, as long as there are fewer issues than keeping it turned off -- as long as the frame rate amplification (3dof and 6dof reprojection) helps reduce stutter without adding lag. Various VR platforms use various kinds of frame rate amplification now, as stutters are much more of a motion-sickness-inducing thing there. It's quite disconcerting to see stutters while head turning, so even Quest 2 uses 3dof reprojection to spherically rotate at 90fps (to stay in sync with head turns), independently of whatever the game frame rate is, even despite being somewhat inferior to Rift's 6dof reprojection.

Even when ASW 2.0 style algorithms are avoided/disabled, spherical rotation is often used in VR games to keep head turns in sync with the scenery at framerate=Hz, to prevent sudden nausea from stuttery head turning in VR. Head turning at 30fps in VR makes people barf (throw up), so they decoupled head-turn frame rate from game frame rate in VR. That frame rate amplification trick solved a major VR motion sickness problem by keeping head-turn framerate guaranteed at framerate=Hz, independently of game frame rate.
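
For the curious, here's a crude sketch of the idea only (not Oculus's actual ATW/ASW code, and the field-of-view numbers below are made up):

```python
# Hedged sketch of the *idea* behind 3dof rotational reprojection (timewarp),
# not Oculus's actual ATW/ASW implementation: shift the last rendered frame by
# however far the head has yawed since that frame was drawn, so head-turn motion
# stays locked to display Hz even when the game renders at half rate.
import numpy as np

FOV_DEG = 90.0
WIDTH_PX = 2000
PX_PER_DEG = WIDTH_PX / FOV_DEG            # crude small-angle approximation

def reproject_yaw(last_frame: np.ndarray, yaw_delta_deg: float) -> np.ndarray:
    shift_px = int(round(yaw_delta_deg * PX_PER_DEG))
    # np.roll wraps at the edges; a real compositor warps an oversized render instead
    return np.roll(last_frame, -shift_px, axis=1)

# Game renders at 45 fps, headset scans out at 90 Hz: every other refresh reuses
# the previous game frame, warped by the yaw measured just before scanout.
last_game_frame = np.zeros((1000, WIDTH_PX), dtype=np.uint8)
warped = reproject_yaw(last_game_frame, yaw_delta_deg=0.4)
```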

Due to comfort considerations, stutter-free head turns quietly became essentially a mandatory feature for inclusion in the Oculus store, so frame rate amplification can prioritize certain parts of the rendering (like the display imagery panning/scrolling on a VR microdisplay caused by head turning in VR).

In fact, this is still on topic -- as FSR, DLSS, and other algorithms are now increasingly utilized by new PCVR developers for new PCVR games, and more eagerly swallowed up by VR-motion-sick users -- to keep game frame rates high too, for the secondary motion sickness causes (e.g. stuttery game characters are a lesser evil versus stuttery head turns), whac-a-moling all those nausea-inducing stutter weak links. The real world does not stutter, and a Holodeck does not stutter -- stutter is really motion-sickness inducing.

When you include Quest 2's flavours of proprietary frame rate amplification technologies (center-foveated rendering, tile-based low-resolution rendering for peripheral vision, 3dof reprojection, etc) -- then you can consider various frame rate amplification technologies to be permanently enabled in more than 15-20 million PCVR and standalone VR headsets combined, with few complaints -- i.e. the majority of modern post-2016 VR headsets have kept their various frame rate amplification technologies enabled.

That community of e-peens (VR), despite their nitpicks, already pretty unanimously agrees that the benefits of FSR-like/DLSS-like algorithms outweigh the evils (VR motion sickness).
 
Depends on community, actually --

The culture of high-Hz VR e-peens, for example, doesn't seem to care as much as long as there are no artifacts -- as long as the frame rate amplification (3dof and 6dof reprojection) helps reduce stutter without adding lag. Various VR platforms use various kinds of frame rate amplification now, as stutters are much more of a motion-sickness-inducing thing there. It's quite disconcerting to see stutters while head turning, so even Quest 2 uses 3dof reprojection to spherically rotate at 90fps (to stay in sync with head turns), independently of whatever the game frame rate is, even despite being somewhat inferior to Rift's 6dof reprojection.

Even when ASW 2.0 style algorithms are avoided/disabled, spherical rotation is often used in VR games to keep head turns in sync with the scenery at framerate=Hz, to prevent sudden nausea from stuttery head turning in VR. Head turning at 30fps makes people barf (throw up), so they decoupled head-turn frame rate from game frame rate in VR, and that frame rate amplification trick solved the problem. Stutter-free head turns became essentially a mandatory feature for inclusion in the Oculus store.
Thanks for the informative post :). I was mainly referring to regular pc gaming.
 
Thanks for the informative post :). I was mainly referring to regular pc gaming.
True. That said, VR programming techniques often filter down to PC gaming. The VR stutter improvements that Microsoft made to Flight Simulator dramatically improved the game's fluidity on PCs. Far fewer stutters in Flight Simulator on the PC monitor now, even if you never use its VR mode.

Although Microsoft has not yet added DLSS/FSR, it's a great example of how VR software improvements can filter down to PC gaming improvements.

This is happening to a larger number of game codebases and game engines that were designed for both VR and non-VR operation.

Also, it's not just the VR community. The PC sim/arcade racing communities appear more open to frame rate amplification technologies, which are already used in some sim racing engines. Various basic in-engine frame rate amplification tricks are already used internally by the latest Forza Motorsport titles (the improved FM7 Dynamic Optimization setting, for example) -- realtime changes to graphics quality on the fly, to keep the frame rate high and consistent and avoid nasty framerate dips. Absolute lag offset is also less critical there than lag variances -- realistic cars have built-in inherent lag (e.g. delayed steering wheel response, delayed throttle response). The quality of scenery away from the racetrack is much less critical than that distant shooter in FPS esports, for example, so non-distracting realtime variances in the quality of off-racetrack details are acceptable (realtime changes to model poly counts, soft downrezzing of trees, rendered view distance, etc), whereas sudden stutters and framerate fluctuations can be quite distracting to the ability to steer your car precisely through a hairpin turn.
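
To illustrate the general idea of that kind of dynamic optimization (not Forza's actual code -- the target, gain, and timings below are made up), a tiny feedback controller that trades render scale for consistent frame pacing:

```python
# Hedged sketch of the general idea behind dynamic resolution scaling
# (NOT Forza's actual Dynamic Optimization code): nudge render scale down when
# GPU frame time overshoots the target, and back up when there is headroom.

TARGET_MS = 1000.0 / 60.0       # aim for a consistent 60 fps frame budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def update_render_scale(scale: float, gpu_frame_ms: float) -> float:
    error = gpu_frame_ms - TARGET_MS
    scale -= 0.01 * error                      # simple proportional controller
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for gpu_ms in (15.0, 18.5, 21.0, 19.0, 16.0, 14.0):   # pretend per-frame GPU timings
    scale = update_render_scale(scale, gpu_ms)
    print(f"gpu {gpu_ms:4.1f} ms -> render scale {scale:.2f}")
```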

At some brute overkill point, 4K temporarily downrezzing to 1440p during a high-detail hairpin turn is more acceptable than a steering-mistake-inducing stutter -- latency variances can be killer when timing turns at high speeds on the world's most difficult racetracks. A car moving 300 feet per second (roughly 200mph) is about 1 foot mistimed if latency varies by 3ms. At times, you need to suddenly spin the steering wheel very fast at a very precise moment for some kinds of turns on some tracks, so you're anticipating a perfectly timed steering-wheel turn, based on your sheer familiarity with your racecar's handling and lag-behind behavior -- trusting your car to turn at the absolutely precise time. A mistimed steering-wheel turn (caused by unexpected latency behaviors from frame rate fluctuations) can put a road-edge-hugging tire off the road onto grass at 200mph, spinning you out as you're hugging a hairpin at tight tolerance.

If you trained to drive on the pre-trained, known latency of a specific car, the latency yo-yo effect (of frame rate fluctuations) is a no-no in high-speed sim racing. 50fps varying to 100fps can be a very ugly 10ms latency change, bigger than the 3ms above. You're forced to race a bit slower to tolerate the latency yo-yo, and you perform worse. The lag changes can be more dramatic than the difference between two muscle cars that you must train on separately, long-term, to understand their lag behaviours in real life (how the throttle spools up, how the real-world steering lags, etc). A large lag change (from a large frame rate change) means you're driving a totally different car with totally different handling mechanics -- simply because the frame rate changed. Casual racers generally do not care, but competitive esports racers do -- consistency (of latency, of frame rate) here often becomes more important than the magnitude of the latency -- to preserve your race training. Car lag behaviors do fluctuate in real life, but ideally you want consistency to limit that to the game mechanics (e.g. simulated car lag-behind behaviors), not whatever crap the GPU/CPU is throwing at you to make that familiar muscle car feel like a completely unfamiliar car model in handling.
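
A quick sanity check of those two numbers:

```python
# Quick arithmetic check of the figures above.
MPH_TO_FTPS = 5280 / 3600.0

speed_ftps = 200 * MPH_TO_FTPS                 # ~293 ft/s
print("distance covered in 3 ms:", speed_ftps * 0.003, "ft")   # ~0.88 ft (~1 foot)

frametime_50fps_ms = 1000.0 / 50               # 20 ms
frametime_100fps_ms = 1000.0 / 100             # 10 ms
print("latency swing 50<->100 fps:", frametime_50fps_ms - frametime_100fps_ms, "ms")  # 10 ms
```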

By using FSR/DLSS in such a racing sim game, it is possible to have the (potentially semi-distracting) visible downrezzing or viewing-distance scenery-popping be completely avoided. So such frame rate amplification technologies are more likely to be swallowed up, if it is an improvement over status quo of ugly real-time detail-level changes designed to keep frame rates consistent in some racing games.

Long-time racetrack drivers (virtual and real) expect extremely consistent latency behaviors (steering wheel latency that behaves predictably) for a specific car speed through specific, known, pre-trained turns, trying to edge their racing speeds up with the sheer skill of rock-stable precise driving. Some of the sim or arcade racing engines already use game-developer-proprietary frame rate amplification tricks (like uprez/downrez, viewing distance, varying model polycounts, etc -- many tricks utilized even before DLSS or FSR existed). I'd wager that racing e-peens are more welcoming of frame rate amplification technologies that improve consistency of high frame rates and consistency of latency, as long as they reduce those blatantly visible artifacts (e.g. viewing-distance popping, etc).

Sure, these are more niche PC esports than CS:GO esports, but it's a PC gaming community fairly open to frame rate amplification technologies because of the different priorities refined over the many years.

Yes, FSR/DLSS need to improve to have fewer artifacts, but -- let's face it, there are many communities where the artifacts of DLSS/FSR are already a lesser evil than existing techniques/problems. The esports world isn't just FPS...
 
The TL;DR:

Many kinds of temporal noise decrease at more samples per second (aka more framerate & more Hz).

This includes film grain noise, pixel jitter noise, temporal AA noise (edge flickering), DLSS noise, the new temporal denoising of ray tracing (which may be done in addition to classical spatial denoising), etc -- all of them.

Cycling noise (of the same intensity) faster = the noise becomes fainter and fainter as the frame rate of the noise increases (assuming the magnitude of per-frame noise is unchanged).

Note: Self-exercise on a 144Hz monitor -- you can also witness this yourself by going to YouTube and playing a very filmgrain-noisy 24fps YouTube video at 6x speed (via a Chrome extension). You'll notice the filmgrain noise fades quite a lot -- you're cycling the filmgrain noise 6x faster, and the noise becomes much more invisible. Likewise, the same is true for any temporal noise artifacts generated by GPUs.
Ah, I got ya. Interesting stuff.
 
DLSS and FSR (on full quality) actually look better than native resolution and you gain performance.

This is just factually incorrect. SOME narrow aspects of DLSS CAN look better than native resolution depending on the title and textures used, but other aspects can look a little strange.

FSR will always look worse when used, regardless of the quality settings. It's like popping a low-res image into Photoshop and scaling it up. This is impossible to do without quality loss. You can minimize that quality loss with the highest quality settings, but it will always be worse than native even at its best. Even at ultra settings there is still added pixelation and blur compared to native. FSR is not magic. It is still upscaling. It's a matter of trading off some quality for greater framerate. There is nothing wrong with making that tradeoff if you feel you need it, but let's call it what it is.

DLSS is closer to magic because of the AI aspects, but it is not perfect, makes mistakes and results in odd artifacts on occasion.

When it comes to upscaling, native resolution will always be the gold standard no matter what. You can get increasingly closer to it by using better and better quality settings, but rendering at a lower resolution and upscaling will ALWAYS come with sacrifices.

Actually, scratch that. The gold standard is running at 4x DSR and DOWNSCALING. Nothing looks better.
 
G-Sync now is basically a certification standard like THX or Dolby. When G-Sync came out there was nothing that compared to what it did or how well those monitors performed or looked -- they were beautiful. As the hardware that went into standard monitors started getting better and the average quality of panels improved, those features were gradually rolled into the VESA standard, but even now G-Sync monitors look beautiful and are very nice to game on. With FreeSync, though, whether you are FreeSync, FreeSync Premium, or FreeSync Premium Pro certified, you are looking at very different performance ranges, and you really need the Premium Pro models to compare against the G-Sync ones. G-Sync and FreeSync obviously offer variable refresh rates, but that is the least interesting part of the certification at this stage.
FreeSync is literally a part of the VESA standard. It literally was created from it. Not GSync. See below.

https://www.anandtech.com/show/1379...-adaptive-sync-with-gsync-compatible-branding
 
FreeSync is literally a part of the VESA standard. It literally was created from it. Not GSync. See below.

https://www.anandtech.com/show/1379...-adaptive-sync-with-gsync-compatible-branding
Right. FreeSync was AMD's brand name, but it's basically always been a VESA open standard (Nvidia was just stubborn about supporting it).

Which is also why I think some version of FSR will win out in the end. Because it is open source and it works on anything: AMD, Nvidia, and Intel. Even cheap low-end GPUs from 10 years ago can work with FSR. It will win.
 
When it comes to upscaling, native resolution will always be the gold standard no matter what.
Actually, my point is that it's not. Resolution is not the only metric to look at, and in fact is not as important as people think.

For example, VHS movies were 333 × 480 pixels, and they looked totally real. I would actually prefer playing a game at VHS quality, if it was photo-realistic.

By reducing resolution and increasing frame rate, you have the ability to use more advanced lighting, such as ray tracing and global illumination, which have a much bigger effect on the experience than a couple extra pixels.

So what I am saying is that the total experience, the graphics in general and the smoothness of the game, are better with these technologies. Not that there are not some artifacts or weird things you can find in Photoshop with 8x zoom.

And, anyhow, at ultra quality, I think you would be hard pressed to find a difference in a blind A/B test. Yes, there is some difference (and I have seen cases where it looked perceptually better with FSR/DLSS, not technically better).

So it doesn't really matter. Maybe you do lose some small amount of picture quality (that may not even be perceptually noticeable) but gain fps and potentially increase graphics with RT/GI, so I would say it's a big win overall.
 
FreeSync is literally a part of the VESA standard. It literally was created from it. Not GSync. See below.

https://www.anandtech.com/show/1379...-adaptive-sync-with-gsync-compatible-branding
FreeSync isn't a VESA standard. Adaptive-Sync is.
Right. FreeSync was AMD's brand name, but it's basically always been a VESA open standard (Nvidia was just stubborn about supporting it).

Which is also why I think some version of FSR will win out in the end. Because it is open source and it works on anything: AMD, Nvidia, and Intel. Even cheap low-end GPUs from 10 years ago can work with FSR. It will win.
FreeSync isn't open source, it's an open standard. There is a difference. And it's more than just a brand name. FreeSync includes the software that interfaces with the hardware, which is written and maintained by AMD.
 
Did you read the article? VESA added Adaptive-Sync into the DisplayPort spec, I believe in 2014. AMD then created the FreeSync brand name, but it was always based on the VESA standard and they just added the software/driver component to work with their GPUs.
 
And, anyhow, at ultra quality, I think you would be hard pressed to find a difference in a blind A/B test. Yes, there is some difference (and I have seen cases where it looked perceptually better with FSR/DLSS, not technically better).

With DLSS sure. You can get things that look sharper in certain circumstances, but there are also weird artefacts. So "looks better" will depend.

With FSR absolutely not. I have spent lots of time with FSR. Even with the ultra settings image quality degradation is immediately apparent particularly with pixelation but also with some blur.

All else being equal, FSR is definitely an image quality downgrade, even at max settings, but to your point all else is rarely equal.

It becomes a judgment call: what do you find more offensive? Upscaled pixelation or having to drop to lower quality settings?
 
FreeSync is literally a part of the VESA standard. It literally was created from it. Not GSync. See below.

https://www.anandtech.com/show/1379...-adaptive-sync-with-gsync-compatible-branding
Adaptive frame rates have been part of the VESA standard since day one, as they have been part of video display standards since the 1970s -- they were needed for many types of CRT displays. AMD gave it the catchy nickname of FreeSync and developed their software interface for it. Adaptive sync technology is cool, sure, but it isn't what makes the G-Sync displays as nice as they are. When NVidia started the G-Sync certification process, nobody was putting CPUs in displays capable of doing adaptive frame rates; they certainly weren't factory calibrating and verifying colour accuracy (in gaming monitors at least); they weren't guaranteeing refresh rates and latency times, black-to-white timing, and bleed-through. All things that a monitor needs to pass to be G-Sync certified. These things are done to a lesser extent in the FreeSync Premium Pro certification, but simply saying FreeSync is adaptive frame rates, so it is part of the VESA standard, is a vast understatement of the work that went into making these things actually possible.
 
Adaptive frame rates have been part of the VESA standard since day one, as they have been part of video display standards since the 1970s -- they were needed for many types of CRT displays. AMD gave it the catchy nickname of FreeSync and developed their software interface for it. Adaptive sync technology is cool, sure, but it isn't what makes the G-Sync displays as nice as they are. When NVidia started the G-Sync certification process, nobody was putting CPUs in displays capable of doing adaptive frame rates; they certainly weren't factory calibrating and verifying colour accuracy (in gaming monitors at least); they weren't guaranteeing refresh rates and latency times, black-to-white timing, and bleed-through. All things that a monitor needs to pass to be G-Sync certified. These things are done to a lesser extent in the FreeSync Premium Pro certification, but simply saying FreeSync is adaptive frame rates, so it is part of the VESA standard, is a vast understatement of the work that went into making these things actually possible.
Never said it was better than GSync. I'm stating Freesync is based off of the VESA standard. That's just a fact.
 
Adaptive frame rates have been part of the VESA standard since day one, as they have been part of video display standards since the 1970s -- they were needed for many types of CRT displays. AMD gave it the catchy nickname of FreeSync and developed their software interface for it. Adaptive sync technology is cool, sure, but it isn't what makes the G-Sync displays as nice as they are. When NVidia started the G-Sync certification process, nobody was putting CPUs in displays capable of doing adaptive frame rates; they certainly weren't factory calibrating and verifying colour accuracy (in gaming monitors at least); they weren't guaranteeing refresh rates and latency times, black-to-white timing, and bleed-through. All things that a monitor needs to pass to be G-Sync certified. These things are done to a lesser extent in the FreeSync Premium Pro certification, but simply saying FreeSync is adaptive frame rates, so it is part of the VESA standard, is a vast understatement of the work that went into making these things actually possible.
Seems people have also forgotten that until Gsync certification was a thing, we had a ton of freesync monitors that were nothing but flickering hot messes making the rounds.
 
Actually, scratch that. The gold standard is running at 4x DSR and DOWNSCALING. Nothing looks better.
I know some people use DLSS/FSR as a faster version of supersampled AA, instead of for frame rate benefit.

Once it's downscaled, the FSR/DLSS artifacts, noises, and most other issues disappear, and you gain the pretty supersampled AA at a higher frame rate than native supersampled AA.

Using it as a means to improve quality with less framerate sacrifice -- native supersampled AA really viciously slaughters frame rates.
___

But you ain't seen nothing yet.

Imagine.

GPU AI gets smarter and smarter; AI will be smart enough to Photoshop a VHS image, like an artificial artist, into a 4K image that looks perfectly 4K. ("[GPU AI brain thinking] ...that looks like a low-resolution human, let's make sure it looks like a proper high-resolution human consistent with previous frames, so I'm gonna repaint the whole human much more sharply onto a new artist canvas in a new frame buffer... oh, that looks like a blurry brick wall... let me use my existing 5000-brickwall library of textures to fix it up and do a little airbrushing myself to retouch the colors, tint, clarity, AA, edges, non-repetitions, angles, skew, tilt, and seams properly... okay, let me run my 'it looks incorrect to humans' checklist and fix the remaining errors... etc. etc." -- crudely simplified for humans to read.)

And be able to perform all this artificial automated Photoshopping in just 1/1000sec.

(assisted by timesaving tricks like previous frames to speed up work on current frames more flawlessly, via various true-3D AI-based perfect-parallax reprojection algorithms, so it does less reworking every frame, using less horsepower per second to get more frame rate at ever higher detail and higher resolutions).

And achieve 1000fps 1000Hz, as a super GPU AI high-speed version of the world's best human Photoshop artist.

AI is amazing; some of it is starting to do some pretty crazy shit in frame rate amplification technologies. I even imagine, a human generation or two into the future, that a future video codec (e.g. H.268 or H.269 or H.270) will use various kinds of AI compression algorithms to compress the 8K 1000fps video of the future at incredibly low bitrates, via a detailed AI knowledge of what scenes look like and the ability to perfectly re-artist a low-resolution image (with an AI mind verifying the images probably look correct to a human) into a very high resolution one.

Over the long-term (10-20+ years), this sort of stuff is important for ever-more-realistic VR that looks photorealistic, since you need crazy amounts of resolution & frames to even remotely begin to trick a human into a more realistic Holodeck feel.

In current ongoing research (much like the 1980s Japanese HDTV researchers), the future of 8K 1000fps UE5 is based on a lot of AI frame rate amplification algorithms. This is the gist, simplified into ELI5. That's what the top researchers are already actually researching, even if portions of this will take over a decade. In the interim, AI-based frame rate amplification will get smarter and smarter in an incremental way.

TL;DR: Future advanced frame rate amplification technologies will look increasingly more and more perfect over the long term. Literally, just imagine GPU AI smart enough to recognize a frame looks incorrect to humans and re-AI-photoshops the frame sharper & intentionally surgically removes artifacts -- knowing it will now probably look correct to humans.
 
With DLSS sure. You can get things that look sharper in certain circumstances, but there are also weird artefacts. So "looks better" will depend.
Well I am talking about a subjective experience. I prefer a soft image, like games look on consoles. So "better" to me is not a sharper image.
 
Gsync is also based on the VESA standard, Freesync and GSync are just different implementations of said VESA standard.
It is now, but originally it was based on Nvidia's proprietary G-Sync module.

You can see Nvidia's announcement of G-Sync here in October of 2013:
https://www.nvidia.com/en-us/geforc...volutionary-ultra-smooth-stutter-free-gaming/

And the whitepaper announcement of VESA Adaptive-Sync (added to the DisplayPort 1.2a spec) was in March of 2014:
https://www.vesa.org/wp-content/uploads/2014/07/VESA-Adaptive-Sync-Whitepaper-140620.pdf

So no, I do not think you have your information correct.
 
It is now, but originally it was based on Nvidia's proprietary G-Sync module.

You can see Nvidia's announcement of G-Sync here in October of 2013:
https://www.nvidia.com/en-us/geforc...volutionary-ultra-smooth-stutter-free-gaming/

And the whitepaper announcement of VESA Adaptive-Sync (added to the DisplayPort 1.2a spec) was in March of 2014:
https://www.vesa.org/wp-content/uploads/2014/07/VESA-Adaptive-Sync-Whitepaper-140620.pdf

So no, I do not think you have your information correct.

Both of you are correct (to an extent).

G-SYNC added some proprietary extras. G-SYNC native chips were connected to VESA VRR panels. So it added a proprietary link to an otherwise VESA chain.

VESA VRR and FreeSync are 100% compatible with each other though, as you can force FreeSync onto a VESA VRR panel, and you can connect a VESA VRR panel to an AMD card outputting FreeSync -- it's full-chain binary compatible. FreeSync simply enforces some quality minimums on top of a VESA standard, as well as plug-and-play signalling (e.g. a monitor able to tell the GPU what VRR range it supports).

But even in an EDID-less VESA VRR panel, one can use ToastyX to force VRR via a Windows Registry based EDID override (at least with a Radeon card) -- that's how some of us got FreeSync VRR to work on certain analog MultiSync CRT tubes (via a HDMI-to-VGA adaptor), as it's just generically a variable-sized Back Porch (as the method of varying the temporal spacing between refresh cycles) as a minor modification to old raster delivery.

I have confirmed that the Compaq 1210 CRT and Mitsubishi Diamond Pro 2070B CRT can support FreeSync (from roughly ~56Hz through max Hz), as long as framerate fluctuations are not too severe (a fixed horizontal scanrate combined with a vertical refresh rate slewing gradually in realtime will avoid triggering the multisync refresh-rate-change blackout circuitry, e.g. next refresh cycle is 67Hz, then 68Hz, then 70Hz, etc, rather than suddenly 56Hz to 100Hz). So you successfully get a fluctuating CRT refresh rate in sync with a fluctuating frame rate, while playing a game -- FreeSync on some multisync CRTs works!

It's impressive how VESA Adaptive-Sync VRR is just a minor modification on a very old 100-year raster delivery sequence (used since the dawn of TVs).

The only thing that generic VRR does (VESA Adaptive-Sync, FreeSync, HDMI VRR) is vary the number of scanlines in the Vertical Back Porch, as the method of temporally spacing the refresh cycle. The scanrate (number of pixel rows per second) remains unchanged.

The frames of a refresh cycle are simply serializations of a 2D image over a 1D wire or broadcast, pixels transmitted left-to-right, top-to-bottom.

[Image: VideoSignalStructure -- diagram of the raster signal structure: active pixels surrounded by front/back porches and horizontal/vertical sync intervals]


This is actually 100 years old -- the first 1920s and 1930s analog televisions used this signal delivery sequence all the way back to the invention of the first electronic TVs. The Horizontal Sync triggered the CRT electron beam to go to the next row, and the Vertical Sync triggered the CRT electron beam to go back to the top of the screen. The porches were simply padding (overscan).

When the world switched from analog to digital, all of the world's display signal standards simply temporally digitized the analog raster delivery sequence, because it's still a natural way to deliver a 2D image over a 1D wire -- one pixel at a time, left-to-right, top-to-bottom, delivering pixels like reading a book. Temporally, analog and digital have no difference in the timing and speed of pixel delivery for the same resolution (calculated with the same VESA standard such as DMT, CVT, or CVT-R) -- it is a 1:1 temporally perfect mapping. The porches are just padding, and the sync intervals are just one big comma separator.

To more easily understand the image above: just imagine your computer desktop as a higher resolution, with secret resolution beyond the top edge, bottom edge, left edge and right edge. The pixels of the whole signal layout are delivered one pixel at a time at the exact Pixel Clock, with the number of pixel rows per second matching the Horizontal Scan Rate (a 67KHz scan rate = one pixel row transmitted over the video cable in 1/67000sec = 67,000 pixel rows per second, including the offscreen pixel rows you can't see, which are simply padding). The syncs/paddings are useful for signalling, and for giving the monitor time, to initialize a new refresh cycle or a new pixel row. So even though we don't have CRT beams anymore, the exact same analog signaling is preserved temporally in the digital domain, as de facto instructions & time for the display to process the refresh cycle.

Now, remember they temporally preserved this in both the analog and digital domains. The current VESA VGA/SVGA/XGA/etc modes and ATSC HDTV standards have the necessary paddings to allow digital to be dumb-converted to analog for compatibility. This allows proper 1:1 converters between HDMI and component (and vice versa), or VGA-to-HDMI and vice versa.

It also made it easier to design cards that had both VGA and DVI outputs outputting pixels in sync with each other in both analog and digital domains. So this standardization symmetry stuck temporally during the transition from analog to digital. Even as pixels became higher color depth (HDR), the temporals are still there even in 2020s DisplayPort.

Temporally, it still maps perfectly 1:1 analog to digital and vice versa.

Then came VRR, which was a super simple modification: just vary the number of offscreen pixel rows between refresh cycles. A bigger Back Porch = more time between refresh cycles. By mathematically adding/removing pixel rows from the Back Porch, you change the time interval between refresh cycles, so that each refresh cycle has its own unique Hz. Yet you're still piggybacking on old raster signal delivery.
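
A hedged worked example with made-up timing numbers (not a real EDID): with the horizontal scan rate fixed, the refresh interval is simply the total row count divided by the scan rate, so padding extra Back Porch rows stretches each refresh cycle to match whatever frame time the GPU just delivered:

```python
# Hedged worked example of back-porch based VRR timing, assuming a fixed
# horizontal scan rate (rows/second). All numbers are illustrative, not a real EDID.
SCAN_RATE_ROWS_PER_S = 162_000      # fixed horizontal scan rate for this made-up mode
VERTICAL_ACTIVE = 1440
V_FRONT_PORCH, V_SYNC, V_BACK_PORCH_MIN = 3, 5, 30

def refresh_interval_s(extra_back_porch_rows: int) -> float:
    v_total = (VERTICAL_ACTIVE + V_FRONT_PORCH + V_SYNC +
               V_BACK_PORCH_MIN + extra_back_porch_rows)
    return v_total / SCAN_RATE_ROWS_PER_S

def back_porch_padding_for(frame_time_s: float) -> int:
    """Extra offscreen rows to pad so the refresh lands when the frame is ready."""
    base = refresh_interval_s(0)
    return max(0, int(round((frame_time_s - base) * SCAN_RATE_ROWS_PER_S)))

print("max-Hz refresh interval:", round(refresh_interval_s(0) * 1000, 2), "ms")  # ~9.12 ms (~110 Hz)
print("extra back-porch rows for a 60 fps frame:", back_porch_padding_for(1 / 60.0))  # ~1222 rows
```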

That's why VRR is preserved during a symmetric 1:1 analog:digital conversion.

The sheer audacious simplicity of the VRR modification to fixed-Hz delivery is why VRR works fine on some flexible multisync CRTs.

(ignoring the lack of plug-n-play EDID, which you need to put as an override into the Windows Registry via utilities such as ToastyX CRU, or by creating a custom Linux modeline).

For those familiar with Signal Timings (those old-fashioned legacy numbers you see in a Custom Resolution utility), that's how dead-simple the VRR modification to the old raster-delivery sequence is.

Much of the G-SYNC chip is the astoundingly complex dynamic overdrive algorithms they use for superior VRR overdrive. Proprietary G-SYNC still does this; it's just some proprietary signalling & some supercharged overdrive algorithms (>100x more complicated than a simple Overdrive Gain). That 256 kilobytes on an original G-SYNC monitor -- that's nearly 100% overdrive related! For advanced technical reading, see the spoiler:
Very good VRR overdrive is stupendously complex -- especially if you're doing 3D or 4D OD LUTs combined with algebra (e.g. computing an intentional overdrive overshoot/undershoot artifact-prevention compensation from 3 or 4 or 5 variables -- such as the pixel color of 2 frames ago & the frametime between 2 frames ago and 1 frame ago & the pixel color of 1 frame ago & the frametime between 1 frame ago and the current frame & the pixel color of the current frame).

Varying frametimes create OD artifacts (ghosting/coronas that blatantly appear and disappear). And we need different OD for different (original, destination) pixel color pairs. So imagine a 64 kilobyte LUT just for a 2-variable lookup. The memory requirements can scale astronomically, so OD LUTs are often compressed (e.g. 17x17 instead of 256x256) and then interpolated, to save memory.
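
To make the "17x17 interpolated LUT" idea concrete, here's a hedged sketch with made-up values (not any real panel's table): the overdrive value for an arbitrary (previous, target) grey-level transition is bilinearly interpolated from the sparse grid -- and every extra axis (frametime, frame-before-last) multiplies the table size:

```python
# Hedged sketch of a sparse, interpolated overdrive LUT (the common 17x17 case).
# Real controllers add more axes (frametime, the frame before last), which is
# where the memory requirements explode. The values below are made up.
import numpy as np

GRID = np.linspace(0, 255, 17)                  # 17 grey levels per axis
# od_lut[i, j] = overdriven drive value for the transition GRID[i] -> GRID[j]
od_lut = np.clip(GRID[None, :] + 0.35 * (GRID[None, :] - GRID[:, None]), 0, 255)

def overdrive(prev: float, target: float) -> float:
    """Bilinearly interpolate the 17x17 table for an arbitrary 8-bit transition."""
    x, y = prev / 255 * 16, target / 255 * 16
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, 16), min(y0 + 1, 16)
    fx, fy = x - x0, y - y0
    top = od_lut[x0, y0] * (1 - fy) + od_lut[x0, y1] * fy
    bot = od_lut[x1, y0] * (1 - fy) + od_lut[x1, y1] * fy
    return top * (1 - fx) + bot * fx

print(overdrive(64, 192))   # ~237: drive harder than 192 to speed up the 64->192 transition
```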

Now imagine 3D,4D or 5D LUT. Now you sometimes need to throw in algorithms (e.g. curve-fitted via quadratic regression or cubic regression formulas on thousands of oscilloscope measurements), to turn LUTs into algebra instead to save memory. But then at 5D, you might end up needing a LUT of thousands of different algebra formulas that were automatically calculated from a very long automated testing session (of various different color combos and frametime combos, using a photodiode oscilloscope).

Tons of tricks to compress theoretically-terabyte LUTs into 256K of FPGA-accessible memory. Who knows what NVIDIA did? It's proprietary. But we can say this because we already know "theoretically perfect" overdrive scales to astoundingly complex leagues.

Most FreeSync monitors ignore frametime and only use a 2D OD LUT of just 17x17 entries (that is interpolated) -- originally standardized a long time ago in this research paper.

But higher Hz, VRR, and strobing, have shown that the common 17x17 interpolated 2D LUT (used in almost all LCD panels output by AUO, Innolux, and others) are woefully inadequate.

G-SYNC native goes far beyond this and replaces the common scaler overdrive with a vastly superior NVIDIA proprietary overdrive algorithm running off a hell of a lot more memory -- the 256 kilobytes or more of memory you hear about in a G-SYNC chip -- that's all overdrive LUT related! Sometimes even the pixel color and frametime from 2 frames ago still screws around with overdriving the current frame.

Even with just 3-variable or 4-variable OD algorithms, that's why G-SYNC chips have a lot of RAM -- for all those overdrive LUTs. Much of proprietary G-SYNC is a lot of engineering man-hours in superior real-time dynamic overdrive.

The remainder is some 2-way signalling as well as NVIDIA-GPU-locking DRM layers. Interestingly, it is nowadays possible to use a GPU shader to create overdrive algorithms (similar to old ATI Radeon Overdrive from a long time ago, for screens with no overdrive), since you can digitally calculate pixel overshoot colors as RGB colors mapped to voltages, and use that as your software-based overdrive.

Also, as panels get faster, G-SYNC native VRR overdrive gets less and less benefit over simpler VRR overdrive, so the extra engineering of the G-SYNC premium may no longer be as worthwhile as it used to be. However, people picky about VRR overdrive artifacts, will generally be more attracted to a G-SYNC native monitor, with the G-SYNC cost premium being more worthwhile.

G-SYNC native chip monitors are actually more than 90% VESA VRR standard
If you exclude the stupendous engineering work on the improved dynamic overdrive of native G-SYNC (as well as the bonus ULMB feature), most of G-SYNC native piggybacks >90% on VESA. That said, newer G-SYNC monitors now unlock full VESA compatibility, so you can use FreeSync on many newer native G-SYNC chipped monitors nowadays.
 
It is now, but originally it was based on Nvidia's proprietary G-Sync module.

You can see Nvidia's announcement of G-Sync here in October of 2013:
https://www.nvidia.com/en-us/geforc...volutionary-ultra-smooth-stutter-free-gaming/

And the whitepaper announcement of VESA Adaptive-Sync (added to the DisplayPort 1.2a spec) was in March of 2014:
https://www.vesa.org/wp-content/uploads/2014/07/VESA-Adaptive-Sync-Whitepaper-140620.pdf

So no, I do not think you have your information correct.
The G-Sync module is how they chose to implement the technology. In 2013, when NVidia did this, the VESA standard regarding adaptive sync was an ugly mess and manufacturers had far more leeway in its final implementation. There were others trying to do what NVidia did, but they did it poorly and it was bad. In 2014, all VESA did was add the Adaptive-Sync specification to DisplayPort; at that time it was already in the HDMI, VGA, and DVI specifications, it just worked like garbage because it was an optional part of the VESA specification, so actual support for it was all over the map.
 
Okay, fine, but G-Sync was still a proprietary implementation even if the idea was based on tech that existed. FreeSync on the other hand was compliant with the open standard.
 
Did you read the article? VESA added Adaptive-Sync into the DisplayPort spec, I believe in 2014. AMD then created the FreeSync brand name, but it was always based on the VESA standard and they just added the software/driver component to work with their GPUs.
Exactly my point. It's not FreeSync without AMD's software component.
 
I know some people use DLSS/FSR as a faster version of supersampled AA, instead of for frame rate benefit.

Once it's downscaled, the FSR/DLSS artifacts, noises, and most other issues disappear, and you gain the pretty supersampled AA at a higher frame rate than native supersampled AA.

Using it as a means to improve quality with less framerate sacrifice -- native supersampled AA really viciously slaughters frame rates.
___

That is interesting.

So essentially, DLSS it up to a 4x DSR resolution and then scale it back down to monitor resolution? I hadn't even considered trying that. (then again, I don't have a DLSS compatible GPU yet, so I haven't really experimented)


But you ain't seen nothing yet.

Imagine.

GPU AI gets smarter and smarter; AI will be smart enough to Photoshop a VHS image, like an artificial artist, into a 4K image that looks perfectly 4K. ("[GPU AI brain thinking] ...that looks like a low-resolution human, let's make sure it looks like a proper high-resolution human consistent with previous frames, so I'm gonna repaint the whole human much more sharply onto a new artist canvas in a new frame buffer... oh, that looks like a blurry brick wall... let me use my existing 5000-brickwall library of textures to fix it up and do a little airbrushing myself to retouch the colors, tint, clarity, AA, edges, non-repetitions, angles, skew, tilt, and seams properly... okay, let me run my 'it looks incorrect to humans' checklist and fix the remaining errors... etc. etc." -- crudely simplified for humans to read.)

And be able to perform all this artificial automated Photoshopping in just 1/1000sec.

(assisted by timesaving tricks like previous frames to speed up work on current frames more flawlessly, via various true-3D AI-based perfect-parallax reprojection algorithms, so it does less reworking every frame, using less horsepower per second to get more frame rate at ever higher detail and higher resolutions).

And achieve 1000fps 1000Hz, as a super GPU AI high-speed version of the world's best human Photoshop artist.

AI is amazing; some of it is starting to do some pretty crazy shit in frame rate amplification technologies. I even imagine, a human generation or two into the future, that a future video codec (e.g. H.268 or H.269 or H.270) will use various kinds of AI compression algorithms to compress the 8K 1000fps video of the future at incredibly low bitrates, via a detailed AI knowledge of what scenes look like and the ability to perfectly re-artist a low-resolution image (with an AI mind verifying the images probably look correct to a human) into a very high resolution one.

Over the long-term (10-20+ years), this sort of stuff is important for ever-more-realistic VR that looks photorealistic, since you need crazy amounts of resolution & frames to even remotely begin to trick a human into a more realistic Holodeck feel.

In current ongoing research (much like the 1980s Japanese HDTV researchers), the future of 8K 1000fps UE5 is based on a lot of AI frame rate amplification algorithms. This is the gist, simplified into ELI5. That's what the top researchers are already actually researching, even if portions of this will take over a decade. In the interim, AI-based frame rate amplification will get smarter and smarter in an incremental way.

TL;DR: Future advanced frame rate amplification technologies will look increasingly more and more perfect over the long term. Literally, just imagine GPU AI smart enough to recognize a frame looks incorrect to humans and re-AI-photoshops the frame sharper & intentionally surgically removes artifacts -- knowing it will now probably look correct to humans.

I think that is true. While traditional algorithmic scaling will always be a downgrade from native resolution, the AI based techniques have a lot of potential, and we are only seeing the infancy of it now.

The limitation will be the ability to train the systems on like content. Because all an AI upsampling method is doing is making educated guesses about what should be in places where lower resolutions lack sufficient detail, and without adequate training on like content, those guesses can easily be wrong.

Other things I am thinking about are what the computational cost might be to accomplish these future high capability AI methods, compared to the computational cost of just rendering at higher quality to begin with.

That, and if there were a way to train these AI upsampling algorithms yourself, independent of the game development, that would be huge. You know, run the game in a demo loop at a ridiculously high resolution that is unplayable, just to get a screen recording of what it looks like at those resolutions, and use that to train the AI upsampling, without having to sit around and wait for game devs to do it for you. You could save it to a profile and use it yourself, or maybe even share it in the community. If that were an option, I think it could be amazing. It would remove one of my biggest criticisms of the technology, namely the limited number of titles it works on.
 
So there is already technology that can do this, just not in real time.

For example, this app can convert 24 fps anime into 60 fps and the results are very good.



Or this video from Intel researchers. This technology is working and maybe only a few years away from being used in games.

 
So there is already technology that can do this, just not in real time.

The problem with motion blur and other low-FPS stuff is that it's subjective -- it's not really something you can measure. It looks better in person, but it doesn't look better on paper.

At one of my last AMD events, when they confirmed they were killing off Crossfire, it was because things like motion blur scaled wonderfully, but they couldn't sell image quality to a benchmark-hungry audience.
 
Yes, this is what I've been getting at in this thread. People are fixated on the numbers. Don't even display the FPS, don't look at the resolution. Just play the game and have a good time.
And that's a subjective gaming experience, and while that's great, I'm never buying hardware based on a subjective experience. Put down numbers and pump out an objectively great image. DLSS, FSR, XeSS, TAAU, TSR, etc are all about bailing you out as a gamer when you're stuck, not replacing great native performance. And pushing towards a direction where you have to use an upscaler/reconstruction tech to get decent performance... is a rabbit hole I hope we never go down.

*edited to strike it out. Axman is right*
 
Well by that metric Counter-Strike looks better than Far Cry 6 because you can run it at higher resolutions and more FPS.
Yeah... I could have worded that better lol. I've struck it out because I see where you're coming from.
 
As much as I realize PC gaming is better, and I love the PC, I see the appeal of consoles. You put a game in and you play it. What you see is what you get.

I love tweaking settings, but honestly, I don't even play games anymore. I buy a bunch of games and benchmark them and to be frank, it's a waste.

Games are an experience, completely subjective. You are either having a good time or you are not. Who cares if it's 4K or 1800p with some fancy algorithm?
 
As much as I realize PC gaming is better, and I love the PC, I see the appeal of consoles. You put a game in and you play it. What you see is what you get.

I love tweaking settings, but honestly, I don't even play games anymore. I buy a bunch of games and benchmark them and to be frank, it's a waste.

Games are an experience, completely subjective. You are either having a good time or you are not. Who cares if it's 4K or 1800p with some fancy algorithm?
I don't know, man. For the record-setting insane pricing currently being charged, it had better deliver without having to use upscaling tech.
 
As much as I realize PC gaming is better, and I love the PC, I see the appeal of consoles. You put a game in and you play it. What you see is what you get.

I love tweaking settings, but honestly, I don't even play games anymore. I buy a bunch of games and benchmark them and to be frank, it's a waste.

Games are an experience, completely subjective. You are either having a good time or you are not. Who cares if it's 4K or 1800p with some fancy algorithm?
Just pretend there aren't any options, congrats, you now have a console.

Or enable the Nvidia GeForce experience optimize thing that automatically configures games.
 
And that's a subjective gaming experience, and while that's great, I'm never buying hardware based on a subjective experience. Put down numbers and pump out an objectively great image. DLSS, FSR, XeSS, TAAU, TSR, etc are all about bailing you out as a gamer when you're stuck, not replacing great native performance. And pushing towards a direction where you have to use an upscaler/reconstruction tech to get decent performance... is a rabbit hole I hope we never go down.
Unfortunately (or fortunately, depending on your POV) -- that's the direction the industry is going. In modern GPU engineering, some of the native GPU rendering is already automatically using shortcuts based on known memory of previous frames, so the progress has already (secretly) started -- it is already too late...

Thinking ahead, what does "native performance" truly mean?

Consider that Netflix and Disney+ are already effectively just 1 fully encoded frame per second, with H.264/HEVC just faking the data between those fully encoded frames (and much more perfectly than terrible old-fashioned TV interpolation, I must add). It's a compression algorithm.

Video compression is full of predicted/interpolated frames (P and B frames) between those standalone fully-preserved frames (I frames), even at lightly-compressed E-Cinema bit rates at the local movie theater.

[Image: inter-frame group of pictures (GOP) -- I, B, and P frames in an IBBPBB... sequence]
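
As a toy illustration of the ratio (the GOP pattern is illustrative, not any particular encoder's default):

```python
# Hedged toy illustration: in a ~1-second GOP at 24 fps, only the I-frame is a
# standalone picture; the rest are predicted (P) or bidirectional (B) frames.
gop = "I" + "BBP" * 7 + "BB"          # 24 frames: 1 I, 7 P, 16 B

counts = {t: gop.count(t) for t in "IPB"}
print(counts)                                          # {'I': 1, 'P': 7, 'B': 16}
print("standalone frames:", counts["I"] / len(gop))    # only ~4% are fully self-contained
```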


One asks oneself: given sufficient perceptual losslessness, does it matter, as long as it looks identical to native?

Upcoming frame rate amplification technologies are simply the GPU-3D equivalent of a multilayered, hierarchical rendering pipeline with various lossless AND lossy compression uses. All of this is necessary for combining retina resolutions simultaneously with retina refresh rates & frame rates, more closely mimicking analog real-life motion in a more perfect Holodeck.

There are some forms of near-perfectly-lossless frame rate amplification algorithms that have fewer artifacts than mipmapping or anisotropic filtering (also arguably tricks that turn native rendering into non-native rendering). Even the various forms of AA algorithms are forms of faking already going on (even for non-DLSS AA). Now, various frame rate amplification technologies can be considered kind of native (such as a theoretically perfect reprojection algorithm) -- the planet Earth doesn't recreate all of Earth's atoms and molecules when you move your head a millimeter.

So, metaphorically (Napkin Exercise Time!), from that point of view -- why should GPUs artist a whole brand new frame, completely all over again from scratch, if in the future a perfect, artifactless, parallax-perfect reprojection algorithm is invented and used instead? It is then, from that point of view, considered "native performance". Skip redrawing the triangles, just reproject, if it can now finally be done with full perfection. Look at the big jump in quality between Oculus ASW 1.0 and Oculus ASW 2.0 -- in some cases, the 45fps->90fps reprojection is perceptually lossless now for some well-designed VR games. Now, by adding a multilayered Z-Buffer that remembers graphics behind graphics, the reprojection can become even more perfect/lossless -- and it also removes stutters in moving models/objects (floating VR hands) at the same time -- it's just shifting the atoms of the 3D world rather than redrawing the atoms. Just think: how is that NOT native, when done perfectly? ;)
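
For the curious, here's a heavily simplified sketch of the depth-aware reprojection idea (not the actual ASW 2.0 algorithm): shift each pixel of the previous frame by a parallax amount proportional to camera translation divided by that pixel's depth, so near geometry moves more than far geometry. Disocclusions are left as holes here -- exactly what the multilayered Z-buffer idea is meant to fill:

```python
# Hedged, heavily simplified sketch of depth-aware reprojection (NOT the actual
# ASW 2.0 code): shift each pixel of the previous frame by a parallax amount
# proportional to camera translation divided by that pixel's depth, so near
# geometry moves more than far geometry. Disocclusions are left as holes.
import numpy as np

def reproject(prev_frame, prev_depth, cam_dx_m, focal_px=1000.0):
    h, w = prev_depth.shape
    out = np.zeros_like(prev_frame)
    parallax_px = np.round(focal_px * cam_dx_m / prev_depth).astype(int)  # per-pixel shift
    xs = np.arange(w)[None, :] + parallax_px          # new horizontal position of each pixel
    valid = (xs >= 0) & (xs < w)
    rows = np.repeat(np.arange(h)[:, None], w, axis=1)
    out[rows[valid], xs[valid]] = prev_frame[valid]   # scatter; holes remain where nothing lands
    return out

frame = np.random.rand(240, 320)
depth = np.full((240, 320), 5.0); depth[:, 160:] = 1.0   # right half is much closer
warped = reproject(frame, depth, cam_dx_m=0.02)
```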

The venn diagram of frame rate amplification technologies and native performance has a very large overlap. Some future frame rate amplification algorithms can be considered more perfect than things like classic native anisotropic filtering, classic native mipmapping, classic native texture filtering, or classic ray tracing denoising algorithms, etc.

Like the two pill choices in The Matrix, what is real "native performance", and what is not? In twenty years, it will be very hard to define what the threshold of "native performance" really means.

Current GPU architectures are literally Schroedinger's Cat material that is simultaneously native performance and non-native performance. Hey, in plain Intel chip engineering, "predictive branching" in CPUs used to be voodoo Schroedinger's Cat material too! Modern CPUs already literally run parallel universes of two simultaneous execution paths until the real one is committed. What is real, and what is not, indeed!?

And even with the more publicized, well-known algorithms (rather than Schroedinger's Cat tricks) -- even mipmapping / anisotropic filtering is a shortcut tantamount to "lossy compression" for 3D rendering, because it adds various kinds of artifacts/blurs. More algorithms (AI and non-AI) are being added to GPUs all the time, and likely will continue to be until the end of humankind, after all.

Given the crazy massively-more-complex silicon tricks they've already started using in GPUs since the GTX 1000 series (and well before) -- you just don't notice. The sheer complexity AMD/NVIDIA/Intel has crammed into GPU engineering is astounding.

Think about how exactly you define "native performance" -- one perspective is that "native performance" no longer truly, monolithically exists in current GPUs when you look under the hood (with a transistor-level understanding) -- depending on how you define "native performance".

And long-term (e.g. later in the century), graphics may even essentially go resolutionless and/or refreshrateless in the pursuit of better VR (Holodeck) perfection -- so even "native resolution" may eventually become irrelevant, too.

</devils-advocate>
 