cageymaru

Variable Rate Shading (VRS) is a powerful new API that gives developers the ability to use GPUs more intelligently. Shaders are used to calculate the color of each pixel on the screen. Shading rate refers to the resolution at which these shaders are called (which is different from the overall screen resolution). A higher shading rate means better visual fidelity at the cost of more GPU power. All pixels in a frame are affected by the game's shading rate. VRS allows developers to choose which areas of the frame are more important and increase their visual fidelity, or set parts of the frame to a lower fidelity and gain extra performance. Lowering the fidelity of parts of the scene can help low-spec machines run faster.

There are two tiers of support for VRS. The VRS API lets developers set the shading rate in three different ways: per draw, within a draw using a screenspace image, or within a draw per primitive. Hardware that supports only per-draw VRS is Tier 1; hardware that supports both per-draw and within-draw variable rate shading is Tier 2. VRS support exists today on in-market NVIDIA hardware and on upcoming Intel hardware. AMD is rumored to be working on support for the feature.
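
(For anyone curious what the per-draw path looks like in practice, here's a rough sketch of my own, not from the article, based on the public D3D12 headers; device and command-list creation are assumed to already exist.)

// Requires a Windows SDK recent enough to expose VRS (ID3D12GraphicsCommandList5).
#include <windows.h>
#include <d3d12.h>

void DrawWithCoarseShading(ID3D12Device* device,
                           ID3D12GraphicsCommandList5* commandList)
{
    // Tier 1 hardware supports per-draw rates; Tier 2 adds the screenspace-image
    // and per-primitive paths described above.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))) ||
        options6.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
    {
        return; // No VRS support: everything stays at the default 1x1 rate.
    }

    // Per-draw (Tier 1): one pixel-shader invocation per 2x2 block of pixels,
    // i.e. roughly a quarter of the shading work for this draw.
    commandList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... record the draw call here ...

    // Restore full rate for subsequent draws.
    commandList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}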

One example is foveated rendering: rendering the most detail in the area where the user is paying attention, and gradually decreasing the shading rate outside this area to save on performance. In a first-person shooter, the user is likely paying most attention to their crosshairs, and not much attention to the far edges of the screen, making FPS games an ideal candidate for this technique. Another use case for a screenspace image is using an edge detection filter to determine the areas that need a higher shading rate, since edges are where aliasing happens. Once the locations of the edges are known, a developer can set the screenspace image based on that, shading the areas where the edges are with high detail, and reducing the shading rate in other areas of the screen.
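
(Similarly, a hypothetical sketch of how the screenspace-image path on Tier 2 hardware could drive foveated rendering: fill one byte per tile with a shading rate based on distance from the gaze point, then upload the buffer to an R8_UINT texture and bind it with RSSetShadingRateImage. The cutoff radii and rates below are arbitrary example values, and the real tile size comes from ShadingRateImageTileSize; the edge-detection case would fill the same buffer from an edge mask instead of the distance test.)

#include <windows.h>
#include <d3d12.h>
#include <cmath>
#include <cstdint>
#include <vector>

// Build a foveated shading-rate image: full rate near the gaze point,
// progressively coarser toward the edges. Coordinates are in tiles.
std::vector<uint8_t> BuildFoveatedRateImage(uint32_t tilesWide, uint32_t tilesHigh,
                                            float gazeX, float gazeY)
{
    std::vector<uint8_t> rates(tilesWide * tilesHigh,
                               static_cast<uint8_t>(D3D12_SHADING_RATE_1X1));
    for (uint32_t y = 0; y < tilesHigh; ++y)
    {
        for (uint32_t x = 0; x < tilesWide; ++x)
        {
            // Distance of this tile from the point the user is looking at.
            const float d = std::hypot(float(x) - gazeX, float(y) - gazeY);

            // 1x1 in the fovea, 2x2 in the mid-periphery, 4x4 at the far edges
            // (4x4 assumes the AdditionalShadingRatesSupported cap is reported).
            uint8_t rate = static_cast<uint8_t>(D3D12_SHADING_RATE_1X1);
            if (d > 0.25f * tilesWide) rate = static_cast<uint8_t>(D3D12_SHADING_RATE_2X2);
            if (d > 0.50f * tilesWide) rate = static_cast<uint8_t>(D3D12_SHADING_RATE_4X4);

            rates[y * tilesWide + x] = rate;
        }
    }
    return rates;
}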
 
Probably going to be a while before we start seeing any titles that use it; my money would be on Unreal getting there first. It could be a pretty big deal if implemented for newer titles designed for Xbox, and it could really help out lower-end PCs in general. I mean, yeah, parts of the screen won't look as good, but it would be one of those trade-off settings. Pair that with the eye tracking hardware that is coming to market, and in a few years I could see this being a pretty useful thing.
 
[Image: LeftEyeFovCombined.png]
 
In VR this makes sense if the headset has eye tracking. The headset can track your eyes and adjust the view accordingly. This technology seems perfect for that scenario!

In a regular video game... Unless you're playing on a potato... Why would you want this? The blurring of the world except for the area around the crosshair seems terrible. What if someone pops up out of a trench to the right? I want to see him in full HDR, not low resolution.

Maybe I'm getting old. This new 2018 - 2019 trend of lowering video fidelity to turn on extra eye candy seems counterintuitive. Maybe by 2020 it will make more sense to me. :) I want my video cards fast and powerful! Throw more transistors at the problem!
 
What really matters is how much they lower the resolution of the out-of-view image.
 
In VR this makes sense if the headset has eye tracking. The headset can track your eyes and adjust the view accordingly. This technology seems perfect for that scenario!

In a regular video game... Unless you're playing on a potato... Why would you want this? The blurring of the world except for the area around the crosshair seems terrible. What if someone pops up out of a trench to the right? I want to see him in full HDR, not low resolution.

Maybe I'm getting old. This new 2018 - 2019 trend of lowering video fidelity to turn on extra eye candy seems counterintuitive. Maybe by 2020 it will make more sense to me. :) I want my video cards fast and powerful! Throw more transistors at the problem!
I see potential with Microsoft's own HoloLens 2 if they can use it with zoom/autofocus and post-process the regions your eyes are focused on. Imagine the HoloLens doing a bit of post-processing on a 4K screen you're looking at in real time, along with other cool stuff like custom configurable UI HUDs. Another cog in the wheel.
As for counterintuitive, not so much. I think certain parts of scenes can be made more adaptive to make performance more suitable overall. I don't care if a lot of stuff in MIPMAP LOD regions and the outer regions of peripheral view gets reduced a bit for the sake of enhancing the more important regions on screen that are more vital the majority of the time. I think it could be especially great combined with eye tracking.
 
So, compute power growth is too slow to the point that such cheats are required?

Even if software had all the compute power it could handle, why burn resources and power for nothing? Our own brains use this cheat at all times... using nature's "cheats" is only logical.
 
So, compute power growth is too slow to the point that such cheats are required?
Yes, I am surprised it took the slowdown of Moore's law for ingenuity to return to software. What coders used to do in the '80s was amazing.
While Moore's law was the observation of a doubling in transistor density every 12-18 months, coders have been getting lazier and lazier, resulting in resources being wasted rather than used optimally. There was a paper I read some time ago that put the figure at around a 1.1x improvement in software performance over the same period as the hardware improvements. That means that in real terms the code is getting worse.
 
Even if software had all the compute power it could handle, why burn resources and power for nothing? Our own brains use this cheat at all times... using nature's "cheats" is only logical.
Why do you want less fidelity on the periphery of your vision in a video game to mimic reality? Don't you want to go beyond reality? I thought that's what the industry has been aiming for all along...
 
Why do you want less fidelity on the periphery of your vision in a video game to mimic reality? Don't you want to go beyond reality? I thought that's what the industry has been aiming for all along...
Why would you want to waste resources on something a human is typically incapable of perceiving? The light spectrum is infinite, but we more or less stopped at 1.07bn colours from a 32-bit palette and no one is asking for more.
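(For reference, that 1.07bn figure is just the channel depths multiplied out: 2^10 x 2^10 x 2^10 = 1,073,741,824 ≈ 1.07 billion, i.e. 10 bits per colour channel with a 2-bit alpha packed into a 32-bit pixel.)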
 
Why would you want to waste resources on something a human is typically incapable of perceiving? The light spectrum is infinite, but we more or less stopped at 1.07bn colours from a 32-bit palette and no one is asking for more.
Can't speak for everyone, but I think it's a lack of trust that developers will use this wisely. The assumption is that you're always looking dead center at your game and they'll only downscale unnoticed content. Here are some scenarios I can see developers overlooking:

-Not having detail on any content past a 16:9 ratio because it didn't occur to them people would run it at 21:9
-As gxp500 pointed out, maybe enemies are coming at you from your periphery and you actually did need some of that detail
-Maybe the default FOV is horrendously low, so someone is using a mod to make it suitable and now there's no detail on the edges
-Maybe you want to look at background details you like during a cutscene with locked camera angles

Remember, this is the same industry that brought us tinting the entire screen brown, overblown bloom, chromatic aberration on everything, depth of field for nice screenshots that makes you feel nearsighted, zoomed-in FOV, 30fps caps, physics tied to framerate, etc.
 
Yes, I am surprised it took the slowdown of Moore's law for ingenuity to return to software. What coders used to do in the '80s was amazing.
While Moore's law was the observation of a doubling in transistor density every 12-18 months, coders have been getting lazier and lazier, resulting in resources being wasted rather than used optimally. There was a paper I read some time ago that put the figure at around a 1.1x improvement in software performance over the same period as the hardware improvements. That means that in real terms the code is getting worse.
I think this is true, and I wonder if x86 has suffered more years of this than ARM so far.
So many of the programs I downloaded for Android are mere MBs; it reminds me of DOS and Windows 3.1.
I'm glad such efficient methods are being implemented.
 
Why do you want less fidelity on the periphery of your vision in a video game to mimic reality? Don't you want to go beyond reality? I thought that's what the industry has been aiming for all along...

Why do you use compression for audio and video when it removes fidelity?
It improves overall efficiency.
The concept should really be easy to understand.

But let's see the actual execution.
 
It only took me 2-3 seconds to identify that the image on the right was the one w/o it being used, on a 1440p display. Granted, that was a still and not moving, so no guarantees I'd perceive it in motion. I am, however, one of those obsessed with clarity and sharpness in games, so at 4K I'm pretty sure I'd notice, especially after doing a lot of testing with DLSS in the last few months. If your bottom line is frames, then VRS and DLSS are great compromises, but if not, it's another half step backwards. The positive I see in this is that if it's an option, it becomes another tool gamers can use to optimize per their needs or resources.

I've commented numerous times recently on how all these new features and their various combinations (1080p/1440p/4K, HDR, DLSS, RT, DX11, DX12, Vulkan) are adding an enormous amount of testing overhead to PC game reviewers' metrics now. It occurred to me that instead of itemizing RT+DLSS, and now VRS, some simplification could be used: how about two tiers, 'everything on' and as sharp as can be, or 'all compromises used' and as fast and blurry as it can be rendered?
 
Trading IQ in areas you don't care much about for higher IQ in areas you'd rather see improved seems like a great compromise. Done well it can be a very good thing; done poorly it can be a rather ugly thing.
 
These technologies like VRS, adaptive resolution, etc. seem like good tools to benefit game streaming. It makes me wonder if that's why they're being developed; nothing I'm interested in.
 