DSR / VSR, WHY do they work?

Zarathustra[H]

Hey all,

So I have been thinking about this for a while now, and thought this could make for an interesting discussion.

So, we have learned from people who are very happy with DSR/VSR, and thus we know THAT it works.

We also know HOW it works, by internally rendering at a higher resolution, then downscaling it to fit on screen.

The question that is mostly unanswered is the WHY.


When I first heard about DSR, my gut reaction was that it was dumb. It's still going to display at the same resolution, and thus it is going to look the same, but now GUI elements and text are going to be hopelessly small. My take was that it was just a silly way for Nvidia and AMD to justify ever more powerful GPUs for a market of mostly console ports played at 1080p, with little to no actual benefit.

Then I saw some examples of it working, and I was actually impressed. No, 4x DSR/VSR at 1080p (4k resolution internally) is never going to look as good as displaying on a real 4k monitor, but surprisingly it actually looked pretty damned awesome.

So, I have some theories as to why this is, but none of them really fully explain it.


Theory 1.) Supersampling AA on steroids. Most of us old school folks on here will remember when AA was first introduced on video cards. Back then one of the only options was SSAA, or supersampling AA. DSR and VSR essentially wind up being supersampling AA on steroids, with a much higher internal resolution, and thus fantastic results.

Theory 2.) More accurate per-pixel colors. The way 3D rendering works, the engine determines the color for each individual pixel, but adjacent pixels may not blend together smoothly. Working internally at a higher resolution and then downsampling averages together a region of pixels that may not have smooth color transitions from pixel to pixel, making them smoother and better looking.

While the process of how the pixels get their colors is very different, I liken it a little bit to photography, where I have some experience. On my DSLR with its ridiculously high megapixel count, if I zoom all the way in and view an image at 100%, the pixels can look a little wonky, splotchy, and dithered, with ISO noise, etc. Scale it down to normal viewing sizes, however, and it looks great. I get the impression this is what is typically done in digital film processing as well: record at a higher resolution, then downscale for the final master.
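
To put some toy numbers on the averaging idea in theory 2, here's a rough sketch in Python/NumPy. Purely illustrative, obviously nothing like what the driver actually does internally:

[CODE]
import numpy as np

# Toy "4x DSR" example: render a hard edge at 2x the target's width and
# height, then average every 2x2 block of high-res pixels into one output pixel.
hi_res = np.zeros((4, 4))
hi_res[:, 2:] = 1.0      # right half of the high-res frame is white
hi_res[:2, 1] = 1.0      # the edge staircases a little at high resolution

# Downsample with a plain box filter: each output pixel is the mean of the
# 2x2 block of high-res pixels it covers.
lo_res = hi_res.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(lo_res)
# [[0.5 1. ]
#  [0.  1. ]]
# The output pixel straddling the edge ends up grey (0.5) instead of snapping
# to pure black or white, which is the smoother transition I'm describing.
[/CODE]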

Any other thoughts or theories?
 
These are great questions, and I'd love to participate in the discussion. Unfortunately, AMD reneged on their promise to release VSR support for GCN 1.0 back in January, and are only now adding it to the 300 series cards in a driver they aren't allowing me to install, even though it clearly supports GCN 1.0 cards (370).
 
These are great questions, and I'd love to participate in the discussion. Unfortunately, AMD reneged on their promise to release VSR support for GCN 1.0 back in January, and are only now adding it to the 300 series cards in a driver they aren't allowing me to install, even though it clearly supports GCN 1.0 cards (370).

Wow, really? Artificially locking things out based on model numbers rather than actual hardware capability sounds like such an Nvidia thing to do, not AMD...

Is it possible 300 series boards have some different off-GPU component necessary to make it work well?

If not, maybe you can flash it with the firmware from an equivalent 300-series board? :p
 
Basically theory 2. You can sample more points and thus get more accurate line rendering. Then, when downsampling, the single output pixel that was originally rendered as 4 pixels will contain a bit of each of those 4.

Imagine you're rendering a line. There's a certain cutoff that determines whether a pixel gets drawn or not, resulting in either a fully drawn pixel or nothing. Let's say the cutoff is 50% coverage: if the full pixel is only 40% covered by the line, the line at that pixel doesn't get drawn at all.

With 4x the pixels representing the same area, maybe 2 of them meet the criteria and 2 don't. When downsampling back to the original resolution, you take the average of these 4 pixels and end up drawing a pixel that's 50% blended, resulting in a smoother look.
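
In toy Python form (coverage numbers made up to match the example above, not any real rasterizer):

[CODE]
CUTOFF = 0.5  # a pixel/sub-pixel gets drawn only if line coverage reaches 50%

# Native resolution: the single pixel is 40% covered by the line.
native_coverage = 0.40
native_pixel = 1.0 if native_coverage >= CUTOFF else 0.0
print(native_pixel)        # 0.0 -> the line simply disappears at this pixel

# 4x internal resolution: the same screen area is four sub-pixels; the line
# happens to cover two of them past the cutoff and miss the other two.
subpixel_coverage = [0.8, 0.7, 0.3, 0.1]   # made-up coverage values
subpixels = [1.0 if c >= CUTOFF else 0.0 for c in subpixel_coverage]
downsampled_pixel = sum(subpixels) / len(subpixels)
print(downsampled_pixel)   # 0.5 -> a half-strength pixel, i.e. a softer edge
[/CODE]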

Now imagine doing this with not just line and polygon rendering, but with everything else the engine does, whatever that might be. You're going to get something that looks better than just straight up running it at the original resolution.

As far as I know, DSR/VSR and SSAA are basically the same thing with different names.
 
VSR/DSR render at real 4K, whereas sampling is an algorithm. That's why most use VSR/DSR without AA.
 
VSR/DSR render at real 4K, whereas sampling is an algorithm. That's why most use VSR/DSR without AA.

Yeah, antialiasing makes little sense with DSR/VSR, as you are already getting that effect "for free", as it were, by using VSR/DSR.
 
I believe supersampling AA is pretty much equivalent to DSR, as all of your frame buffers are actually sized to the higher resolution (unlike multisampling). I guess the difference would be the point/level in the rendering pipeline at which these things take place? I'm probably wrong, though; this is just super entry-level DX programming knowledge talking.
 
If you use 4x SSAA at 1080p, you'll end up rendering at 4K and then downsampling, just like VSR/DSR.
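
Quick back-of-the-envelope sketch of that (assuming the "4x" counts total samples, i.e. 2x per axis):

[CODE]
# 4x SSAA / 4x DSR at 1080p = 4x the samples, which is 2x along each axis.
native_w, native_h = 1920, 1080
factor = 4                       # total sample multiplier

scale = factor ** 0.5            # per-axis scale: 2.0 for 4x
internal_w = int(native_w * scale)
internal_h = int(native_h * scale)

print(internal_w, internal_h)    # 3840 2160 -- i.e. a 4K frame rendered internally
[/CODE]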

Sampling is an algorithm, so it estimates pixels based on the rendered 1080p (or whatever resolution you use). It is not as exact as rendering at the higher resolution, albeit close, and it's a bit more taxing and resource hungry.
 
Sampling is an algorithm, so it estimates pixels based on the rendered 1080p (or whatever resolution you use). It is not as exact as rendering at the higher resolution, albeit close, and it's a bit more taxing and resource hungry.

Rendering at 4K and then downsampling is still estimating pixels. Instead of DirectX or the engine implementing the algorithm, your OS / graphics driver does.
 
There is a difference between DSR and supersampling, but the way they work is quite similar: take high resolution samples and determine the antialiased pixel from them. DSR is essentially just a "dumb" version of supersampling that depends on filters (bilinear, Gaussian, etc.), but unlike SSAA it's not limited to 2x and 4x the resolution. Everything in between works.
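
Rough sketch of what a filter-based downscale like that looks like (illustrative NumPy only, 1D for readability; I'm not claiming these are the drivers' actual filter or weights):

[CODE]
import numpy as np

def filtered_downscale(hi, factor, sigma=0.8):
    """Downscale a 1D row of pixels by `factor`, which can be non-integer
    (e.g. 1.78x): each output pixel is a Gaussian-weighted average of the
    nearby high-res pixels, instead of a straight 2x2 box average."""
    n_out = int(round(len(hi) / factor))
    x_hi = np.arange(len(hi))
    out = np.empty(n_out)
    for i in range(n_out):
        center = (i + 0.5) * factor - 0.5               # position in high-res space
        w = np.exp(-0.5 * ((x_hi - center) / (sigma * factor)) ** 2)
        out[i] = np.sum(w * hi) / np.sum(w)             # normalized weighted average
    return out

# A hard edge rendered at ~1.78x the target width still downscales cleanly:
hi_res_row = (np.arange(32) >= 16).astype(float)
print(filtered_downscale(hi_res_row, 1.78))
# The edge comes out with a few intermediate values instead of a hard step,
# even though 1.78 doesn't divide the high-res pixels evenly.
[/CODE]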

Yeah, antialiasing makes little sense with DSR/VSR, as you are already getting that effect "for free", as it were, by using VSR/DSR.

Yes, when you are downsampling from 4K to 1080p (which divides evenly), AA may not be necessary, but it does help. Especially when you are downsampling from odd resolutions, AA will help remove the jaggies DSR misses. Granted, using MSAA with DSR is just a dumb waste of power, but post-processing AA, even FXAA, is practically free jaggie elimination without downsides. (FXAA blur is practically invisible when downsampling from very high resolutions.)
 