Zarathustra[H]
Hey all,
So I have been thinking about this for a while now, and thought this could make for an interesting discussion.
So, we have learned from people who are very happy with DSR (Nvidia's Dynamic Super Resolution) and VSR (AMD's Virtual Super Resolution), and thus we know THAT it works.
We also know HOW it works: by internally rendering at a higher resolution, then downscaling to fit the screen.
The question that is mostly unanswered is the WHY.
When I first heard about DSR, my gut reaction was that it was dumb. The monitor still displays the same resolution, so the image should look the same, except now GUI elements and text are hopelessly small. My take was that it was just a silly way for Nvidia and AMD to justify ever more powerful GPUs for a market of mostly console ports played at 1080p, with little to no actual benefit.
Then I saw some examples of it working, and I was actually impressed. No, 4x DSR/VSR at 1080p (4K rendered internally) is never going to look as good as a real 4K monitor, but surprisingly, it looked pretty damned awesome.
So, I have some theories as to why this is, but none of them really fully explain it.
Theory 1.) Supersampling AA on steroids. Most of us old-school folks on here will remember when AA was first introduced on video cards. Back then, one of the only options was SSAA, or supersampling AA: render multiple samples per pixel and average them. DSR and VSR essentially wind up being supersampling AA on steroids, applied to the whole frame at a much higher internal resolution, and thus with fantastic results.
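To make Theory 1 concrete, here is a minimal sketch of what supersampling does to one pixel. The scene() and supersample_pixel() functions are toy stand-ins of my own, not anything from an actual driver; the point is just that averaging a grid of sub-pixel samples turns a hard edge into fractional coverage.

```python
# Hypothetical toy "renderer": returns 1.0 above the line y = x, else 0.0.
def scene(x, y):
    return 1.0 if y > x else 0.0

def supersample_pixel(px, py, n=4):
    """Average an n x n grid of sub-pixel samples inside pixel (px, py).
    4x DSR is effectively this with n = 2 (2x width, 2x height)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Sample at the center of each sub-pixel cell.
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += scene(sx, sy)
    return total / (n * n)

# A pixel the edge passes through gets a fractional value instead of 0 or 1,
# so the staircase becomes a smooth ramp.
print(supersample_pixel(3, 3))   # 0.375, partial coverage on the edge
print(supersample_pixel(0, 5))   # 1.0, fully above the line
```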
Theory 2.) More accurate per-pixel colors. The way 3D rendering works, the engine determines a color for each individual pixel, but neighboring pixels may not blend together smoothly. Rendering internally at a higher resolution and then downsampling averages together a region of samples for each output pixel, smoothing out color transitions that would otherwise jump abruptly from pixel to pixel.
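The downsampling step itself is just a weighted average over each block of high-resolution samples. Here is a minimal sketch using a plain box filter; the function name and numpy approach are my own illustration (Nvidia's DSR reportedly uses a Gaussian filter with an adjustable smoothness setting, but the averaging idea is the same):

```python
import numpy as np

def box_downsample(img, factor=2):
    """Downscale an (H, W, 3) float image by averaging factor x factor blocks."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Hypothetical example: a 4K float render averaged down to 1080p.
hi_res = np.random.rand(2160, 3840, 3)   # stand-in for the 4K internal render
lo_res = box_downsample(hi_res, factor=2)
print(lo_res.shape)                      # (1080, 1920, 3)
```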
While the process of how the pixels get their colors is very different, I liken it a little bit to photography, where I have some experience. On my DSLR with its ridiculously high megapixel count, if I zoom all the way in and view an image at 100%, the pixels can look a little wonky: splotchy and dithered, with ISO noise, etc. Scale it down to normal viewing sizes, however, and it looks great. I get the impression this is what is typically done in digital film production as well: record at a higher resolution, then downscale for the final master.
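The photography analogy holds up mathematically, too: averaging N uncorrelated noise samples cuts the noise's standard deviation by a factor of sqrt(N). A quick sketch, with synthetic noise standing in for ISO grain (all names here are hypothetical):

```python
import numpy as np

# Flat gray image plus ISO-like noise. A 4x downscale in each dimension
# averages 16 pixels per output pixel, so noise should drop to ~1/4.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((4096, 4096))

small = noisy.reshape(1024, 4, 1024, 4).mean(axis=(1, 3))

print(noisy.std())   # ~0.100
print(small.std())   # ~0.025, i.e. 0.1 / sqrt(16)
```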
Any other thoughts or theories?