More DLSS...

So there's a more direct answer to FSR in the newer drivers. You literally just enable it in the control panel and use a lower resolution. It's a spatial upscaler just like FSR.

It's alright, I guess - but DLSS is clearly way ahead. I tried upscaling 1080p to 1440p and it doesn't look great...
 
Yeah, I'm sure it was a real joke. And by the way, it is "you are" not "your" and "than" not "then", but please act like my intelligence is in question here.
I spent the last year with Wake Forest fighting a brain infection, so yes, there are side effects to my intelligence.
 
Nvidia's solution looks cleaner than FSR. When I enable it along with the "overlay" option, the green letters "NIS" appear, so let's call it NIS for short.
By cleaner I mean less susceptibility to artifacts. FSR seems better at trying to reconstruct lines, at the expense of at times looking too much like walking in candy land.

Also, NIS seems to have a sheetload of sharpening even at 0% sharpening. Probably what people these days want anyway, because all the screenshots of FSR I have seen had sharpening at levels at least this high. People these days generally like to get quite high on different things that cause visual artifacts 🤣

All in all I quite like NIS. I'm not sure I will use it, because that would require running the game at 1440p, and I am already forced to run games at 1080p, where integer scaling is my preferred upscaling method. Also, the Lanczos implementation Nvidia uses when "Image Scaling" is not enabled but GPU scaling is was already pretty good. Either way, NIS makes forcing FSR into unsupported games pretty pointless, even more so than the GPU-scaling Lanczos did. Solutions like Magpie or Lossless Scaling add input lag, reduce performance and do not support VRR, which makes games stutter, so NIS is immediately superior regardless, and all in all FSR does not look that good. Maybe it would fit better in games which already have a candy look :)

For games which do have FSR built in, it might still be the better solution, but it depends on the amount of artifacts, the performance, how much having sharper HUDs or post-processing done at native resolution really matters, etc. Even here it is not immediately obvious FSR will be better: if the performance hit from using FSR is big compared to using NIS or GPU scaling, and the game forces more sharpening than I want, then FSR would not be the best choice.

BTW, what upscaling algorithm does AMD use for GPU upscaling / Radeon Image Sharpening?

ps. NIS and FSR should not even be compared to DLSS. DLSS is years ahead of anything else on the front of both upscaling and anti-aliasing, especially with DLSS 2.3.
 
If you're seeing artifacting with NIS, it's likely because it's regular Lanczos. Part of why FSR is favoured so heavily is that it resolves many of the artifacts and issues with plain Lanczos, notably ringing, which can be mistaken for the classic over-sharpening artifact of the same name.
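For anyone curious where the ringing comes from, here is a minimal 1D sketch of a Lanczos kernel and resampler (plain Python written just for illustration, not Nvidia's or AMD's actual code). The kernel's negative lobes make the output overshoot around hard edges, and that overshoot is the ringing/halo being discussed:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos-a windowed sinc; a is the number of lobes ('taps' per side)."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_upscale_1d(samples, out_len, a=3):
    """Upscale a 1D signal with Lanczos; values near a hard edge overshoot."""
    in_len = len(samples)
    out = []
    for i in range(out_len):
        x = i * (in_len - 1) / (out_len - 1)          # position in source space
        acc = wsum = 0.0
        for j in range(int(x) - a + 1, int(x) + a + 1):
            w = lanczos_kernel(x - j, a)
            acc += w * samples[min(max(j, 0), in_len - 1)]
            wsum += w
        out.append(acc / wsum)
    return out

# A hard 0 -> 1 edge: note values dipping below 0.0 and rising above 1.0
# next to the step in the output - that is the ringing.
print([round(v, 3) for v in lanczos_upscale_1d([0, 0, 0, 1, 1, 1], 24)])
```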

The sharpening filter used by AMD in both FSR and Radeon Image Sharpening is called CAS = Contrast Adaptive Sharpening.

*edit*
It would be nice if devs let you control the sharpness slider when using FSR. Not everyone likes over-sharpened images.
 
Nvidia already used Lanczos when upscaling lower resolutions on the GPU rather than on the monitor. Maybe the difference is in the number of taps...
[attached screenshot]


I do not particularly like oversharpened images either and test FSR at 0% RCAS.

Ok so "Lossless Scaling" updated to add NIS. So I gave it a test using the AMID EVIL demo. If this is nVidia response to FSR... they have lost the plot. It looks like hot garbage. The ringing artifacting is crazy!

You can also do it in the Lossless Scaling app, which also just added NIS, so I'll have to give that a whirl for comparison's sake later :)
Paste a screenshot (at least the most offending part, if it's very large) so we can assess it too :)

I am comparing FSR and NIS (done via drivers, so I cannot take screenshots) and yes, NIS has permanent sharpening built in even at 0% sharpening, while FSR does not.
The main difference between the two is that Nvidia does not hide the sharpening, but this also means the image looks more consistent; otherwise both upscaling methods look very similar.

I would say FSR is higher quality and would stick with this conclusion except for one slight annoyance: at certain times certain scene elements stick out way too much with FSR. When I test FSR in Cyberpunk, people look like cartoon cutouts, while they do not with NIS or Lanczos (the previous GPU upscaler, though better looking than IrfanView's Lanczos), or on my monitor's scaler, bilinear or integer/point scaling. Only FSR does that...
[four attached comparison screenshots]

Does FSR look good here? To me FSR's clipping artifacts look ridiculous.
NIS does have ringing artifacts, but they make the image look more consistent, the same way the older GPU upscaling, also Lanczos-based, looked more consistent.

You can see the same effect in pretty much all screenshots of FSR games.
It is of course much more pronounced at 2x scale (the "Performance" preset) than when doing something like 1440p -> 2160p, which is a more sensible use case, but the effect being less visible does not make it gone.
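Quick napkin math on why the 2x case is so much harder (just pixel counts, my own numbers):

```python
# Per-axis scale factor vs. share of output pixels actually rendered.
cases = {
    "2x per axis ('Performance'-style, 1080p -> 2160p)": ((1920, 1080), (3840, 2160)),
    "1.5x per axis (1440p -> 2160p)":                    ((2560, 1440), (3840, 2160)),
}
for name, ((rw, rh), (ow, oh)) in cases.items():
    share = (rw * rh) / (ow * oh)
    print(f"{name}: {share:.0%} of output pixels rendered, "
          f"{1 - share:.0%} have to be invented by the upscaler")
```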

Really, for now it all comes down to what is available and what looks best to the person using it. Obviously, when DLSS can be enabled it is the best option (even for quality - DLSS can always be made to run at native resolution with DSR, and that gets rid of any blurriness DLSS might normally have 💡), and when it is not available it is some sort of compromise. Except maybe when using integer scaling, because that is technically exactly like playing at native resolution, just on a monitor which has a lower resolution and less screen-door effect :)

EDIT://
There is a quick solution to the sharpen-mask-like effect of NIS upscaling.
It is not something tools like Lossless Scaling will be able to do (unless they implement it directly in their pipeline), but genuine Nvidia card users can just enable FXAA.
FXAA's inherent blur cancels out NIS's inherent sharpening 🤩
It also improves anti-aliasing beyond what TAA normally gives, so it is a win-win.

Not quite DLSS, but I guess for non-DLSS-enabled games it will have to do :)
 
According to this, the version in the latest driver uses 6 taps versus 2 in the old version, along with 4 directional scaling and adaptive sharpening filters.

https://www.nvidia.com/en-us/geforce/news/nvidia-image-scaler-dlss-rtx-november-2021-updates/
 
lol, you caught my earlier reply. No worries. I removed the top part of that reply because I realized I left sharpening enabled with NIS in the Lossless Scaling app, which it REALLY doesn't need, as the Lanczos method has ringing artifacting on its own. It's way less intense when it's off, and the two look very close.

Anyway, all these methods are using Lanczos. The difference is that FSR de-rings, does a pass to try to retain edge details and then uses RCAS to sharpen (or not, if you disable it).
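To show what the "contrast adaptive" part of (R)CAS means in practice, here is a very rough numpy sketch of a CAS-style sharpen (my own simplification for illustration, not AMD's actual shader): the sharpening weight is scaled down wherever the local neighbourhood already has high contrast, which is what keeps it from ringing the way a fixed unsharp mask or a Lanczos lobe does.

```python
import numpy as np

def cas_like_sharpen(img, sharpness=0.5):
    """Rough, conceptual contrast-adaptive sharpen (illustrative only).
    img: float32 grayscale array in [0, 1]."""
    pad = np.pad(img, 1, mode="edge")
    n, s = pad[:-2, 1:-1], pad[2:, 1:-1]      # up/down neighbours
    w, e = pad[1:-1, :-2], pad[1:-1, 2:]      # left/right neighbours
    c = img
    local_min = np.minimum.reduce([n, s, w, e, c])
    local_max = np.maximum.reduce([n, s, w, e, c])
    # Adaptive amount: near 0 at already-high-contrast edges, near 1 in flat areas.
    amount = np.sqrt(np.clip(np.minimum(local_min, 1.0 - local_max)
                             / np.maximum(local_max, 1e-5), 0.0, 1.0))
    w_neg = -amount * sharpness * 0.125        # negative weight for each neighbour
    out = (c + w_neg * (n + s + w + e)) / (1.0 + 4.0 * w_neg)
    return np.clip(out, 0.0, 1.0)

# usage sketch: sharpened = cas_like_sharpen(gray.astype(np.float32) / 255.0, 0.5)
```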

I'll try to nab some screenshots.
 
OK, so here we go. It took a little effort, but these are the same scene and I switched between methods. The trees swayed a bit, flames flickered and the sword bobbed a bit, but they are really close.

Game: AMID EVIL Demo in DX12 mode
Resolution: 3200x900 scaled to 3840x1080
Taken with: Lossless Scaling App
FSR is with 0.5 sharpening and NIS is with 0.0 sharpening.
(No sharpening on the FSR felt kinda unfair to NIS, as it will ring a little by default.)

FSR: [screenshot]

NIS: [screenshot]
 
Try uploading PNG. JPEG artifacts are way more visible than the actual upscaling details 😅
From what I can see they look almost the same, except for two notable differences:
1. The floor below is sharper with NIS
2. Ringing artifacts on the temple far away - perhaps 50% sharpening is waaaaay too much for a fair comparison :)

From these shots NIS looks better

...and after the latest post...
"You can see how the NIS lost some colour information due to ringing."
Where?
FSR looks softer here.

Please paste a 400% zoom of this part of the image as well:
[attached crop]

It looks to have too much sharpening in the FSR shot.
 
For the colour loss, look at the gibs of the dead enemies. The spear is almost black and white in the NIS image. And the edges of all the shadows are now harsher than they should be.

But it's really close. And yeah, I probably should have just uploaded PNGs for the wide shots and not uploaded to Imgur, as they re-compress the images on upload. But the zooms are 24-bit PNGs attached to my post.
 
The FSR image looks better to me, but they are close. Notice that on the edge of the floor or the shadow there is a lot of aliasing on Nvidia, while AMD looks softer and smoother.

Nvidia also seems to alter the color and it doesn't match the scene as well. Such as on the ball hanging on the top left or the dead enemy in the middle. The FSR shot looks more natural and closer to what you'd expect.
 
Yeah, let's judge which shot is more true to the original colors without seeing a reference non-scaled image. You guys are funny 🤪

Color, yes, that is the very thing an upscaler should do, mess with colors.
The major difference in these shots is the red edge in the FSR shot. Maybe the blood is also more red. Maybe these screenshots are bullshit. There are no color differences in any of my tests.

In any case, my recommendation for "any game upscaling" is using FXAA with NIS. It just works.
FXAA works well with FSR too, but performance when forcing FSR via Magpie is bad. No VRR.
Once AMD makes FSR their GPU upscaling, it should work just as well with MLAA. Maybe MLAA will even be better suited for FSR, MLAA already being sharper and more edge-focused.

To my taste FSR has too many clipping artifacts, especially at 2x scale, and it makes edges look off. I do not like this effect. The more I test FSR the less I buy that it's any good.

But of course 2x scale is overdoing it. Not so much for NIS, but who cares about a proprietary solution that just works. FSR I can use on an Intel iGPU 🤪
 
They are going to be exceedingly close because they use the same underlying Lanczos method. Honestly, we are being pretty picky to find the differences. They are subtle, and if I had a GeForce card I would just use NIS when it makes sense, and on an AMD GPU use FSR when it makes sense, in the same/similar fashion.

But this is a DLSS thread. So I'll stop distracting, we can always pixel peep further in the FSR thread :)
 
The differences are very subtle or very noticeable depending on the case.
To learn how these algorithms differ, however, you should use larger scaling ratios.

By learning I mean being able to see the real world using these algorithms and knowing how to mitigate their effects. You have, sort of, pixels in your eyes, and they do not necessarily match the number of sort-of pixels (I call them 'nodes') in your visual cortex, and if the latter is bigger you need some sort of upscaling. Actually there are many layers and structures, and they can process while downscaling or upscaling or with roughly the same number of 'nodes'. Ideally you would always downscale and still end up with enough information, but eyes are not that good (and certainly not in the case of artificial content like games XD), so some upscaling is needed. The best is of course a temporal solution similar to DLSS, and in fact once I figured out how to train that (many years ago, even before DLSS was a thing) my eyesight improved dramatically. Not that the brain does not already do this for everyone to some degree, but there is a difference between things you train and things you have no idea about.

Likewise there is nothing new in how either FSR or NIS works. I will only say that generally I do not add sharpening, but I do have ringing artifacts due to Lanczos-type upscaling. My normal kernel for eyesight is mostly similar to supersampling with DLSS, but very rough and producing an excess of information, then downscaling, then adding some sort of edge-artifact-removal kernel like FXAA, and then upscaling to where the information flows using something most similar to NIS. It is then not a big surprise to me that doing exactly that gave me the best, most natural results.

What FSR does, on the other hand, reminds me of my early 20s, when I literally began modifying how my eyesight works by removing ringing artifacts. This part has a place in eyesight, but not in the main stream, and is more suited for color processing, not luma processing. It also explains all this nonsense about colors being different between FSR and NIS. Except for some differences on some edges, the colors are exactly the same; the brain responds differently and different areas are stimulated, hence a different color impression. The general feel for these algorithms I got in seconds. It would perhaps be best to treat luma and chroma separately with different algorithms. Sharp edges like FSR has on colors would not cause any rejection reaction from my luma-processing kernels.

And yes, until you can identify your kernels you do not really know how to use your brain 🤣

But this is a DLSS thread. So I'll stop distracting, we can always pixel peep further in the FSR thread :)
By DLSS you mean literal pixel magic? 🤩
[image: Necromunda: Hired Gun scaling techniques compared]


The best way to do DLSS yourself is in a VR headset, setting texture LOD bias to a very high negative value. High refresh rate with motion-blur reduction and constant movement of the viewport make the pixels of distant textures flicker a lot, but you can see all the distant details. Now, to do what DLSS does you only have to integrate what you see. It is possible to make yourself not perceive the flickering and instead see a stable image. Or play Quake 2 RTX with denoising disabled...

DLSS is (and this I am certain of!) the same algorithm Nvidia developed for de-noising RT.

Over time, as the RT part of GPUs becomes more powerful, Nvidia's kernels will be oriented more at self-training rather than executing fixed kernels, and this will be DLSS 3.0. Not that they do not already have rudimentary self-training; it is there, but not at the level it could be, and this aspect is limited entirely by hardware performance. Given the direction of the industry, Nvidia will invest more silicon in RT cores, and that will allow better and, most of all, 'smarter' algorithms. The brain can make a person see details entirely from memory, selected based on errors generated when integrating said details into the perceived image. If we had GPUs with a significantly more capable RT engine, then DLSS could use data not only from the last few frames but from the whole play-through. Nvidia already tried using more data to reconstruct images, but it was fixed pre-computed data and the whole method was primitive. Self-training is computationally expensive.

In any case, I know what the Nvidia guys are thinking: "the more you play the better it looks". And because I already do this, and I am totally not a robot, time will only prove me right 🙃
 
Here are 400% zooms of those shots.

[two attached 400% zoom crops]

You can see how the NIS lost some colour information due to ringing.
It's really hard to say anything based on those two shots. You upscaled them poorly (multiple times by the looks of it, or the forum actually downscaled them because they were too large):
[attached crop]


The Imgur ones are just a bunch of JPEG compression artifacts, and chroma subsampling kills the rest of what little detail there was.
 
Well crap, I thought I did the best I could. If anyone can recommend a place where I can upload the PNGs where they won't get messed with, I'd be happy to upload them and you guys can zoom and pixel-peep away.

*edit* Or I could take a smaller crop. What are the image size limits for the forum to prevent a resize?
*edit2* Although despite the resize the issue remains the same. The natural ringing present in the NIS caused colour loss on the fine details, because that's a downside to Lanczos and ringing in general. That makes sense and is still there despite the resize caused by the forum.
 
According to the Nvidia site, several games should have been updated to the 2.3 version but only Shadow of the Tomb Raider has for me.

I fired up Cyberpunk 2077 earlier and no update for that either even though it was supposed to be updated 4 days ago according to Nvidia: "In Cyberpunk 2077, which updates to NVIDIA DLSS 2.3 today, it more smartly uses motion vectors to reduce ghosting on fine detail in motion."

EDIT: OK, what the hell is this nonsense? Apparently I have to use GFE to get the update to DLSS 2.3.

Cyberpunk 2077 Will Not Be Updated to Implement DLSS 2.3. It is the NVIDIA GeForce Experience App Responsible For Injecting the DLSS 2.3 (.dll) Version into the Game Files
https://www.reddit.com/r/cyberpunkg...erpunk_2077_will_not_be_updated_to_implement/
 
You don't even need that. Google it, there are sites that show you how to swap newer DLSS 2.x versions into different games that have DLSS 2.x. It's nice that Nvidia is starting to do it themselves, but it's not needed.
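If anyone wants to script the swap, here is a minimal sketch (the paths are placeholders for your own install; it assumes, as in most DLSS 2.x games, that the library ships next to the executable as nvngx_dlss.dll):

```python
import shutil
from pathlib import Path

# Placeholder paths - point these at your game install and the DLL you downloaded.
game_dir = Path(r"C:\Games\SomeDLSSGame\bin\x64")
new_dll = Path(r"C:\Downloads\nvngx_dlss_2.3\nvngx_dlss.dll")

target = game_dir / "nvngx_dlss.dll"
backup = game_dir / "nvngx_dlss.dll.bak"

if not backup.exists():              # keep the original so you can roll back
    shutil.copy2(target, backup)
shutil.copy2(new_dll, target)
print(f"Swapped in {new_dll}; original kept as {backup}")
```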
 
I know how to manually swap them out. The POINT was that it was supposedly going to be a game update, but the game itself is not updating the file, and even if you go through GFE it will show the old DLSS version in the game files. You have to use something like Process Explorer to even see if the damn game is using the newer version, which is silly. I don't want to use GFE and I hope it's not going to be the way they implement new DLSS versions going forward.
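For what it's worth, Process Explorer isn't strictly required for that check; a small psutil sketch like this (the process name is a placeholder, and it may need to run elevated) prints which nvngx_dlss.dll path a running game actually has mapped, so you can tell whether it's the game-folder copy or one injected from somewhere else:

```python
import psutil

GAME_EXE = "Cyberpunk2077.exe"   # placeholder - whatever game process you want to inspect

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        for m in proc.memory_maps():          # mapped files include loaded DLLs
            if "nvngx_dlss" in m.path.lower():
                print(m.path)                 # path of the DLSS DLL actually in use
```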
 
If game developers do not have anything else to update in the game, it is very unlikely they will bother releasing a patch just for a new DLSS version, especially since in order to do that they would have to ship the game with the new file and somehow test it.

If after future patches the DLL is still the old version, then we can and should get mad :)
 
I take it that means launching the game through GFE to get the update. If anybody installs GFE, remember to go into the settings and disable automatic "optimizing" for your games, disable ShadowPlay, disable streaming, disable Freestyle, and disable Ansel. Ansel in particular can play havoc with your games.
 
Yesterday, Nvidia added DLDSR in the new drivers. It's an AI-assisted version of DSR. It supposedly allows you to downscale with a trivial performance loss. The example they show is downscaling 1620p to 1080p.
https://www.nvidia.com/en-us/geforc...&ranSiteID=kXQk6.ivFEQ-8vx6zqxUyqtoey5TKKE6mA

I am trying exactly that and it doesn't seem to be working. It looks just like regular DSR and gives me a big performance hit, just like DSR. **Actually, using regular DSR is smoother for me. With DLDSR, the frametimes are jacked up and it gives me a slight rubber-banding effect.

[attached screenshot]
 
That's not quite how it functions.
It's reported to give DSR 4x quality when using DLDSR 2.25x, with the normal performance hit of DSR 2.25x.
Its benefit is to give better quality than the old DSR; it doesn't change the performance hit.
However, many people see no quality difference when it is used above 1080p screen resolution.
Seems there are bugs to work out.
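Napkin math on what that claim means at a 1080p display (just pixel counts, nothing vendor-specific):

```python
native = 1920 * 1080                         # 1080p display
for label, factor, res in [("DSR 4x", 4.0, "3840x2160"),
                           ("DLDSR 2.25x", 2.25, "2880x1620")]:
    print(f"{label}: renders {res} = {int(native * factor):,} pixels "
          f"({factor:g}x the native {native:,})")
# So the claim is 2.25x the pixel cost delivering image quality close to the 4x path.
```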
 
Well then the screenshot and wording on the page are not clear.
 