RTX Video Super Resolution Out Now; Testing Shows Significantly Sharper Results

polonyc2

The AI-based technology was unveiled at CES 2023 when NVIDIA promised that RTX VSR could upscale 1440p (or lower, down all the way to the minimum of 360p) resolution videos up to 4K with near-native quality...the technology combines software (AI algorithms) and hardware (RTX Tensor Cores) to enhance the clarity and sharpness of videos watched through the Chrome and Edge browsers, provided you have updated them to version 110.0.5481.105 and 110.0.1587.56, respectively...it achieves this goal by upscaling the resolution and cleaning up compression artifacts

In our testing, we checked out the same Twitch video FullHD recording (Avalanche Software's Tech Test feat. Guest Host!) on both Chrome and Mozilla Firefox, which doesn't support RTX Video Super Resolution...as you can see, alt-tabbing between the two windows shows an immediately noticeable sharper picture when using Chrome...everything, from the faces of the hosts to the finer details in the background, including the sword's pommel, is much clearer with the enhancement provided by RTX VSR at the maximum level of quality

this could be a game-changer for Netflix users who don't care about HDR or Dolby Vision and would like to watch content in near-4K quality without having to pay for the increasingly expensive Premium plan...

https://wccftech.com/rtx-video-super-resolution-out-now-testing-shows-significantly-sharper-results/
 
how do I download/save the upscaled version

edit: oh github link

double edit: i'm not downloading waifu weeb apps

it does appear to only do 2D still images according to Google Translate of the readme

it appears to do 'animation' video too, but i think the algorithm is tuned for animation, not live-action video
 
Just wait for streaming services to use this as an excuse to lower resolution and distribute subpar bitrate video :/
 
Can we expect this technology to improve with continued AI training just like with DLSS?
It is trained in a similar fashion and will, over time, be trained on more content, with more varied compression algorithms and compression levels and so on, so it will certainly get better.

Nvidia has already done very similar stuff (like the AI upscaling in the Nvidia Shield since 2019, and DLSS in video games), so yes, it will be continually trained in much the same fashion (different content, different compression levels and algorithms, etc.) and will get better. But this problem is probably a bit easier and the tech more mature, meaning the huge jump from early DLSS 1 to now could be less of a thing here.

It starts from a traditionally upscaled 4K image that it then corrects and adds detail to with an AI-generated one, so all the big issues of DLSS (trails, missing shadows, weird bugs, UI vs 3D elements, etc.) aren't there to begin with
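A minimal sketch of that "traditionally upscale, then let a model add a correction" pipeline (purely illustrative; the nearest-neighbor upscale and the zero-residual stand-in are my assumptions, NOT NVIDIA's actual network, which isn't public):

```python
# Illustrative sketch: conventional upscale first, then add an
# AI-predicted detail layer on top. The residual here is a trivial
# stand-in for the real neural network's output.

def upscale_nearest(frame, factor=2):
    """Traditional (nearest-neighbor) upscale of a 2D grayscale frame."""
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

def fake_ai_residual(upscaled):
    """Stand-in for the AI-predicted detail layer (here: all zeros)."""
    return [[0 for _ in row] for row in upscaled]

def vsr_like(frame, factor=2):
    base = upscale_nearest(frame, factor)
    residual = fake_ai_residual(base)
    # Add the predicted detail on top of the conventional upscale,
    # clamping to valid 8-bit pixel values.
    return [
        [max(0, min(255, b + r)) for b, r in zip(brow, rrow)]
        for brow, rrow in zip(base, residual)
    ]

frame = [[10, 200],
         [60, 120]]
print(vsr_like(frame))
```

Since the model only ever adds a correction on top of a clean conventional upscale, a bad prediction degrades toward plain upscaling rather than producing DLSS-style ghosting.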
 
Can we expect this technology to improve with continued AI training just like with DLSS?

Hope so

Works with Plex through Chrome, tried it out, not impressed at all, especially when it's drawing 150 watts (as reported by the GFE Performance Overlay) to try to impress me
 
I would not expect it to trend as well as DLSS. That has access to extra data for reconstruction beyond what is present in a video feed.
 
I would not expect it to trend as well as DLSS. That has access to extra data for reconstruction beyond what is present in a video feed.
Are we talking about the motion vectors?

A big advantage here would be having access to the next frames if they want them (a 4-frame delay being acceptable in a lot of cases; sometimes a whole second wouldn't matter) as a source of information, and in how much time it has to do its work, if you go from 3 ms to 20 ms for the upscaling
 
Are we talking about the motion vectors?

A big advantage here would be having access to the next frames if they want them (a 4-frame delay being acceptable in a lot of cases; sometimes a whole second wouldn't matter) as a source of information, and in how much time it has to do its work, if you go from 3 ms to 20 ms for the upscaling
Yes, that's what I was getting at, to first order. Also, this upscaling algorithm is already working from a lossy stream.

Not an expert in the field, just have worked on some similar things in the past.
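The frame-delay idea above can be sketched with a simple lookahead buffer (hypothetical class and names, just illustrating the latency-for-quality tradeoff of delaying output by N frames so the upscaler can peek at the future):

```python
from collections import deque

class LookaheadUpscaler:
    """Delays output by `lookahead` frames so the upscaler can see
    future frames -- trading a few frames of latency for more context."""

    def __init__(self, lookahead=4):
        self.lookahead = lookahead
        self.buffer = deque()

    def push(self, frame):
        """Feed one decoded frame; returns a result once the buffer
        holds enough future context, else None while it fills."""
        self.buffer.append(frame)
        if len(self.buffer) > self.lookahead:
            current = self.buffer.popleft()
            # Stand-in "upscale": a real model would also consume
            # self.buffer (the next `lookahead` frames) here.
            return ("upscaled", current, list(self.buffer))
        return None

up = LookaheadUpscaler(lookahead=2)
outputs = [up.push(f) for f in range(5)]
print(outputs)
```

The first `lookahead` calls return `None` (the startup delay the viewer pays once); after that, every output frame is produced with the next two frames already in hand.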
 