Be patient, says NVIDIA: gamers who are unimpressed with the company’s Deep Learning Super Sampling (DLSS) technique should understand that the technology is still in its infancy and that there is plenty of potential yet to be realized. As Andrew Edelsten (Technical Director of Deep Learning) explains, DLSS relies on training data, which only continues to grow. That is part of the reason the technique is less impressive at lower resolutions, as the focus during development was on 4K. Edelsten also notes that gamers may want to avoid TAA due to its “high-motion ghosting and flickering.”
We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.
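To put the quoted pixel counts in perspective, here is a small sketch (my own arithmetic, not NVIDIA’s) that back-calculates what internal render scale would produce the input-pixel ranges Edelsten cites for 4K and 1080p; the `scale_for_pixels` helper is hypothetical and just inverts pixel count to a linear resolution fraction:

```python
import math

def input_pixels(width, height, render_scale):
    # Pixels available to DLSS when rendering internally at a
    # fraction (render_scale) of the target resolution.
    return int(width * render_scale) * int(height * render_scale)

def scale_for_pixels(width, height, pixels):
    # Linear render scale that yields roughly `pixels` source pixels
    # at the given target resolution (pixels scale quadratically).
    return math.sqrt(pixels / (width * height))

# Ranges quoted above: 3.5-5.5M input pixels at 4K, 1.0-1.5M at 1080p.
for (w, h), (lo, hi) in [((3840, 2160), (3.5e6, 5.5e6)),
                         ((1920, 1080), (1.0e6, 1.5e6))]:
    print(f"{w}x{h}: implied render scale "
          f"~{scale_for_pixels(w, h, lo):.2f}-{scale_for_pixels(w, h, hi):.2f}")
```

Under these assumptions, both quoted ranges correspond to an internal render resolution of very roughly 65-85% of the target on each axis, which is why the 1080p case leaves DLSS with so much less source data in absolute terms.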