Until now NVidia has been very circumspect about how DLSS is supposed to improve performance. But more beans were spilled today, and it looks like the real reason is that it renders at a lower resolution and upscales: https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

Lower input resolution and half the shading work: that implies checkerboard-style rendering to me. So it's no surprise it's much faster, when it's likely cutting to roughly half-size rendering, like some PS4 games (rough numbers below).

We also now find out there is a DLSS 2X mode, which apparently renders at native resolution and then runs a more complex DL network for much higher quality. It kind of looks like we were marketed the performance benefits of regular DLSS (without being told it runs at lower resolution) and the quality benefits of DLSS 2X, which was unknown until today and comes with an unknown performance hit.

I am a little less impressed with DLSS than I was before.
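For a sense of scale, here is a quick back-of-the-envelope calculation (Python) of how much shading work a lower internal resolution saves before upscaling to 4K. The 1440p internal resolution is my assumption for illustration; NVIDIA only says "lower input resolution" and roughly half the shading work.

    # Rough shading-cost comparison: assumed 1440p internal render vs. native 4K output.
    native = 3840 * 2160        # 4K output: 8,294,400 pixels
    internal = 2560 * 1440      # assumed internal render: 3,686,400 pixels
    print(f"shaded pixels: {internal / native:.0%} of native")  # ~44%, i.e. about half

So if the internal resolution really is in that ballpark, the "up to ~2x faster" headline numbers largely fall out of shading half as many pixels, before the upscaling network does anything.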