Machine learning is all the rage right now. TPUs, Tensor cores, and other ML accelerators are being added to GPUs and even mobile SoCs.
On the GPU side, NVidia has launched RTX cards with RT cores and Tensor cores, the latter being used to "aid" ray tracing by running denoising networks and DLSS.
But results are mixed. The BFV devs said they don't use Tensor cores for denoising, and Q2-RTX also appears to rely on some kind of temporal algorithm to denoise rather than Tensor cores. DLSS has failed to deliver any benefit beyond a simple resize-and-sharpen, yet requires game-specific training.
So how do you see Tensor cores on consumer GPUs: a benefit, a bust, or a potential future benefit?