By now it's clear that ray tracing is a HUGE performance killer.
It seems to me Nvidia needs to at least double RTX/Tensor performance to bring it to acceptable levels.
But even with a die shrink, there may not be enough die space to cram in enough RT/Tensor cores.
So I was thinking: why not an RT accelerator card?
About a third of Turing's die space is used (wasted?) on RT/Tensor cores, so a separate RT card would have several advantages:
1. Turing would either have more room for extra CUDA cores or could be made smaller and therefore cheaper.
2. People wouldn't have to pay for features they don't want.
3. People who do want RT could get the performance they need.
4. Nvidia could cram in at least twice the RT/Tensor cores for a much-needed RT performance boost, and the card would still be smaller and cheaper than a full GPU.
5. You could SLI multiple RT cards for even more performance.
While this may not be very attractive to gamers, I'm sure 3D content creators would jump on it right away.