Not sure why everyone is being hostile. The thread seems innocent enough to me: "why does ray tracing suck?"
Let's all just be honest... NV was first to market with this. AMD has been working on it for just as long but has no product on the market yet. No doubt both are true.
The bottom line right now: only a few games ship with ray tracing elements, and the ones that do force even people with $1200 video cards to turn other settings down to get acceptable frame rates. So yes, it sucks. As others have said, so did anti-aliasing at first... so did tessellation. New IQ features tend to push the hardware.
No matter which bits of the GPU we all agree are doing the actual work... the bottom line is it sucks because it's hard. It's hard because compute "cores" are designed to handle big chunks of data, 64-bit in general. Ray tracing doesn't need all those bits. There are a few ways to do the math, but in general 8-bit or 16-bit registers are more than enough. NV's solution is to use "RT cores" and tensor cores to do the dirty math relatively quickly and clean up the output. It doesn't matter whether we want to believe NV developed some amazing cutting-edge "RT core" behind the driver, or whether they found a way to run large tensor matrices in smaller floating-point batches (which we know they can do, because they sell exactly that feature to the AI folks).
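To make that low-precision point concrete, here's a toy C++ sketch (my own illustration, not NV's actual hardware path) of the mixed-precision pattern NV describes for tensor cores: inputs rounded down to roughly FP16 mantissa width, products accumulated at full FP32.

```cpp
#include <cmath>
#include <cstdio>

// Crude stand-in for FP16 storage: round a float to ~11 significant bits.
// Real hardware uses actual half-precision registers; this just shows the
// precision loss without needing a half type.
float to_fp16ish(float x) {
    if (x == 0.0f) return 0.0f;
    int e;
    float m = std::frexp(x, &e);           // x = m * 2^e, 0.5 <= |m| < 1
    m = std::round(m * 2048.0f) / 2048.0f; // keep ~11 bits of mantissa
    return std::ldexp(m, e);
}

int main() {
    // 2x2 matrix multiply, tensor-core style: low-precision inputs,
    // full-precision (float) accumulation.
    float a[2][2] = {{1.0001f, 2.0002f}, {3.0003f, 4.0004f}};
    float b[2][2] = {{5.0005f, 6.0006f}, {7.0007f, 8.0008f}};
    float c[2][2] = {};

    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j] += to_fp16ish(a[i][k]) * to_fp16ish(b[k][j]);

    std::printf("c[0][0] = %f (exact is ~19.0038)\n", c[0][0]);
    return 0;
}
```

The answer lands close enough for image work, and halving the storage width is what lets hardware pack twice the math into the same silicon and bandwidth.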
How is AMD going to tackle the same problem? We don't know for sure yet. They may indeed make their own tensor part; that isn't impossible. And even if they develop an "RT core", they will still need something tensor-like for denoising, if NV's marketing is to be believed. However, they just finished detailing how RDNA works, and it seems to suggest they have found ways to lower the FP precision of regular shader computation. That would be another way to go about solving the problem of needing to do a ton of low-precision math.
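For a feel of what "packed" low-precision math buys, here's another toy C++ sketch. Fair warning: this is my own integer stand-in, and RDNA's rapid packed math does the real thing in hardware with FP16 lanes. The idea is just that two 16-bit values ride in one 32-bit register, so one operation does two lanes of work:

```cpp
#include <cstdint>
#include <cstdio>

// SWAR (SIMD within a register): add two 16-bit lanes packed into one
// 32-bit word in a single pass. The masking keeps a carry out of the low
// lane from spilling into the high lane.
uint32_t packed_add16(uint32_t a, uint32_t b) {
    uint32_t sum = (a & 0x7FFF7FFFu) + (b & 0x7FFF7FFFu); // lane-safe add
    uint32_t top = (a ^ b) & 0x80008000u;                 // restore top bits
    return sum ^ top;
}

int main() {
    // Pack (1000, 2000) and (3000, 4000) as high lane | low lane.
    uint32_t a = (1000u << 16) | 2000u;
    uint32_t b = (3000u << 16) | 4000u;
    uint32_t c = packed_add16(a, b);
    std::printf("high: %u, low: %u\n", c >> 16, c & 0xFFFFu); // 4000, 6000
    return 0;
}
```

Halve the precision, double the lanes per register per clock. That's the whole pitch.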
So all the fanboi fighting... give it up. The OP wanted to know why it sucks. It's first generation... and there is no getting around the fact that even doing a small bit of ray tracing in a hybrid scene involves a ton of math. It's not impossible to speed it up. Ray tracing leads to branching math that can be handled in hardware by CPU cores, shader cores, or tensor cores.

Current methods use a BVH (bounding volume hierarchy), which makes collision detection far cheaper than the full path trace you'd run for, say, a Pixar movie. The main advantage of a BVH is that each bit of calculation only needs a few bytes of data (see the sketch below), which is great for memory usage and computation. The disadvantage is that compute cores generally aren't designed to operate on a few bytes of data. CPUs suck at real-time tracing and collision detection because they are built for high floating-point precision; that is also why GPU compute is much better for game stuff like collision detection. The GPU can chew through many more small bits of data per clock than a CPU can. Shaders likewise aren't designed for insanely low FP precision, but they have some advantages, like fused math operations, and in general a standard shader will beat a CPU at ray calculation.

Tensor cores can also be used for ray tracing, since they too are capable of setting up a BVH... and don't get me wrong, TensorFlow 1.0 from Google wouldn't be great at calculating rays either. NV HAS improved on the tensor design: they allow tensor matrices to be built at lower FP precision, which makes for much faster AI training when that precision isn't required... and also allows for faster denoising of traced rays (that is from NV themselves). I could be wrong... (and I admit it) it's possible NV has designed some genuinely novel RT core that somehow interfaces with the tensors on their SoC to do denoising with no cache. Of course it's more likely I am 100% correct and their RT cores are simply blocks of their tensor cores running at lower FP precision, with the onboard GPU microprocessor dynamically allocating the hardware bits.
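To put numbers on "a few bytes": a BVH node's bounding box is just two corner points, 24 bytes as floats, and the ray-box test against it is a handful of multiplies and compares. Here's a minimal C++ sketch of the standard slab test (my own illustration of the textbook technique, not any vendor's implementation):

```cpp
#include <algorithm>

// A BVH node's bounding box: two corner points, 24 bytes of float data.
struct AABB { float lo[3], hi[3]; };

// Standard "slab" test: does org + t*dir hit the box for some t in
// [0, tMax]? invDir[i] = 1/dir[i] is precomputed once per ray.
bool hitAABB(const AABB& box, const float org[3],
             const float invDir[3], float tMax) {
    float tMin = 0.0f;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.lo[axis] - org[axis]) * invDir[axis];
        float t1 = (box.hi[axis] - org[axis]) * invDir[axis];
        if (t0 > t1) std::swap(t0, t1); // ray points the negative way
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;  // slabs don't overlap: miss
    }
    return true;
}
```

Each individual test is tiny, but a 1080p frame at one ray per pixel fires over 2 million rays, and every ray walks dozens of these box tests plus triangle tests at the leaves. That's where the "ton of math" comes from.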
So ya, there is no AMD vs. NV fight to have here. Yes, NV was/is first to market. It's early, and in a couple of generations or less it won't really matter who was first. They both seem to have different ways of tackling the problem. AMD's long-term plan is streaming for high-end ray tracing... that isn't speculation, they have said as much in presentation slides. Navi+ for tracing... and full-scene ray tracing via streaming after that. No one knows if that will go anywhere... but that is their plan. I would assume NV plans to up their RT game with their next 7nm chip as well. Perhaps, like AA and tessellation, the second generation will be a major upgrade. With those techs the second generation got better in large part because once the engineers saw how software developers were really using those features, it was easier to tweak the hardware design.
RT is nothing but fluff right now... I agree that Cyberpunk looks like it is going to be the first must-have ray tracing title. I just doubt even the 2080 Ti is going to be able to run it at over 60fps at 1080p with even medium ray tracing turned on. When that game releases... it might sell a lot of NV 7nm 3080 Tis though. Perhaps Navi+ as well, if AMD can actually hit that time frame (which even hardcore AMD fans will admit isn't likely). I also don't find it likely that Cyberpunk's developer CDPR, the folks running GOG, is going to sign on with Stadia either... but it seems to me that might be the only way we see AMD ray tracing Cyberpunk at launch.