The die impact (size) of RTX cores on Turing

Interesting indeed.

Also interesting how NVIDIA seems to have focused on shortening the signal paths.
I remember (I think it was around the G80 launch) watching a video from a researcher at NVIDIA about how most of the energy consumption came not from computation, but from moving data around the die.
Seems like they are tweaking every parameter they can.
 
I pointed this out in another thread somewhere because NVIDIA themselves actually provided a relatively high-res shot of the TU102 die in their whitepaper. Now that we actually have something to compare against, though, it's nice to see the actual numbers. Fact is, replacing the ray tracing hardware with more CUDA cores isn't going to provide as much performance as people think.
 
People complained mostly because of the price increase. They just wanted the 2080 to be $499 and the 2080 Ti $799...
And if they had released DXR for Pascal along with demos at the RTX card launch, this whole "RTX is useless, get a 1080 Ti instead" sentiment would have come up much less often, if at all...

I guess that is why they released the new driver...with "lesser" RT for Pascal GPUs...so people can see the performance impact of RT on non-RTX cards.

Even though I fear most people will just look at the FPS...and forget it's not the same quality...you know, the "NVIDIA BAD, mkay!" crowd ;)
 