Yes, a GPU does have a lot to do with the speed of games. But it still runs into the same limits of transistor size and cost. A GPU can make better use of parallelism, so throwing more cores at it pays off more than doing the same to a CPU, but there is still only so much you can gain. RTX had a large die... 7nm should allow more transistors in the same space. This isn't perfectly linear, of course, so 30% more transistor density doesn't automatically mean 30% more performance. It needs more memory bandwidth to feed it, and who knows how much of the transistor budget was set aside to improve ray tracing, etc. Unless they increased the die by around 40%, you aren't going to see a 70% increase. And if they increase the die 40% on a smaller node, you can expect a $$$ increase. I think 30% is decent given these constraints. Unless you want $2500 video cards, I wouldn't expect 70% generational jumps; it's just not realistic at this point. I mean, I would be happy to be wrong, but unless they come up with a better mousetrap, brute-forcing by adding more CUs will only get you so far. Increasing memory bandwidth is also required if you have that much more compute/raster performance... So again, $$$.

What a stupid analogy. A CPU makes very little difference in the games that we play nowadays, but the GPU has a massive effect, and you know that, so stop being ridiculous. Not to mention that a typical desktop CPU is a fraction of the cost of a high-end GPU.
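
For what it's worth, here's a back-of-the-envelope sketch of the transistor-budget arithmetic in the first comment above. The 30% density and 40% die-size figures are just that comment's rough examples, not measured numbers:

```python
# Back-of-the-envelope: relative transistor budget from a node shrink
# plus a die-size increase. Figures below are the post's rough
# examples, not measured data.

def transistor_ratio(density_gain: float, die_area_gain: float) -> float:
    """Relative transistor count vs. the previous chip."""
    return (1 + density_gain) * (1 + die_area_gain)

# ~30% better density alone (same die size): ~1.3x transistors
print(transistor_ratio(0.30, 0.00))

# ~30% better density plus a ~40% larger (and pricier) die: ~1.8x transistors
print(transistor_ratio(0.30, 0.40))

# Even then, performance scales sub-linearly: memory bandwidth, power,
# and fixed-function blocks (e.g. ray tracing hardware) eat into the
# budget, so ~1.8x transistors rarely translates to ~1.8x frame rate.
```

The point being: the density gain alone only buys you ~1.3x the transistors, and getting near a 70% uplift means a much bigger die on a more expensive node, which is where the $$$ comes in.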