Maybe on training, but for inference, those cards being good at it will be a selling point for games and not something you necessarily want to gimp. Whatever Tensor cores end up on the 5000 series could open up interesting DLSS and other use cases, like their AI-run game demo. They could want the 5060 to nearly double the PS5 Pro's 300 INT8 TOPS, and by the top of the line you end up with quite the inference machine. Unlike coin mining, AI inference has been a crucial part of Nvidia gaming cards since DLSS 2 got good enough, and it will keep growing now that nearly every hard-to-run new game supports it.

I think the 5090 FE will land at $1599. The performance increase over the previous gen is generally expected. They've built LHR cards before, so if the consumer GPUs can be kept out of the AI boom in the same way, demand will not be insane and prices should be normal.
Keep in mind, Nvidia wants to sell the chips that get used for AI work at a much higher price, so I suspect they will put limiters on consumer GPUs to keep them from being repurposed.
We can hope I guess.
The two big gimps they can and will continue to apply are the interconnect between cards and the amount of VRAM: they removed NVLink completely on consumer Lovelace and did not increase the VRAM. We can expect something similar for Blackwell; a modest increase, with SKUs ranging from only 12 GB up to 36 GB, would not be surprising.
And beyond the memory limits of gaming cards, even when the model fits, the benchmarks show the pro line scaling better than the gaming GPUs:
https://lambdalabs.com/gpu-benchmarks
Relative FP32 throughput by GPU count:

GPUs | RTX 3090 | RTX A40
1x   | 1.49     | 1.36
2x   | 2.68     | 2.68
4x   | 4.08     | 5.20
8x   | 6.88     | 10.56
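The scaling gap is easier to see as per-GPU efficiency (throughput at N GPUs divided by N times the single-GPU throughput). A quick sketch using the numbers quoted above from the Lambda page:

```python
# Relative FP32 throughput from the Lambda benchmark numbers quoted above,
# keyed by GPU count.
throughput = {
    "RTX 3090": {1: 1.49, 2: 2.68, 4: 4.08, 8: 6.88},
    "RTX A40":  {1: 1.36, 2: 2.68, 4: 5.20, 8: 10.56},
}

def scaling_efficiency(results: dict[int, float]) -> dict[int, float]:
    """Throughput at N GPUs as a fraction of N * single-GPU throughput."""
    base = results[1]
    return {n: round(t / (n * base), 2) for n, t in results.items()}

for card, results in throughput.items():
    print(card, scaling_efficiency(results))
```

At 8 GPUs the 3090 is down to roughly 58% efficiency while the A40 holds around 97%, which is the "pro line scales better" point in numbers.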