Are we headed back to 1-year GPU process cycles?

I would think that the performance improvement from 8nm to 7nm would be relatively small for a given architecture... so that would be a minor refresh.
However, being able to actually buy the product would be a major improvement.
 
Samsung 8nm is really 10nm+.
 
To me, those yearly refreshes were akin to Apple's yearly iPhone releases. Every two years is a major upgrade; every other year is an "S" model that's just an improved version of what's already out there.
 
I didn't know yearly refreshes went away... RTX Super, Ti, Titan, etc.

Super was a minor refresh in my mind; I was thinking more along the lines of full process changes. Titans and Ti cards are usually just new products on the same process.
 
Then the answer is no.

It's going to take Nvidia two years to get cards as huge as the 3090 working on 5nm at under $2,000. If anything, I would expect them to use it on AI processors first, then scale up to their supercomputer 100-series GPU a year later, and then give us 5nm consumer cards six to twelve months after that.

Just because we keep seeing new process nodes doesn't mean the costs aren't astronomically higher at release. They need to stay on a fab node twice as long now (and designing new scalable architectures also takes longer than it ever has).

Apple and Qualcomm eat up the costs of a new process node by producing hundreds of millions of chips for $1,500 phones. Nvidia has to wait until the node has higher yields, so that the high die-engineering costs are economically viable when you're only making 20 million units a year (at 5x the die size).

It's much easier to mass-produce smaller dies on a new process node. The smallest Turing die is twice the size of a cell phone chip.
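To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The die areas, defect densities, and wafer cost below are illustrative assumptions, not published Nvidia/TSMC figures; the point is just how hard a big die gets hit by early-node defect rates under a simple Poisson yield model.

```python
import math

# Back-of-the-envelope yield math. All numbers are assumptions for
# illustration, not published figures.
WAFER_DIAMETER_MM = 300      # standard 300 mm wafer
WAFER_COST_USD = 10_000      # assumed wafer price on a leading-edge node

def dies_per_wafer(die_area_mm2):
    """Rough candidate dies per wafer (ignores scribe lines / reticle limits)."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson model: chance a die catches zero killer defects."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

for name, area in [("phone SoC, ~100 mm^2", 100.0),
                   ("big GPU die, ~550 mm^2", 550.0)]:
    for d0 in (0.5, 0.1):    # young node vs. mature node (defects/cm^2, assumed)
        dpw = dies_per_wafer(area)
        y = poisson_yield(area, d0)
        good = dpw * y
        print(f"{name} @ D0={d0}: {dpw} candidates, yield {y:.0%}, "
              f"~{good:.0f} good dies, ~${WAFER_COST_USD / good:,.0f} per good die")
```

Same wafer, same defect density: the small die stays cheap, while the big die's cost per good chip explodes until the node matures. That's why waiting for yields matters so much more for Nvidia-sized dies than for phone SoCs.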
 

Makes sense, and I prefer the two-year major cycle with minor refreshes every off year, like Domingo mentioned.

I think this year might be a little different, though, as Nvidia has already produced datacenter Ampere on 7nm, and there were rumors they were testing it this summer for gaming Ampere. So it might be an easy jump to produce 7nm cards by mid next year.

AMD at 5nm in 2021 seems a little quick, unless it's for a cut-down part like they did with Navi, followed by Big Navi a year later.
 
I don't see it happening for the upper-end gaming GPUs. Data center cards, yes, where the premium would cover the much higher costs. I do see Nvidia switching to a true (or truer) 7nm, either Samsung or TSMC, probably Samsung if yields can be decent, as an update next year. 5nm will be more for AMD's CPUs and maybe their data center GPUs, depending upon the competition.

Wild-ass guess: Nvidia will have a good refresh next year, and AMD will also have some options, such as 7nm+, maybe a 384-bit bus, or GDDR6X if power demands are decreased, with HBM2e also an option. I am expecting AMD to have a sizable fps lead at the lower resolutions (1080p and 1440p) and in low-latency/high-fps competitive professional-level play, thanks to the new Infinity Cache™, while Ampere's performance is not that strong at the lower resolutions.
 

Samsung 8nm = TSMC 7nm. There is so little density difference between the two that you might as well call them the same.

The reason Nvidia went Samsung for the big cores this time around is that TSMC didn't have spare 7nm capacity. The A100 sells in the thousands per year, so it's a lot less of a drain than having both GA102 and GA104 produced there in the millions.

The rumors about poor yields are just bunk: AMD has aggressively cut prices on the RX 5600 XT, so there is no sudden problem with yields. The Big Navi delay could be due to yields, but I prefer to blame it on AMD having three new cores to prioritize over Big Navi.
 