Hopper Architecture

I would have thought that AMD would be first to release an MCM-type consumer GPU.
 
So are we going to hit $1,000 for a mid-range GPU?

In theory, chiplets can make unit production less expensive because yields improve as individual die sizes drop.

Now, prices likely won't drop much, if at all; however, performance should increase as long as the effects of breaking a typical GPU die into pieces are accounted for.
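
To put rough numbers on the yield argument, here's a minimal sketch using a simple Poisson defect model. The defect density and die sizes are made up for illustration only, not taken from any real process or product:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies expected to have zero defects under a simple Poisson model."""
    return math.exp(-defects_per_mm2 * area_mm2)

D0 = 0.001  # assumed defect density: 0.1 defects per cm^2 (purely illustrative)

print(f"Monolithic 600 mm^2 die yield: {poisson_yield(600, D0):.1%}")  # ~55%
print(f"Single 150 mm^2 chiplet yield: {poisson_yield(150, D0):.1%}")  # ~86%
# Four 150 mm^2 chiplets cover the same silicon area as the big die, but a
# defect only kills one small chiplet instead of the whole 600 mm^2 part.
```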
 
Chiplets have more to do with yields than performance. For instance, a Threadripper CPU would be next to impossible to achieve on a monolithic die.

A chiplet design makes a lot of sense for Nvidia, as it could have separate chiplets for RT, Tensor, and CUDA cores.
 
Very unlikely.

An MCM design that eventually ships will be organized much like the monolithic chips are.

If it were advantageous to have separate blocks of RT, Tensor, and CUDA cores, the chips would already be organized that way.

Instead, it's organized into GPC blocks, which are divided into SM blocks, as follows:


Each GPC includes a dedicated raster engine and six TPCs, with each TPC including two SMs. Each SM contains 64 CUDA Cores, eight Tensor Cores, a 256 KB register file, four texture units, and 96 KB of L1/shared memory which can be configured for various capacities depending on the compute or graphics workloads.
Source: https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf
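
As a quick sanity check, the quoted per-SM figures roll up to the familiar full-die totals (using the 6-GPC count of a full TU102, mentioned below); this is just arithmetic on the whitepaper numbers:

```python
# Figures quoted from the Turing whitepaper above.
TPCS_PER_GPC = 6
SMS_PER_TPC = 2
CUDA_PER_SM = 64
TENSOR_PER_SM = 8

sms_per_gpc = TPCS_PER_GPC * SMS_PER_TPC        # 12 SMs per GPC
cuda_per_gpc = sms_per_gpc * CUDA_PER_SM        # 768 CUDA cores per GPC
tensor_per_gpc = sms_per_gpc * TENSOR_PER_SM    # 96 Tensor cores per GPC

gpcs = 6  # a full TU102 has 6 GPCs
print(f"Full TU102: {gpcs * sms_per_gpc} SMs, "
      f"{gpcs * cuda_per_gpc} CUDA cores, {gpcs * tensor_per_gpc} Tensor cores")
# -> Full TU102: 72 SMs, 4608 CUDA cores, 576 Tensor cores
```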

Organizing the chip this way is likely beneficial for locally sharing resources like the register file and for locality of processing: it keeps distances in the chip short and timings tight, and it aids in building different-sized chips by repeating a common block that has everything you need.

A TU102 has 6 GPCs. You could imagine that a future MCM GPU module might have 4 GPCs, and theoretically you could use 1, 2, 3, or 4 of them, yielding designs with anywhere from 4 to 16 GPCs. There would probably be one master chip that handles memory access and the video engine and integrates the tiles.
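
As a rough sketch of that scaling idea, assuming a purely hypothetical 4-GPC chiplet (nothing announced, just the speculation above) and reusing the Turing per-GPC figures, the configurations would look like this:

```python
# Hypothetical GPC-chiplet MCM: the chiplet size is speculation, not any
# announced product. Per-GPC figures reuse the Turing numbers above.
SMS_PER_GPC = 12
CUDA_PER_GPC = 768
GPCS_PER_CHIPLET = 4

for chiplets in range(1, 5):  # 1 to 4 GPC chiplets, plus one master/IO die
    gpcs = chiplets * GPCS_PER_CHIPLET
    print(f"{chiplets} chiplet(s): {gpcs:2d} GPCs, "
          f"{gpcs * SMS_PER_GPC:3d} SMs, {gpcs * CUDA_PER_GPC:5d} CUDA cores")
```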
 
I think it would be "easier" to have separate RT/Tensor/shader chiplets than several fully integrated chiplets.

Take into account that chiplets that are each a full GPU would effectively be like on-chip SLI, meaning that each chiplet would appear to the OS as a separate GPU.

Scaling could be an issue.

AFAIK Intel is working on a multi-GPU solution that effectively appears as a single GPU, so maybe Nvidia could do the same.
 
The people arguing that MCM is coming soon are arguing that EMIB or a similar interconnect will tie the different chips together electrically as if they were one monolithic chip, appearing no different to the OS than a single monolithic chip.

Also, if you break it up into functional units and still want it to scale, you end up using multiples of each type of chip: two RT chiplets, two Tensor chiplets, two shader chiplets, and so on.

You haven't made it scalable; you've made it a nightmare.

IMO there is ZERO chance that any MCM design will consist of separate RT/Tensor/shader chiplets.

It's going to consist of GPC blocks, which are the building block of current GPUs; they will just be extended to MCM chiplets. Then you have a very scalable GPC chiplet (or the AMD/Intel equivalent) and can add more GPCs as needed.

The only other kind of chiplet will be the master that coordinates the rest, manages memory, does video, and handles final output.

Though I could see the very first attempt just being a full GPU chip capable of pairing with one more full GPU chip, so it would only be standalone or paired. Unlike current dual-chip cards, this one would pool memory and have enough bandwidth and low enough latency to share it without excess duplication.
 
I would like that option too. One, maybe two video cards, and I could use them for the better part of what, six years or longer? I think the lead scientist said that even at 5 nm it did not make sense to make GPU chiplets. But I do wonder what five years from today will bring. I also wonder what kind of performance we are going to see from chiplets. I think we are going to see a very high-end GPU that costs $3,000 for 3D rendering later down the line. GPU cores out the ass.
 
I'm moving on to a different hobby if that happens anytime soon.

I feel ya. Here I am, on a 1060 3GB that I originally bought to hold me over a few months. Then the RTX cards seemed crap value to me, and no games I play really supported it. The 16 series seemed equally garbage: full price for barely any improvement over my 1060. The Super cards haven't done much to change my mind, so here I am, nearly three years later, on a freaking 1060 3GB that still plays like a champ at 2560x1080 on medium-high settings. On top of that, I have a four-month trip for work in the spring, so there was zero point in buying a 16-series Super card this fall and not using it for the better part of half a year.

Now I'm locked in for the 2020 GPUs. There had better be a decent-value, ray-tracing-enabled (however low-level) sub-$300 card from either AMD or Nvidia, or this 1060 will be stretched yet another year. It's funny: when I didn't have any money, I was always struggling to save up for a new GPU, because generational improvements were stupid good. Now that I have more money than I consider prudent to spend on PC parts, I'm actually spending less, because each generational upgrade for the past four years has seemed like an overpriced joke for anyone playing at 1080p, 16:9 or ultrawide, who doesn't need 144 Hz for competitive reasons. Isn't that ironic.
 
I agree. If you game at 1080p you don't have to spend $200+ to get great performance. But once you hop into 2K and 4K, that's another story.
 
True. Technically I use 2K, but I get bigger (32"+) 16:9 monitors and then run a custom-res ultrawide 1080p inside them. Best of both worlds: 21:9 for games, 16:9 for my work (the custom res at 32" ends up being more or less a 29" ultrawide).
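
For anyone curious how that works out, here's a quick back-of-the-envelope check; the exact panel specs are my assumption (a 32" 2560x1440 screen running a 2560x1080 custom resolution):

```python
import math

def panel_dims(diag_in, aspect_w, aspect_h):
    """Physical width/height in inches for a given diagonal and aspect ratio."""
    d = math.hypot(aspect_w, aspect_h)
    return diag_in * aspect_w / d, diag_in * aspect_h / d

# Assumed setup: a 32" 2560x1440 (16:9) panel running a 2560x1080 custom res.
width, height = panel_dims(32, 16, 9)
used_height = height * 1080 / 1440              # only 1080 of the 1440 rows are lit
effective_diag = math.hypot(width, used_height)
print(f"Effective ultrawide image: about {effective_diag:.1f} inches diagonal")
# -> roughly 30", i.e. in the same ballpark as a standalone 29" 2560x1080 ultrawide
```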
 
So are we going to hit $1,000 for a mid-range GPU?
I seriously doubt that.
Ampere will be a huge jump in performance, as Nvidia has announced, and prices will drop, especially on the high end, along with larger VRAM. It looks like Nvidia managed to get large, very cheap GPUs produced at Samsung on 7 nm EUV. Expect a worst-case scenario for AMD yet again. AMD is just preparing to announce an "Nvidia killer," but by the time it's on the market, Nvidia will be at another level of performance with Ampere compared to Turing, and with cheaper products. LOL!
 
A broken clock is right twice a day -- and Nvidia has been doing this to AMD's GPU division for a decade like clockwork. Given that we're talking about the architecture after Ampere, which will almost certainly be on a process revision in the 7nm / 5nm class, and will likely include a chiplet configuration, Nvidia has every opportunity to increase performance and decrease cost significantly across all segments of the market.
 
I hate that saying. A broken clock is also wrong 86,398 times a day. And don't even get me started on quanta lol
 
A broken clock is right twice a day -- and Nvidia has been doing this to AMD's GPU division for a decade like clockwork. Given that we're talking about the architecture after Ampere, which will almost certainly be on a process revision in the 7nm / 5nm class, and will likely include a chiplet configuration, Nvidia has every opportunity to increase performance and decrease cost significantly across all segments of the market.

While this is true, and Nvidia is also probably two years ahead design-wise thanks to AMD not really competing for the last six years, now that AMD has gotten itself back into position on the CPU side of the business, it will have the funds to actually invest in moving the GPU division forward. Again, we're talking roadmaps three to five years down the line, but it's coming. I don't believe they will ever be able to catch up to Nvidia when it comes to perf/watt, but for consumers this is less of a problem. Where it really matters is in the data center, the OEM market, and laptops, which are all massive in themselves. With more specialized processors, GPUs, etc. coming from a crowded market, GPUs are going to be more valuable than ever in the autonomous age we're transitioning into. They would be leaving far too much money on the table not to at least try to maintain the guise of parity with Nvidia. In terms of TAM, it's a trillion billion zillion dollar market we're targeting! JUST LOOK AT THE TAM!
 
There's plenty of "market" for AMD's GPUs, at least until Intel gets into gear -- but technologically speaking, they're out of date on every front. Even their latest GPUs are behind Intel's aged Skylake when it comes to things like transcoding that are actually useful for most consumers, especially those with power-sensitive applications.

Three to five years is really their make-or-break.
 