erek
[H]F Junkie
Joined: Dec 19, 2005
Messages: 10,890
I would have thought that AMD would be first in releasing an MCM type consumer GPU.
So are we going to hit $1000 for a mid-range GPU?
Chiplets have more to do with yields than performance. For instance, a Threadripper CPU would be next to impossible to achieve on a monolithic die.
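To put rough numbers on the yield point, here's a toy sketch in Python using the simple Poisson yield model Y = exp(-A * D0). The defect density and die areas are illustrative assumptions, not foundry data:

```python
import math

D0 = 0.001  # assumed defect density in defects per mm^2 (0.1/cm^2); illustrative only

def die_yield(area_mm2: float, d0: float = D0) -> float:
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-area_mm2 * d0)

mono = die_yield(700.0)  # one big 700 mm^2 monolithic die
chip = die_yield(175.0)  # one 175 mm^2 chiplet (four cover the same silicon)

print(f"700 mm^2 monolithic die yield: {mono:.1%}")  # ~49.7%
print(f"175 mm^2 chiplet yield:        {chip:.1%}")  # ~83.9%
```

Under this naive model, requiring all four chiplets to be good lands you right back at the monolithic number; the practical win is that a defect scraps a 175 mm^2 die instead of a 700 mm^2 one, and partially good chiplets can still be binned and mixed into lower SKUs.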
A chiplet design makes a lot of sense for Nvidia, as it could have separate chiplets for RTX, tensor, and CUDA cores.
Very unlikely.
An MCM design that eventually ships will be organized much like the monolithic chips are.
If it were advantageous to have separate blocks of RTX, Tensor, and CUDA cores, the chips would already be organized that way.
Instead, they're organized into GPC blocks, which are divided into SM blocks as follows:
Each GPC includes a dedicated raster engine and six TPCs, with each TPC including two SMs. Each SM contains 64 CUDA Cores, eight Tensor Cores, a 256 KB register file, four texture units, and 96 KB of L1/shared memory which can be configured for various capacities depending on the compute or graphics workloads.
Source: https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf
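As a quick sanity check of that hierarchy, here's the arithmetic for a full 6-GPC TU102; all per-unit counts come straight from the whitepaper excerpt above, nothing else is assumed:

```python
TPCS_PER_GPC  = 6   # per the Turing whitepaper excerpt above
SMS_PER_TPC   = 2
CUDA_PER_SM   = 64
TENSOR_PER_SM = 8
TEX_PER_SM    = 4

def totals(gpcs: int) -> dict:
    """Per-chip totals implied by the GPC -> TPC -> SM hierarchy."""
    sms = gpcs * TPCS_PER_GPC * SMS_PER_TPC
    return {"SMs": sms,
            "CUDA cores": sms * CUDA_PER_SM,
            "Tensor cores": sms * TENSOR_PER_SM,
            "Texture units": sms * TEX_PER_SM}

# Full TU102 has 6 GPCs:
print(totals(6))  # 72 SMs, 4608 CUDA cores, 576 Tensor cores, 288 texture units
```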
This is likely beneficial for locally sharing resources like the register file, and for locality of processing: keeping distances on the chip short keeps timings tight, and repeating a common block that has everything you need makes it easier to build different-size chips.
A TU102 has 6 GPCs. You could imagine that a future MCM GPU module might have 4 GPCs, and theoretically you could use 1, 2, 3, or 4 of them, yielding designs with 4 to 16 GPCs. There would probably be one master chip that handled memory access and the video engine and integrated the tiles.
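Sketching out that hypothetical scaling in the same terms (the 4-GPC module size is speculation from this post, not any announced product):

```python
GPCS_PER_MODULE = 4    # speculative module size from the post above
SMS_PER_GPC = 6 * 2    # 6 TPCs x 2 SMs each, per the Turing whitepaper

for modules in range(1, 5):
    gpcs = modules * GPCS_PER_MODULE
    sms = gpcs * SMS_PER_GPC
    print(f"{modules} module(s): {gpcs:2d} GPCs, {sms:3d} SMs, "
          f"{sms * 64:5d} CUDA cores")
# 1 module:  4 GPCs,  48 SMs,  3072 CUDA cores
# 4 modules: 16 GPCs, 192 SMs, 12288 CUDA cores
```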
I think it would be "easier" to have separate RTX/tensor/GPU chiplets than several fully integrated chiplets.
Take into account that a design with separate full cores per chiplet would effectively be like on-chip SLI, meaning that each chiplet would appear to the OS as a separate GPU.
Scaling could be an issue.
AFAIK Intel is working on a multi-GPU solution that effectively appears as a single GPU. So maybe Nvidia could do the same.
So are we going to hit $1000 for a mid-range GPU?
I'm moving on to a different hobby if that happens anytime soon.
I feel ya. Here I am, on a 1060 3GB that I originally bought to hold me over a few months. Then the RTX cards seemed crap value to me, and no games I play really supported it. The 16 series seemed equally garbage: full price for barely any improvement over my 1060. The Super cards haven't done much to change my mind, so here I am, nearly 3 years later, on a freaking 1060 3GB that still plays like a champ at 2560x1080 on medium-high settings. On top of that, I have a 4-month trip for work in the spring, so there was zero point in buying a 16-series Super card this fall only to not use it for the better part of half a year.
Now I'm locked in for the 2020 GPUs. There better be a decent-value, raytracing-enabled (however low-level) sub-$300 card from either AMD or Nvidia, or this 1060 will be stretched yet another year. It's funny: when I didn't have any money, I was always struggling to save up for a new GPU, because generational improvements were stupid good. Now that I have more money than I consider prudent to spend on PC parts, I'm actually spending less, because every generational upgrade for the past 4 years has seemed like an overpriced joke for anyone playing at 1080p, 16:9 or ultrawide, who doesn't need 144 Hz for competitive reasons. Isn't that ironic?
I agree. If you game at 1080p, you don't have to spend $200+ to get great performance. But once you hop up to 2K and 4K, that's another story.
I would have thought that AMD would be first in releasing an MCM type consumer GPU.
There will always be $300-400 cards.
I would like a $4000 card that’s 3x faster than a $1000 card. It’d be nice to have that option. Chiplets could make that a thing.
Yeah, that scenario looks more realistic. Too bad you'd end up with a card that's 15% faster than a $1000 card for $4000 :S
So are we going to hit $1000 for a mid-range GPU?
I seriously doubt that.
AMD is just preparing to announce an "Nvidia Killer", but by the time it's on the market, Nvidia will be at another level of performance with Ampere compared to Turing, and with cheaper products. LOL!
A broken clock is right twice a day -- and Nvidia has been doing this to AMD's GPU division for a decade like clockwork. Given that we're talking about the architecture after Ampere, which will almost certainly be on a process revision in the 7nm / 5nm class, and will likely include a chiplet configuration, Nvidia has every opportunity to increase performance and decrease cost significantly across all segments of the market.