Socket GPUs

Was thinking (or not): why do I have to pay for more HBM memory and another GPU "motherboard" when next-gen GPUs arrive, if I already have high-end stuff on my Titan V? I only want the GPU chip!

GPUs should be modular, change my mind.
 
Too many different supporting chips, diodes, caps, etc. for each GPU spec makes it a bad idea.
 
Socketed GPUs do exist: take, for example, Nvidia's SXM2 (on eBay, the P100/V100 SXM2 modules are cheap, too, but good luck finding cheap SXM2 motherboards/servers).
 
Your HBM is part of the GPU package; how could it be socketed? You're really just paying for the mainboard.
 
This position isn't tenable if you know how graphics cards are made.
Each graphics card's board is specific to that GPU. It takes AMD and Nvidia a decent amount of time to figure out the routing on the board for each new incoming chip.
The board itself is like a motherboard for that specific GPU. Each pin and pinout is different on each GPU: entirely different pinouts for power (with specific voltages), RAM, etc., on down the line.
(EDIT: As an aside, this is so complex that back in the Nvidia Ti4200/Ti4400/Ti4600 days I knew a technical writer for Nvidia, and for every graphics processor they made they essentially had to write a document over 300 pages long laying out, line by line, how every interaction with the chip worked from a pin perspective. This was so board partners, if they wanted to make custom boards, could figure out their own routing or make modifications. It's also why a lot of board partners don't bother: it's not worth the effort to optimize further and redo work that has already been done.)

For socketable GPUs to exist, there would have to be standardization across all GPUs (at least from the same manufacturer), which is something they really don't want to do.
In reality, what would likely happen is that every GPU generation they would simply create a "new socket," more or less defeating the whole point of what you want to accomplish anyway.

There isn't a way to make this cost any less. GPUs being swappable in a PCIe card slot is already amazing. Being able to socket GPUs on anything other than a dev card is out of the question.

So, I'm not gonna change your mind; you can think whatever you want to think. But from a technical standpoint, even if 100% of people wanted a socketable GPU, manufacturers wouldn't bother, because it wouldn't serve much purpose beyond letting you replace a failed GPU with an identical one.
 
Nope, I've looked at it; the HBM stacks are their own memory chips placed around the GPU die.

HBM is integrated into the GPU MCM.

https://en.wikipedia.org/wiki/High_Bandwidth_Memory
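
To put rough numbers on why it has to live on the package: each HBM2 stack uses a 1024-bit data interface, which is only practical to route across a silicon interposer, not an ordinary PCB. Below is a quick illustrative sketch in Python using nominal spec figures (not measurements from any particular board); the 3-stack count matches the Titan V's 3072-bit bus.

# Illustrative arithmetic only -- nominal HBM2/GDDR6 interface widths,
# not measurements from any specific card.

HBM2_BITS_PER_STACK = 1024   # each HBM2 stack has a 1024-bit data bus
GDDR6_BITS_PER_CHIP = 32     # a GDDR6 device exposes a 32-bit (2x16) interface
TITAN_V_STACKS = 3           # Titan V uses 3 HBM2 stacks (3072-bit total)

hbm_data_lines = HBM2_BITS_PER_STACK * TITAN_V_STACKS
gddr_chips_for_384_bit_bus = 384 // GDDR6_BITS_PER_CHIP

print(f"HBM2 data lines to route: {hbm_data_lines}")                          # 3072
print(f"GDDR6 chips needed for a 384-bit bus: {gddr_chips_for_384_bit_bus}")  # 12

Routing a few thousand fine-pitch traces is what the interposer is for; a conventional GDDR layout only needs a few dozen 32-bit devices spread around the GPU on the PCB.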

I personally think HBM is a terrible idea. BGA packages already have awful problems with breakage due to RoHS solder, and piling on stacked dies and interposers is just a nightmare scenario. Not to mention they'll run smokin' hot sitting literally right next to the GPU die itself, and memory does not like heat.

HBM just pushes electronics further into the problem of massive amounts of e-waste. If a GDDRx chip dies on a video card, you can remove and replace it. If an HBM stack dies, the whole card has to be thrown out; you can't exactly call up AMD/Nvidia and order a GPU die replacement.

There's literally no advantage to using HBM other than its proximity to the GPU reducing latency. GDDR5 and GDDR6 reach similar bandwidth when used conventionally.
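
For what it's worth, the bandwidth half of that is easy to sanity-check with back-of-the-envelope peak-spec math. Here's a quick sketch in Python using nominal published figures for the Titan V and, as an assumed GDDR6 comparison point, an RTX 2080 Ti; these are peak numbers, not benchmarks.

# Peak-bandwidth arithmetic only -- nominal spec figures, not benchmarks.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s = bus width * per-pin data rate / 8."""
    return bus_width_bits * gbps_per_pin / 8

# Titan V: 3072-bit HBM2 at ~1.7 Gbps per pin
print(f"Titan V (HBM2):  {peak_bandwidth_gbs(3072, 1.7):.0f} GB/s")  # ~653 GB/s

# RTX 2080 Ti: 352-bit GDDR6 at 14 Gbps per pin
print(f"2080 Ti (GDDR6): {peak_bandwidth_gbs(352, 14):.0f} GB/s")    # ~616 GB/s

The wide-but-slow HBM bus and the narrow-but-fast GDDR6 bus land in the same ballpark here.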
 