Socket GPUs

Discussion in 'Video Cards' started by Stryker7314, Oct 29, 2019.

  1. Stryker7314

    Stryker7314 Limp Gawd

    Messages:
    266
    Joined:
    Apr 22, 2011
    Was thinking (or not): why do I have to pay for more HBM and another GPU "motherboard" when next-gen GPUs arrive, if I already have high-end stuff on my Titan V? I only want the GPU chip!

    GPUs should be modular, change my mind.
     
    AceGoober likes this.
  2. Wat

    Wat n00b

    Messages:
    18
    Joined:
    Jun 23, 2019
    There is no real reason a GPU couldn't be made to fit a sock. But then it would smell like feet.
     
  3. DrLobotomy

    DrLobotomy [H]ardness Supreme

    Messages:
    6,507
    Joined:
    May 19, 2016
    Too many different supporting chips/diodes/caps, etc., for each spec of GPU makes it a bad idea.
     
    Stryker7314 likes this.
  4. Stryker7314

    Stryker7314 Limp Gawd

    Messages:
    266
    Joined:
    Apr 22, 2011
    Maybe do a CPU/motherboard chipset-style system?
     
  5. balnazzar

    balnazzar [H]Lite

    Messages:
    112
    Joined:
    Mar 13, 2013
    Socketed GPUs do exist: take, for example, the Nvidia SXM2 form factor (on eBay, P100/V100 SXM2 modules are cheap, too, but good luck finding cheap SXM2 motherboards/servers).
     
    Stryker7314 likes this.
  6. sabrewolf732

    sabrewolf732 2[H]4U

    Messages:
    4,043
    Joined:
    Dec 6, 2004
    Your HBM is part of the GPU package, so how could it be socketed? You're really just paying for the mainboard.
     
    pendragon1 likes this.
  7. Stryker7314

    Stryker7314 Limp Gawd

    Messages:
    266
    Joined:
    Apr 22, 2011
    Nope, I've looked at it: the HBM stacks are their own memory chips placed around the GPU die.
     
  8. UnknownSouljer

    UnknownSouljer [H]ardness Supreme

    Messages:
    6,046
    Joined:
    Sep 24, 2001
    This position isn't tenable if you know how graphics cards are made.
    Each graphics card's board is specific to that GPU. It takes AMD and nVidia a decent amount of time to figure out the board routing for each new incoming chip.
    The board itself is like a motherboard for that specific GPU. Each pin and pinout is different on each GPU: entirely different pinouts for power (with specific voltages), RAM, etc., on down the line.
    (EDIT: As an aside, this is so complex that back in the nVidia Ti4200/Ti4400/Ti4600 days I knew a technical writer for nVidia, and for every graphics processor they made, they had to write a document over 300 pages long detailing, line by line, how all interactions with the chip worked from a pin perspective. This was so board partners who wanted to make custom boards could figure out their own routing or make modifications. It's also why a lot of board partners don't bother: it's not worth the effort to try to optimize further and redo work that has already been done.)

    In order for socketable GPUs to exist, there would have to be standardization across all GPUs (at least from the same manufacturer), which is something they really don't want to do.
    In reality, what would likely happen is that every GPU generation they would simply create a "new socket", more or less defeating the whole point of what you want to accomplish anyway.

    There isn't a way to make this cost any less. GPUs being swappable on a PCIe card slot is already amazing. Being able to socket GPUs on anything other than a dev card is out of the question.

    So, I'm not gonna change your mind. You can think whatever you want to think. But from a technical standpoint, even if 100% of people wanted a socketable GPU, manufacturers wouldn't bother, because it wouldn't serve much of a purpose beyond being able to swap in the same GPU in the case of a failure.
     
    Last edited: Oct 30, 2019
  9. GiGaBiTe

    GiGaBiTe Gawd

    Messages:
    887
    Joined:
    Apr 26, 2013
    HBM is integrated into the GPU MCM.

    https://en.wikipedia.org/wiki/High_Bandwidth_Memory

    I personally think HBM is a terrible idea. BGA packages already have awful problems with breakage due to RoHS solder, and adding more stacked dies and interposers is just a nightmare scenario. Not to mention they'll run smokin' hot sitting literally right next to the GPU die itself, and memory does not like heat.

    HBM just pushes electronics farther into the problem of massive amounts of e-waste. If a GDDRx chip dies on a video card, you can remove and replace it. If an HBM stack dies, the whole card has to be thrown out. You can't exactly call up AMD/Nvidia and order a GPU die replacement.

    There's literally no advantage to using HBM other than its proximity to the GPU reducing latency. GDDR5 and GDDR6 reach similar bandwidth when used conventionally on a wide bus.
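
    A rough back-of-the-envelope on that bandwidth point (a minimal sketch; the bus widths and per-pin data rates below are assumed ballpark figures for a Titan V-class HBM2 setup and a common GDDR6 card, not numbers from this thread):

    # Peak theoretical bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in GT/s
    def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
        return bus_width_bits / 8 * data_rate_gtps

    # Assumed HBM2 config (roughly Titan V-class): 3072-bit bus at ~1.7 GT/s per pin
    print(peak_bandwidth_gbs(3072, 1.7))   # ~652.8 GB/s

    # Assumed GDDR6 config: 384-bit bus at 14 GT/s per pin
    print(peak_bandwidth_gbs(384, 14.0))   # 672.0 GB/s

    On those assumed figures the two land in the same ballpark, which is the comparison being made here: HBM gets there with a very wide, slow bus, GDDR with a narrower, faster one.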
     
    AceGoober and Stryker7314 like this.