Researchers Develop Silicon Interconnect for GPUs

Discussion in 'HardForum Tech News' started by AlphaAtlas, Feb 1, 2019.

  1. AlphaAtlas

    AlphaAtlas [H]ard|Gawd Staff Member

    Messages:
    1,713
    Joined:
    Mar 3, 2018
IEEE Spectrum reports that Rakesh Kumar and his associates are working on a "wafer-scale" interconnect for GPUs. Instead of fabricating multiple GPUs on a massive silicon wafer, or using the power-hungry and slow traditional interconnects found in existing supercomputers, Kumar wants to use a "silicon interconnect fabric" as a replacement for the circuit board. In other words, Kumar wants an interconnect that would (theoretically) let 40 GPUs be seen as "one giant GPU" from the perspective of programmers. Simulations of a 41-GPU design showed that it significantly sped up computation while consuming less energy than 40 GPUs with a conventional interconnect. According to the article, the team has started work on building a wafer-scale prototype processor system, but Kumar would not give further details. He will present more information on his progress at the IEEE International Symposium on High-Performance Computer Architecture later this month.

SiIF replaces the circuit board with silicon, so there is no mechanical mismatch between the chip and the board, and therefore no need for a chip package. The SiIF wafer is patterned with one or more layers of 2-micrometer-wide copper interconnects spaced as little as 4 micrometers apart. That's comparable to the top level of interconnects on a chip. In the spots where the GPUs are meant to plug in, the silicon wafer is patterned with short copper pillars spaced about 5 micrometers apart. The GPU is aligned above these, pressed down, and heated. This well-established process, called thermal compression bonding, causes the copper pillars to fuse to the GPU's copper interconnects. The combination of narrow interconnects and tight spacing means you can squeeze at least 25 times more inputs and outputs onto a chip, according to the Illinois and UCLA researchers.
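For a rough sense of where a "25x" figure like that could come from: I/O density scales with the inverse square of the bump pitch, so a 5x tighter pitch gives 25x the connections per unit area. A minimal back-of-the-envelope sketch, where the 25-micrometer conventional pitch is my own assumed round number for illustration (only the 5-micrometer SiIF pillar spacing comes from the article):

```python
# Back-of-the-envelope I/O density comparison for a SiIF-style substrate.
# Assumption: conventional_pitch_um is a hypothetical round number for a
# standard package's micro-bump pitch, chosen purely to illustrate scaling.

siif_pitch_um = 5.0           # copper pillar spacing quoted in the article
conventional_pitch_um = 25.0  # assumed conventional micro-bump pitch

# Connections per unit area scale with the inverse square of the pitch,
# so the density gain is the square of the pitch ratio.
density_gain = (conventional_pitch_um / siif_pitch_um) ** 2
print(density_gain)  # 25.0 -- in line with the "at least 25 times" claim
```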
     
    lostin3d and Revdarian like this.
  2. Romeomium

    Romeomium Limp Gawd

    Messages:
    203
    Joined:
    Feb 9, 2017
    This could be huge for AMD chiplet GPU theories...

    Bring on cheap, better performing cards! In 10 years...
     
    lostin3d, joobjoob, Lakados and 2 others like this.
  3. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    518
    Joined:
    Jan 14, 2012
    Can you imagine how this could revolutionize SLI/X-Fire? Both GPUs acting as one! That would be sweet
     
    joobjoob and /dev/null like this.
  4. Sikkyu

    Sikkyu I Question Reality

    Messages:
    2,882
    Joined:
    Jan 21, 2010
    40 vega 2's on 1 super ultra card. 16k res with raytracing. gg ez

    you might need your own powerplant though
     
    Legendary Gamer likes this.
  5. katanaD

    katanaD [H]ard|Gawd

    Messages:
    1,987
    Joined:
    Nov 15, 2016
But with 40 GPUs, will we get true real-time ray tracing?
     
    Trixar and Sulphademus like this.
  6. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    518
    Joined:
    Jan 14, 2012
    We could probably create the chick from Weird Science with that much power ;)
     
    Paul_Johnson, dvsman, Madoc and 2 others like this.
  7. pcgeekesq

    pcgeekesq [H]ard|Gawd

    Messages:
    1,403
    Joined:
    Apr 23, 2012
  8. Elf_Boy

    Elf_Boy 2[H]4U

    Messages:
    2,299
    Joined:
    Nov 16, 2007
    Neat idea.

With current tech, don't we need all that surface area for heat dissipation?

If this were combined with tech that recycled all the waste electricity from shorting bits out to zero the memory, we could see Moore's law accelerate. At least for a little while.
     
  9. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    518
    Joined:
    Jan 14, 2012
As far as I'm aware they're not the same thing. The one in your link only allows the silicon to be stacked. What this new process allows is for 40 video cards to act as if they were one enormous video card, and the instructions required to harness it need only be written as if it's a single video card. Thus, as a primitive example, we would no longer require a game developer to support SLI or X-Fire; they just create the game with support for a single graphics card, and this solution would essentially tell the video game that those 40 video cards were just one massive, super fast Vega/2080 Ti. At least, that's how I understood it.

Chip stacking tech isn't even a reality as of yet. AMD is currently using the chiplet approach, and Intel is possibly going to do actual chip stacking at some point in the future.
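The "40 cards seen as one" idea can be pictured as a scatter/compute/gather step hidden behind a single logical device. A toy sketch of that programming model, with all names hypothetical (this is not the researchers' API, just the shape of the abstraction):

```python
# Toy illustration of a single logical device fanning work out to many
# physical devices behind the scenes. The caller never addresses an
# individual GPU -- exactly the transparency the article describes.

class LogicalGPU:
    """Presents num_devices physical devices as one device (hypothetical)."""

    def __init__(self, num_devices=40):
        self.num_devices = num_devices

    def run(self, data):
        # Scatter: carve the input into one slice per physical device.
        slices = [data[i::self.num_devices] for i in range(self.num_devices)]
        # Compute: each "device" works on its slice (stand-in: sum of squares).
        partials = [sum(x * x for x in s) for s in slices]
        # Gather: the caller sees one combined result, as from one device.
        return sum(partials)

gpu = LogicalGPU(num_devices=40)
print(gpu.run(range(100)))  # same answer a single device would produce
```

The point of the sketch is that the scatter and gather live inside the device abstraction, so application code (a game, in the SLI/X-Fire analogy) is written for one GPU.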
     
    Submarinesailor, knowom and Trixar like this.
  10. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,491
    Joined:
    Feb 3, 2014
If it works, it will take off faster than that; this would be huge in the AI field. That is a multi-billion-dollar growth field, so you could see it in the enterprise space much sooner, and shortly after in the high-end consumer field as it trickles down.

I really hope so, at least. I don't want to have to wait that long.
     
  11. Hielo_loco

    Hielo_loco [H]Lite

    Messages:
    103
    Joined:
    Jan 27, 2015
    Sure, but can it run... Crysis... [dramatic pause] raytraced Crysis?
     
    Hruodgar and Trixar like this.
  12. lostin3d

    lostin3d [H]ard|Gawd

    Messages:
    2,003
    Joined:
    Oct 13, 2016
    AlphaAtlas likes this.
  13. knowom

    knowom Limp Gawd

    Messages:
    424
    Joined:
    Aug 15, 2008
    Last edited: Feb 1, 2019