Researchers Develop Silicon Interconnect for GPUs

AlphaAtlas

IEEE Spectrum reports that Rakesh Kumar and his associates are working on a "wafer-scale" interconnect for GPUs. Instead of fabricating multiple GPUs on a massive silicon wafer, or using the power-hungry, slow traditional interconnects found in existing supercomputers, Kumar wants to use a "silicon interconnect fabric" (SiIF) as a replacement for the circuit board. In other words, Kumar wants an interconnect that would (theoretically) allow 40 GPUs to be seen as "one giant GPU" from the perspective of programmers. Simulations of a 41-GPU design showed that it significantly sped up computation while consuming less energy than 40 GPUs with a conventional interconnect, and the report notes that the team has started work on building a wafer-scale prototype processor system, though Kumar would not give further details. Kumar will present more on his progress at the IEEE International Symposium on High-Performance Computer Architecture later this month.

SiIF replaces the circuit board with silicon, so there is no mechanical mismatch between the chip and the board and therefore no need for a chip package. The SiIF wafer is patterned with one or more layers of 2-micrometer-wide copper interconnects spaced as little as 4 micrometers apart. That's comparable to the top level of interconnects on a chip. In the spots where the GPUs are meant to plug in, the silicon wafer is patterned with short copper pillars spaced about 5 micrometers apart. The GPU is aligned above these, pressed down, and heated. This well-established process, called thermal compression bonding, causes the copper pillars to fuse to the GPU’s copper interconnects. The combination of narrow interconnects and tight spacing means you can squeeze at least 25 times more inputs and outputs on a chip, according to the Illinois and UCLA researchers.
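To put the "at least 25 times" figure in rough perspective, here is a back-of-the-envelope comparison in Python. The 5-micrometer pillar pitch comes from the article; the 130-micrometer conventional bump pitch is my own assumption for a typical flip-chip package, not a number the researchers gave:

# Rough I/O-density comparison: SiIF copper pillars vs. a conventional package.
siif_pitch_um = 5.0        # copper pillar spacing on the SiIF wafer (from the article)
package_pitch_um = 130.0   # assumed typical flip-chip bump pitch (my assumption)

linear_ratio = package_pitch_um / siif_pitch_um   # I/O per unit of chip edge
area_ratio = linear_ratio ** 2                    # I/O per unit of chip area

print(f"~{linear_ratio:.0f}x more I/O per edge, ~{area_ratio:.0f}x per area")
# With these assumptions: roughly 26x per edge and 676x per area, which is at
# least consistent with the conservative "25 times more inputs and outputs" claim.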
 
Can you imagine how this could revolutionize SLI/X-Fire? Both GPUs acting as one! That would be sweet
 
40 vega 2's on 1 super ultra card. 16k res with raytracing. gg ez

you might need your own powerplant though
 
Neat idea.

With current tech, do we not need all the surface area for heat dissipation?

If this were combined with tech that recycled all the waste electricity from shorting out bits to zero the memory, we could see Moore's law accelerate. At least for a little while.
 
As far as I'm aware they're not the same thing. One only allows the silicon to be stacked, which is what your link describes. What this new process allows is for 40 video cards to act as if they were one enormous video card, and the instructions required to harness it need only be written as if it's a single video card. Thus, as a primitive example, we would no longer require a game developer to support SLI or X-Fire; they would just create the game with support for a single graphics card, and this solution would essentially tell the game that those 40 video cards were just one massive, super-fast Vega/2080 Ti. At least, that's how I understood it.

Chip-stacking tech isn't even a reality yet. AMD is currently using the chiplet approach, and Intel may actually do chip stacking at some point in the future.
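A rough Python sketch of that difference from the programmer's side; device(), launch(), and gather() are hypothetical stand-ins (stubbed here so the sketch runs), not any real GPU API:

NUM_GPUS = 40

def device(i):
    return f"gpu{i}"                               # stub: pretend handle to GPU i

def launch(dev, work):
    return [w * 2 for w in work]                   # stub: pretend kernel doubling each value

def gather(parts):
    return [x for part in parts for x in part]     # stub: merge partial results

def render_frame_multi_gpu(tiles):
    # Today: the app owns the split across 40 discrete GPUs and the final gather
    # (the SLI/CrossFire burden described above).
    results = [launch(device(i % NUM_GPUS), tile) for i, tile in enumerate(tiles)]
    return gather(results)

def render_frame_wafer_scale(tiles):
    # Wafer-scale vision: all 40 dies appear as one logical GPU, so the
    # partitioning and gathering move below the API boundary.
    return launch(device(0), [x for tile in tiles for x in tile])

print(render_frame_multi_gpu([[1, 2], [3, 4]]))    # [2, 4, 6, 8]
print(render_frame_wafer_scale([[1, 2], [3, 4]]))  # [2, 4, 6, 8]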
 
This could be huge for AMD chiplet GPU theories...

Bring on cheap, better-performing cards! In 10 years...
If it works, it will take off faster than that; this would be huge in the AI field, which is a multi-billion-dollar growth market. You could see it in the enterprise space much sooner, and shortly after in the high-end consumer space as it trickles down.

I really hope so, at least. I don't want to have to wait that long.
 