Dual core GPU instead of one big GPU?

Galvin

But using two dies. Intel did this early on with its dual-core CPUs. So instead of Nvidia trying to make one big die with more potential defects, they could make a smaller die with fewer defects.

Then all they need to do is put two of them under one heat spreader as separate dies. I would think going this route would make better sense in the future. Anyone have thoughts on this?
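To put some rough numbers on the defect idea, here's a quick back-of-the-envelope sketch using the usual Poisson yield model. The defect density and die sizes are just guesses I made up for illustration, not real foundry figures:

import math

# Classic Poisson yield model: P(defect-free die) = exp(-D0 * A),
# where D0 is the defect density and A is the die area.
# The numbers below are illustrative guesses, not real foundry data.
D0 = 0.5                   # assumed killer defects per cm^2
area_big = 5.3             # ~530 mm^2, roughly a GF100-class die
area_small = area_big / 2  # hypothetical half-size die

print("big die yield:   %.1f%%" % (100 * math.exp(-D0 * area_big)))    # ~7%
print("small die yield: %.1f%%" % (100 * math.exp(-D0 * area_small)))  # ~27%
# A killer defect ruins the whole big die, but only one small die,
# and good small dies can be paired up from anywhere on the wafer.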
 
First, it's totally different going from CPU to GPU.

Second, GPUs are already multi-core: the GTX 480 has 512 cores with 480 active, and the 5870 has 1600.
The cores are a bit different, though; they're called SPs (shader processors).
 
The Intel i7 only has 731 million transistors.

The GTX 480 has 3.0 billion transistors.

The GTX 480 is already about 4x as complex (3.0B / 731M ≈ 4.1) and just as multi-cored, hence the much higher thermal output and power consumption.
 
The GTX 480 has 3 billion transistors.
And it's true, GPUs really are designed very differently, although I don't see why separating the logic into two distinct, separately manufactured dies would be a bad idea per se. It might have other undesired consequences in production, though; I couldn't say, as I'm not an electrical engineer.

Anyway, here's a cool video (who doesn't love the MythBusters?). I wouldn't read too deeply into it, but it shows the basic idea.

http://www.youtube.com/watch?v=-P28LKWTzrI
 
Well, Nvidia has already made dual-GPU cards; the only difference is that you would have both dies under one heat spreader, making the total length of the card much shorter. Two dies with 1.5 billion transistors each means fewer defects per die, which means more working chips per wafer. Not sure how heat would go through the roof as some of you said.

If they can do a dual-GPU card then they can do two dies under one heat spreader. Unless people like having 12" cards for dual GPUs :)
 
It's just much cheaper/easier to manufacture one GPU design than it is to make two. You can also adjust for demand easily: if the dual-GPU card is in higher demand, you don't have to make more dual-GPU chips, you just allocate more of the existing chips into dual-GPU cards.
 
Well, Nvidia has already made dual-GPU cards; the only difference is that you would have both dies under one heat spreader, making the total length of the card much shorter. Two dies with 1.5 billion transistors each means fewer defects per die, which means more working chips per wafer. Not sure how heat would go through the roof as some of you said.

If they can do a dual-GPU card then they can do two dies under one heat spreader. Unless people like having 12" cards for dual GPUs :)

Except your "only difference" makes all the difference in the world. A "dual core GPU" doesn't make any damn sense from the outset, which is why you will never see it.

Furthermore, to have a "dual core GPU" you first need to define what exactly a core is, and if you go look at a GPU's architecture, that is far easier said than done. For example, some people here have called a shader a core, which sort of works, but not really. A CPU core is standalone: everything it needs, from instruction decoding to actually doing the work, is in each core. GPUs have *nothing* similar to a CPU core. They are an entirely different beast.
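To illustrate it in the crudest possible terms, here's a toy sketch. It is grossly simplified and nothing like the real hardware, but it shows the shape of the difference: each "CPU core" carries its own fetch/decode/execute machinery, while the "SPs" are just dumb ALUs marching to one shared scheduler:

# Toy illustration only -- grossly simplified, not a real architecture model.

class CpuCore:
    """Standalone: fetches, decodes and executes its own instruction stream."""
    def __init__(self, program):
        self.program = program
    def run(self, value):
        for op, operand in self.program:      # private fetch/decode/execute
            if op == "add": value += operand
            if op == "mul": value *= operand
        return value

class ShaderProcessor:
    """'Dumb' ALU: executes whatever it is handed, no control logic of its own."""
    def execute(self, op, value, operand):
        return value + operand if op == "add" else value * operand

class WarpScheduler:
    """One scheduler drives many SPs in lockstep on the same instruction."""
    def __init__(self, sps):
        self.sps = sps
    def run(self, program, values):
        for op, operand in program:           # ONE shared instruction stream
            values = [sp.execute(op, v, operand)
                      for sp, v in zip(self.sps, values)]
        return values

# Two CPU cores can run unrelated programs; eight SPs all march to one scheduler.
cores = [CpuCore([("add", 1)]), CpuCore([("mul", 3)])]
print([c.run(2) for c in cores])                             # [3, 6]
sched = WarpScheduler([ShaderProcessor() for _ in range(8)])
print(sched.run([("add", 1), ("mul", 2)], list(range(8))))   # [2, 4, 6, ...]

Splitting the second thing cleanly in two means splitting the scheduler, memory controller and so on, which is a completely different problem from dropping two standalone cores onto one package.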
 
The Intel i7 only has 731 million transistors.

The GTX 480 has 3.0 billion transistors.

The GTX 480 is already about 4x as complex (3.0B / 731M ≈ 4.1) and just as multi-cored, hence the much higher thermal output and power consumption.

Differences also include transistor clock speed and process tech (feature size), as well as the quality of that process. Those have as much to do with power usage and heat output as the raw transistor count of the chip. Regarding process quality, it's commonly said that Nvidia's problems with the 40nm process resulted in transistors that leak current, which contributes greatly to this generation's relatively high heat output and power usage.

That being said, I think technologies that improve communication between the GPUs of a multi-GPU card will be the driving factor in this becoming the norm (where the top-of-the-line card is effectively four midrange chips "glued" together); ATI's Sideport tech, which they've had to scrap to keep die size down, is one example. Sadly, I see a "modular core" implementation adding cost and complexity and thus staying off the roadmap.
 
The main reason is the HEAT. Period.
Those GPUs get VERY hot.
Trying to put two such GPU dies side by side would cause a fire hazard!

Just check out this article here with their infrared heat pics and you'll understand what I mean.
http://www.behardware.com/art/imprimer/787/

[Infrared image: 5970 heat under load]

[Infrared image: GTX 480 SLI heat under load]


See how hot those gpus are running? Now imagine slapping two of those GPUs side by side! Unless you've got some really exotic cooling, and the 5970 already uses Vapor Chamber tech, you're asking for Instant MELTDOWN!
 
You keep bringing up heat. But when you have two dies, each with half the transistors, doing the work of one 512-SP part, the total heat output should be about the same. And a full 512-SP part is impossible right now because the die is so huge it always has defects.

The problem Nvidia is running into, and I can see it being a problem next generation too, is that their die size is getting too big. They need to find a way to make smaller dies that can be chained together; otherwise I see this happening every generation.

Whether the two dies on a dual-GPU card are side by side or under the same heat spreader, the total heat should be the same. The only difference is that having two dies under one heat spreader makes the card smaller.
 
You keep bringing up heat. But when you have two dies, each with half the transistors, doing the work of one 512-SP part, the total heat output should be about the same. And a full 512-SP part is impossible right now because the die is so huge it always has defects.

The problem Nvidia is running into, and I can see it being a problem next generation too, is that their die size is getting too big. They need to find a way to make smaller dies that can be chained together; otherwise I see this happening every generation.

Whether the two dies on a dual-GPU card are side by side or under the same heat spreader, the total heat should be the same. The only difference is that having two dies under one heat spreader makes the card smaller.

That IS ATI's strategy, lol.

The RV670 was an early example, and the RV770 was the best representation of it.
The teeny HD3870 was doubled up into the HD3870 X2, which was cheaper and faster than the monolithic 8800GTX/Ultra.
The smaller HD4870 was put into the HD4870 X2, which held the performance lead until nVidia doubled up as well and got the GTX 295.

The HD5870 is a slight deviation from the "small die" strategy, but not too far.



Here is the thing, though. Remember the G80?

A huge freakin' core, but simply because it was HUGE and had 128 SPs :eek:, there was NO competition for quite some time. nVidia commanded the market and could do whatever they wanted in terms of pricing.

Remember the 9700PRO?
Another case of a MASSIVE chip. The previous record was around 60 million transistors, and ATi jumped straight to 110 million. It easily cleaned house against nVidia's lineup, and nVidia's next-gen card (the FX 5800) simply wasn't enough to compete with it.



Having the biggest, AND the fastest, is the best way to secure the performance crown. Dual GPU cards don't capture as much mindshare as single-GPU performance leaders.
 
You keep bringing up heat. But when you have two dies, each with half the transistors, doing the work of one 512-SP part, the total heat output should be about the same. And a full 512-SP part is impossible right now because the die is so huge it always has defects.

The problem Nvidia is running into, and I can see it being a problem next generation too, is that their die size is getting too big. They need to find a way to make smaller dies that can be chained together; otherwise I see this happening every generation.

Whether the two dies on a dual-GPU card are side by side or under the same heat spreader, the total heat should be the same. The only difference is that having two dies under one heat spreader makes the card smaller.

So you are saying the GPU should be split in two? That will have an effect on performance. There is no manufacturing benefit to physically splitting the GPU in two, and the added pathways and linking infrastructure needed to get the halves to communicate would add significantly to production costs, as well as being much slower than a single monolithic core. It also adds complication to manufacturing, since you no longer have guaranteed whole cores coming off a wafer. What if a wafer returns an uneven batch of working "left" cores versus "right" cores? You end up with a lot more waste and useless silicon.

Large die size is caused by poor engineering. ATI's die size is much smaller because they trimmed the planned features down to those required by the majority of gamers. Fermi carries a lot of Tesla baggage left over from when it was designed as a GPGPU part. Process size has more overall effect on power and thermal efficiency than die size.
 
How are two dies slower than one? Every dual-GPU card has always been the fastest solution, so I don't see why you're thinking that. Smaller dies mean more product out of one wafer. I wonder how much wafer space gets wasted every time a 512-SP part has to be cut down to 448/480 shader cores. Common sense tells me smaller dies that can be chained together will give more working product from one wafer than huge dies.
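Rough wafer math with the same made-up numbers as my earlier sketch (assumed defect density and die sizes, nothing official):

import math

# Back-of-the-envelope wafer math with assumed numbers (not real foundry data).
D0 = 0.5                      # assumed killer defects per cm^2
wafer_area = math.pi * 15**2  # 300 mm wafer -> ~707 cm^2
usable = 0.85                 # crude allowance for edge loss and scribe lines

def cards_per_wafer(die_area_cm2, dies_per_card):
    candidates = usable * wafer_area / die_area_cm2
    good = candidates * math.exp(-D0 * die_area_cm2)  # Poisson yield
    return good / dies_per_card

print("one ~530 mm^2 die per card:  %.0f" % cards_per_wafer(5.3, 1))   # ~8
print("two ~265 mm^2 dies per card: %.0f" % cards_per_wafer(2.65, 2))  # ~30
# Ignores salvage bins (the 448/480-SP parts) and whatever interconnect
# two dies would need, so the real gap is smaller, but the trend is clear.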
 
The only advantage to putting two sets of logic on one die would be to further improve the efficiency of the SPs as they place more of them on the die. Cutting the actual SP count in half would help with that.

The problem is there are games that won't take advantage of the other die, making the whole process and headache worthless.
 
How are two dies slower than one? Every dual-GPU card has always been the fastest solution, so I don't see why you're thinking that. Smaller dies mean more product out of one wafer. I wonder how much wafer space gets wasted every time a 512-SP part has to be cut down to 448/480 shader cores. Common sense tells me smaller dies that can be chained together will give more working product from one wafer than huge dies.

Because what you don't get is that shaders aren't individual work units. They are just the things that do the number crunching, but they are very dumb. Two CPU cores slapped side by side work because CPU cores don't work on the same thing at once. Each core does its own thing. That isn't how a GPU works at all. For example, all the SPs are managed by a single thread scheduler. There are also things like the memory controller, ROPs, TMUs, etc.. You simply cannot split a GPU down the middle and then glue two pieces together. That just doesn't work at all.

Do yourself a favor and go read about GPU architecture. I promise you, as soon as you sit down and actually look at some of the diagrams you will quickly see why what you are proposing doesn't make any damned sense.

You will also see that Nvidia and ATI have already grouped shaders into core-like structures.
 