Why slots?

Just curiosity really... but could GPUs ever be made into socketed chips, like CPUs are? Sure, it would make for a much more complicated mobo layout (needing video connectors and all). I'm also not sure what the memory latency differences are; perhaps a different memory type and interface would be needed. But cooling should be a lot easier, and the ability to upgrade video memory would be nice.

Just curious if you brainiacs out there know why it wouldn't work. I look at these foot-long, double-slot video cards and just think they seem as stupid as those old "chocolate bar" Slot 1 Pentiums or early Athlons.
 
IGPs ;)

They're a little behind in terms of matching dedicated GPUs simply because they don't have the infrastructure to support the same amount of power.
 
Cost, compatibility, cooling:

You would most likely be locked into one brand, and into a specific generation and variation.
Motherboard prices would be a lot higher.
It would really suck to try to cool an extra high-power socketed chip.
It would reduce the space for expansion slots.
 
It's mostly because onboard memory will usually be multiples faster than shared memory (on the mobo). Otherwise you'd also have to buy GDDR5 for your mainboard. I'd rather have it the way it is now.
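Rough back-of-envelope on the bandwidth gap (ballpark figures I'm assuming for a 256-bit GDDR5 card versus dual-channel DDR3 of that era, not from any spec sheet):

# Peak memory bandwidth (GB/s) = transfer rate (MT/s) * bus width (bytes) / 1000
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

# Assumed figures: a 256-bit GDDR5 card at 5500 MT/s effective,
# versus dual-channel DDR3-1333 (2 x 64-bit) on the motherboard.
gddr5_card  = peak_bandwidth_gbs(5500, 256)   # ~176 GB/s
shared_ddr3 = peak_bandwidth_gbs(1333, 128)   # ~21 GB/s

print(f"On-card GDDR5: {gddr5_card:.0f} GB/s, shared DDR3: {shared_ddr3:.0f} GB/s, "
      f"roughly {gddr5_card / shared_ddr3:.0f}x difference")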
 
Also, take a look at how people bitch at every socket change.
 
It's a neat thought, but honestly it would be too cost-intensive and too much of a gamble, even if it were somehow adopted.

Also, heat issues would be present, and motherboards would be EATX in size, at least.

Even if the GPU had built-in memory like GDDR5, the added cost of designing and building the motherboards would outweigh the benefits.

This is why APUs have come about: you can now upgrade your CPU and "GPU" simultaneously while still having decent performance for basic apps.
It's nowhere near the performance of discrete GPUs, but it certainly is a start.

I look at these foot-long, double-slot video cards and just think they seem as stupid as those old "chocolate bar" Slot 1 Pentiums or early Athlons.
Those processors were made that way in the late '90s and early '00s to support additional L2 cache, even if it ran at half the clock speed of the processor.
Many users got more benefit from extra L2 cache running at half speed than from less cache running at full speed.

Not a bad design IMO, and much easier to install and upgrade the cooling units.
It would be too costly to produce in this era, though.
 
Intel is incorporating more and more into the CPU. In another 10 years or so you're going to plug your monitor and keyboard into just a chip.
 
I think it would be an excellent, high-end enthusiast type of setup. Have a high-end video card architecture built into the motherboard in place of the 1-3 PCIe x16 slots you'd normally have for video cards, with dedicated video memory slots to be filled by the end user (or preinstalled GDDR5 memory of varying sizes). Then you could have the same kind of cooling solution on the GPU as you do on your CPU, and when the next gen comes out it's a chip replacement instead of a full card replacement, which would probably be cheaper for the end user.

Though, this would require GPU manufacturers to stick to a single GPU architecture and work within certain confines. Motherboard prices WOULD be much, much higher, but that could potentially be offset by cheaper GPU chip upgrades. Imagine paying $700-800 for a high-end motherboard that has a 6970 built into it; then, when the 7k series comes out, you pay $100-200 for a chip and you have the latest and greatest.
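Quick sanity check on that upgrade math; every price here is a made-up assumption (the discrete card price especially), so treat it as a sketch of how the comparison would work, not a prediction:

# Hypothetical prices, purely illustrative
board_with_gpu_socket = 750    # the $700-800 high-end board guessed above
chip_upgrade          = 150    # drop-in GPU chip for the next generation
discrete_card         = 370    # assumed price of a comparable discrete card each generation
generations           = 3      # initial buy plus two upgrades

socketed_total = board_with_gpu_socket + chip_upgrade * (generations - 1)
discrete_total = discrete_card * generations

print(f"Socketed path: ${socketed_total} vs. discrete-card path: ${discrete_total}")
# ~$1050 vs. ~$1110 with these numbers -- close enough that it all hinges on chip pricing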

There are obviously a lot of problems with doing this, though; otherwise we'd already have something like it on the market.
 
Until you realize the 7900 series is replacing its GDDR5 memory with XDR2... so that ruins that theory.

But the bigger issue is that there are too many parts that can fail within a single item. If for some reason the dedicated GPU memory dies, you have to replace the entire board. If the GPU socket goes bad, you have to replace the entire board. If you accidentally bend a pin, you have to replace the entire board. It's too high of an investment for the consumer, and way too high of an investment for motherboard manufacturers when it comes to money being lost to RMAs.

The secondary issue is that there are just too many pieces of hardware on a single board, and you start running into space issues for traces, VRMs/MOSFETs, memory location, and socket location. There's a reason desktops haven't changed much in the last 20 years: the design is simple and it works, so why make it even more complex?
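A rough way to see the "more parts, one RMA" point; the failure rates here are invented purely for illustration:

# If each part fails independently with probability p, the chance that
# *something* fails is 1 - (1 - p1)(1 - p2)...
def any_failure(rates):
    ok = 1.0
    for p in rates:
        ok *= (1.0 - p)
    return 1.0 - ok

card_and_board   = [0.03, 0.03]               # assumed rates when mobo and GPU card RMA separately
integrated_board = [0.03, 0.03, 0.02, 0.02]   # assumed rates with GPU socket and GDDR slots added on

print(f"Separate parts:   {any_failure(card_and_board):.1%} chance some part needs an RMA")
print(f"Integrated board: {any_failure(integrated_board):.1%} chance the whole board goes back")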
 
Maybe a slightly better idea would be a socket on the graphics card instead of on the motherboard. Still, I think the idea might cause more issues than it's worth. I rarely upgrade a CPU on a mobo; usually a new motherboard is bought whenever I buy a new CPU. So if I could buy a motherboard with the CPU permanently affixed to the board and save $25-50, I'd probably do it.
 
Why? Why slots, you ask?

Because all the Linux and open-source fanboys would be in a rage.
 
Definitely a lot of good responses. In fact, it almost sounds like there are many advantages to having the flexibility of a card.

I guess my thought process was that the CPU naturally evolved to a socket configuration with slotted memory. Wouldn't it be logical for the GPU to follow the same course?

I assumed Nvidia and AMD made most of their money from the actual GPU, since most cards are made by a third party (ASUS/Gigabyte/MSI/etc.). Many of those card builders are also motherboard builders. It would be like integrating their card designs into their own motherboards, still interfacing with the PCIe bus the same way.

The post about a drop-in GPU on the card is close to what I was thinking, except take the card and integrate it into the mobo.

Yes, it would mean four different types of motherboards: AMD/AMD, AMD/Nvidia, Intel/AMD, Intel/Nvidia. But aren't we pretty close to that as it is?

I dunno, I was just thinking about how hot my dang video cards run, and really wishing cooling them was as easy as it is with my CPU.
 
Why not just a faster USB interface? Then your graphics "card" could be cooled however you'd like and could just plug in via a cable.
 
They already have that, and it's heavily CPU-bound due to the USB interface and controller; not a wise choice beyond 2D and movies.
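For a sense of scale, the raw link bandwidths alone tell the story (theoretical peaks; real-world throughput is lower):

# Theoretical peak bandwidth of each link, in MB/s
usb2      = 480 / 8      # USB 2.0: 480 Mbit/s -> 60 MB/s
usb3      = 5000 / 8     # USB 3.0: 5 Gbit/s -> 625 MB/s before encoding overhead
pcie2_x16 = 500 * 16     # PCIe 2.0: ~500 MB/s per lane, x16 link -> 8000 MB/s

print(f"USB 2.0: {usb2:.0f} MB/s, USB 3.0: {usb3:.0f} MB/s, PCIe 2.0 x16: {pcie2_x16} MB/s")
# Even USB 3.0 is an order of magnitude short of an x16 slot, and the CPU still has to
# package every frame up for the USB controller on top of that.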
 
Maybe a slightly better idea would be a socket on the graphics card instead of on the motherboard. Still, I think the idea might cause more issues than it's worth. I rarely upgrade a CPU on a mobo; usually a new motherboard is bought whenever I buy a new CPU. So if I could buy a motherboard with the CPU permanently affixed to the board and save $25-50, I'd probably do it.

You have to take a universal design into account, meaning capacitors and the correct circuitry to power GPUs that could draw as little as 20 watts or as much as 300 watts. The added cost would be more than people or companies would be willing to invest in such technology, unless only high-end GPUs were targeted, which defeats the whole purpose.
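For a feel of what that range means for the board's power circuitry (the core voltage is my assumption, roughly 1.1 V like GPUs of that era):

# Current the VRM has to deliver: I = P / V
core_voltage = 1.1   # assumed GPU core voltage, in volts

for watts in (20, 300):
    amps = watts / core_voltage
    print(f"{watts:3d} W GPU -> roughly {amps:.0f} A of core current")
# A universal socket has to be built (and priced) for the ~270 A worst case,
# even if the chip dropped into it only ever draws 20 W.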


You guys, this isn't a technology limitation; it's a business investment with a huge risk involved, and investors and manufacturers aren't going to be willing to take such a chance on unproven, expensive, and possibly proprietary technologies.
 
Intel is incorporating more and more into the CPU. In another 10 years or so you're going to plug your monitor and keyboard into just a chip.


We're getting there...

($25 Raspberry Pi - plays Quake 3 in HD)

 
Until you realize the 7900 series is replacing its GDDR5 memory with XDR2... so that ruins that theory.

But the bigger issue is that there are too many parts that can fail within a single item. If for some reason the dedicated GPU memory dies, you have to replace the entire board. If the GPU socket goes bad, you have to replace the entire board. If you accidentally bend a pin, you have to replace the entire board. It's too high of an investment for the consumer, and way too high of an investment for motherboard manufacturers when it comes to money being lost to RMAs.

The secondary issue is that there are just too many pieces of hardware on a single board, and you start running into space issues for traces, VRMs/MOSFETs, memory location, and socket location. There's a reason desktops haven't changed much in the last 20 years: the design is simple and it works, so why make it even more complex?

The specific architecture and its specs that I gave were just examples, so that part of your argument doesn't apply. It's the same as AMD/Intel going to a new socket type on their next-gen CPUs; it's something that would have to be taken into account during R&D, just like CPU manufacturers do now.

Part failures as an argument doesn't make sense either when you look at the streamlined efficiency of the entire system. Instead of having one line of people for MB RMAs and another for GPU RMAs, it's all one department, which could help a company save money by having fewer employees (please don't flame me for the job-cutting argument that could be had). Failure rates on any specific part of this theoretical MB setup wouldn't really matter either once the design and production processes mature, since they would be using the same components already in use on current PCBs. And even if the failure rates went from (for example) 5% on dedicated motherboards and 5% on GPUs to 15% on the combined boards, I'm willing to bet the money saved from lower overhead would still make up for the additional 5% workload.
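Here's one way to actually run that bet; every figure below (prices, failure rates) is made up purely to show the arithmetic:

# Expected per-unit RMA cost = failure rate * cost of the item that gets replaced
mobo_price, gpu_price = 150, 350     # assumed prices for separate parts
combined_board_price  = 500          # assumed price of the integrated board
mobo_fail, gpu_fail   = 0.05, 0.05   # the 5% / 5% example rates above
combined_fail         = 0.15         # the 15% example rate above

separate_rma = mobo_fail * mobo_price + gpu_fail * gpu_price   # $25 per unit sold
combined_rma = combined_fail * combined_board_price            # $75 per unit sold

print(f"Expected RMA cost: separate ${separate_rma:.2f} vs. combined ${combined_rma:.2f}")
print(f"The combined board wins only if overhead savings exceed "
      f"${combined_rma - separate_rma:.2f} per unit")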

I will agree with you, though, that space constraints would definitely be an issue. Though I wonder: they can make a full-featured (as far as I know) micro-ATX motherboard that does all the same stuff as a regular ATX board, just with fewer expansion slots, so why couldn't they stick to the same design and use the saved space for the GPU stuff? I mean, this kind of board would be geared towards enthusiasts, and most of us only use maybe 1 or 2 expansion slots for things other than GPUs. Hell, on the motherboard I'm using, all I'm using is 3 PCIe slots for the GPUs and 1 for a dedicated sound card, and I still have 3 more expansion slots sitting empty on a standard ATX form factor. Imagine removing all the expansion slots except 1-2 and using the rest of that saved space for the onboard socketed GPU.

There are obviously a lot of issues involved with doing something like this; as I said before, if it were cost-effective for the third-party manufacturers to do, they would have done it already.
 