Why the irregular spacing on PCIE slots on many mobos?

qdemn7

I'm sure there's a logical, or maybe just lazy, reason, because if one wanted to run four video cards on a mobo, it couldn't be done.
 
Not many people ever gamed with 4 cards on a mobo, and if they want to do so nowadays, a riser setup is far better for heat dissipation (IMO). For gaming, it's a completely moot point, since AMD is ditching Crossfire (maybe back with Big Navi? doubtful, IMO) and Nvidia only offers SLI on their top couple of cards, limited to 2 GPUs total anyway. A modern 3+ GPU setup is only (IMO) going to appear in HPC workloads, and those are increasingly done in rack-mounted datacenters, where systems supporting up to 16 GPUs are still in play (along with CPUs that support up to 8 sockets).

I suppose much of the strange spacing nowadays is also down to the popularity of 3-slot GPUs. As a result, it does seem a lot of boards have one x16 slot with lots of clearance and spacing up top, and a bunch of everything else at the bottom.
 
Well, I knew there were reasons. I have been out of the loop for years and am only now starting to catch up. Thanks.
 
...
A modern 3+ GPU setup is only (IMO) going to appear in HPC workloads, and those are increasingly done in rack-mounted datacenters, where systems supporting up to 16 GPUs are still in play (along with CPUs that support up to 8 sockets).

....
So who makes the motherboards for those systems? ASUS, Gigabyte, or companies we never hear about?
 
Can you show what you think is an irregularly spaced motherboard? I can maybe explain a use case and why it isn't irregular in a modern context. Not many people run 4 GPUs because on consumer platforms there aren't even enough PCIe lanes to distribute for that kind of thing. There used to be boards with PLX chips that provided more PCIe lanes, but with that came higher latency. That approach has generally fallen out of favor on all consumer platforms, and it's hard to find boards with it anymore. Not to mention Crossfire / SLI / NVLink support has been getting worse over the years, and the scaling just isn't worth it.
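
If you're curious how the lanes actually end up being split on a given board, one rough way to check (a minimal sketch, assuming Linux and the standard sysfs PCI attributes; not something from this thread) is to print the negotiated link width for each PCIe device:

# Minimal sketch (Linux only): print the negotiated link width and speed
# for every PCI device that exposes the standard sysfs attributes.
# On a consumer board with several cards installed, the "x16" slots
# often show up as x8 or x4 because the CPU doesn't have the lanes.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    width_path = os.path.join(dev, "current_link_width")
    speed_path = os.path.join(dev, "current_link_speed")
    if os.path.exists(width_path) and os.path.exists(speed_path):
        with open(width_path) as f:
            width = f.read().strip()
        with open(speed_path) as f:
            speed = f.read().strip()
        print(f"{os.path.basename(dev)}: x{width} @ {speed}")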

Moving into the world of HPC / high-end servers / high-end workstations, etc., you're more likely to see something like what you're expecting.
 
I'm sure there's a logical, or maybe just lazy, reason, because if one wanted to run four video cards on a mobo, it couldn't be done.
The short answer is that almost no one actually wants to run four graphics cards, so that's not generally a design consideration. Also, the typical consumer CPU doesn't have enough PCI-E lanes to make use of multiple x16 slots anyway.

On boards meant for CPUs that have enough PCI-E lanes to justify many PCI-E slots, the spacing is generally more even (see: server boards, EVGA SR-2 type boards, etc.).
 
Standard mainstream boards using consumer chipsets like Z390 and X570 simply aren't designed to accommodate four graphics cards. Even motherboards like MSI's MEG X570 GODLIKE are problematic. The slots are there, and physically spaced nicely. Unfortunately, you need a chassis that supports hanging a GPU off the edge of the motherboard PCB given the location of that last slot. Even then, there are real concerns as to whether the PCIe slots could deliver the full 75 W each (4 x 75 W = 300 W through the board) that running four graphics cards would require.

You have to understand that every motherboard out there includes various compromises to make things work. You either make sacrifices to keep the price down or sacrifice spending money on one feature to include another. Sometimes you sacrifice the layout of the PCIe slots to allow for more M.2 slots or whatever. The ATX form factor is quite restrictive, but changing to something else would be problematic. It was tried with BTX, but BTX really brought no advantages to the table where this topic is concerned.
 
So who makes the motherboards for those systems? ASUS, Gigabyte, or companies we never hear about?
ASUS, Gigabyte, ASRock, Supermicro, and Tyan all "directly" manufacture boards for GPU hosts, and Nvidia, Facebook, Microsoft, etc. all design boards for them, too.


EDIT: Dan_D, what do you think the chances are of anything like OCP or something similar (though not identical) to Nvidia's socketed mezzanine connector coming into the consumer space? It seems to be a friendlier solution for high TDP monsters, though obviously puts a lot more demand on motherboard designers, especially w.r.t. scaling down to lower end systems that don't need 400W going through a socket.
 
You could run 4 single-slot GPUs if you needed that many monitors, which would be the only use case for it.
 
I'm sure there's a logical, or maybe just lazy, reason, because if one wanted to run four video cards on a mobo, it couldn't be done.
This has a lot to do with GPU cooling. If all of the PCI-E x16 slots had regular spacing, the ONLY way to do it within ATX would be to have seven such slots located right against one another. That would severely restrict GPU compatibility to cards with single-slot coolers (i.e., extreme low-end GPUs). That's not what you want.

In addition, not all higher-end GPUs have double-slot coolers. Some have coolers that take up three or even four slots.
 
Multiple GPUs are very useful for rendering, such as with V-Ray (some V-Ray benchmarks with different combos and numbers of GPUs):
https://www.pugetsystems.com/labs/articles/V-Ray-Next-Multi-GPU-Performance-Scaling-1559/

They're also very useful for AI work or development, and if you're not in a data center, you can get away with using Nvidia gaming cards, which are just as capable, at 1/10 the cost for AI projects using CUDA:
https://timdettmers.com/2019/04/03/which-gpu-for-deep-learning/
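
As a quick sanity check on a box like that, something along these lines (a minimal sketch assuming PyTorch with CUDA support is installed; not from the linked article) lists the GPUs the framework actually sees:

# Minimal sketch, assuming PyTorch with CUDA support is installed:
# enumerate every GPU visible to the framework and its memory size,
# which is usually the first thing to check on a multi-GPU box.
import torch

if not torch.cuda.is_available():
    print("No CUDA devices visible")
else:
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"cuda:{i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")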

If you have the RAM, core count, and bandwidth, then multitasking (such as rendering with 2 to 3 GPUs while using another GPU for gaming or other tasks, true multitasking workloads) is very useful. Until recently, there was also crypto mining, where you could never have enough GPUs. I used to play games while the second GPU in the machine was mining, and that was just on a consumer motherboard. At this point, mining with GPUs just isn't worth it or profitable enough for most people. What's funny is that mining is moving back to CPUs, where it's more profitable.

Risers or cables can be used in the PCIe slots to position extra GPUs wherever you want, even externally if need be. In reality, you could use every PCIe slot if the motherboard supports all of them being used at once.
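
For splitting cards between jobs like that, the usual knob for CUDA apps is the CUDA_VISIBLE_DEVICES environment variable; here's a minimal sketch (assuming a PyTorch-based compute job, purely as an illustration) that pins the work to the second GPU so the first stays free for gaming:

# Minimal sketch: dedicate the second GPU (index 1) to this compute job
# so the first GPU stays free for gaming or display output.
# CUDA_VISIBLE_DEVICES has to be set before CUDA is initialized.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # imported after the env var, so it only sees GPU 1

if torch.cuda.is_available():
    # The remaining GPU is re-indexed as cuda:0 inside this process.
    print("Compute job will use:", torch.cuda.get_device_name(0))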
 
