Why are video cards still PCIe x16?

Why are all those PCIe lanes used on a video card when they don't improve performance? Except for some edge cases, an x4 or x8 connection has been shown not to affect performance at all.
 
Probably because when PCIe 1.0 was mainstream you could use all the bandwidth. Now it's more just backward compatibility (on lower end cards that don't need the bandwidth).
 
Exactly this. The hard part for most consumers is that they don't understand lane count relative to the generation. But if you're remotely an enthusiast, you'll figure it out.
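To put rough numbers on that, here's a minimal Python sketch (the per-lane rates are taken from the published PCIe specs, after encoding overhead; everything else is just illustration):

```python
# Approximate usable one-way bandwidth per lane, in GB/s, after encoding
# overhead (8b/10b for gens 1-2, 128b/130b for gens 3-4).
PER_LANE_GBS = {1: 0.25, 2: 0.50, 3: 0.985, 4: 1.969}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# A 3.0 x8 link already carries roughly twice what a 1.0 x16 link did,
# which is why lane count means little without knowing the generation.
for gen, lanes in [(1, 16), (2, 16), (3, 8), (3, 16), (4, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
```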
 
No real good reason for the most part. Nothing is stopping you from running a GPU at x8; a good number of motherboards let you select x16 or x8/x8 or similar settings to assign the lanes you need to each slot. Most people don't understand this concept (as you've figured out). Imagine what John Doe would do if he found an AMD card with PCIe x16 and an NVIDIA card with only x8... it doesn't matter whether the NVIDIA card is faster or not, it just doesn't sound as impressive.
 
The question is: why not? All motherboards have an x16 slot, which will work with any card. Why complicate matters?

A basic keyboard uses virtually no bandwidth, but we aren't going to call for the return of USB 1.1.
 
Each new PCIe spec has always been overkill, so you have growth built into the standard, or so an enthusiast can split the lanes up. If you had to buy a new motherboard every time you upgraded your video card, it wouldn't be very well designed, right?

Some custom systems may use cut-down options, but they are usually not intended to be upgraded.

PCIe targets the "overkill" x16 option because it allows the most bandwidth growth, and motherboards offer multiple x16-sized slots because of the extra pins for 75 W power delivery. Both the power delivery and the minimum number of lanes have to match what your graphics card needs. Also, recall that server devices (compute-focused GPUs, NVMe SSDs) are also the target of these "overkill" standards: it's cheaper to make them universal and just add more lanes to the server processors.

PCIe 1.0 was overkill when it first came out in 2004, just like PCIe 4.0 was overkill when it came out last week.
 
It's up to a 15% improvement from x8 to x16 3.0, and I imagine more if you're dumb enough to go with multi-GPU, so why not?

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/3.html

That's only on Assassin's Creed: Origins. In every other game they tested, the performance drop is below a couple of percent.

Does that mean that PCIe 3.0 x8 is on its last legs? Sure! But it's not out of date yet!

The full x16 3.0 slot (single GPU) has several years remaining.
 

If a few % is to be gained, there is no reason to use anything less than x16. And I wouldn't say x8, or even x4 or x1, are on their last legs any more than x16 3.0 is. It's just fewer lanes, and users need to allocate those lanes as they see fit, because MANY devices don't need x16, and unfortunately on the consumer end PCIe lanes are still pretty finite.
 

AC was only 10% IIRC. It was a different game at 15%.

People pick different brands for 10%... why would you throw away performance? I'll take a few % more any day. Most OCs are within 5-10%, but we spend tons of extra money to get them.
 
Motherboard power is probably the most important criterion. The PCIe spec limits slot power by length.

https://en.wikipedia.org/wiki/PCI_Express#Power

x1 cards are limited to: 6 W @ 12 V, 10 W @ 3.3 V, 10 W total.
x4/x8 cards are limited to: 25 W @ 12 V, 10 W @ 3.3 V, 25 W total.
x16 cards are limited to: 66 W @ 12 V, 10 W @ 3.3 V, 75 W total.

Because the pinout for power delivery doesn't change from x1 to x16, I presume this was done so that board designers could make full-size ATX boards without needing to make sure they could deliver >500 W to the bottom half of the board.

The 10 W @ 3.3 V part is what legacy PCI cards ran on; keeping that rail in the new spec made initial card porting easier, in that it was possible to keep the old design almost the same and just add a PCI-to-PCIe bridge chip to the card.
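For reference, here is a rough sketch of those limits as a lookup (values from the Wikipedia section linked above; the 75 W card example is just an assumption for illustration):

```python
# Slot power budgets by connector size, in watts (per the Wikipedia
# PCI Express #Power section linked above).
SLOT_LIMITS_W = {
    "x1":  {"12V": 6,  "3.3V": 10, "total": 10},
    "x4":  {"12V": 25, "3.3V": 10, "total": 25},
    "x8":  {"12V": 25, "3.3V": 10, "total": 25},
    "x16": {"12V": 66, "3.3V": 10, "total": 75},
}

def runs_on_slot_power(card_watts: float, slot: str) -> bool:
    """True if a card with this draw can skip auxiliary power connectors."""
    return card_watts <= SLOT_LIMITS_W[slot]["total"]

print(runs_on_slot_power(75, "x16"))  # True: a 75 W card needs no 6/8-pin
print(runs_on_slot_power(75, "x8"))   # False: it needs the x16 connector's power budget
```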
 

Too true. Even if modern cards are considered more efficient, their hunger for power keeps growing. The data throughput means almost nothing at this point, since we still can't really saturate the lanes with GPU data, but 2.0 x16 is finally beginning to reach its limits. It could be another 5 years, or more, before 3.0 reaches its limits, and 4.0 has more potential for SSDs than anything else, but even then the current ones can only use it in very selective ways.
 

GPU peak power has largely leveled off (for the same reason that CPU power did: cooling without being objectionably loud becomes too hard).

Top-end single-GPU card power by generation (via Wikipedia):
8800 Ultra (2007): 175 W
9800 GTX (2008): 141 W
GTX 285 (2009): 205 W
GTX 480 (2010): 240 W
GTX 580 (2011): 244 W
GTX 680 (2012): 195 W
GTX 780 and Titan (2013): 230 W
GTX 980 Ti (2015): 250 W
GTX 1080 Ti and Titan (2016): 250 W
RTX 2080 Ti and Titan (2018): 250 W

You can do something similar for AMD's cards. Power levels have largely been the same for the last decade, just oscillating up and down a bit based on efficiency, but largely constrained by the need to keep fan noise in check.

You can see larger swings one or two steps down where the company with the weaker cards will crank the power on their mid-level cards like crazy to narrow the performance gap at the cost of higher power bills for heavy users.
 
None of that has to do with PCIe x8 or x16... Most of the cards listed use external power connectors (separate from the PCIe slot), so it's kind of a moot point.
 
Am I the only one still wondering why we don't have PCIe x64 slots to help push the GPU makers to new realms?



[attached image: PCIe64x.jpg]
 
Ah, I did not realize that power delivery scaled with the number of lanes. I thought all slots had the same 75 watt power budget.
Thanks for the info!
 
GPUs at this point, for 90-odd % of all users out there, seem to be perfectly content with PCIe 2.0 or 3.0 @ x8, preferably x16 (more for multi-card usage or when pairing with onboard graphics that will allow it).

PCIe beyond this has, for the most part, been used for all the extras folks want/may need,
such as extra SATA, USB, M.2 and whatever else tickles their fancy.

For the GPU alone there's likely no need to go beyond this limit desktop-wise (till we go optical-based, but then the rules change again... hell, we might go back to 200+ nm die sizes).

I don't know crud about servers, but they likely have their own format so they can use a full-width bus to do all the fancy computation that gamers will likely never truly have use of, so they keep the upper stupid-$$$$$ stuff for the industry/customers that have need of it, plus more than a few folks with more $$$$$ than need of it, really.

I am surprised that somewhere along the line, when crypto was far more "in your face" than it is now, someone did not do a full-blown custom GPU for the sole purpose of mining.

___________________

A CryptoRipper Radeon would be some crazy piece of tech, with a price to match. Beyond that, even the mighty Titan or Nvidia's Ti cards, and even the many X2-style cards over the years, do not seem to thrash PCIe speeds nearly as hard as:
M.2
PCIe SSDs
SATA, if they bothered to bump its spec another notch to let SATA-based SSDs get above the current ~550 MB/s.

It must be because GPUs do a shit-ton of crunching on tons of data in a short amount of time, BUT they only hand back X packets of compressed data that the CPU can easily tear chunks from as it calls for them.

Something along those lines.

As for that big whatever-GPU that appears to be, it was not THAT long ago we were using IBM PC compatibles, and many of their add-in cards make much of what we think of as "huge" today really not all that bulky, beyond the much, much better coolers being used these days.

Would be lolz though: 64 GB+ of RAM (AMD already did that, with 1 TB+ on a card whose name I forget at the moment),
tens of thousands of shaders (or CUDA cores), likely needing dual 8-pin plus a 6-pin, as well as a dedicated case "wall" that serves as the surface area for its custom rad ^.^
 

There were mining-focused cards with 0 or 1 video outputs (presumably the 1 output was to try to give the card some level of post-mining resale value) instead of the ~4 common on conventional cards.

There wasn't a new SATA standard because the SATA protocol itself was a poor fit for SSDs, and the protocol overhead was becoming a significant performance hit. NVMe being built on top of PCIe, instead of creating its own interconnect, was part of the gradual convergence of most consumer-priced interconnects on PCIe/USB, as the cost of designing higher-speed data buses continues to grow with each new generation's speed boosts.
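A quick back-of-the-envelope on why ~550 MB/s is where SATA SSDs top out, and what riding PCIe buys instead (line rates from the SATA III and PCIe 3.0 specs; the protocol overhead mentioned above comes on top of this):

```python
# SATA III: 6 Gb/s line rate with 8b/10b encoding -> ~600 MB/s of payload,
# and real drives settle around ~550 MB/s once protocol overhead is paid.
sata_payload_mbs = 6.0 * 1000 / 8 * (8 / 10)
print(f"SATA III ceiling: ~{sata_payload_mbs:.0f} MB/s before overhead")

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane,
# so a typical x4 NVMe drive gets roughly 3.9 GB/s to play with.
pcie3_lane_mbs = 8.0 * 1000 / 8 * (128 / 130)
print(f"PCIe 3.0 x4 (typical NVMe): ~{4 * pcie3_lane_mbs / 1000:.1f} GB/s")
```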
 
It's up to a 15% improvement from x8 to x16 3.0, and I imagine more if you're dumb enough to go with multi-GPU, so why not?

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/3.html

Note that older games see a boost with x16 while newer games see no change. Why might this be? Well, try watching your bus usage while you're gaming. There are only two very specific times when you'll see that utilization exceed 10%: when you initially load the game, and when you initially load a map. This is because, while you're gaming, very little data has to move between CPU and GPU, except when textures have to be loaded onto the GPU. Newer games are built around GPUs with more VRAM, so they pre-load more data. Older games had to deal with GPUs under 6 GB and thus couldn't cache as much.
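To illustrate the load-time point with rough numbers (the 6 GB figure and the per-lane rate are just assumptions for the sketch):

```python
# Time to push a hypothetical 6 GB of textures/assets to the GPU over
# PCIe 3.0 at different link widths (0.985 GB/s per lane after encoding).
PCIE3_PER_LANE_GBS = 0.985

def upload_seconds(gigabytes: float, lanes: int) -> float:
    return gigabytes / (PCIE3_PER_LANE_GBS * lanes)

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{upload_seconds(6, lanes):.2f} s to move 6 GB")

# x16 halves the loading hiccup versus x8, but once VRAM is populated the
# steady-state CPU<->GPU traffic is a tiny fraction of either link.
```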

So why then do motherboards and cards all have x16 connectors? Quite simply, it is a better connector. More power is available, it's far more ubiquitous, and it's physically stronger, so it can support today's massive heatsinks much better.

Despite having x16 connectors, today's GPUs all run great at lower lane counts. This capability is critical when running multiple GPUs, multiple NVMe SSDs, or a bunch of PCIe expansion cards, given how limited the lane counts are in consumer CPUs. (A rough lane-budget sketch follows below.)
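As a rough picture of how tight that budget is, here's a hypothetical allocation sketch (the 16 + 4 lane split is an assumption; actual counts vary by platform):

```python
# Hypothetical consumer platform: 16 CPU lanes for graphics plus 4 for one
# CPU-attached NVMe drive; everything else hangs off the chipset uplink.
GPU_LANES = 16
NVME_LANES = 4

def split_for_two_gpus(gpu_lanes: int):
    """Bifurcate the graphics lanes when a second x16-sized card is added."""
    return gpu_lanes // 2, gpu_lanes // 2

print(split_for_two_gpus(GPU_LANES))  # (8, 8): both cards drop to x8
# Add a second NVMe drive or a capture card and it lands on chipset lanes,
# sharing that uplink - which is why cards have to be happy running at x8/x4.
```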
 