Exactly this. The hard part for most consumers is that they don't understand the lanes relative to the generation. But if you're remotely an enthusiast you'll figure it out.

Probably because when PCIe 1.0 was mainstream you could use all the bandwidth. Now it's more just backward compatibility (on lower-end cards that don't need the bandwidth).
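For the lanes-vs-generation arithmetic, here's a quick sketch (Python; the per-lane figures are the usual approximate effective rates after encoding overhead, so treat the exact numbers as ballpark):

```python
# Approximate effective bandwidth per PCIe lane, one direction, in GB/s
# (after 8b/10b or 128b/130b encoding overhead; ballpark figures).
PER_LANE_GBPS = {
    "1.0": 0.25,
    "2.0": 0.5,
    "3.0": 0.985,
    "4.0": 1.969,
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """One-way link bandwidth in GB/s for a given generation and lane count."""
    return PER_LANE_GBPS[gen] * lanes

# The lanes-vs-generation point: a 3.0 x8 link moves about as much data
# as a 2.0 x16 link or a 1.0 x32 link.
print(link_bandwidth("3.0", 8))   # ~7.9 GB/s
print(link_bandwidth("2.0", 16))  # ~8.0 GB/s
```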
It’s up to a 15% improvement from x8 to x16 on 3.0, and I imagine more if you’re dumb enough to go with multi-GPU, so why not?
https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/3.html
That's only on Assassin's Creed: Origins. In every other game they tested, it's below a couple percent performance drop.
Does that mean PCIe 3.0 x8 is on its last legs? Sure! But it's not out of date yet!
The full x16 3.0 slot (single GPU) has several years remaining.
Motherboard power is probably the most important criterion. The PCIe spec limits slot power by slot length.
https://en.wikipedia.org/wiki/PCI_Express#Power
x1 cards are limited to: 6W @ 12V, 10W @ 3.3V, 10W total.
x4/x8 cards are limited to: 25W @ 12V, 10W @ 3.3V, 25W total.
x16 cards are limited to: 65W @ 12V, 10W @ 3.3V, 75W total.
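A minimal sketch of those limits in code, using the numbers from the list above (the x4/x8 row is collapsed into both keys; illustrative only, not spec text):

```python
# Slot power limits by card width: (12V watts, 3.3V watts, total watts),
# taken from the list above — an illustrative summary, not the spec itself.
SLOT_POWER = {
    1:  (6,  10, 10),
    4:  (25, 10, 25),
    8:  (25, 10, 25),
    16: (65, 10, 75),
}

def runs_on_slot_power(lanes: int, card_watts: float) -> bool:
    """Can a card with this draw get by on slot power alone?"""
    return card_watts <= SLOT_POWER[lanes][2]

print(runs_on_slot_power(16, 75))   # True  — e.g. a 75W card with no aux plug
print(runs_on_slot_power(16, 250))  # False — needs auxiliary connectors
```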
Because the pinout for power delivery doesn't change from x1 to x16, I presume this was done so that board designers could make full-size ATX boards without needing to make sure they could deliver >500W to the bottom half of the board.
The 10W @ 3.3V part is what legacy PCI cards ran on; keeping that power in the new spec made initial card porting easier, since it was possible to keep the old design almost the same and just insert a PCI-to-PCIe bridge chip onto the card.
Too true. Modern cards may be considered more efficient, but their hunger for power keeps growing. The data throughput means almost nothing at this point, since we still can't really saturate the lanes with GPU data, though 2.0 x16 is finally beginning to reach its limits. It could be another 5 years or more before 3.0 reaches its, and 4.0 has more potential for SSDs than anything else, but even then the current ones can only use it in very selective ways.
Now that's funny. Might have a hard time finding a case to fit that.

Am I the only one still wondering why we don't have PCIe x64 slots to help push the GPU makers to new realms?
[attachment: a comically oversized GPU]
Now you just need that extra custom case.

Am I the only one still wondering why we don't have PCIe x64 slots to help push the GPU makers to new realms?
GPU-wise, at this point 90-odd % of all users out there seem to be perfectly content with PCIe 2 or 3 @ x8, preferably x16 (more for multi-card usage, or when pairing with onboard graphics that will allow it).
PCIe beyond this has, for the most part, been used for all the extras folks want/may need,
such as extra SATA, USB, M.2 and whatever else tickles their fancy.
For the GPU alone, there's likely no need to go beyond this limit desktop-wise (till we go optical-based, but then the rules change again .. hell, we might go back to 200+nm die sizes).
I don't know crud about servers, but they likely have their own format so they can use a full-width bus to do all the fancy computation that gamers will likely never truly have use of; they keep the upper stupid-$$$$$ stuff for the industry/customers that actually need it, plus more than a few folks with $$$$$$$$$$ and more money than need, really.
I am surprised that somewhere along the line, back when crypto was far more "in your face" than it is now, nobody did a full-blown custom GPU for the sole purpose of mining.
___________________
A CryptoRipper Radeon would be some crazy piece of tech with a price to match it. Beyond this, even the mighty Titan or Nvidia's Ti cards, even the many X2-style cards over the years, do not seem to thrash PCIe speeds nearly as bad as:
M.2
PCIe SSDs
SATA, if they bothered to bump its spec another notch to let SATA-based SSDs get above the current ~550 MB/s (quick math below).
Must be because GPUs do a shit-ton of crunching on tons of data in a short amount of time, BUT they only give out X packets of compressed data that the CPU can easily tear chunks from as it calls for it.
Something along those lines.
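For the curious, the arithmetic behind that ~550 MB/s SATA ceiling (a sketch; the protocol-overhead share is approximate):

```python
# Why SATA SSDs plateau around ~550 MB/s:
line_rate_gbps = 6.0                    # SATA III signalling rate, 6 Gb/s
payload_gbps = line_rate_gbps * 8 / 10  # 8b/10b encoding leaves 4.8 Gb/s
ceiling_mb_s = payload_gbps * 1000 / 8  # = 600 MB/s theoretical maximum
print(ceiling_mb_s)  # 600.0 — protocol overhead eats the rest, hence ~550
```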
As for that big w/e GPU that appears to be: it was not THAT long ago we were using IBM PC compatibles, and many of their add-in cards make much of what we think of as "huge" really not all that bulky, beyond the much, much better coolers being used these days.
Would be lolz though: 64GB+ RAM (AMD has already done that with 1TB+ on an add-in card, forget the name at the moment),
tens of thousands of shaders (or CUDA cores), likely needing dual 8 + 6 pin, as well as a dedicated case "wall" that serves as its surface-area custom rad ^.^
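Reading "dual 8 + 6 pin" as two 8-pins plus one 6-pin, the power budget for that imaginary monster adds up like this (standard per-connector ratings; the card itself is hypothetical):

```python
# Standard in-spec power sources for a PCIe graphics card, in watts:
SLOT = 75   # x16 slot itself
PIN6 = 75   # each 6-pin auxiliary connector
PIN8 = 150  # each 8-pin auxiliary connector

budget = SLOT + 2 * PIN8 + PIN6
print(budget)  # 450 W in-spec ceiling before anything out-of-spec
```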