Planning my next build - when will nVidia support PCI-E 4?

I'm wondering if I'm the only one who sees PCIe 4.0 as the "version to skip" and PCIe 5 as the next big target. Does anyone else feel this way?
I don't see that it matters; for those who need to purchase, you buy what's on offer.
You can always wait for the next performance boost but it could be quite a long wait.
Why do you want to skip it?
 
Holy shit please don’t ever repeat that. You’re contributing to the downfall of human intelligence.

Irregardless has never been and should never be a real word in the English language.

Great, you disagree with its use; that doesn't change its meaning. It's been around since the 1800s and is still used today, (ir)regardless of whether you like it or not. The man asked what it meant and I answered. If you want to contemplate what words should or should not be used in the English language, maybe a different forum would be a better place. Btw, I prefer regardless, but I don't look at people any differently if they use the other term.

Back to topic, I think the best thing about PCIe 4.0 is that you can put two GPUs in at 4.0 x8 each and have the same bandwidth as full x16 slots. By itself a single PCIe 4.0 GPU at x16 doesn't gain much (anything?) over version 3.0. My hope is this will slowly reduce the number of lanes required, as you would only need x2 for an NVMe drive to hit the same speeds as current v3 x4. Network cards could either gain bandwidth or use fewer lanes, USB, etc. It's incremental with PCIe version 5 coming up, but we still aren't saturating 3.0, so I feel I have time before 4.0 is deprecated.
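A quick back-of-the-envelope check of that claim (a rough sketch only; per-lane rates are the published 8/16/32 GT/s transfer rates with 128b/130b encoding, and protocol overhead is ignored):

```python
# Approximate usable PCIe throughput per direction, per generation and lane count.
GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}  # giga-transfers per second per lane

def gbytes_per_s(gen: str, lanes: int) -> float:
    """Usable GB/s: GT/s * (128/130 encoding) / 8 bits-per-byte * lane count."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

print(f"3.0 x16: {gbytes_per_s('3.0', 16):5.1f} GB/s")  # ~15.8 GB/s
print(f"4.0 x8 : {gbytes_per_s('4.0', 8):5.1f} GB/s")   # ~15.8 GB/s -> matches 3.0 x16
print(f"3.0 x4 : {gbytes_per_s('3.0', 4):5.1f} GB/s")   # ~3.9 GB/s (typical NVMe link)
print(f"4.0 x2 : {gbytes_per_s('4.0', 2):5.1f} GB/s")   # ~3.9 GB/s -> matches 3.0 x4
```

So x8 at 4.0 lands on the same number as x16 at 3.0, and x2 at 4.0 matches a 3.0 x4 NVMe link, which is where the "same speed with half the lanes" idea comes from.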
 
I'm wondering if I'm the only one who sees PCIe 4.0 as the "version to skip" and PCIe 5 as the next big target. Does anyone else feel this way?

Once we hit PCIe 3.0, the bottlenecks pretty much lifted. After that, power users needed more lanes to support more devices, rather than more bandwidth per lane. Both Intel and AMD have been increasing the number of lanes available through the CPU socket as well as through the chipset PCIe hub in recent iterations.

And the biggest challenge isn't really the speed of the lanes on the board, but the speed used by the devices you want to attach to them. Increasing per-lane bandwidth with PCIe 4.0 and PCIe 5.0 (which is actually already in use elsewhere) is still welcome, as it's certainly nice to be 'ahead' of the curve, but it's not really a decision point at this time.
 
I don't see that it matters; for those who need to purchase, you buy what's on offer.
You can always wait for the next performance boost but it could be quite a long wait.
Why do you want to skip it?

Actually, what I'm suggesting is that vendors as a whole will skip it with PCIe 5 banging on the door.
 
The best thing about an Intel PCIe 4.0 mainboard, assuming Intel sticks to its limit of 16 total PCIe lanes available for add-in cards directly from the CPU (after the PCH, SATA, USB, etc.), would be the ability not to bottleneck the GPU with an x8/x8 split, or even x4, if you ran multiple NVMe drives, a RAID card, a 4K capture card, 10GbE NICs, or whatever else you need PCIe lanes for.
For that reason I still buy HEDT boards, but AMD has shown that with 20 (16+4) direct-from-CPU lanes instead of 16, plus PCIe 4.0, mainstream desktop can be good for a lot more scenarios. (Where X570 fails is pricing compared to HEDT, IMHO.)
 
The best thing about an Intel PCIe 4.0 mainboard, assuming Intel sticks to its limit of 16 total PCIe lanes available for add-in cards directly from the CPU (after the PCH, SATA, USB, etc.),

They haven't stuck to 16 lanes off the CPU for a few revisions now. You'll need to check the chipset of interest to be sure, but it may be up to 24 lanes.
 
Intel mainstream has a total of 24 lanes. 4 for DMI, 4 for peripherals (SATA, USB3 etc) and 16 for general PCIe cards.
That has been the case for a while now. Some mobos use switches etc, or give you shared lanes with DMI or whatever, but only 16 lanes are dedicated for add-in PCIe.

To clarify, DMI 3.0 is a total of 4x PCIe 3.0-equivalent bandwidth shared by everything that comes off of it, but some mobo makers hang lots of lanes off of it. Even if you run 24 lanes off of DMI, only 4x worth is talking to the CPU at any given time.

https://www.guru3d.com/news-story/intel-z390-chipset-product-brief-and-block-diagram-posted.html
 
Intel mainstream has a total of 24 lanes. 4 for DMI, 4 for peripherals (SATA, USB3 etc) and 16 for general PCIe cards.
That has been the case for a while now. Some mobos use switches etc, or give you shared lanes with DMI or whatever, but only 16 lanes are dedicated for add-in PCIe.

To clarify, DMI 3.0 is a total of 4x PCIe 3.0-equivalent bandwidth shared by everything that comes off of it, but some mobo makers hang lots of lanes off of it. Even if you run 24 lanes off of DMI, only 4x worth is talking to the CPU at any given time.

https://www.guru3d.com/news-story/intel-z390-chipset-product-brief-and-block-diagram-posted.html
Z170 had 20 DMI lanes; Z270 and Z370/Z390 all have 24. Modern Intel mainstream has 16 CPU lanes + 24 DMI lanes. They haven't had only 8 since the Z97 chipset. Even AMD, with its 16+4 setup from the CPU, only has 16 total available for video cards. How many people who are gaming are running more than one video card these days? Regardless, as mentioned earlier in the thread, we're barely maxing out 8 lanes of PCIe 3.0 right now.
 
The DMI interface only has the equivalent of 4 PCIe 3.0 lanes talking to the CPU; it doesn't matter how many lanes you multiply out of it. It was worse in the past, when it only had 4x PCIe 2.0 lanes' worth of bandwidth, but now we have 10 Gbit USB-C ports, NVMe M.2, Ethernet, SATA and SATA Express all sharing that 4x link. Mobos even drive PCIe slots off of it, but several of those choices are either/or - there's not enough bandwidth to drive them all at full speed.
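To put rough numbers on that sharing, here's a sketch with assumed, typical peak figures for a few chipset-attached devices (the exact mix depends on the board; sustained real-world rates vary):

```python
# Rough oversubscription check for a DMI 3.0 uplink (x4 PCIe 3.0-equivalent).
DMI_3_0_GB_S = 8 * (128 / 130) / 8 * 4   # ~3.9 GB/s shared uplink to the CPU

# Assumed peak demand per device class, in GB/s (illustrative figures only).
devices_gb_s = {
    "NVMe M.2 (PCIe 3.0 x4)": 3.9,
    "USB-C 10 Gbit port":     1.25,
    "SATA 6 Gb/s SSD":        0.55,
    "Gigabit Ethernet":       0.125,
}

total = sum(devices_gb_s.values())
print(f"Downstream peak demand: {total:.2f} GB/s")
print(f"DMI 3.0 uplink:         {DMI_3_0_GB_S:.2f} GB/s")
print("Oversubscribed" if total > DMI_3_0_GB_S else "Fits")
```

A single full-speed NVMe drive alone can saturate the uplink, which is why those "either/or" sharing choices exist, even if everything rarely peaks at once.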
 
This is where PCIe 4.0 could come in handy... more bandwidth with fewer lanes. You can run your GPU at PCIe 4.0 x8 and then you have 8 lanes left for some extra NVMe drives, 10GbE, or similar. Most people will never use them or care, but others who don't want to go into the HEDT market get a few more options.
 
The DMI interface only has the equivalent of 4 PCIe 3.0 lanes talking to the CPU; it doesn't matter how many lanes you multiply out of it. It was worse in the past, when it only had 4x PCIe 2.0 lanes' worth of bandwidth, but now we have 10 Gbit USB-C ports, NVMe M.2, Ethernet, SATA and SATA Express all sharing that 4x link. Mobos even drive PCIe slots off of it, but several of those choices are either/or - there's not enough bandwidth to drive them all at full speed.

I'd say that one would have to set up a test case to find a situation where all of the bandwidth available to devices off of the southbridge would flood the channel to the northbridge on a consumer setup.

If you have a use case for that much bandwidth concurrently, you should be looking at HEDT or enterprise hardware, really.

This is where PCIe 4.0 could come in handy... more bandwidth with fewer lanes. You can run your GPU at PCIe 4.0 x8 and then you have 8 lanes left for some extra NVMe drives, 10GbE, or similar. Most people will never use them or care, but others who don't want to go into the HEDT market get a few more options.

This assumes that all peripherals are updated, and well, that just doesn't happen. GPUs, sure; everything else? Plenty of 'modern' stuff is still on PCIe 2.0.
 
I'd say that one would have to set up a test case to find a situation where all of the bandwidth available to devices off of the southbridge would flood the channel to the northbridge on a consumer setup.

If you have a use case for that much bandwidth concurrently, you should be looking at HEDT or enterprise hardware, really.



This assumes that all peripherals are updated, and well, that just doesn't happen. GPUs, sure; everything else? Plenty of 'modern' stuff is still on PCIe 2.0.
Hence I said 'could'. I agree, for most users this won't be useful. In theory it means you could get the same performance with half the lanes... in reality, you could still make some use of it, like an x8 GPU and an NVMe card to have a couple of full-speed NVMe drives installed, but in most cases there's not much available to actually put it to use. Although the 4 lanes to the DMI (at 4.0) give it the same bandwidth as 8 lanes of PCIe 3.0, so even though it's shared, it either uses fewer lanes or gives you more bandwidth for USB, NICs, etc.
 
Biggest issue is that in most cases you'd need a converter to go from 1x PCIe 4.0 lane to 2x PCIe 3.0 lanes, and implementing that on the board or on a part would likely be more expensive than just replacing the part, considering the limited demand for such solutions.
 
PCIe 4.0 really isn't worth worrying about at all, imho.

I'm currently running my Titan Xp in x8 3.0 mode to accommodate an SSD on its 4x PCIe connection in PCIe slot 2. I've not seen any significant performance drop in graphics as a result. Tells me nothing is really gonna need an amount of bandwidth that PCIe 3.0 can't handle any time soon.
 
There aren't enough benefits for moving GPUs to PCIe 4.0, and it is too small a market to justify the expense. Existing cards don't even fully push 3.0; adding 4.0 support would only add heat and expense to the cards while adding no noticeable benefits for the user. PCIe 5.0 will be a different story: it will be supported in the Intel server space, has similar heat and power draw to 4.0, and offers enough bandwidth that it could feasibly be used as the GPU interconnect for multi-GPU solutions. 5.0 will be a game changer; 4.0 will be a footnote.
 
There aren't enough benefits for moving GPUs to PCIe 4.0, and it is too small a market to justify the expense. Existing cards don't even fully push 3.0; adding 4.0 support would only add heat and expense to the cards while adding no noticeable benefits for the user. PCIe 5.0 will be a different story: it will be supported in the Intel server space, has similar heat and power draw to 4.0, and offers enough bandwidth that it could feasibly be used as the GPU interconnect for multi-GPU solutions. 5.0 will be a game changer; 4.0 will be a footnote.
I'm pretty sure he asked because AMD GPUs already support it... So it's not that GPUs won't move, it's Nvidia that's waiting (due to the lack of a compelling reason).
 
Given that AMD has an ever-growing presence in servers, I expect Nvidia to at least have PCIe 4 for the data centers as needed with Ampere, besides NVLink's superior overall bandwidth of 300 GB/s versus PCIe 4's 64 GB/s bidirectional.

I would also expect PCIe 4 at least on Ampere high-end gaming cards, even though PCIe 3 at x16 would, I think, be very much sufficient and not restrictive to gaming. PCIe x8 for the 3080 Ti (whatever it will be called) is a configuration I personally would balk at, as in using both PCIe x16 slots on an X370, Z390, etc. motherboard.
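For scale, a quick comparison of the two interconnect figures quoted above (a rough sketch; NVLink totals differ by GPU generation and number of links, and PCIe protocol overhead is ignored):

```python
# Bidirectional bandwidth: PCIe 4.0 x16 versus the NVLink figure quoted above.
def pcie_bidir_gb_s(gt_per_s: int, lanes: int) -> float:
    """Both directions combined, 128b/130b encoding, overhead ignored."""
    return gt_per_s * (128 / 130) / 8 * lanes * 2

pcie4_x16 = pcie_bidir_gb_s(16, 16)   # ~63 GB/s, usually rounded to 64 GB/s
nvlink    = 300.0                     # GB/s, as quoted in the post above

print(f"PCIe 4.0 x16 bidirectional: ~{pcie4_x16:.0f} GB/s")
print(f"NVLink (quoted):             {nvlink:.0f} GB/s  (~{nvlink / pcie4_x16:.1f}x)")
```

Even with PCIe 4.0, NVLink still carries several times the bandwidth, which is why the data-center interconnect argument is somewhat separate from the slot-interface question.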
 
I don't think they're waiting so much as they'll have it on the next release cycle. Turing predates Navi by what, a year or so?
Ergo, they are waiting. It wouldn't have had a point when the 2080 was released. If they feel there is a slight chance to gain any $ from implementing it, then you will see it on their next release. If they don't, then they won't. It may make more sense from a data center point of view to be able to install multiple GPUs while using fewer lanes for GPU workloads, or possibly for faster transfer of data in and out of the GPU. I imagine if they decide to skip it with this release they will probably wait on PCIe 5. I hadn't heard much either way, but we'll probably find out more about their data center cards in March.
 
If they feel there is a slight chance to gain any $ from implementing it, then you will see it on their next release. If they don't, then they won't.

It's honestly difficult to imagine them not including it in the architecture, especially since AMD is marketing support on Navi+.
 
There aren't enough benefits for moving GPUs to PCIe 4.0, and it is too small a market to justify the expense. Existing cards don't even fully push 3.0; adding 4.0 support would only add heat and expense to the cards while adding no noticeable benefits for the user. PCIe 5.0 will be a different story: it will be supported in the Intel server space, has similar heat and power draw to 4.0, and offers enough bandwidth that it could feasibly be used as the GPU interconnect for multi-GPU solutions. 5.0 will be a game changer; 4.0 will be a footnote.
When buying a new system, 4.0 easily secures GPU upgrades for a 5-year period. 3.0 will be left in the dust after a couple of GPU gens.
 
When buying a new system, 4.0 easily secures GPU upgrades for a 5-year period. 3.0 will be left in the dust after a couple of GPU gens.
If the current GPU doesn't see a benefit from 4.0, it won't see one going forward either. They don't have to future-proof; it's backward compatible. That said, I would be surprised if it wasn't supported on their next gen.
 
I'm wondering if I'm the only one who sees PCIe 4.0 as the "version to skip" and PCIe 5 as the next big target. Does anyone else feel this way?
For now even PCIe 2.0 is largely enough for high-end GPUs. The GPU may take advantage of PCIe 4.0's peak data transfer in some situations. AMD has shown this to be useful in some gaming on a low-end graphics card, the RX 5500 with only 4GB of VRAM, comparing PCIe 4.0 support vs PCIe 3.0. Don't expect the next high-end GPUs to take advantage of PCIe 4.0 even if it's supported. Maybe in some multi-GPU use without a supplemental link between GPUs (like NVLink) it may become useful.
 