PCIe 5.0 Is Ready for Prime Time

Megalith

PCIe 4.0 is expected to be a short-lived standard, as PCI-SIG, the organization behind the interface, is already close to releasing finalized specs for PCIe 5.0: the group announced it ratified Version 0.9 this week, leading the way to a final revision in Q1 2019 and potential products by the end of the year. While PCIe 4.0 grants 64 GB/s of throughput (x16, both directions), PCIe 5.0 doubles that to 128 GB/s.
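For anyone who wants to sanity-check those headline figures, here's a quick back-of-the-envelope sketch in Python (my own arithmetic, not from the PCI-SIG announcement) showing how per-lane transfer rates and line encoding turn into the quoted throughput:

# Back-of-the-envelope PCIe bandwidth math (illustrative; the line rates are
# the published ones, the conversion is my own).
GENS = {
    # generation: (line rate in GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def x16_gbps(gen: int) -> float:
    """Usable GB/s in one direction for an x16 link of the given generation."""
    rate, eff = GENS[gen]
    return rate * eff * 16 / 8  # GT/s -> Gb/s -> GB/s

for gen in GENS:
    one_way = x16_gbps(gen)
    print(f"PCIe {gen}.0 x16: ~{one_way:.1f} GB/s per direction, ~{2 * one_way:.0f} GB/s total")

# PCIe 5.0 x16 works out to ~63 GB/s each way (~126 GB/s total), which is
# where the rounded "128 GB/s" headline number comes from.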

PCIe 5.0 also brings other features, like electrical changes to improve signal integrity, backward-compatible CEM connectors for add-in cards, and backward compatibility with previous versions of PCIe. The PCI-SIG also designed the new standard to reduce latency and tolerate higher signal loss for long-reach applications.
 
Data center only. Quad-port 100Gb/s cards, FC, SAS, and NVMe HBAs.

No one will see this in their PC for a while, nor will they need to.

It will probably become mainstream in a few years. As for "no one will need this," it's much the same as with today's PCIe 3.0 and a 48-lane CPU: very few people need it, but that doesn't mean nobody does. Many folks here work from home or have home workstations that serve for both work and gaming, so anyone who does any kind of video production or rendering (for example) could sure as hell use this.
 
Question: will this lead to low-cost video cards that use system memory instead of onboard memory? Would such a card be useless for gaming?
 
AMD: PCIe 4.0 > PCIe 5.0 with a BIOS update
Intel: need a new mobo

I expect both to need a new motherboard because of the signaling change.

PCIe 5.0 also brings other features, like electrical changes to improve signal integrity, backward-compatible CEM connectors for add-in cards, and backward compatibility with previous versions of PCIe.
 
Ok. Data center and workstation. Even then it'll be rare to be fully utilized.

Once it comes and becomes standard in servers, it'll simply be the standard everywhere else as well. It only makes sense to keep production lines standardized.

It'll just be irrelevant in the consumer workspace.
 
Data center only. Quad-port 100Gb/s cards, FC, SAS, and NVMe HBAs.

No one will see this in their PC for a while, nor will they need to.

I would say more PCIe bandwidth at the consumer level is critical. As NVMe prices go down, more and more people will feel the DMI 3.0 bottleneck. Hell, anyone running a 2080 Ti at the consumer level can't use an additional PCIe device without choking their GPU. We probably won't see lane increases at the consumer level, but PCIe 5.0 can compensate for that.
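To put some rough numbers on that DMI point (a sketch with ballpark figures of my own, not measurements): DMI 3.0 is electrically a PCIe 3.0 x4 link, and everything behind the chipset shares it.

# DMI 3.0 bottleneck, back of the envelope.
PCIE3_LANE = 8 * (128 / 130) / 8   # ~0.985 GB/s usable per Gen 3 lane
dmi3_budget = 4 * PCIE3_LANE       # DMI 3.0 ~= PCIe 3.0 x4 -> ~3.94 GB/s shared

# Ballpark peak throughput of devices that typically hang off the chipset:
devices = {"Gen3 NVMe SSD": 3.5, "SATA SSD": 0.55, "10GbE NIC": 1.25}

demand = sum(devices.values())
print(f"DMI 3.0 budget: ~{dmi3_budget:.2f} GB/s, worst-case demand: ~{demand:.2f} GB/s")
# A single fast NVMe drive can nearly saturate the link on its own; add
# anything else and the devices start contending for the same four lanes.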
 
Yep, and as 120Hz, 240Hz+ screens become easier/cheaper to make, all forms of latency reduction are welcome.
 
This is not even finalized, and yet people think we'll see this anytime soon? That is absolutely laughable, as PCIe 4.0 was finalized in October 2017 and we still don't have any consumer boards with it yet.
 
Just like nobody "needs" 10GbE in the home... right?

Nobody needs more than 56k... right?

There will be graphics cards that need this bandwidth, as well as plenty of other devices.

How many of us feel that SATA 3 is enough for our SSDs? Probably not very many. NVMe x4 NAND devices are very desirable, and the difference is tangible.
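The gap is easy to put numbers on (my own figures, assuming the usual encodings):

# SATA 3 vs. NVMe over PCIe 3.0 x4, back of the envelope.
sata3_ceiling = 6.0 * (8 / 10) / 8      # 6 Gb/s line rate, 8b/10b -> ~0.6 GB/s
nvme_x4_gen3 = 4 * 8 * (128 / 130) / 8  # PCIe 3.0 x4 -> ~3.94 GB/s

print(f"SATA 3 ceiling:  ~{sata3_ceiling * 1000:.0f} MB/s")
print(f"NVMe x4 Gen 3:   ~{nvme_x4_gen3:.2f} GB/s (~{nvme_x4_gen3 / sata3_ceiling:.0f}x)")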

Data center only. Quad-port 100Gb/s cards, FC, SAS, and NVMe HBAs.

No one will see this in their PC for a while, nor will they need to.
 
They are not even at (just close to) standards approval. Parts for this stuff are miles away yet. 4.0 may be short-lived, but at least it's "ready," with parts that can ship and controllers that can be adopted, and you aren't even seeing that in products yet. Worse, by the time people get 4.0, the jump to 5.0 is going to feel quite unnecessary for a long time. 4.0 will be plenty for a lot of uses until well into the 2020s.
 
Too bad it's not available today. Might have provided a faster way of exposing whether or not your NVIDIA product was infested with Space Invaders.
 
Question: will this lead to low-cost video cards that use system memory instead of onboard memory? Would such a card be useless for gaming?

A GPU integrated into the CPU, like those that already exist, fills this need.
 
Just like nobody "needs" 10GbE in the home... right?

Nobody needs more than 56k... right?

There will be graphics cards that need this bandwidth, as well as plenty of other devices.

How many of us feel that SATA 3 is enough for our SSDs? Probably not very many. NVMe x4 NAND devices are very desirable, and the difference is tangible.


Bill Gates said no one will ever need it.

I said 'for a while'. PCIe 2.0 x16 doesn't hurt graphics performance by much. NVMe is really the ONLY thing that has outgrown PCIe 3.0 in the home, and only under some VERY narrow use cases. Check out the game load-time benchmarks with NVMe versus something like an 850 EVO. It's faster, yes. To some it's worth it.

Look, I know it's coming. I'm looking forward to it coming. I get as [H]ard as the rest of you and geek out over the new and shiny stuff. BRING IT ON. I just know that it's more for benchmark e-peen than actual tangible results. The use cases I've outlined are what's driving this. When you've got a SAN that's acting as THE hard disk for 3 racks full of hosts, running 5000 virtual machines, that's when this kind of per-plug density is going to make a difference. Users like us are just along for the ride when the wave trickles down.

We don't need to argue about this.
 
No one will see this in their PC for a while, nor will they need to.

I'd argue that the biggest value of this on the desktop will be splitting small numbers of Gen 5 PCIe lanes from the CPU into larger numbers of lower-gen (Gen 2/3) PCIe lanes on the motherboard, so we can use more devices at the same time.
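Rough arithmetic on why that fan-out works (hypothetical topology, my own numbers):

# How many lower-gen lanes can a Gen 5 uplink feed without oversubscription?
def lane_gbps(gen: int) -> float:
    """Usable GB/s per lane, one direction."""
    rate = {2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}[gen]
    eff = 8 / 10 if gen <= 2 else 128 / 130
    return rate * eff / 8

uplink = 4 * lane_gbps(5)  # Gen 5 x4 from the CPU: ~15.75 GB/s
print(f"Gen 5 x4 uplink: ~{uplink:.2f} GB/s")
print(f"Equivalent to ~{uplink / lane_gbps(3):.0f} Gen 3 lanes "
      f"or ~{uplink / lane_gbps(2):.0f} Gen 2 lanes downstream")
# Four Gen 5 lanes carry as much traffic as sixteen Gen 3 lanes, so a chipset
# or switch could hang four Gen 3 x4 NVMe drives off them without oversubscription.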
 
I am hoping a Thunderbolt standard based on either 4.0 or 5.0 gets released soon. External GPUs run over PCIe x4, so the extra bandwidth would be a big win here.
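For scale, assuming Thunderbolt 3's PCIe tunnel is effectively a Gen 3 x4 link (real enclosures often see a bit less than the theoretical ceiling), a hypothetical successor tunneling Gen 5 x4 would match an internal x16 Gen 3 slot:

# eGPU link budgets, back of the envelope (hypothetical Gen 5 Thunderbolt).
lane_gen3 = 8 * (128 / 130) / 8   # ~0.985 GB/s per Gen 3 lane
tb3_egpu = 4 * lane_gen3          # today's eGPU ceiling: ~3.94 GB/s
gen5_x4 = 4 * 4 * lane_gen3       # a Gen 5 lane moves ~4x a Gen 3 lane
internal_x16 = 16 * lane_gen3

print(f"TB3 eGPU (Gen 3 x4):     ~{tb3_egpu:.2f} GB/s")
print(f"Hypothetical Gen 5 x4:   ~{gen5_x4:.2f} GB/s")
print(f"Internal Gen 3 x16 slot: ~{internal_x16:.2f} GB/s")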
 
Question: will this lead to low-cost video cards that use system memory instead of onboard memory? Would such a card be useless for gaming?
I doubt it. Bandwidth is increased, but latency is still limited by the length of the traces. You could reduce latency a bit by taking more advantage of the bandwidth and sending more data at once, but you can't go faster than physics allows, so trace length is always going to set a floor on latency (until they figure that quantum stuff out for consumer hardware). It's a hard minimum, and the only way to get around it would be a major change in technology or board layout.
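A quick flight-time calculation puts a number on that floor (my own sketch; signals in FR-4 PCB traces propagate at very roughly half the speed of light):

# Propagation delay of a PCB trace: a fixed latency floor bandwidth can't remove.
C = 3e8          # speed of light in vacuum, m/s
V_FACTOR = 0.5   # rough velocity factor for FR-4 traces

def trace_delay_ns(length_cm: float) -> float:
    """One-way propagation delay for a trace of the given length."""
    return (length_cm / 100) / (C * V_FACTOR) * 1e9

for cm in (5, 15, 30):
    print(f"{cm:>2} cm trace: ~{trace_delay_ns(cm):.2f} ns one way")
# The flight time is on the order of a nanosecond -- small next to protocol
# and serdes overhead, but it's a hard physical floor no spec revision removes.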
 
I am hoping a Thunderbolt standard based on either 4.0 or 5.0 gets released soon. External GPUs run over PCIe x4, so the extra bandwidth would be a big win here.

I thought Thunderbolt was only an Apple thing. I've never known anyone to use it.
 
Question: will this lead to low-cost video cards that use system memory instead of onboard memory? Would such a card be useless for gaming?

Yes... it might be good for Minecraft.

What you want already exists: it's called an iGPU (Intel) or APU (AMD).

Benchmarks are available in vast amounts.
 
Bill Gates said no one will ever need it.

I said 'for a while'. PCIe 2.0 x16 doesn't hurt graphics performance by much. NVMe is really the ONLY thing that has outgrown PCIe 3.0 in the home, and only under some VERY narrow use cases. Check out the game load-time benchmarks with NVMe versus something like an 850 EVO. It's faster, yes. To some it's worth it.

Look, I know it's coming. I'm looking forward to it coming. I get as [H]ard as the rest of you and geek out over the new and shiny stuff. BRING IT ON. I just know that it's more for benchmark e-peen than actual tangible results. The use cases I've outlined are what's driving this. When you've got a SAN that's acting as THE hard disk for 3 racks full of hosts, running 5000 virtual machines, that's when this kind of per-plug density is going to make a difference. Users like us are just along for the ride when the wave trickles down.

We don't need to argue about this.


Bill Gates also said that 640K of RAM ought to be enough for anyone :p


You have a point though. PCIe bandwidth has almost no impact whatsoever on GPU performance.

My Sandy Bridge-E X79 system launched before PCIe Gen 3 validation was formalized, so while it has the hardware for it, it only officially supports Gen 2. There is a registry edit you can tweak to force the NVIDIA drivers to work in Gen 3 mode. Even with my Pascal Titan X there is absolutely zero difference in performance between Gen 2 and Gen 3 x16. I haven't seen any recent testing with the latest motherboards, CPUs, and GPUs, but if I had to wager a guess, it would be that this is still the case: marginal to no difference in performance.
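If anyone wants to check what link their card actually trained at (e.g. after a tweak like that registry edit), nvidia-smi can report it; here's a small Python wrapper (assumes the NVIDIA driver is installed):

import subprocess

# Query the current and maximum PCIe link generation and width.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=pcie.link.gen.current,pcie.link.gen.max,"
     "pcie.link.width.current,pcie.link.width.max",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "3, 3, 16, 16" for a Gen 3 x16 link
# Note: GPUs drop the link to Gen 1 at idle to save power, so run this while
# the card is under load if you want to see the trained maximum.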

And this makes sense. Have you ever looked at the RivaTuner charts on a second screen while playing a game? PCIe bandwidth usage rarely goes over single digits, percentage-wise, and usually only during loading, when textures are decompressed and loaded into VRAM. If I had to wager a guess, differences in PCIe bandwidth (unless you do something silly and drop down to Gen 1 x1 or the like) have only a very small impact on performance, and then only on load times, not in-game framerate.
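To illustrate why the bandwidth mostly shows up in load times, here's a toy estimate (my own sketch, with a made-up 8 GB asset payload) of how long filling VRAM takes over different links:

# Time to upload a level's worth of assets to VRAM over various x16 links.
def lane_gbps(gen: int) -> float:
    rate = {1: 2.5, 2: 5.0, 3: 8.0}[gen]
    eff = 8 / 10 if gen <= 2 else 128 / 130
    return rate * eff / 8

payload_gb = 8.0  # hypothetical texture/geometry payload
for gen in (1, 2, 3):
    bw = 16 * lane_gbps(gen)
    print(f"Gen {gen} x16: ~{bw:.1f} GB/s -> ~{payload_gb / bw:.2f} s for {payload_gb:.0f} GB")
# Once assets are resident, per-frame PCIe traffic (commands, small buffer
# updates) is tiny, which is why framerate barely moves between Gen 2 and Gen 3.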

Speaking of doing silly things, in 2010 I bought a Radeon 6850 to experiment with external GPUs on a laptop. I got an ExpressCard-to-PCIe adapter and ran the GPU with an external PSU. In my testing between x16 Gen 2 (the fastest at the time) and x1 Gen 1 (what the ExpressCard slot in the laptop offered), there actually was a difference in performance, but it was minor: less than 5%. I never checked, but I don't remember load times being long either, though I could be wrong. That 5% could also have been down to the difference in CPU: the laptop had some low-clocked mobile dual-core, and the desktop I was comparing against had a desktop quad-core.

Now, the Radeon 6850 came out a long time ago and is hardly representative of modern GPUs, and it may be that high-end modern GPUs require more PCIe bandwidth to perform at their best, but it at least shows that PCIe bandwidth has much less of an impact on performance than most people think.


Still, this being the [H], I'm not willing to leave anything to chance. I too insist that my GPUs run at x16 Gen 3 today, even though I have tested and seen no performance difference... Just in case :p
 
Question: will this lead to low-cost video cards that use system memory instead of onboard memory? Would such a card be useless for gaming?

Yes... it might be good for Minecraft.

What you want already exists: it's called an iGPU (Intel) or APU (AMD).

Benchmarks are available in vast amounts.

Agreed. Having more PCIe bandwidth certainly increases the speed at which a GPU can access system RAM, but system RAM is still not well suited to GPU use. Look at how derided the DDR4 version of the GeForce GT 1030 was.

And now you are sharing that RAM with the rest of the system too, further detracting from the GPU performance.

IMHO, in anything but the very low end, dedicated video RAM on the video card will be a must for the foreseeable future.
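The memory-bandwidth arithmetic backs that up (ballpark figures of my own; the GT 1030 numbers are the commonly cited specs):

# Peak memory bandwidth: width (bits) x transfer rate (MT/s).
def bus_gbps(width_bits: int, mtps: float) -> float:
    return width_bits / 8 * mtps / 1000

gddr5_1030 = bus_gbps(64, 6000)   # GT 1030 GDDR5: ~48 GB/s of local VRAM
ddr4_1030 = bus_gbps(64, 2100)    # GT 1030 DDR4 variant: ~16.8 GB/s
sys_ram = bus_gbps(128, 2666)     # dual-channel DDR4-2666: ~42.7 GB/s, shared
pcie5_x16 = 16 * 32 * (128 / 130) / 8  # ~63 GB/s one way

print(f"GDDR5 on-card:     ~{gddr5_1030:.0f} GB/s")
print(f"DDR4 on-card:      ~{ddr4_1030:.1f} GB/s")
print(f"System RAM:        ~{sys_ram:.0f} GB/s (shared with the CPU)")
print(f"PCIe 5.0 x16 link: ~{pcie5_x16:.0f} GB/s")
# Even a Gen 5 link leaves the GPU fighting the CPU for a pool of RAM slower
# than cheap GDDR5, with far worse latency across the bus.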
 
I'm coming at this from the angle that the available bandwidth becomes a glut, and when that happens, the industry usually innovates in ways we weren't really thinking about before.

Consider that it's now so affordable to put together rather good 1080p videos, and we have multiple platforms to distribute that content worldwide. We effectively have the means to do things that previously had a barrier to entry of $1M-$5M+ just to get started.

Naturally, many people are innovating now that we have those new means, and I think PCIe 5.0 is likely to have a similar effect.

Also, I don't think we're "arguing" here, I'd say we're discussing it. :)

 