X570 Chipset to have 40 PCIe 4.0 lanes

Woah woah woah, step back like five or eight posts. I'm almost 100% sure Zen 2 will not support DDR5. For one, nobody has been talking up DDR5 AFAIK, and no demos have used DDR5... is there even server hardware that uses it? Have Crucial or G.Skill made any DIMMs? Was the spec not finalized late last year?

Zen 3 I could see, maybe. Almost no way in hell Zen 2, gtfo.
 
Even with 64GB, the games I'd really like to see run much better are too big to go the RAM disk route.

Wow, ESO and other modern MMO/RPG-type games are 50-75GB or bigger.

I'd need at least 96-128GB (IMO) to pull it off.
There's a reason I'm saying RamCache and not a RamDisk.

Yes, game installs right now are huge. However, you pretty much NEVER need to access 100% of those files all the time, especially for something big like an RPG. Dozens upon dozens of those gigs are audio files or animation files that will only play during a single quest. Gigabytes of data on textures not relevant to what you're immediately seeing. Cutscenes that play once and then won't be seen again for tens or hundreds of hours.

That's why you only cache the good stuff: relevant character models, immediately necessary textures, music and audio that come up frequently.

This is all done quietly in the background when you have a good algorithm, and from my experience, PrimoCache is a damn good algorithm. A full install of FFXV is like 160GB, but with an NVMe cache of about 20GB it has all the files relevant to post-game play, and the smaller 8GB RAM cache covers all the super relevant loading and streaming. About the same for MH World, etc.
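For what it's worth, the underlying idea is just a block-level read cache that keeps whatever has been hot recently. Here's a toy sketch of that idea (in no way PrimoCache's actual internals; `backing_read` is a stand-in for the slow drive):

```python
# Toy block-level read cache: keep recently used blocks in fast storage,
# evict the coldest ones when the cache fills up. Purely illustrative.
from collections import OrderedDict

class BlockReadCache:
    def __init__(self, capacity_bytes, block_size=64 * 1024):
        self.capacity = capacity_bytes
        self.block_size = block_size
        self.blocks = OrderedDict()  # block_id -> data, ordered oldest-first

    def read(self, block_id, backing_read):
        """Return one block, serving it from cache whenever possible."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # hit: refresh its recency
            return self.blocks[block_id]
        data = backing_read(block_id)           # miss: go to the slow drive
        self.blocks[block_id] = data
        while len(self.blocks) * self.block_size > self.capacity:
            self.blocks.popitem(last=False)     # evict the least recently used block
        return data
```

The point is that a 160GB install whose hot set is only a handful of gigabytes fits comfortably in a cache a fraction of that size.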

My goal with X570 is about 20 gigs of catch-all cache. Probably 512GB of super, super fast NVMe cache for my game drive, and preferably 2TB of slower but still fast NVMe (Intel 660p or so) cache for my Plex server (which will likely blossom to over 50TB if I can find a board with more SATA connections). My main OS will likely continue to be on its own NVMe drive, of which 250GB has proven sufficient.
 
Just wondering how many usable lanes of PCIe 4.0 X570 motherboards will actually have. Can I run a 4x PCIe 3.0 device "full speed" on 2x PCIe 4.0 lanes (e.g. existing M.2 drives that expect 4x PCIe 3.0 lanes)?

I'm interested in creating a couple of mini "hyper-converged" high-speed VM servers for home labs that would need speedy storage, and while Threadripper could definitely do that job, it gets expensive between the chip and the motherboard. I'd rather put those $ towards more NVMe drives. Ideally I could run 6 4x NVMe drives + a GPU in a "normal" motherboard.
 
Can I run a 4x PCIe 3.0 device "full speed" on 2x PCIe 4.0 lanes (e.g. existing M.2 drives that expect 4x PCIe 3.0 lanes)?

No.

When connecting, devices autonegotiate the highest generation standard that both the device and the host support.

So, a Gen3 4x device connecting to a Gen4 host will connect at Gen3 4x speeds using 4 lanes.

If you only allow it to use two lanes, it will max out at Gen3 2x speeds, as it is not Gen4 compatible.
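A minimal sketch of that negotiation rule (the per-lane throughput figures are approximate):

```python
# Minimal sketch of the autonegotiation rule above: a link runs at the highest
# generation and widest width that BOTH ends support. Rates are approximate GB/s.
GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def negotiated_link(dev_gen, dev_lanes, host_gen, host_lanes):
    gen = min(dev_gen, host_gen)        # Gen3 device on a Gen4 host -> Gen3
    lanes = min(dev_lanes, host_lanes)  # x4 device on an x2 link -> x2
    return gen, lanes, lanes * GBPS_PER_LANE[gen]

print(negotiated_link(3, 4, 4, 4))  # (3, 4, ~3.9 GB/s): full speed on 4 lanes
print(negotiated_link(3, 4, 4, 2))  # (3, 2, ~2.0 GB/s): capped at Gen3 2x
```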

So, out of one of these systems you will get a total of (28 - uplink) dedicated CPU lanes + (40 - uplink) lanes using shared uplink bandwidth.

You cannot run a 4x Gen3 NVMe device at full speed using 2x Gen4 lanes UNLESS you have some sort of PCIe switching device, which the chipset includes.

The chipset effectively uses a switching device to share the full bandwidth of its uplink lanes to the CPU across those (40 - uplink) lanes regardless of what speeds they are running at. You will still be limited to (40 - uplink) lanes total, and by the total bandwidth of the uplink lanes, but they can run at any speed you need them to.
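A toy model of that sharing, with a hypothetical device mix and a 4x Gen4 uplink (all figures approximate): every downstream device gets its own full-width link off the chipset switch, but simultaneous traffic is capped by the uplink's total bandwidth.

```python
# Toy model of chipset bandwidth sharing. Each downstream device negotiates its
# own link, but aggregate throughput is capped by the uplink. Device list and
# per-lane rates (GB/s) are hypothetical/approximate.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}

uplink_bw = 4 * GBPS_PER_LANE[4]   # 4x Gen4 uplink -> ~7.9 GB/s to the CPU

# (gen, lanes, demand in GB/s) for devices hanging off the chipset
devices = [(3, 4, 3.5), (3, 4, 3.5), (3, 1, 0.1)]

# Each device still links at its own negotiated width and speed...
for gen, lanes, demand in devices:
    print(f"Gen{gen} x{lanes} link, up to {lanes * GBPS_PER_LANE[gen]:.1f} GB/s on its own")

# ...but if they all push data at once, they split the uplink between them.
total_demand = sum(demand for _, _, demand in devices)
print(f"aggregate demand {total_demand:.1f} GB/s vs uplink {uplink_bw:.1f} GB/s")
```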



The article describes the chipset being rumored as having 40 lanes INCLUDING the uplink lanes to the CPU.

So if 7nm Zen has the same lane counts as previous Zen CPUs (which it probably will, because lane counts are pin-count dependent and it is using the same socket), we are talking about 28 lanes coming off the CPU. On previous Zen CPUs these were Gen3 lanes; now they are Gen4 lanes.

On most motherboards you should expect 16x of those 28x lanes to be dedicated to a GPU slot. We now have 12x lanes left.

The next part is really guesswork at this point.

If the new chips work the way the previous ones did, 4x of those lanes will be uplink lanes to the chipset.

The chipset, since it is rumored to have 40x lanes including uplink, then uses those 4x uplink lanes to provide 36x lanes to other devices (this includes onboard devices, so not all of them will be available for expansion).

The chipset has some sort of PCIe lane switching in it though, so those 36 lanes can be used as Gen1, Gen2, and probably Gen3 and maybe Gen4 without impacting the 4x link speed to the CPU, but all of them share those 4 lanes to the CPU no matter what.

Now, 36 lanes sharing the bandwidth of 4 lanes (albeit at gen4 speeds) to the CPU doesn't sound like a fantastic idea to me.

This is where I am theorizing that instead of 4x uplink lanes, there will be more uplink lanes, probably 8x. This allows the chipset to provide 32x lanes to other devices, sharing 8x gen4 bandwidth, which seems a lot more reasonable.
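To put rough numbers on that guesswork (the uplink width is the hypothetical part; the 28 CPU lanes and 16x GPU slot are the figures from above):

```python
# Rough lane budget for the scenarios above. The uplink width is hypothetical.
CPU_LANES = 28       # rumored total off the CPU (same socket as previous Zen)
GPU_SLOT = 16        # lanes typically dedicated to the GPU slot
CHIPSET_TOTAL = 40   # rumored chipset total, INCLUDING its uplink lanes

for uplink in (4, 8):
    cpu_left = CPU_LANES - GPU_SLOT - uplink      # direct CPU lanes left over
    chipset_downstream = CHIPSET_TOTAL - uplink   # lanes hanging off the chipset
    print(f"{uplink}x uplink: {cpu_left} spare CPU lanes, "
          f"{chipset_downstream} chipset lanes sharing the uplink")
# -> 4x uplink: 8 spare CPU lanes, 36 chipset lanes sharing the uplink
# -> 8x uplink: 4 spare CPU lanes, 32 chipset lanes sharing the uplink
```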

As for how significant the uplink bandwidth sharing is, it depends on what you use the lanes for. Many of the devices we use don't utilize the latest-generation PCIe links. Sound cards, NICs, USB adapters, storage controllers, etc., can often be older generation. Also, you don't always max out every slot. Sometimes you stick a 1x sound card in an 8x slot. It is also rare for all devices to be maxing out their bandwidth at the same time. So in practice the bandwidth sharing is less of a big deal than it initially sounds, as long as you keep the limitations in mind when deciding what to do with all of those lanes.

If the uplink stays at 4x lanes running at Gen4 speeds, this is equivalent to ~8x Gen3 lanes, ~16x Gen2 lanes or ~32x Gen1 lanes worth of bandwidth. This sounds a bit limited for 36 lanes to share.

If the uplink goes up to 8x lanes at Gen4 speeds, this is equivalent to ~16x Gen3 lanes, ~32x Gen2 lanes or ~64x Gen1 lanes worth of bandwidth, shared over 32 lanes. This sounds a lot more reasonable.

Who knows, maybe even 12x uplink is on the table (but then you have no other direct-to-CPU lanes left over after the GPU slot). If that is the case, at Gen4 speeds this is equivalent to ~24x Gen3 lanes, ~48x Gen2 lanes or ~96x Gen1 lanes worth of bandwidth, shared over 28 lanes. Who knows if they do something like this.
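Those three scenarios are just "each PCIe generation doubles the last one" arithmetic; here it is spelled out (per-lane Gen4 throughput of ~1.97 GB/s is approximate):

```python
# Bandwidth equivalences for the three uplink-width scenarios above.
# Per-lane Gen4 throughput is approximate; each older gen halves it.
GEN4_PER_LANE = 1.969  # GB/s

for uplink in (4, 8, 12):
    bw = uplink * GEN4_PER_LANE
    shared_by = 40 - uplink                      # chipset lanes sharing that uplink
    print(f"{uplink}x Gen4 uplink: ~{bw:.0f} GB/s "
          f"(~{uplink * 2}x Gen3 / ~{uplink * 4}x Gen2 / ~{uplink * 8}x Gen1), "
          f"shared by {shared_by} lanes")
# -> 4x:  ~8 GB/s shared by 36 lanes
# -> 8x:  ~16 GB/s shared by 32 lanes
# -> 12x: ~24 GB/s shared by 28 lanes
```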
 
Sorry, a bit drunk and tired so I didn't read your whole post, but theoretically you could do it. It would require a multiplexer or something like that to take the 4x 3.0 lanes and put them on the faster 2x 4.0 bus, and you'd need one per device/slot. Not really feasible.
 
That said, you can do something similar with a single mux, turning the 4.0 lanes reserved for the chipset into 3.0 lanes and doubling the number, but I think you'd still need to segregate them by half since you're doubling.
 

I believe that's sort of what I said.

The chipset uses some form of this to let its lanes share the bandwidth of the uplink lanes.
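For what it's worth, the raw numbers behind that "double the number" idea look like this (per-lane rates are approximate, and this says nothing about how X570 actually implements its switch):

```python
# Raw-bandwidth view of "turn Gen4 lanes into twice as many Gen3 lanes".
# Per-lane rates (GB/s) are approximate.
GEN3_PER_LANE, GEN4_PER_LANE = 0.985, 1.969

gen4_lanes = 4
print(f"{gen4_lanes}x Gen4 : ~{gen4_lanes * GEN4_PER_LANE:.1f} GB/s")
print(f"{gen4_lanes * 2}x Gen3 : ~{gen4_lanes * 2 * GEN3_PER_LANE:.1f} GB/s")
# Both come out to roughly 7.9 GB/s, which is why a switch can spread that
# uplink bandwidth across more, slower lanes without throwing any of it away.
```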
 