AMD X570 Chipset Block Diagram Surfaces - Specs - PCIe 4.0 All The Way!

I was under the impression that it would use it, but you might be right. Let's wait until someone with more knowledge pitches in :)

PCIe3 devices will operate at PCIe3 speeds in a PCIe4 slot, just like with older generations. The card/SSD itself only has parts that speak PCIe3 on them; faster PCIe4 signalling would look like line noise to them.
 
Killer is owned by Rivet Networks or something iirc

PCIe3 devices will operate at PCIe3 speeds in a PCIe4 slot, just like with older generations. The card/SSD itself only has parts that speak PCIe3 on them; faster PCIe4 signalling would look like line noise to them.

Yup, the device will only operate at up to the highest level supported by all hardware in the chain, including the AIC, motherboard and/or chipset, and CPU.
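As a toy illustration of that rule (just a sketch, not anything a real tool reports), link training effectively boils down to a min() over everything in the chain:

```python
# Minimal sketch: a PCIe link trains to the highest generation and widest
# width that *every* device in the chain supports (card, slot, CPU/chipset).
def negotiated_link(devices):
    """devices: list of (gen, lanes) tuples for the card, slot, and CPU/chipset."""
    gen = min(g for g, _ in devices)
    lanes = min(l for _, l in devices)
    return gen, lanes

# A Gen3 x4 NVMe SSD in a Gen4 x4 slot still runs at Gen3 x4:
print(negotiated_link([(4, 4), (3, 4)]))  # -> (3, 4)
```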
 
How much slower? What about latency?
Not heaps, but some features indeed induced additional latency. It's more 'it was not as good as existing solutions in some common scenarios'. Pretty sure [H] had a benchmark on Killer NICs... I wouldn't kick one out of bed if it was on the best/most suitable motherboard, but it's also definitely not something I would actively look for.

And what ccityinstaller said. PCIe4 is confusing unless you are aware it has twice the bandwidth per lane.

So even though the lane counts don't look any better, you can fit a lot more data down them now, e.g. x8 PCIe4 is much faster than x8 PCIe3.
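Rough numbers, if it helps (nominal transfer rates with 128b/130b encoding overhead, not measured throughput):

```python
# Back-of-the-envelope PCIe bandwidth: GT/s per lane, 128b/130b encoding,
# divided by 8 to get bytes. Real-world numbers land a bit lower.
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}

def bandwidth_gbs(gen, lanes):
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8

print(f"x8 Gen3: {bandwidth_gbs(3, 8):.1f} GB/s")  # ~7.9 GB/s
print(f"x8 Gen4: {bandwidth_gbs(4, 8):.1f} GB/s")  # ~15.8 GB/s, i.e. double
print(f"x4 Gen4: {bandwidth_gbs(4, 4):.1f} GB/s")  # the X570 uplink, ~7.9 GB/s
```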
 
Yes, if you use 3x NVMe devices you will disable the SATA ports according to the block diagram. There appears to be a PCIe switch for those lanes.

The way I see the block diagram: there is the obvious x4 from the processor for NVMe, one M.2 that is provided directly and cannot be routed by the southbridge, and then there is an x4 slot/set of lanes that can be routed to an M.2, but this M.2 can run either the SATA or NVMe protocol.

Am I seeing something off?
 
The way I see the block diagram: there is the obvious x4 from the processor for NVMe, one M.2 that is provided directly and cannot be routed by the southbridge, and then there is an x4 slot/set of lanes that can be routed to an M.2, but this M.2 can run either the SATA or NVMe protocol.

Am I seeing something off?

No, you see it correctly. The PCIe lanes direct to the CPU for M.2 devices aren't really applicable to the chipset, as you mentioned. I didn't mean to indicate otherwise. It's one of the slots off the chipset which shares bandwidth with the SATA ports.
 
And what ccityinstaller said. PCIe4 is confusing unless you are aware it has twice the bandwidth per lane.

So even though the lane counts don't look any better, you can fit a lot more data down them now, e.g. x8 PCIe4 is much faster than x8 PCIe3.

The only place this is relevant at launch is in the uplink bandwidth between the CPU and the X570 chipset, which apparently retains the same number of lanes (4) but now has its bandwidth doubled due to Gen4.

As of today, there are no PCIe Gen4 expansion cards. Stick a Gen3 expansion card in a Gen4 slot, and it still operates as a Gen3 device.

Long term I'm sure it will wind up being useful, but not quite yet.
 
The only place this is relevant at launch is in the uplink bandwidth between the CPU and the X570 chipset, which apparently retains the same number of lanes (4) but now has its bandwidth doubled due to Gen4.

As of today, there are no PCIe Gen4 expansion cards. Stick a Gen3 expansion card in a Gen4 slot, and it still operates as a Gen3 device.

Long term I'm sure it will wind up being useful, but not quite yet.

Would have been nice if they would artificially split Gen4 into 2x Gen3 lanes.
 
I have two M.2 drives. The first one gets full bandwidth, around 3 GB/sec, but the second one is bandwidth limited by 50% or so. That's on my X79 chipset, using an M.2 PCIe adapter from Asus.
Hoping the X570 chipset fixes the bandwidth limitation.

Using two adapters, since X79 can't run more than one device per PCIe slot.
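If I had to guess where the ~50% cap comes from: the second adapter (or the link behind its slot) probably only runs at Gen2 x4 on this board. Back-of-the-envelope only, and the actual slot layout may differ:

```python
# Illustrative link ceilings only: Gen2 uses 8b/10b encoding, Gen3 uses 128b/130b.
def link_ceiling_gbs(gen, lanes):
    rate = {2: 5.0, 3: 8.0}[gen]                # GT/s per lane
    overhead = {2: 8 / 10, 3: 128 / 130}[gen]   # encoding efficiency
    return rate * lanes * overhead / 8

print(f"Gen3 x4: {link_ceiling_gbs(3, 4):.1f} GB/s")  # ~3.9, room for the ~3 GB/s drive
print(f"Gen2 x4: {link_ceiling_gbs(2, 4):.1f} GB/s")  # ~2.0, roughly the 50% observed
```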
 
Would have been nice if they would artificially split Gen4 into 2x Gen3 lanes.

Well, the chipset supposedly does that to a certain extent. It has 40 lanes in total, four of which are used for the CPU uplink, so it takes the bandwidth of 4x Gen4 lanes and shares that over 36 chipset lanes, some/many of which are used by on-board devices.
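If the block diagram's 40-lane figure holds (the real lane split may differ), the oversubscription works out roughly like this:

```python
# x4 Gen4 uplink shared by the downstream chipset lanes; nominal figures only.
uplink_gbs = 16.0 * 4 * (128 / 130) / 8   # ~7.9 GB/s
downstream_lanes = 36
print(f"Uplink: {uplink_gbs:.1f} GB/s shared across {downstream_lanes} lanes "
      f"(~{uplink_gbs / downstream_lanes:.2f} GB/s each if everything were busy at once)")
```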
 
Using two adapters, since X79 can't run more than one device per PCIe slot.

You'd need PCIe bifurcation support for that. It doesn't exist on older boards, and even on newer boards it's usually an undocumented, hit-or-miss feature (like ECC or VT-d on Intel boards).

You can still do it, but you need an active adapter with a PLX or similar chip on it. Most of these are expensive ($400-$500), but in another thread I was recently discussing this in, Thevoid230 suggested this Addonics board, which is only $155. The specs say nothing about which switching chip it uses (but there is clearly something hiding under that heatsink). No idea if it is any good.
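For anyone wondering why a passive multi-M.2 card needs bifurcation at all, here's a toy check (the helper is made up purely for illustration):

```python
# A passive quad-M.2 riser just re-wires the slot, so the slot itself must be
# able to split x16 into x4/x4/x4/x4. If it can't, you need an active card
# with a PCIe switch (PLX or similar) instead.
def supports_passive_quad_m2(slot_lanes, bifurcation_modes):
    return slot_lanes >= 16 and "x4x4x4x4" in bifurcation_modes

print(supports_passive_quad_m2(16, {"x16", "x8x8"}))              # False -> active card needed
print(supports_passive_quad_m2(16, {"x16", "x8x8", "x4x4x4x4"}))  # True
```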
 
The only place this is relevant at launch is in the uplink bandwidth between the CPU and the X570 chipset, which apparently retains the same number of lanes (4) but now has its bandwidth doubled due to Gen4.

As of today, there are no PCIe Gen4 expansion cards. Stick a Gen3 expansion card in a Gen4 slot, and it still operates as a Gen3 device.

Long term I'm sure it will wind up being useful, but not quite yet.
Well said, but I'm sure there will be PCIe4 'dongles' that allow e.g. two PCIe3 devices to be plugged in in the future, as you mentioned in some regard in the latest post #50.
It will make for some interesting expansion options. As you pointed out, I'm just glad for the extra bandwidth before jumping onto NVMe at rock-bottom prices in the upcoming Zen2 build.
 
Well said, but I'm sure there will be PCIe4 'dongles' that allow e.g. two PCIe3 devices to be plugged in in the future, as you mentioned in some regard in the latest post #50.
It will make for some interesting expansion options. As you pointed out, I'm just glad for the extra bandwidth before jumping onto NVMe at rock-bottom prices in the upcoming Zen2 build.

I doubt this. It would require a new Gen4 to Gen3 capable PCIe switch.

There is/was essentially only one player in the PCIe switch market, PLX Technology. They were bought by Broadcom in 2014, which then jacked up the prices and now only serves the enterprise storage market.

This is why most active PCIe to M.2 splitters/adapters now start at ~$500.
 
Not sure if you guys saw this article. I'm certain not all the bandwidth is there; however, 4.0 is supposedly supported on 300- and 400-series AMD motherboards. It's up to the individual board manufacturers if they want to support it or not:
https://www.tomshardware.com/news/gigabyte-amd-ryzen-3000-pcie-4.0-x470,39377.html


My understanding is that you probably get Gen4 support on the "direct to CPU" lanes no matter what, but that the links to the chipset will remain on Gen3 on older chipsets.

I could be wrong though.
 
Would have been nice if they would artificially split Gen4 into 2x Gen3 lanes.

You'd need a PLX chip to do that, and as noted upthread the monopolist making them - Broadcom - is only interested in selling them to server makers at Enterprise rates too high for any but the most stupidly expensive boards to support.
 
My understanding is that you probably get Gen4 support on the "direct to CPU" lanes no matter what, but that the links to the chipset will remain on Gen3 on older chipsets.

I could be wrong though.

It's probably going to be slightly worse than that. Due to shorter maximum signal lengths, it will probably only be the top x16 slot that can support PCIe4. Even if the second one is wired to the CPU it will be far enough away that to be in spec there would need to be a redriver chip in the middle to relay the signal.
 
It's probably going to be slightly worse than that. Due to shorter maximum signal lengths, it will probably only be the top x16 slot that can support PCIe4. Even if the second one is wired to the CPU it will be far enough away that to be in spec there would need to be a redriver chip in the middle to relay the signal.

That makes sense.

I'm not well read enough on the Gen4 standard and its trace length capabilities.
 
That makes sense.

I'm not well read enough on the Gen4 standard and its trace length capabilities.
I had heard something about signaling issues, but I'm way out of my league on that one. What I do know is that the new 500 series chipsets were cooking the motherboards and they had to apply massive heatsinks and active cooling.

Makes me wonder if enabling it on my B450 Board will turn the MoBo into a puddle of plastic... lol. Prolly not if it's just the traces to the CPU and only 4 lanes, but... That would be bad :eek:
 
That makes sense.

I'm not well read enough on the Gen4 standard and its trace length capabilities.

I haven't been able to find the final spec numbers anywhere, and pre-release discussions are all over the place. EE Times Asia captured the full range in one spot though, with the spec authors suggesting up to 12 but board makers only getting 3 or 4, while noting that board design upgrades could increase the distance, but at a cost of $100-300 per mobo. (And if those are server-class boards, significantly less for smaller consumer mobos.)

PCIe5 is even worse. I can't find it now, but I've seen speculation in (IIRC) Anandtech articles that implementation costs and max trace lengths would make it a server-only product indefinitely.


https://www.eetasia.com/news/article/18061502-pcie-45-higher-bandwidth-but-at-what-cost
 
That's a good point. I know some offered the Intel NIC and then a Killer NIC as the second choice on the mobos that had dual NICs. I might be thinking of X399 boards, but I thought some X470 boards had dual NICs.
If any did, it was probably the Asus Crosshair boards and maybe the ASRock gaming pro board. Most of them had the single Intel NIC, though.
 
I doubt this. It would require a new Gen4 to Gen3 capable PCIe switch.

There is/was essentially only one player in the PCIe switch market, PLX Technology. They were bought by Broadcom in 2014, which then jacked up the prices and now only serves the enterprise storage market.

This is why most active PCIe to M.2 splitters/adapters now start at ~$500.

You'd need a PLX chip to do that, and as noted upthread the monopolist making them - Broadcom - is only interested in selling them to server makers at Enterprise rates too high for any but the most stupidly expensive boards to support.
Microsemi has a line of PCIe switches as well. The prices are still in the $500 range though :(
 
I haven't been able to find the final spec numbers anywhere, and pre-release discussions are all over the place. EE Times Asia captured the full range in one spot though, with the spec authors suggesting up to 12 but board makers only getting 3 or 4, while noting that board design upgrades could increase the distance, but at a cost of $100-300 per mobo. (And if those are server-class boards, significantly less for smaller consumer mobos.)

PCIe5 is even worse. I can't find it now, but I've seen speculation in (IIRC) Anandtech articles that implementation costs and max trace lengths would make it a server-only product indefinitely.


https://www.eetasia.com/news/article/18061502-pcie-45-higher-bandwidth-but-at-what-cost


Going to do some refreshing on this... I know all the big players plan to ramp PCI-E 5.0 much, much faster than the 3.0 to 4.0 transition. We should probably see it with Intel's new stuff on their first DDR5 prosumer platform.

I think X570 IS going to get Zen 2 and possibly Zen 2+ next year, and then we'll see AM5 with DDR5 and maybe 5.0 with Zen3 or, most likely, Zen3+. That's just a WAG for now.
 
A bunch of x4 slots would be nice for an M.2 NAS, or even a rackmount SATA NAS. It just sucks that there's no RDIMM support.

Yes, it sucks in a certain regard. However, Ryzen 2000 and 3000 support unregistered ECC, which of course costs more per GB than registered/buffered.
 
Going to do some refreshing on this... I know all the big players plan to ramp PCI-E 5.0 much, much faster than the 3.0 to 4.0 transition. We should probably see it with Intel's new stuff on their first DDR5 prosumer platform.

I think X570 IS going to get Zen 2 and possibly Zen 2+ next year, and then we'll see AM5 with DDR5 and maybe 5.0 with Zen3 or, most likely, Zen3+. That's just a WAG for now.

We can't even utilize the full bandwidth of 3.0 with 2080 Tis. How are we expected to utilize the bandwidth of 5th-generation lanes when we're only just now saturating 2nd generation?
 
We can't even utilize the full bandwidth of 3.0 with 2080 Tis. How are we expected to utilize the bandwidth of 5th-generation lanes when we're only just now saturating 2nd generation?

PCIe5 is being driven by the need for stupidly huge amounts of bandwidth in servers, not for anything consumer related. Flash-based storage servers will be able to connect 4x as much drive bandwidth as before; due to RAID/etc. rebuild times, that will probably mean being able to pack them into a storage server like HDDs, since for the most part they'll only use an x1 connection instead of an x4, although I wouldn't be surprised to see crazy-fast x4 drives for systems that only need limited amounts of storage.

And while GPUs for gaming never come close to maxing out an x16 slot, Hollywood-level rendering can use texture sets far larger than what fits in a GPU's RAM (thus the pro cards with integral SSDs from a few years back) and would benefit from having 4x as much bandwidth to system RAM. The same goes for some compute loads, both on GPUs and with specialized accelerator cards. Future 40/100 gigabit networking cards will benefit from needing fewer PCIe lanes to connect, and if/when 10GbE finally gets cheap enough to go mainstream on consumer boards, using fewer lanes will make it easier to integrate on kitchen-sink-equipped mobos.
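The x1-instead-of-x4 point in rough numbers (nominal rates only):

```python
# One Gen5 lane moves roughly as much data as four Gen3 lanes.
def lane_gbs(gen):
    rate = {3: 8.0, 4: 16.0, 5: 32.0}[gen]  # GT/s
    return rate * (128 / 130) / 8

print(f"Gen5 x1: {lane_gbs(5) * 1:.1f} GB/s")  # ~3.9 GB/s
print(f"Gen3 x4: {lane_gbs(3) * 4:.1f} GB/s")  # ~3.9 GB/s
```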
 
PCIe5 is being driven by the need for stupidly huge amounts of bandwidth in servers, not for anything consumer related. Flash-based storage servers will be able to connect 4x as much drive bandwidth as before; due to RAID/etc. rebuild times, that will probably mean being able to pack them into a storage server like HDDs, since for the most part they'll only use an x1 connection instead of an x4, although I wouldn't be surprised to see crazy-fast x4 drives for systems that only need limited amounts of storage.

And while GPUs for gaming never come close to maxing out an x16 slot, Hollywood-level rendering can use texture sets far larger than what fits in a GPU's RAM (thus the pro cards with integral SSDs from a few years back) and would benefit from having 4x as much bandwidth to system RAM. The same goes for some compute loads, both on GPUs and with specialized accelerator cards. Future 40/100 gigabit networking cards will benefit from needing fewer PCIe lanes to connect, and if/when 10GbE finally gets cheap enough to go mainstream on consumer boards, using fewer lanes will make it easier to integrate on kitchen-sink-equipped mobos.

Yeah I, like many others, forget about the enterprise. We get our heads so wrapped up around consumer all the time.
 
Yeah I, like many others, forget about the enterprise. We get our heads so wrapped up around consumer all the time.

It's a big part of why PCIe stagnated at 3.0 for so long. 3.0 was fast enough for everything consumers needed, and until PCIe SSDs went mainstream there wasn't a huge need in enterprise either. Now that the latter has changed, they've slammed out two new versions back to back, and if the latest leaked roadmap can be trusted, Intel will have 5.0 on Xeons in 2021, only a year after they roll out 4.0 for their servers.
 
and if/when 10GbE finally gets cheap enough to go mainstream on consumer boards, using fewer lanes will make it easier to integrate on kitchen-sink-equipped mobos.

It's 'cheap enough' for the motherboards, but switches are still at a huge premium over 1Gbit, and generally CAT6 is needed over CAT5e, with CAT6a needed for longer-distance runs.

That, and there's very little real utility for 10Gbit outside of servers.
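Approximate 10GBASE-T reach per cable category, going by the usual cabling guidance (double-check the actual spec for your environment before pulling anything):

```python
# Rough reach figures for 10GBASE-T; CAT5e isn't rated for it at all,
# though short runs sometimes work in practice.
reach_10gbase_t = {
    "CAT5e": None,   # not rated for 10GBASE-T (fine for 1G/2.5G/5G)
    "CAT6": 55,      # roughly 37-55 m depending on alien crosstalk
    "CAT6a": 100,    # full 100 m
}
for cat, metres in reach_10gbase_t.items():
    label = "not rated" if metres is None else f"{metres} m"
    print(f"{cat}: {label}")
```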
 
It's 'cheap enough' for the motherboards, but switches are still at a huge premium over 1Gbit, and generally CAT6 is needed over CAT5e, with CAT6a needed for longer-distance runs.

That, and there's very little real utility for 10Gbit outside of servers.

Or fiber, like I run. But yeah, switching is still stupidly expensive.
 
Or fiber, like I run. But yeah, switching is still stupidly expensive.

Same difference, really: prices of 25/40/50/80/100Gbit gear are coming down too, and you're not really going to get >10Gbit over copper, but anything over 10Gbit is cost-prohibitive in terms of switching. Even datacenter pulls are stupid expensive; a handful of new units could cover a nice mortgage.

And the reality is that we're going to get 'multi-gig' 2.5Gbit and 5Gbit instead, which can negotiate over CAT5e.
 
Same difference, really: prices of 25/40/50/80/100Gbit gear are coming down too, and you're not really going to get >10Gbit over copper, but anything over 10Gbit is cost-prohibitive in terms of switching. Even datacenter pulls are stupid expensive; a handful of new units could cover a nice mortgage.

And the reality is that we're going to get 'multi-gig' 2.5Gbit and 5Gbit instead, which can negotiate over CAT5e.

2.5 and 5Gb in enterprise deployments? I'd disagree. Much more likely to have multiple 10- and 20-gig deployments using InfiniBand or similar.

A 10gig network is my desired next stop. I saturate my 1gig network at home with ease. I think I'll investigate LACP before taking the plunge, though, due to the switching cost.
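For a feel of why 1Gbit saturates so easily, here's the nominal transfer time for a 50 GB folder at a few line rates (protocol overhead ignored):

```python
# Nominal transfer times; real-world throughput will be a bit lower.
def transfer_minutes(size_gb, link_gbit):
    return size_gb * 8 / link_gbit / 60

for link in (1, 2.5, 5, 10):
    print(f"{link:>4} Gbit: {transfer_minutes(50, link):.1f} min")
```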
 
It's 'cheap enough' for the motherboards, but switches are still at a huge premium over 1Gbit and generally CAT6 is needed over CAT5e, and CAT6a is needed for longer distance runs.

That, and there's very little real utility for 10Gbit outside of servers.

You can do things quite affordably as long as you are not opposed to used Enterprise hardware.

I recently bought two Intel X520 dual-port PCIe adapters for ~$100 each on eBay. Each of them included two 10GBase-SR SFP+ transceivers.

I also bought an Aruba S2500-48T switch (48 gigabit ports, four 10Gig SFP+ ports) for only $125 shipped, and picked up a lot of two compatible Finisar SFP+ SR transceivers for $20 shipped.

Then all I needed to do was buy the fiber (I got that new).

Now I have a direct link from my desktop to my NAS server, as well as links from my server to my switch and from my desktop to my switch at 10gig speeds, and it seems much better at sustaining high file transfer speeds than my old 10GBase-T copper 82598EB adapters, which I used in a direct-link-only configuration between my NAS and desktop.

Anyway, I digressed a little, but my point was you CAN do this somewhat affordably.
 
2.5 and 5Gb in enterprise deployments? I'd disagree. Much more likely to have multiple 10- and 20-gig deployments using InfiniBand or similar.

A 10gig network is my desired next stop. I saturate my 1gig network at home with ease. I think I'll investigate LACP before taking the plunge, though, due to the switching cost.

Enterprise is still predominantly Gigabit wired Ethernet or WiFi to the client. Faster standards are primarily used for linking between switches and linking to servers.

In reality, the need for faster than Gigabit between clients is limited in most scenarios. There are exceptions, but most of the time it simply is not needed when all you are doing is sending and receiving Word and Excel documents that are a few KB each.
 
2.5 and 5Gb in enterprise deployments?

On the desktop, but with respect to enterprises, 2.5Gig and 5Gig are coming for access switching, both for desktop clients and for stuff like WAPs. Cisco and Ubiquiti at least are already selling WAPs with 2.5Gig ports. We're also seeing high-end consumer router equipment with two 2.5Gig ports as well, primarily for hooking up a NAS, with the second port for a workstation.
 
Same difference, really: prices of 25/40/50/80/100Gbit gear are coming down too, and you're not really going to get >10Gbit over copper, but anything over 10Gbit is cost-prohibitive in terms of switching. Even datacenter pulls are stupid expensive; a handful of new units could cover a nice mortgage.

And the reality is that we're going to get 'multi-gig' 2.5Gbit and 5Gbit instead, which can negotiate over CAT5e.

You know, honestly, consumers don't need a switch. Most installs are a NAS with a 10G card and a desktop. You can just run a direct CAT6 cable between the two hosts and it will work fine. Like, my X399 board has an Aquantia 10Gb NIC. If I bought another NIC for 50 bucks on eBay, I could use those.

But in my case I'm using Intel XFSR cards into a Cisco 4948-10GE switch. When I bought the cards years ago they were about $120 each, and I have four of them. I'm going to get a Unifi 10gig switch soon anyway and just use the SFP+ ports and mix and match between copper and fiber.
 
You know, honestly, consumers don't need a switch. Most installs are a NAS with a 10G card and a desktop. You can just run a direct CAT6 cable between the two hosts and it will work fine. Like, my X399 board has an Aquantia 10Gb NIC. If I bought another NIC for 50 bucks on eBay, I could use those.

Well, in general, consumers don't need 10Gig, but I think we established that. For those that do, a switch is preferable, primarily because it allows more than one client to access storage or share other resources at the higher linespeed. I will also concede that for single users or limited uses, having a 10Gbit or other line direct to a workstation and then having a 1Gbit line to a switch is absolutely fine.

I'm going to get a Unifi 10gig switch soon anyway and just use the SFP+ ports and mix and match between copper and fiber.

A note about Unifi, if you're not already aware: almost no Unifi gear supports layer 3 controls, which means that all routing, no matter how simple, must go through a router, and with Unifi, you generally want that to be a USG. Those fall into two categories, which are expensive and slow, and really, really expensive.

Get an Edgeswitch instead, or go Mikrotik or something else.
 
Well, in general, consumers don't need 10Gig, but I think we established that. For those that do, a switch is preferable, primarily because it allows more than one client to access storage or share other resources at the higher linespeed. I will also concede that for single users or limited uses, having a 10Gbit or other line direct to a workstation and then having a 1Gbit line to a switch is absolutely fine.



A note about Unifi, if you're not already aware: almost no Unifi gear supports layer 3 controls, which means that all routing, no matter how simple, must go through a router, and with Unifi, you generally want that to be a USG. Those fall into two categories, which are expensive and slow, and really, really expensive.

Get an Edgeswitch instead, or go Mikrotik or something else.

Nah... I'm an ex-CCNA who went into medical/biology. I have no need for layer 3 switching at those rates. My Unifi USG Pro rackmount can route layer 3 at the full 1Gbps line speed. I have no need for multiple subnets/VLANs between 10Gb hosts. My usage scenario, of course. YMMV.

If I wanted an affordable 10G line-speed router, I'd just build another pfSense box. It will absolutely crush an enterprise boxed solution in every metric if you put the right hardware in it. I've always loved pfSense and, when given the opportunity, have recommended it over Cisco any day of the week. Except for critical enterprise, where I'd stick with Cisco or Juniper simply for the support-level agreements.
 
A note about Unifi, if you're not already aware: almost no Unifi gear supports layer 3 controls, which means that all routing, no matter how simple, must go through a router, and with Unifi, you generally want that to be a USG. Those fall into two categories, which are expensive and slow, and really, really expensive.

My recently acquired Aruba S2500-48T has layer 3 capability, but I have no idea what I'd actually use it for in a home environment. Connecting various VLANs with restricted access?
 
My recently acquired Aruba S2500-48T has layer 3 capability, but I have no idea what I'd actually use it for in a home environment. Connecting various VLANs with restricted access?

Well, theoretically, in an IoT-infested environment with both untrusted users (family members and guests) and provision for 'guest' internet access, while providing significant local network services (NAS, streaming, surveillance hub, automation), you'd want something along the lines of seven or eight VLAN/subnet pairs, each with different levels of access to various services.

I'm still trying to really wrap my head around this as it's just one of the projects I work on at home, but it's also something that I expect consumer-oriented manufacturers to really start approaching and tackling.
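Purely as a sketch, the kind of segmentation I'm talking about looks something like this (the VLAN IDs, names, and rules are all made up for illustration):

```python
# Hypothetical home VLAN plan; every ID, name, and rule here is illustrative.
vlans = {
    10: {"name": "trusted", "reaches": ["nas", "streaming", "automation", "internet"]},
    20: {"name": "family",  "reaches": ["streaming", "internet"]},
    30: {"name": "guest",   "reaches": ["internet"]},
    40: {"name": "iot",     "reaches": ["automation-hub"]},
    50: {"name": "cameras", "reaches": ["surveillance-nvr"]},
}
for vid, v in vlans.items():
    print(f"VLAN {vid} ({v['name']}): {', '.join(v['reaches'])}")
```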
 
Well, theoretically, in an IoT-infested environment with both untrusted users (family members and guests) and provision for 'guest' internet access, while providing significant local network services (NAS, streaming, surveillance hub, automation), you'd want something along the lines of seven or eight VLAN/subnet pairs, each with different levels of access to various services.

I'm still trying to really wrap my head around this as it's just one of the projects I work on at home, but it's also something that I expect consumer-oriented manufacturers to really start approaching and tackling.

Man, you're describing a very complex home network. Some people use setups like this. Others say screw it, it's not worth all the hassle.
 