AMD X570 Chipset Blockdiagram Surfaces - Specs - PCIe 4.0 All The Way!

Discussion in 'HardForum Tech News' started by Sabrewulf..., May 20, 2019.

  1. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    PCIe3 devices will operate at PCIe3 speeds in a PCIe4 slot, just like with older generations. The card/SSD itself only has parts that speak PCIe3; faster PCIe4 signalling would look like line noise to them.
     
  2. wolfofone

    wolfofone Gawd

    Messages:
    724
    Joined:
    Aug 15, 2010
    Killer is owned by Rivet Networks or something iirc

    Yup, the device will only operate at up to the highest level supported by all hardware in the chain, including the AIC, motherboard and/or chipset, and CPU.
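
    As a rough sketch of that rule (a hypothetical Python illustration, not anything from the spec; the device parameters below are made up):

    Code:
    # A PCIe link trains to the highest generation and width that every
    # device in the chain supports (add-in card, slot/chipset, CPU).
    def negotiated_link(card, slot, cpu):
        gen = min(card["gen"], slot["gen"], cpu["gen"])
        width = min(card["width"], slot["width"], cpu["width"])
        return gen, width

    # Example: a Gen3 x4 NVMe card in a Gen4 x4 chipset slot on a Gen4 CPU
    gen, width = negotiated_link({"gen": 3, "width": 4},
                                 {"gen": 4, "width": 4},
                                 {"gen": 4, "width": 16})
    print(f"Link trains at Gen{gen} x{width}")  # -> Gen3 x4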
     
  3. N4CR

    N4CR 2[H]4U

    Messages:
    3,834
    Joined:
    Oct 17, 2011
    Not heaps, but some features did induce additional latency. It's more that it wasn't as good as existing solutions in some common scenarios. Pretty sure [H] had a benchmark on Killer NICs... I wouldn't kick one out of bed if it came on the best/most suitable motherboard, but it's also definitely not something I would actively look for.

    And what ccityinstaller said. PCIe4 is confusing unless you are aware it has twice the bandwidth per lane.

    So even though the lane counts don't look any better, you can fit a lot more data down them now, e.g. x8 PCIe4 is much faster than x8 PCIe3.
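
    To put rough numbers on that doubling (a back-of-the-envelope sketch using the commonly quoted per-lane figures, roughly 1 GB/s per Gen3 lane and 2 GB/s per Gen4 lane, per direction):

    Code:
    # Approximate usable bandwidth per lane, per direction, in GB/s
    # (Gen1/2 use 8b/10b encoding; Gen3 and up use 128b/130b).
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

    def link_bandwidth(gen, lanes):
        return PER_LANE_GBPS[gen] * lanes

    print(f"x8  Gen3: ~{link_bandwidth(3, 8):.1f} GB/s")   # ~7.9 GB/s
    print(f"x8  Gen4: ~{link_bandwidth(4, 8):.1f} GB/s")   # ~15.8 GB/s
    print(f"x16 Gen3: ~{link_bandwidth(3, 16):.1f} GB/s")  # ~15.8 GB/s

    So an x8 Gen4 link moves roughly as much data as an x16 Gen3 link.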
     
  4. Shikami

    Shikami Gawd

    Messages:
    640
    Joined:
    Apr 5, 2010
    The way I see the block diagram is that there is the obvious x4 from the processor for NVMe: one M.2 that is provided directly and cannot be routed through the southbridge. Then there is an x4 slot/lanes off the chipset that can be routed to an M.2, but this M.2 can use either the SATA or NVMe protocol.

    Am I seeing something off?
     
  5. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,481
    Joined:
    Feb 9, 2002
    No, you see it correctly. The PCIe lanes that go direct to the CPU for M.2 devices aren't really applicable to the chipset, as you mentioned. I didn't mean to indicate otherwise. It's one of the slots off the chipset, which shares bandwidth with the SATA ports.
     
  6. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    The only place this is relevant at launch is in the uplink bandwidth between the CPU and the X570 chipset, which apparently retains the same number of lanes (4) but now has its bandwidth doubled due to Gen4.

    As of today, there are no PCIe Gen4 expansion cards. Stick a gen3 expansion card in a gen4 slot, and it still operates as a Gen3 device.

    Long term I'm sure it will wind up being useful, but not quite yet.
     
    Sulphademus likes this.
  7. MMitch

    MMitch Gawd

    Messages:
    764
    Joined:
    Nov 29, 2016
    Would have been nice if they had artificially split Gen4 lanes into 2x Gen3 lanes.
     
  8. Galvin

    Galvin 2[H]4U

    Messages:
    2,695
    Joined:
    Jan 22, 2002
    I have two M.2 drives. The first one gets full bandwidth, around 3GB/sec, but the second one is bandwidth limited by 50% or so. That's on my X79 chipset, using M.2 PCIe adapters from Asus.
    Hoping the X570 chipset fixes the bandwidth limitation.

    (Using two adapters since X79 can't run more than one device per PCIe slot.)
     
  9. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    Well, the chipset supposedly does that to a certain extent. It has 40 lanes in total, four of which are used for CPU uplink. So it takes the bandwidth of 4x Gen4 lanes and shares that over 36 chipset lanes, some/many of which are used by onboard devices.
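
    A quick sketch of what that sharing means in practice, using the lane counts quoted in this post and a purely hypothetical set of downstream devices:

    Code:
    # Everything behind the chipset contends for the x4 Gen4 uplink to the CPU.
    PER_LANE_GBPS = {3: 0.985, 4: 1.969}   # approx GB/s per lane, per direction

    uplink_gbps = 4 * PER_LANE_GBPS[4]     # x4 Gen4 uplink, ~7.9 GB/s

    downstream = {                          # hypothetical devices hung off the chipset
        "NVMe SSD, x4 Gen4": 4 * PER_LANE_GBPS[4],
        "NVMe SSD, x4 Gen3": 4 * PER_LANE_GBPS[3],
        "10GbE NIC, x1 Gen3": 1 * PER_LANE_GBPS[3],
    }

    demand = sum(downstream.values())
    print(f"Uplink capacity: ~{uplink_gbps:.1f} GB/s")
    print(f"Worst-case downstream demand: ~{demand:.1f} GB/s")
    print("Oversubscribed" if demand > uplink_gbps else "Fits")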
     
    N4CR likes this.
  10. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    You'd need PCIe bifurcation support for that. It doesn't exist on older boards, and even on newer boards it's usually an undocumented hit-or-miss feature (like ECC or VT-d on Intel boards).

    You can still do it, but you need an active adapter with a PLX or similar chip on it. Most of these are expensive ($400-$500), but in another thread I was recently discussing this in, Thevoid230 suggested this Addonics board which is only $155. The specs say nothing about which switching chip it uses (but there is clearly something hiding under that heatsink). No idea if it is any good.
     
  11. N4CR

    N4CR 2[H]4U

    Messages:
    3,834
    Joined:
    Oct 17, 2011
    Well said, but I'm sure there will be PCIe4 'dongles' that allow e.g. two PCIe3 devices to be plugged in down the road, as you mentioned in some regard in post #50.
    Will make for some interesting expansion options. As you pointed out, I'm just glad for the extra bandwidth before jumping onto NVMe at rock-bottom prices in the upcoming Zen2 build.
     
  12. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    I doubt this. It would require a new Gen4-to-Gen3-capable PCIe switch.

    There is/was essentially only one player in the PCIe switch market, PLX Technology. They were bought by Broadcom in 2014, which then jacked up their prices, and they now only serve the enterprise storage market.

    This is why most active PCIe-to-M.2 splitters/adapters now start at ~$500.
     
  13. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    530
    Joined:
    Jan 14, 2012
  14. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000

    My understanding is that you probably get Gen4 support on the direct-to-CPU lanes no matter what, but that the links to the chipset will remain on Gen3 on older chipsets.

    I could be wrong though.
     
  15. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    You'd need a PLX chip to do that, and as noted upthread the monopolist making them - Broadcom - is only interested in selling them to server makers at Enterprise rates too high for any but the most stupidly expensive boards to support.
     
  16. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    It's probably going to be slightly worse than that. Due to shorter maximum signal lengths, it will probably only be the top x16 slot that can support PCIe4. Even if the second one is wired to the CPU, it will be far enough away that a redriver chip would be needed in the middle to relay the signal and stay in spec.
     
  17. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    That makes sense.

    I'm not well read enough on the Gen4 standard and its trace-length capabilities.
     
  18. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    530
    Joined:
    Jan 14, 2012
    I had heard something about signaling issues, but I'm way out of my league on that one. What I do know is that the new 500 series chipsets were cooking the motherboards and they had to apply massive heatsinks and active cooling.

    Makes me wonder if enabling it on my B450 Board will turn the MoBo into a puddle of plastic... lol. Prolly not if it's just the traces to the CPU and only 4 lanes, but... That would be bad :eek:
     
  19. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    I haven't been able to find the final spec numbers anywhere, and pre-release discussions are all over the place. EE Times Asia captured the full range in one spot though, with the spec authors suggesting up to 12 but board makers only getting 3 or 4, while noting that board design upgrades could increase the distance but at a cost of $100-300 per mobo (if those are server-class boards, significantly less for smaller consumer mobos).

    PCIe5 is even worse. I can't find it now, but I've seen speculation in (IIRC) Anandtech articles that implementation costs and max trace lengths would make it a server-only product indefinitely.


    https://www.eetasia.com/news/article/18061502-pcie-45-higher-bandwidth-but-at-what-cost
     
  20. sirmonkey1985

    sirmonkey1985 [H]ard|DCer of the Month - July 2010

    Messages:
    21,450
    Joined:
    Sep 13, 2008
    If any did, it was probably the Asus Crosshair boards and maybe the ASRock Gaming Pro board... most of them had the single Intel NIC though.
     
  21. serpretetsky

    serpretetsky [H]ard|Gawd

    Messages:
    1,697
    Joined:
    Dec 24, 2008
    Microsemi has a line of PCIe switches as well. The prices are still in the $500 range though :(
     
  22. ccityinstaller

    ccityinstaller [H]ardness Supreme

    Messages:
    4,232
    Joined:
    Feb 23, 2007

    Going to do some refreshing on this... I know all the big players plan to ramp PCI-E 5.0 much, much faster than the 3.0 to 4.0 transition. We should probably see it with Intel's new stuff on their first DDR5 PROSUMER platform.

    I think X570 IS going to get Zen 2 and possibly Zen 2+ next year, and then we see AM5 with DDR5 AND maybe 5.0 with Zen3 or most likely Zen3+. That's just a WAG for now.
     
    wolfofone likes this.
  23. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    Yes, it sucks in a certain regard. However, Ryzen 2000 and 3000 support unregistered ECC, which of course costs more per GB than registered/buffered.
     
  24. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    We can't even utilize the full bandwidth of 3.0 with 2080 Tis. How are we expected to utilize the bandwidth of 5th-generation lanes when we're only just now saturating 2nd generation?
     
  25. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    PCIe5 is being driven by the need for stupidly huge amounts of bandwidth in servers, not by anything consumer related. Flash-based storage servers will be able to connect 4x as much drive bandwidth as before. Due to RAID/etc rebuild times, that will probably mean packing drives into a storage server like HDDs, because they'd mostly only use an x1 connection instead of an x4, although I wouldn't be surprised to see crazy-fast x4 drives for systems that only need limited amounts of storage.

    And while GPUs for gaming never come close to maxing out an x16 slot, Hollywood-level rendering can use texture amounts far above what can fit in a GPU's RAM (thus the pro cards with integral SSDs from a few years back) and would benefit from having 4x as much bandwidth to system RAM. The same goes for some compute loads, both on GPUs and with specialized accelerator cards. Future 40/100 gigabit networking cards will benefit from needing fewer PCIe lanes to connect, and if/when 10GbE finally gets cheap enough to go mainstream on consumer boards, using fewer lanes will make it easier to integrate on kitchen-sink-equipped mobos.
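
    As a rough feel for the lane savings (standard per-lane figures; the 100GbE NIC case is just an example):

    Code:
    PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # approx GB/s per lane, per direction

    def lanes_needed(gbps, gen):
        """Smallest standard link width (x1..x16) that carries `gbps`."""
        for width in (1, 2, 4, 8, 16):
            if width * PER_LANE_GBPS[gen] >= gbps:
                return width
        return None

    # A single Gen5 lane carries roughly what four Gen3 lanes do today:
    print(f"x1 Gen5 ~{PER_LANE_GBPS[5]:.1f} GB/s vs x4 Gen3 ~{4 * PER_LANE_GBPS[3]:.1f} GB/s")

    # A 100GbE NIC needs ~12.5 GB/s of host bandwidth:
    for gen in (3, 4, 5):
        print(f"100GbE on Gen{gen}: needs an x{lanes_needed(12.5, gen)} link")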
     
  26. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    Yeah I, like many others, forget about the enterprise. We get our heads so wrapped up around consumer all the time.
     
  27. DanNeely

    DanNeely 2[H]4U

    Messages:
    3,495
    Joined:
    Aug 26, 2005
    It's a big part of why PCIe stagnated at 3.0 for so long. 3.0 was fast enough for everything consumers needed, and until PCIe SSDs went mainstream there wasn't a huge need in enterprise either. Now that the latter has changed, they've slammed out 2 new versions back to back, and if the latest leaked roadmap can be trusted, Intel will have 5.0 on Xeons in 2021, only a year after they roll out 4.0 for their servers.
     
  28. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,346
    Joined:
    Jun 13, 2003
    It's 'cheap enough' for the motherboards, but switches are still at a huge premium over 1Gbit, and generally CAT6 is needed over CAT5e, with CAT6a for longer-distance runs.

    That, and there's very little real utility for 10Gbit outside of servers.
     
  29. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    Or fiber like I run. But yeah, switching is still ridiculously expensive.
     
  30. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,346
    Joined:
    Jun 13, 2003
    Same difference, really: prices of 25/40/50/80/100Gbit gear are coming down too, and you're not really going to get >10Gbit over copper, but anything over 10Gbit is cost-prohibitive in terms of switching. Even datacenter pulls are stupid expensive; new equipment could cover a nice mortgage with a handful of units.

    And the reality is that we're going to get 'multi-gig' 2.5Gbit and 5Gbit instead, which can negotiate over CAT5e.
     
  31. Joust

    Joust 2[H]4U

    Messages:
    2,879
    Joined:
    Nov 30, 2017
    2.5 and 5Gb in enterprise deployment? I'd disagree. Much more likely to have multiple 10- and 20-gig deployments using InfiniBand or similar.

    A 10gig network is my desired next stop. I saturate my 1gig network at home with ease. I think I'll investigate LACP before taking the plunge, though, due to the switching cost.
     
  32. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    You can do things quite affordably as long as you are not opposed to used Enterprise hardware.

    I recently bought two Intel X520 dual-port PCIe adapters for ~$100 each on eBay. Each of them included two 10GBase-SR SFP+ transceivers.

    I also bought an Aruba S2500-48T switch (48 gigabit ports, 4 10Gig SFP+ ports) for only $125 shipped, and picked up a lot of two compatible Finisar SFP+ SR transceivers for $20 shipped.

    Then all I needed to do was buy the fiber (I got that new).

    Now I have a direct link from my desktop to my NAS server, as well as links from my server to my switch and from my desktop to my switch, all at 10gig speeds. It seems much better at sustaining high file-transfer speeds than my old 10GBase-T copper 82598EB adapters, which I used in a direct-link-only configuration between my NAS and desktop.

    Anyway, I digressed a little, but my point was you CAN do this somewhat affordably.
     
    Last edited: May 22, 2019
    IdiotInCharge likes this.
  33. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    Enterprise is still predominantly Gigabit wired Ethernet or WiFi to the client. Faster standards are primarily used for linking between switches and linking to servers.

    In reality, the need for faster than Gigabit between clients is limited in most scenarios. There are exceptions, but most of the time it simply is not needed when all you are doing is sending and receiving Word and Excel documents that are a few KB each.
     
  34. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,346
    Joined:
    Jun 13, 2003
    On the desktop, but with respect to enterprises, 2.5Gig and 5Gig are coming for access switching, both for desktop clients and for stuff like WAPs. Cisco and Ubiquiti at least are already selling WAPs with 2.5Gig ports. We're also seeing high-end consumer router equipment with two 2.5Gig ports as well, primarily for hooking up a NAS, with the second port for a workstation.
     
  35. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    You know, honestly, consumers don't need a switch. Most installs are a NAS with a 10G card and a desktop. You can just run a direct CAT6 cable between the two hosts and it will work fine. Like, my X399 board has an Aquantia 10Gb NIC; if I bought another NIC for 50 bucks on eBay I could use those.

    But in my case I'm using Intel XFSR cards into a Cisco 4948-10GE switch. When I bought the cards years ago they were about $120 each. I have 4 of them. I'm going to get a UniFi 10gig switch soon anyway and just use the SFP+ ports and mix and match between copper and fiber.
     
  36. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,346
    Joined:
    Jun 13, 2003
    Well, in general, consumers don't need 10Gig, but I think we established that. For those that do, a switch is preferable, primarily because it allows more than one client to access storage or share other resources at the higher linespeed. I will also concede that for single users or limited uses, having a 10Gbit or other line direct to a workstation and then having a 1Gbit line to a switch is absolutely fine.

    A note about Unifi, if you're not already aware: almost no Unifi gear supports layer 3 controls, which means that all routing, no matter how simple, must go through a router, and with Unifi, you generally want that to be a USG. Those fall into two categories, which are expensive and slow, and really, really expensive.

    Get an Edgeswitch instead, or go Mikrotik or something else.
     
  37. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,502
    Joined:
    Dec 18, 2010
    Nah... I'm an ex-CCNA who went into medical/biology. I have no need for layer 3 switching at those rates. My UniFi USG Pro rackmount can route layer 3 at full 1Gbps line speed. I have no need for multiple subnets/VLANs between 10Gb hosts. My usage scenario, of course; YMMV.

    If I wanted an affordable 10G line-speed router I'd just build another pfSense box. It will absolutely crush an enterprise boxed solution in every metric if you put the right hardware in it. I've always loved pfSense and, when given the opportunity, recommended it over Cisco any day of the week. Except for critical enterprise, where I'd stick with Cisco or Juniper simply for the support-level agreements.
     
    IdiotInCharge likes this.
  38. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,307
    Joined:
    Oct 29, 2000
    My recently acquired Aruba S2500-48T has Layer 3 capability, but I have no idea what I'd actually use it for in a home environment. Connecting various VLANs with restricted access?
     
  39. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,346
    Joined:
    Jun 13, 2003
    Well, theoretically, in an IoT-infested environment with both untrusted users (family members and guests) and provision for 'guest' internet access, while providing significant local network services (NAS, streaming, surveillance hub, automation), you'd want something along the lines of seven or eight VLAN/subnet pairs, each with different levels of access to various services.

    I'm still trying to really wrap my head around this as it's just one of the projects I work on at home, but it's also something that I expect consumer-oriented manufacturers to really start approaching and tackling.
     
  40. Joust

    Joust 2[H]4U

    Messages:
    2,879
    Joined:
    Nov 30, 2017
    Man, you're describing a very complex home network. Some people use setups like this. Others say screw it, it's not worth all the hassle.
     
    wolfofone and IdiotInCharge like this.