Any Z390 Mobos that support 16x16x SLI/CFX? Bottleneck at 8x/8x PCI-E 3.0?

Discussion in 'Motherboards' started by edo101, Mar 20, 2019.

  1. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
    Title pretty much. I've been out of the game for a while and was surprised that most mobos hosting processors like the 9900K don't have x16/x16 SLI/CFX capability.

    I thought Z390 was high end, given the prices? I'm on X58 and it can do PCIe 2.0 x16/x16. It was cheaper back in 2010, too.

    Does x16/x16 matter for PCIe 3.0, or is x8/x8 enough?

    For the record, I'm asking for future-proofing. I might decide to pick up two gfx cards. Most likely not, but once I upgrade from my X58, I'm hoping the upgrade will last me another 8 to 9 years like the X58 did.
     
  2. Spartacus09

    Spartacus09 Gawd

    Messages:
    769
    Joined:
    Apr 21, 2018
    Z390 is just the recent chipset for the 9th-gen processors; it's the high-end mainstream tier, not the professional/enthusiast tier.
    The X58 platform compares to socket 2066, not 1151.

    That being said, Asus claims this board does dual x16, but it's $380: https://www.asus.com/us/Commercial-Servers-Workstations/WS-Z390-PRO/
    Your limitation is generally the 9900K and the number of lanes it can provide, not the motherboard.
     
  3. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
    Would it be able to handle x16/x16 with an NVMe drive? I'm very far out of the loop; I don't even know how to do the PCIe lane calculations.
     
  4. Spartacus09

    Spartacus09 Gawd

    Messages:
    769
    Joined:
    Apr 21, 2018
    It should, according to the manual, though other SATA/PCIe or U.2 ports might be disabled to run it.
    What's your use case that you need dual x16 3.0 lanes?
     
    nEo717 likes this.
  5. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
    Two GPUs, like say two 1080 Tis or two 2080 Tis.
     
  6. Spartacus09

    Spartacus09 Gawd

    Messages:
    769
    Joined:
    Apr 21, 2018
  7. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
  8. Spartacus09

    Spartacus09 Gawd

    Messages:
    769
    Joined:
    Apr 21, 2018
    Normally no. Most boards link the NVMe and SATA lanes (such as enabling the 2nd NVMe slot disabling SATA ports 5/6 on the board); some share PCIe lanes, but it's generally the x4 slots that are affected, not the x16 or x8 ones.
    No offense, but with only two cards you don't really have a lot to worry about, and if you are worrying about these things (3x or 4x GPUs), then you should be looking at Threadripper or socket 2066, which do have the extra lanes.
     
  9. dexvx

    dexvx [H]ard|Gawd

    Messages:
    1,031
    Joined:
    Aug 14, 2002
    You realize any LGA 1151 CPU will only have 16 PCIe 3.0 lanes? That board you linked uses a PLX PCIe switch. So while both GPUs can link up at PCIe x16 (to the PLX switch), you're still going to get capped at PCIe x16 to the CPU.
     
  10. Spartacus09

    Spartacus09 Gawd

    Messages:
    769
    Joined:
    Apr 21, 2018
    :shrug: He asked for a board with dual x16, so I provided one; his use case doesn't really need dual x16 anyway (in my opinion).
     
  11. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
    You mean it can only do one x16 at the end of the day? I thought the PLX was supposed to allow it to do two 3.0 x16 links? If x8/x8 won't create much of a bottleneck, then I'm fine getting a cheaper board.
     
  12. dexvx

    dexvx [H]ard|Gawd

    Messages:
    1,031
    Joined:
    Aug 14, 2002
    The PLX is a PCIe switch. This is similar in concept to a networking switch. If you have 3x GbE ports with connected clients, your total aggregate non-blocking throughput would be 6 Gbps (bi-directional). However, if you are just sending traffic to one client (e.g. the CPU), then you're limited to 2 Gbps non-blocking.
     
  13. edo101

    edo101 Limp Gawd

    Messages:
    140
    Joined:
    Jul 16, 2018
    lol, still don't understand.

    Are you really, at any given time, only getting x8/x8 with that PLX switch?
     
  14. dexvx

    dexvx [H]ard|Gawd

    Messages:
    1,031
    Joined:
    Aug 14, 2002
    Ok...

    GPU 1 > 16x to PLX
    GPU 2 > 16x to PLX
    CPU > 16x to PLX

    At any given time, only x16 PCIe traffic can go to/from any *single* device connected to the PLX switch. But all 3 devices talking to each other simultaneously can, in theory, do x48 PCIe speeds.
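The lane math above can be sketched in a few lines of Python (a back-of-the-envelope, assuming ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane after 128b/130b encoding; real-world throughput is a bit lower once protocol overhead is counted):

```python
# Rough PCIe 3.0 bandwidth math for the PLX topology above (a sketch,
# not a measurement; the per-lane figure is the usual post-encoding
# approximation and ignores packet/protocol overhead).

GBPS_PER_LANE_GEN3 = 0.985  # usable GB/s per PCIe 3.0 lane, one direction

def link_bw(lanes):
    """One-direction bandwidth of a PCIe 3.0 link with `lanes` lanes."""
    return lanes * GBPS_PER_LANE_GEN3

# Each device has an x16 link into the PLX switch:
gpu1 = link_bw(16)  # ~15.8 GB/s GPU1 <-> PLX
gpu2 = link_bw(16)  # ~15.8 GB/s GPU2 <-> PLX
cpu = link_bw(16)   # ~15.8 GB/s CPU  <-> PLX

# Peer-to-peer GPU1 <-> GPU2 traffic never touches the CPU link, so in
# theory all three links can be busy at once:
aggregate = gpu1 + gpu2 + cpu  # ~47.3 GB/s of total switch fabric
# But anything headed to system RAM funnels through the single CPU link:
to_cpu_cap = cpu               # ~15.8 GB/s no matter how many GPUs
print(f"fabric: {aggregate:.1f} GB/s, CPU uplink cap: {to_cpu_cap:.1f} GB/s")
```

So the "x48" figure only materializes when the GPUs talk to each other through the switch; traffic to the CPU is always capped at one x16 link's worth.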
     
  15. deruberhanyok

    deruberhanyok [H]ard|Gawd

    Messages:
    1,347
    Joined:
    Aug 22, 2004
    edo101, basically the slots will act like they have x16 bandwidth, and all of that will go to the PLX chip (so you could have x32 going into the PLX chip), but it still only has an x16 link back to the processor. So no matter how many "x16" slots are available, the effective bandwidth between them and the CPU is still only going to be x16.

    It doesn't actually matter for regular use - by the time x8 3.0 becomes a real limitation, multi-GPU will probably be obsolete anyway.

    If you really want true x16/x16 capability you need an HEDT platform - socket 2066 (X299 chipset) for Intel or sTR4 (X399 chipset) for AMD. Your existing X58 chipset is an HEDT platform.

    The closest equivalent to Z390 at the time you got your X58 setup would probably have been the P55 chipset.
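For a sense of the raw numbers behind "x8 3.0 is rarely a limitation," here's a rough sketch of what the CPU's 16 lanes give each card without a switch (the per-lane figure is the usual ~0.985 GB/s PCIe 3.0 approximation, an assumption for illustration):

```python
# Splitting a mainstream CPU's 16 PCIe 3.0 lanes across GPUs - a sketch.

GBPS_PER_LANE_GEN3 = 0.985  # usable GB/s per lane after 128b/130b encoding

def per_card_bw(cpu_lanes, n_gpus):
    """Lanes the CPU can give each card directly, and the resulting GB/s."""
    lanes = cpu_lanes // n_gpus
    return lanes, lanes * GBPS_PER_LANE_GEN3

for n in (1, 2):
    lanes, bw = per_card_bw(16, n)
    print(f"{n} GPU(s): x{lanes} each, ~{bw:.1f} GB/s per card")
```

Even at x8 each card still gets roughly 7.9 GB/s, which is why reviews keep finding x8/x8 within a few FPS of x16/x16.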
     
  16. eclypse

    eclypse 2[H]4U

    Messages:
    2,992
    Joined:
    Dec 7, 2003
    Just go with a single 2080 Ti card. SLI is kinda dead ATM with newer games.. only good for benchmarking ATM.. and that's only good for a few passes.

    If you go with sli then you'll want water cooling as well. So added headaches and money.

    This is coming from a guy that always did sli since sli was born. 2x,3x and 4x when possible.

    If the system is for gaming you'll only need the fastest GHz CPU you can get, 5+ GHz.. think 8700K (6-core), 9900K (8-core). That's your future-proofing, with more than needed cores.

    The 2080 Ti will do most of the work past 1440p screen res, so 4K will be no prob as well.

    You only need SLI for 4K above 60Hz (most 4K screens are limited to 60Hz). There's like 1 or 2 screens currently that can do 144Hz at 4K. That would be a reason for SLI, but those screens are like $1600 bucks.

    Ultimate gaming system would be:

    9900K / 32GB 3200MHz DDR4 / 2080 Ti / 1000-1200W PSU / 34" ultrawide 1440p 120Hz monitor.

    Slap a 500GB/1TB M.2 drive in there for OS/games, a 1TB SSD for games, and a spinner for files/backup, and fly.

    Custom water loop/AIO, cause the 9900K and 2080 Ti are HOT.
     
  17. dexvx

    dexvx [H]ard|Gawd

    Messages:
    1,031
    Joined:
    Aug 14, 2002
    I would say it's best to de-aggregate your gaming system and storage needs. I do agree that multi-GPU setups are just not good for 90%+ of use cases. I mean, it's great for taking photos and showing off to your friends.

    Current ultimate gamer (IMO), similar to eclypse
    i9-9900K, 32GB DDR4-3200/CL15, RTX 2080 Ti, Maximus XI Apex (if you're serious about O/C) or Maximus XI Extreme/Gigabyte Aorus Extreme, 750-850W Titanium PSU, 1 Samsung 970 Pro or 2TB Samsung 970 EVO

    If you need storage, just build a secondary storage box. No point in cluttering up your main system.
     
  18. eclypse

    eclypse 2[H]4U

    Messages:
    2,992
    Joined:
    Dec 7, 2003
    ;)

    My system I'm building/upgrading right now:

    9900K <- going to delid today, and I have a direct-die mount for it. Gonna be fun.

    Asus z390 formula/extreme XI. Leaning extreme as I have both new in box right now.

    G.Skill 32GB 4266MHz, 4x 8GB. Doubling up from 16GB in the old system. Was willing to just go with 2x 16GB 3200MHz, but it was cheaper to just buy 2 more 8GB sticks.

    EVGA 2080 Ti FTW3 Ultra with an EVGA block I gotta put on today as well! Fun fun haha

    2 separate loops with new EK XE 480 rads, push/pull Corsair SP120 HP fans.

    EK RGB Velocity CPU block. New.

    EVGA SuperNOVA G2 1300W PSU. New.

    Thermaltake Tower 900 case with 5 new red Corsair ML140 Pro fans.

    Alienware 34" G-Sync 1440p ultrawide 120Hz monitor.

    In a perfect world I'd be testing the water loop tonight, but I doubt it haha.. first time going with hard lines, so gonna be a learning curve to deal with.

    If I was 20-something.. this build would be done in 24 hours. Being old (46).. sucks!

    I'll be tired after doing the GPU block. :/ haha
     
  19. Dan_D

    Dan_D [H]ardOCP Motherboard Editor

    Messages:
    53,114
    Joined:
    Feb 9, 2002
    It's really simple. The CPU has an integrated PCIe controller which offers a maximum of 16 PCIe lanes. This cannot be changed, altered, or added to. A PLX chip is a PCIe switch that uses all 16 PCIe lanes from the CPU for communication. The PLX chip has 32 PCIe lanes itself, but when communicating with the CPU, it's limited by the CPU's 16 PCIe lanes. It's like using a network switch that goes out to 16 PCs. You still only have a single GbE Internet connection. You don't have 16 GbE of bandwidth; you have 1 GbE of bandwidth to the Internet, because you end up throttled by the shared connection, the smallest pipe in the system.

    It's the same thing here. There are 16 PCIe lanes on the CPU and you're adding a splitter which doubles the lane count for expansion cards, but ultimately you are stuck with the 16 PCIe lanes the CPU offers. For graphics cards, it's been tested and shown many times by many sites that x8/x8 is virtually indistinguishable from an x16/x16 configuration. There are very few cases where any difference can be seen, and most of the time it's only 1-3 FPS and only something you see in benchmarks.

    The PLX chip also adds latency to the system, which robs you of 1-3 FPS. We've seen this consistently from the NF100 days up to today's modern PLX chips. We've seen it from the Pentium D through the Core i9s and everything in between. A PLX chip just isn't worth it. The only thing they do is add flexibility to the PCIe slot layout and the ability to run devices at full speed in any slot you like, depending on how it's configured. In fact, the latency problem was recognized well enough that motherboard manufacturers would offer a single x16 PCIe slot which bypassed the PLX to avoid the latency issue. They only recommended using slots that were tied to the PLX for multi-GPU configurations.
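The "smallest pipe" point boils down to taking the minimum capacity along the path, as in this illustrative sketch (the 15.8 GB/s figure is the usual approximation for an x16 PCIe 3.0 link):

```python
# Dan_D's "smallest pipe" rule as a one-liner: sustained end-to-end
# throughput through a chain of links (GPU -> PLX -> CPU, or
# PCs -> switch -> internet) is the minimum capacity along the path.

def path_throughput(*link_caps):
    """Max sustained throughput (GB/s) through a chain of links."""
    return min(link_caps)

# Two GPUs at x16 into the PLX, but only one x16 uplink to the CPU:
plx_uplink = 15.8              # ~GB/s for an x16 PCIe 3.0 link
demand_from_gpus = 15.8 * 2    # both cards flooding the uplink at once
print(path_throughput(demand_from_gpus, plx_uplink))  # capped at the uplink
```

No matter how much the GPU side of the switch can offer, the shared CPU link sets the ceiling.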

    Hope that helps.
     
    Bcc335 and dawsonkm like this.