What can I do with a PCI-E 4.0 x1 slot?

The ASUS B650E-F that came with the MicroCenter 7900X bundle (not my choice, I would have gotten something else) only has two full-sized PCIe x16 slots; the other two are 4.0 x1 (and it looks like those are mostly lost if you get a large GPU). I have a 10G NIC that will go in the 2nd x16 slot, so I'm not sure what use the other x1 slots are.

Any ideas for PCIe cards that utilize the full bandwidth of a 4.0 x1 slot? Are there any?
 
Anything. A PCIe 4.0 x1 slot is a whole lot of bandwidth and could be used for just about any PCIe card with minimal real-world bottleneck.

GPUs would be fine, a regular NVMe drive would work fine, etc., so the question is really: what PCIe card do you want?
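For reference, here is the rough per-lane math (a quick Python sketch; these are line rates after 128b/130b encoding, so real-world throughput lands a bit lower once packet overhead is counted):

```python
# Rough per-lane PCIe throughput: line rate x 128b/130b encoding.
# Real numbers land a bit lower once TLP/DLLP packet overhead is counted.
LINE_RATE_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32}

for gen, gt in LINE_RATE_GT_S.items():
    gbit = gt * 128 / 130                      # usable Gbit/s per lane
    print(f"PCIe {gen} x1: ~{gbit:.2f} Gbit/s (~{gbit / 8:.2f} GB/s)")
# PCIe 4.0 x1 works out to ~15.8 Gbit/s, i.e. just under 2 GB/s each way.
```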

Problem is a lot of cards won't fit in an x1 slot. But I was thinking more of a 10G NIC, and there aren't any 4.0 x1 ones I can think of, so I could use the second x16 slot for a TB4 card (which requires at least x4).
 
Problem is a lot of cards won't fit in an x1 slot. But I was thinking more of a 10G NIC, and there aren't any 4.0 x1 ones I can think of, so I could use the second x16 slot for a TB4 card (which requires at least x4).
If it works for your build, a PCIe ribbon riser (an unpowered one) works very well for running any card in an x1 slot. The "USB" style ones may even let you place the card in the cable-management part of the case instead.

For physical x1 cards you would be limited to USB, audio, non-10-gig Ethernet, or an NVMe adapter.
You can just cut out the rear wall of the PCIe slot to allow a connector of any other size to fit.
This works really well; a hot razor blade makes it easy.
 

For example, my NIC is a single-port ConnectX-3, which is an x4 card. That's not going to fit in an x1 slot.
Sure it will, if the x1 slot has an open back, either because it came that way (which would be nice, but unusual), because you opened it up with force or heat (which I wouldn't do because I know I'm going to mess it up), or because you used a riser.

Looks like that card runs at PCIe 3.0, and PCIe 3.0 x1 doesn't quite have the throughput for 10G, but it's pretty close.
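Quick sanity check on that (a Python sketch; the efficiency factors are ballpark assumptions, not measurements):

```python
# Ballpark comparison of a 10GbE NIC forced down to PCIe 3.0 x1.
# The efficiency factors below are assumptions, not measured values.
pcie3_x1_encoded = 8 * 128 / 130             # ~7.88 Gbit/s after 128b/130b encoding
pcie3_x1_usable = pcie3_x1_encoded * 0.90    # assume ~10% lost to TLP headers etc.
eth10g_goodput = 10 * 0.94                   # ~9.4 Gbit/s typical TCP goodput on 10GbE

print(f"PCIe 3.0 x1 usable: ~{pcie3_x1_usable:.1f} Gbit/s")
print(f"10GbE goodput:      ~{eth10g_goodput:.1f} Gbit/s")
```

So a 10G NIC behind a 3.0 x1 link tops out somewhere around 7 Gbit/s instead of ~9.4, which is "pretty close" for most home NAS traffic.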
 
Sure it will, if the x1 slot has an open back, either because it came that way (which would be nice, but unusual), because you opened it up with force or heat (which I wouldn't do because I know I'm going to mess it up), or because you used a riser.

Looks like that card runs at PCIe 3.0, and PCIe 3.0 x1 doesn't quite have the throughput for 10G, but it's pretty close.
I believe a 10G card on an x1 3.0 lane, as you pointed out, would be a bottleneck. Considering NIC compatibility, a 2.5G x1 NIC would probably be more ideal and feasible, if such a thing exists.
 
I believe a 10G card on an x1 3.0 lane, as you pointed out, would be a bottleneck. Considering NIC compatibility, a 2.5G x1 NIC would probably be more ideal and feasible, if such a thing exists.
Yeah, no 4.0 x1 NICs (yet) that I know of.

The riser is a good idea though. I don't need it to run at 10G exactly, but my transfers to my NAS typically burst above 2.5G to platter drives (and obviously faster to SSDs), which is why a 2.5G x1 card wouldn't be adequate for me.

I planned on eventually getting a TB4 card for some recording work I wanted to do, although ASUS's TB4 card looks kind of shitty and doesn't appear to do real multi-monitor output.
 
Yeah, no 4.0 x1 NICs (yet) that I know of.

The riser is a good idea though. I don't need it to run at 10G exactly, but my transfers to my NAS typically burst above 2.5G to platter drives (and obviously faster to SSDs), which is why a 2.5G x1 card wouldn't be adequate for me.

I planned on eventually getting a TB4 card for some recording work I wanted to do, although ASUS's TB4 card looks kind of shitty and doesn't appear to do real multi-monitor output.
There are almost no PCIe 4.0 NICs, period. And when they exist, they're targeted at multi-port 10Gb or 40Gb high-end cards. There's no reason to make an x1 card, as that only appeals to consumer boards that were too lazy to put an x8 slot on the board and bifurcate the lanes efficiently.

On the other hand, a PCIe 3.0 x1 dual 2.5Gb card exists, is well supported, and costs like $40. That would give you a direct NAS link, an extra Ethernet port, and a slight bottleneck on transfer speeds that may add a few seconds to worst-case transfers.

IMO, high-end consumer boards for both AMD and Intel are very disappointing for anything other than simple GPU-plus-NVMe-and-nothing-else configs. I've had a heck of a time trying to build a workstation out of a 3900X system (a VERY capable CPU for this), but the motherboard peripherals don't play nice under ESXi: only half the USB passes through, I can't use all the PCIe lanes, and there are plenty of intermittent issues, resulting in a system that I really can't use how I want. That's coming from enterprise boards and a previous X99 setup where everything worked, PCIe was abundant and stable, and all integrated peripherals worked. It makes me consider an enterprise solution for the next build, which is a shame, as the AM4 and AM5 CPUs are VERY good chips.

Could you run the GPU on an x16-to-x16 riser to free up an x16 slot? Funny enough, most GPUs would run plenty well in an x1 PCIe 4.0 slot over a ribbon riser.

Edit: I looked at the board and read your first post. Can you go with a quad-port 10Gb or enterprise 40Gb NIC in the x16 slot instead of using the x1?
 
The ASUS B650E-F that came with the MicroCenter 7900X bundle (not my choice, I would have gotten something else) only has two full-sized PCIe x16 slots; the other two are 4.0 x1 (and it looks like those are mostly lost if you get a large GPU). I have a 10G NIC that will go in the 2nd x16 slot, so I'm not sure what use the other x1 slots are.

Any ideas for PCIe cards that utilize the full bandwidth of a 4.0 x1 slot? Are there any?
USB ports, and don't bother with PCIe 4.

Give your high-Hz USB devices a separate dedicated chip each, no hubs!

Reduce the USB load on your motherboard's USB chip if you have lots of USB devices -- give your 1000-8000Hz mouse its own dedicated USB root chip. Even the extra overhead of having to take a PCIe lane pales in comparison to the USB contention of a USB-overloaded computer.

You can reduce your USB erratic-jitter by keeping your high-Hz keyboard on the motherboard's USB chip, and your high-Hz mouse on the PCIe USB chip. Spreading the high-pollrate workload really does wonders for improved USB performance. Especially if you also have other USB devices.

Sometimes all the ports on a cheap crappy motherboard are sharing the same USB root -- a single USB hub! Which makes gaming mice perform crappy / jittery on those specific motherboards. Unfixable until you add a PCIe USB card which only requires x1.

Your 1000-8000Hz mouse doesn't need PCIe 4. There are many earlier-generation PCIe x1 USB cards that make a 1000-to-8000Hz mouse work perfectly, with less CPU% consumed per poll and less timestamp jitter per poll.
 
USB ports, and don't bother with PCIe 4.

Give your high-Hz USB devices a separate dedicated chip each, no hubs!

Reduce the USB load on your motherboard's USB chip if you have lots of USB devices -- give your 1000-8000Hz mouse its own dedicated USB root chip. Even the extra overhead of having to take a PCIe lane pales in comparison to the USB contention of a USB-overloaded computer.

You can reduce your USB erratic-jitter by keeping your high-Hz keyboard on the motherboard's USB chip, and your high-Hz mouse on the PCIe USB chip. Spreading the high-pollrate workload really does wonders for improved USB performance. Especially if you also have other USB devices.

Sometimes all the ports on a cheap crappy motherboard are sharing the same USB root -- a single USB hub! Which makes gaming mice perform crappy / jittery on those specific motherboards. Unfixable until you add a PCIe USB card which only requires x1.

Your 1000-8000Hz mouse doesn't need PCIe 4. There are many earlier-generation PCIe x1 USB cards that make a 1000-to-8000Hz mouse work perfectly, with less CPU% consumed per poll and less timestamp jitter per poll.

This is all confusing to me because I have a B650E board and am sort of limited in lanes. For example, I have 4 NVMe drives, and the board shares bandwidth on some of the M.2 slots (there are only 3, so I'll probably buy a controller card), so I'm going to have to figure out how to properly allocate everything.

I’ll start my build tomorrow. I’m sure you’ll see me post plenty of questions.

Oh, and the motherboard has an onboard 2.5G NIC, and I have a 2.5G RJ45 SFP+ module around here somewhere. I guess I'll test it out. One thing you can say about the Mellanox ConnectX-3 is that it's rock solid; I have no idea what kind of controller comes on the board.
 
Anything. A PCIe 4.0 x1 slot is a whole lot of bandwidth and could be used for just about any PCIe card with minimal real-world bottleneck.

GPUs would be fine, a regular NVMe drive would work fine, etc., so the question is really: what PCIe card do you want?
I used to use it for a USB 3 header card. Now I use it for some USB 3 external ports for low-speed devices, like my UPS (for its battery-status software) or a secondary keyboard. But I was really pissed when I discovered that my ASUS ROG X670E board has only an x1 and not an x4 slot. From what I can tell, it's the same with MSI and Gigabyte boards. Boo on AMD here.
 
Huh? What about the electrical connectors, or the warranty?
Cards are supposed to work down to x1 electrical. For that Mellanox 10G card, running at PCIe 3.0 x1 means missing out on a few Gbps; you might get closer to 7-8 Gbps instead of what you should, but eh, close enough. Although that OWC card looks like a nice choice, too.

Warranty is probably shot, even though Magnuson-Moss may say otherwise if the defect wasn't related to the slot alteration. Really, there's no good reason for board makers to use closed-back slots, but they're very common (well, sometimes there are components mounted in line with the slot, but they could at least put in a mechanical x4 or x8 if an x16 card edge would hit something).
 
Cards are supposed to work down to x1 electrical. For that Mellanox 10G card, running at PCIe 3.0 x1 means missing out on a few Gbps; you might get closer to 7-8 Gbps instead of what you should, but eh, close enough. Although that OWC card looks like a nice choice, too.

Warranty is probably shot, even though Magnuson-Moss may say otherwise if the defect wasn't related to the slot alteration. Really, there's no good reason for board makers to use closed-back slots, but they're very common (well, sometimes there are components mounted in line with the slot, but they could at least put in a mechanical x4 or x8 if an x16 card edge would hit something).
Yeah, but I put this one on AMD for its reference design here.
 
Annoys me that I'm going to have to buy an extension cable with an open end in order to use an x1 port.

Also, I think one of the x1 ports shares bandwidth with an M.2 slot, and I know the 2nd x16 slot will be disabled if I use the third M.2 slot. Of course, I have 4 NVMe drives. This is what happens when you essentially get a free motherboard.
 
Annoys me that I'm going to have to buy an extension cable with an open end in order to use an x1 port.

Also, I think one of the x1 ports shares bandwidth with an M.2 slot, and I know the 2nd x16 slot will be disabled if I use the third M.2 slot. Of course, I have 4 NVMe drives. This is what happens when you essentially get a free motherboard.
If you have 4 NVMe drives, a GPU, and a network card, you shouldn't even bother trying to use the other PCIe lanes. You're asking for too much. Just upgrade the network card in your x16 slot to something excessive instead. This is more an issue with consumer PCIe lane allocation.
 
If you have 4 NVMe drives, a GPU, and a network card, you shouldn't even bother trying to use the other PCIe lanes. You're asking for too much. Just upgrade the network card in your x16 slot to something excessive instead. This is more an issue with consumer PCIe lane allocation.
?

The ConnectX-3 is an x4 card. I don't understand your other point.

I might sell it and try an X670E motherboard; I'd have more lanes available. I could also use two of my NVMe drives as cache in my Unraid server.
 
If you have 4 NVMe drives, a GPU, and a network card, you shouldn't even bother trying to use the other PCIe lanes. You're asking for too much. Just upgrade the network card in your x16 slot to something excessive instead. This is more an issue with consumer PCIe lane allocation.
What if you have only 2 NVMe drives installed, plus a GPU and one other x16 card? What then?

Do PCIe lanes get shared, or just allocated, presumably at boot-up? I'm not trolling. I honestly don't know.
 
What if you have only 2 NVMe drives installed, plus a GPU and one other x16 card? What then?

Do PCIe lanes get shared, or just allocated, presumably at boot-up? I'm not trolling. I honestly don't know.
PCIe lane allocation is motherboard-specific. Boards often say a slot is shared with an NVMe drive; the only things the board can do are either block the PCIe slot while the NVMe drive is in use, or bifurcate the NVMe down to x1 or x2 and reserve an x1 for the slot.

PCIe lanes get shared in a predefined manner. Enterprise motherboards often allow decent control over this, even allowing an x16 slot to be bifurcated into quad x4 or other configs. That's useful for running quad NVMe adapters in an x16 slot or other odd PCIe devices.

Consumer boards are much more often NVMe or PCIe slot, not both. The first 2 NVMe slots or so will have properly allocated lanes, and the rest will be shared (slot or NVMe, not both simultaneously). With 4 NVMe drives, trying to fill every single PCIe slot is asking for trouble, and best case you're running NVMe at x1. Motherboard limits are often not clearly defined. I used to run into this a lot trying to populate all PCIe slots with GPUs. Nope, the board doesn't like that, despite it not being defined as a limitation.
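If it helps, the arithmetic behind "asking for too much" looks roughly like this (a toy Python sketch; the 24-lane budget is an assumption for a typical AM5 CPU's slot/M.2 lanes, and the real CPU/chipset split and sharing rules are board-specific, so the manual is the source of truth):

```python
# Toy lane-budget tally; CPU_DIRECT_LANES is an assumed figure, not a spec quote.
CPU_DIRECT_LANES = 24

wanted = {
    "GPU (x16 slot)":    16,
    "M.2_1 NVMe":         4,
    "M.2_2 NVMe":         4,
    "M.2_3 NVMe":         4,
    "M.2_4 NVMe":         4,
    "10G NIC (x4 card)":  4,
}

total = sum(wanted.values())
print(f"Lanes wanted: {total}, CPU-direct budget: {CPU_DIRECT_LANES}")
if total > CPU_DIRECT_LANES:
    print("The overflow has to hang off the chipset uplink, get shared, or get disabled.")
```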
 
?

The ConnectX-3 is an x4 card. I don't understand your other point.

I might sell it and try an X670E motherboard; I'd have more lanes available. I could also use two of my NVMe drives as cache in my Unraid server.
Yeah, get a Mellanox (or other) quad 10Gb or quad 40Gb card instead to take care of the networking, rather than trying to support the ConnectX-3, the onboard NIC (whose drivers are normally garbage), and an x1 network card all shoehorned into the configuration.

The issue I see is with the 4 NVMe drives and the use of more than 2 PCIe slots. (Although the ConnectX-3 being x4 does help this cause.)
 
What if you have only 2 NVMe drives installed, plus a GPU and one other x16 card? What then?

Do PCIe lanes get shared, or just allocated, presumably at boot-up? I'm not trolling. I honestly don't know.
I think that's the main disadvantage of B650(E) boards: ultimately they have to share lanes because there aren't enough to go around.

I found this entry in the manual:

The M.2_3 slot shares bandwidth with the PCIe 4.0 x16 slot (supporting x4 mode). When the M.2_3 slot is operating in PCIe mode, the PCIe 4.0 x16 slot will be disabled.

And this:

Expansion Slots
  • 1 x PCIe 5.0 x16 SafeSlot [CPU]
  • 1 x PCIe 4.0 x16 Slot [Chipset]
  • 2 x PCIe 4.0 x1 Slot [Chipset]
I'm also pretty sure I read that two of the SATA ports share lanes with either the x1 slots or an M.2 slot.
 
I think that's the main disadvantage of B650(E) boards: ultimately they have to share lanes because there aren't enough to go around.

I found this entry in the manual:



And this:

I'm also pretty sure I read that two of the SATA ports share lanes with either the x1 slots or an M.2 slot.
That's disappointing to hear that the PCIe 4.0 x16 slot is disabled while the M.2_3 NVMe drive is in use, instead of being bifurcated into x4 groups so the slot could still be used for networking, etc.

It does sound like a lazy limitation of this B650E board. Be careful reading about these limitations on better boards, though. My AM4 experience left me with the impression that even "high-end" boards are garbage regarding PCIe allocation and onboard peripherals compared to past enterprise or HEDT solutions.
 
That's disappointing to hear that the PCIe 4.0 x16 slot is disabled while the M.2_3 NVMe drive is in use, instead of being bifurcated into x4 groups so the slot could still be used for networking, etc.

It does sound like a lazy limitation of this B650E board. Be careful reading about these limitations on better boards, though. My AM4 experience left me with the impression that even "high-end" boards are garbage regarding PCIe allocation and onboard peripherals compared to past enterprise or HEDT solutions.
I guess I am spoiled; I've had an X99 board all these years, with plenty of lanes. I think this is just the way ASUS handles their boards. I originally planned on buying a TUF Gaming X670E-Plus WiFi (I don't need the WiFi, but it's next to impossible to find the non-WiFi varieties of their boards), and apparently there are some similar strange limitations there as well.

I'll really take a close look at some boards over the next few days before I decide what to do. Totally disabling the 2nd x16 slot when using the 3rd NVMe slot seems pretty shitty to me.

 
You could always sell me the board and just buy whatever you actually wanted :D. Not all of us live near a MC.
 
You could always sell me the board and just buy whatever you actually wanted :D. Not all of us live near a MC.
I already put it up on CL and Marketplace. The problem with selling it online is that the shipping and commission (eBay) make it hardly worth my while. I at least want to break even.
 
This is all confusing to me because I have a B650E board and am sort of limited in lanes. For example, I have 4 NVMe drives, and the board shares bandwidth on some of the M.2 slots (there are only 3, so I'll probably buy a controller card), so I'm going to have to figure out how to properly allocate everything.
It's not critically important stuff to most people, but if you play competitively, it's a consideration.

I am not sure how many USB root hubs this motherboard has to handle its USB ports -- but some very basic motherboards literally merge all USB ports via internal root hubs into 2 USB links (bandwidth sharing for all ports). Some onboard stuff even shares contention with one of the hubs (e.g. some motherboards treat onboard Bluetooth, etc., as an embedded USB device instead of an embedded PCIe device).

It's been best practice not to put too many high-bandwidth / high-Hz devices on a single USB hub. USB hubs are often external boxes where you can plug four devices into one port as an example. Most people are more familiar with this. But what some don't realize, is that hubs are built into motherboards too. The port above/below, and sometimes the adjacent ports, are the same USB hub built into the motherboard, literally merging multiple USB ports into a shared one internally. Such motherboards with heavy USB sharing, generally will perform very crappy in esports, when you combine a bunch of high-Hz USB devices concurrently. When a motherboard manufacturer cheaps out on a motherboard, more things are shared resources, e.g. more things routed through a USB root hub, to save a few pennies.

Usually the rear USB2 port cluster and the rear USB3 port cluster are separate root hubs each, and the front ports are often yet another separate hub. But not always.

Most people overlook the metaphorical "USB lanes" (number of USB root hubs) when looking for extra PCIe lanes. Watch out for cheap motherboards that put all USB ports (plus some onboard stuff) on too few "USB lanes" (aka USB Root Hubs), as seen in Device Manager.

USB jitter (from excess USB sharing on a hub) affects competitive gameplay in a way worse than PCIe lane sharing: mouse stutters and mouse lag.
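If you want to check the sharing on your own machine, a sketch like this groups USB devices by bus, and each bus corresponds to one host controller / root hub (it assumes pyusb plus a libusb backend are installed; on Windows, Device Manager's "View > Devices by connection" shows much the same thing):

```python
# Sketch: list which USB devices share a host controller / root hub.
from collections import defaultdict
import usb.core  # pip install pyusb (needs a libusb backend)

by_bus = defaultdict(list)
for dev in usb.core.find(find_all=True):
    port_path = ".".join(str(p) for p in (dev.port_numbers or ()))
    by_bus[dev.bus].append((port_path or "root", dev.idVendor, dev.idProduct))

for bus, devices in sorted(by_bus.items()):
    print(f"Bus {bus}: {len(devices)} device(s)")
    for path, vid, pid in sorted(devices):
        print(f"  port {path}  {vid:04x}:{pid:04x}")
```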
 
It's not critically important stuff to most people, but if you play competitively, it's a consideration.

I am not sure how many USB root hubs this motherboard has to handle its USB ports -- but some very basic motherboards literally merge all USB ports via internal root hubs into 2 USB links (bandwidth sharing for all ports). Some onboard stuff even shares contention with one of the hubs (e.g. some motherboards treat onboard Bluetooth, etc., as an embedded USB device instead of an embedded PCIe device).

It's been best practice not to put too many high-bandwidth / high-Hz devices on a single USB hub. USB hubs are often external boxes where you can plug four devices into one port as an example. Most people are more familiar with this. But what some don't realize, is that hubs are built into motherboards too. The port above/below, and sometimes the adjacent ports, are the same USB hub built into the motherboard, literally merging multiple USB ports into a shared one internally. Such motherboards with heavy USB sharing, generally will perform very crappy in esports, when you combine a bunch of high-Hz USB devices concurrently. When a motherboard manufacturer cheaps out on a motherboard, more things are shared resources, e.g. more things routed through a USB root hub, to save a few pennies.

Usually the rear USB2 port cluster and the rear USB3 port cluster are separate root hubs each, and the front ports are often yet another separate hub. But not always.

Most people overlook the metaphorical "USB lanes" (number of USB root hubs) when looking for extra PCIe lanes. Watch out for cheap motherboards that put all USB ports (plus some onboard stuff) on too few "USB lanes" (aka USB Root Hubs), as seen in Device Manager.

USB jitter (from excess USB sharing on a hub) affects competitive gameplay in a way worse than PCIe lane sharing: mouse stutters and mouse lag.

Do "Z series" motherboards generally have more USB root hubs than the H and B series boards? How can you tell how many a motherboard has before you buy it?
 
Do "Z series" motherboards generally have more USB root hubs than the H and B series boards? How can you tell how many a motherboard has before you buy it?

Here's an anecdote from a few years ago.

There is an incredible jitter problem when you plug a high-Hz keyboard and a high-Hz mouse into the same USB root hub.

Currently, I do not study specific motherboards, but I can corroborate the "USB Roulette" example where you randomly try different USB ports (and USB/mouse drivers) while using MouseTester...

The USB Roulette Technique For 2x - 10x Jitter Improvement (In Certain Cases)

You can get a 10x jitter improvement sometimes! (proof below)

A Blur Busters fan contributed the imagery below.

So I can cherry-pick one motherboard: an EVGA Z390.

A major improvement on the EVGA Z390 was achieved just by moving ports (blue = keyboard, red = mouse).

While MouseTester is a poor analog for a heavily-loaded computer (CPU+GPU running concurrently), it can reveal jitter differences between different ports:

BEFORE: EVGA Z390 jitter between 990Hz and 1010Hz (20Hz of jitter amplitude)

Mouse and keyboard plugged into the same USB root hub, which also happened to be inferior at handling both:
[MouseTester screenshot]


AFTER: EVGA Z390 jitter between 999Hz and 1001Hz (2Hz of jitter amplitude)


Plugged into a better USB port, away from the high-Hz keyboard's USB root hub:
[MouseTester screenshot]


Note: Although he used a different CPI (400 vs 800), the improved accuracy allowed him to upgrade sensitivity to 800-1600cpi.

Another option is a PCIe 3 USB card, which produced exactly the same improvement -- for when you don't have enough congestion-free USB root hubs and congestion-free chipset processing. This is a cheap $20 purchase of a PCIe USB add-on card from Amazon that saves you from having to replace the motherboard due to USB limitations. Don't plug anything else into a 4-port PCIe USB card; plug ONLY your 1-to-8KHz-caliber mouse into it for the most efficient low-jitter, high-pollrate handling.

Try both the motherboard vendor's USB driver and the Microsoft-provided USB driver for the port your mouse is plugged into. Sometimes you will get less jitter and lower CPU utilization per poll cycle.

Although PCIe lane processing may (sometimes) produce extra overheads and IPC, it's usually a big improvement over the massive jitter problem of two high-Hz devices sharing the same USB root hub.

Even 1ms of jitter = a 10-pixel aim-trainer error during a 10,000 pixel/sec flick turn, so don't neglect those milliseconds if you play professionally. If you use 8KHz mice, fine-tune your pollrate to your sweet spot (usually ~2KHz-ish); there are quite noticeable benefits above 1000Hz, but too much pollrate bogs down the CPU and can re-add jitter.
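If you want to put numbers on your own capture, here's the same arithmetic as a small Python sketch (the function and the idea of a plain list of poll timestamps are illustrative; this is not MouseTester's actual export format):

```python
# Illustrative jitter math over a list of poll timestamps (in seconds).
from statistics import mean

def poll_jitter(timestamps, flick_speed_px_s=10_000):
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    nominal = mean(intervals)                        # ~0.001 s at 1000 Hz
    worst = max(abs(i - nominal) for i in intervals)
    return {
        "avg_poll_hz": 1 / nominal,
        "worst_jitter_ms": worst * 1000,
        # 1 ms of timing error during a 10,000 px/s flick is ~10 px of aim error
        "aim_error_px": worst * flick_speed_px_s,
    }
```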

Nonetheless, it was absolutely night and day -- a bigger difference than an ordinary computer upgrade, without needing an upgrade.

20Hz versus 2Hz of poll jitter, peeps!

Play your USB Roulette Properly. YMMV. But it works for many!
 
Very interesting. Thanks for sharing.

I ended up selling the board and buying a used Asus ProArt X670E-CREATOR WIFI off Amazon Warehouse for $350. I know buying a used board (particularly an expensive one like this) is a crapshoot but it's fine.

Have to say this is the nicest (and most expensive) motherboard I've owned. Even here there are restrictions on lanes, but the board is jam packed with features like an onboard 10G NIC and 2 TB4 ports so I don't have to add much of anything.
 
Very interesting. Thanks for sharing.

I ended up selling the board and buying a used Asus ProArt X670E-CREATOR WIFI off Amazon Warehouse for $350. I know buying a used board (particularly an expensive one like this) is a crapshoot but it's fine.

Have to say this is the nicest (and most expensive) motherboard I've owned. Even here there are restrictions on lanes, but the board is jam packed with features like an onboard 10G NIC and 2 TB4 ports so I don't have to add much of anything.
Yeah, that's a nice board, BUT. It would have been nicer with 8 RAM slots. No more SATA ports or PCIe slots than my Strix E-A. (I should have gotten a Strix E-E.) :(
 
There is no way any consumer-market board will have more than 4 RAM slots any time soon. You have to make a ton of compromises to even run 4 sticks on most boards now, much less double that up. The faster RAM gets, the harder it is to run more than a few sticks.
 
There is no way any consumer-market board will have more than 4 RAM slots any time soon. You have to make a ton of compromises to even run 4 sticks on most boards now, much less double that up. The faster RAM gets, the harder it is to run more than a few sticks.
If I do some music production I won't need much more than 64-128GB anyway. Part of the reason I wanted it was the onboard 10G NIC and TB4 ports.
 
If it works for your build, a PCIe ribbon riser (an unpowered one) works very well for running any card in an x1 slot. The "USB" style ones may even let you place the card in the cable-management part of the case instead.

For physical x1 cards you would be limited to USB, audio, non-10-gig Ethernet, or an NVMe adapter.

This works really well; a hot razor blade makes it easy.
Yes. Even an x16 high-end GPU. They'll lose about 10 to 20% of their speed.
 