PCIe 5.0 NIC not working in ASUS X670E-Pro

jpmomo

n00b
Joined
Oct 6, 2022
Messages
2
I am trying to get a Mellanox/NVIDIA ConnectX-7 PCIe 5.0 NIC to work in the PCIe 5.0 slot of an ASUS X670E-Pro motherboard.
When I put the PCIe 5.0 NIC in the PCIe 5.0 x16 slot, the card is not even recognized.
I can get a Mellanox PCIe 4.0 NIC to work in that slot, but not the PCIe 5.0 NIC.
The PCIe 5.0 NIC is recognized in the other PCIe 4.0 x4 slot, but it only runs at that lower speed.

I am using the integrated graphics from the CPU (AMD Ryzen 7950X) because I need the one PCIe 5.0 x16 slot for the NIC.

I went through most of the BIOS settings but could not find a setting that allows the card to be recognized in that slot.

I am hoping that someone else can at least confirm that they have a PCIe 5.0 device working properly in that slot.

Thanks for any suggestions.
 
You should drop a GPU (or anything else you have, for that matter) into that slot just to test it. There's a possibility that the slot itself is defective. There are similar stories out there of people who hit these growing pains with Intel's 12th Gen boards and PCI-E 5.0 slots that were dead straight from the factory.

I have also seen mainboards that specifically expect a graphics adapter in that slot. Make certain that the BIOS is set to IGP only and not to Auto, or the board may try to initialize that x16 slot as a GPU slot and nothing else.

In theory the slot should work for your use case. I have a board where I am running a RAID adapter in the main x16 PEG slot and a x4 NIC in the secondary PCI-E slot.
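
If the system is Linux, it's also worth checking whether the card enumerates at all and what link it actually trains to, via the LnkCap/LnkSta lines from lspci. This is just a rough sketch of a helper; it assumes pciutils is installed and that you run it as root so the capability blocks are readable, and 15b3 is the Mellanox/NVIDIA PCI vendor ID.

```python
# Quick check: list Mellanox/NVIDIA PCIe devices and show what the link
# advertises (LnkCap) versus what it actually negotiated (LnkSta).
# Assumes Linux + pciutils, run as root so capabilities are visible.
import subprocess

def link_report(vendor: str = "15b3") -> None:   # 15b3 = Mellanox/NVIDIA vendor ID
    out = subprocess.run(["lspci", "-d", f"{vendor}:", "-vv"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line and not line[0].isspace():
            print(line.strip())                  # device header, e.g. "01:00.0 Ethernet controller: ..."
        elif line.strip().startswith(("LnkCap:", "LnkSta:")):
            print("    " + line.strip())         # e.g. "LnkSta: Speed 16GT/s, Width x4"

if __name__ == "__main__":
    link_report()
```

If nothing prints while the card is in the 5.0 slot, it never enumerated there at all (which points at link training rather than drivers); in the 4.0 x4 slot you'd expect LnkSta to report Speed 16GT/s, Width x4.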
 
Thanks for the feedback.

I did try that slot with another NIC (PCIe 4.0) and it worked as expected, but at that card's PCIe 4.0 rates.

I also specified IGP only, but that didn't help.

Unfortunately, I don't have any other PCIe 5.0 devices to narrow down the issue.

Thanks again for the suggestions.
 
So the issue may be with the 5.0 NIC. It's running in the 4.0 slot... so maybe the interface logic on the NIC is faulty for 5.0? Also, what in the Sam Hill are you connecting to that needs that speed? A 40 Gbps switch or server?
 
ConnectX-7 is 200-400 Gbps; that'd be overkill for a 40G connection...

Welcome to the forums, jpmomo.
 
How the hell would that even work? PCI-E 5.0 only supports 128 GB/s across the 16-lane slot. You would never be able to sustain the connections on that card at its rated speeds. You would need the card to be an OCP 3.0 form factor that can support 32 lanes to get 256 GB/s, and that still doesn't touch the rated connection speeds of the ConnectX-7.

Hell, PCI-E 4.0 will do 64 GB/s; you could easily run damn near any NIC on that.

Doing 10G is murderous overkill for most people, and expensive. I have had it running in my home between my server and main rig before, but most storage solutions will not sustain a transfer at those speeds and saturate it unless you have some sort of datacenter array... I run 5G and have a helluva time saturating its bandwidth on my network, because nothing I have at home can really sustain more than 300 MB/s, with the occasional burst into the 400+ MB/s range.

Anyway, jpmomo, I wish you well in your endeavors regardless.
 
Wikipedia says 4.0 x16 is ~32 GB/s and 5.0 x16 is ~64 GB/s. But note that's bytes, and networking is bits. I believe you get full-duplex throughput on PCIe, so 5.0 x16 is about 500 Gbps each way, so plenty of room (rough numbers sketched below). But putting the card in a 4.0 x4 slot is an anemic ~8 GB/s per Wikipedia, or 64 Gbps. Totally lame ;) (I never got to work with servers that had more than 4x 10G NICs, so I'm bluffing a bit.)

If the BIOS has a setting for it, I'd try limiting the slot to 4.0 and see if that works at x16; it's not what you paid for, but it'd be better than the x4 slot. You'll likely need to work with ASUS and Mellanox support/engineering to get this figured out. Early adopter fun.

I dunno how you're going to feed that NIC either; I've seen Netflix CDN presentations, and they've got about as many lanes of NVMe as lanes of NIC.
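
To put rough numbers on that first paragraph (just back-of-the-envelope math from the raw per-lane rates and 128b/130b encoding; these are per-direction figures, and real protocol overhead shaves off a bit more):

```python
# Back-of-the-envelope PCIe bandwidth for the configurations in this thread.
# Per-direction figures with 128b/130b encoding; TLP/flow-control overhead
# is ignored, so real throughput is slightly lower.
GT_PER_LANE = {3.0: 8, 4.0: 16, 5.0: 32}   # transfer rate in GT/s per lane

def gbytes_per_sec(gen: float, lanes: int) -> float:
    """Approximate usable GB/s per direction for a PCIe link."""
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8

for gen, lanes in [(4.0, 16), (5.0, 16), (4.0, 4)]:
    gbs = gbytes_per_sec(gen, lanes)
    print(f"PCIe {gen} x{lanes}: ~{gbs:.1f} GB/s = ~{gbs * 8:.0f} Gbps per direction")

# Roughly:
#   PCIe 4.0 x16: ~31.5 GB/s = ~252 Gbps
#   PCIe 5.0 x16: ~63.0 GB/s = ~504 Gbps
#   PCIe 4.0 x4:  ~7.9 GB/s  = ~63 Gbps
```

So a 5.0 x16 link does have the headroom for a 400G port in each direction, while the 4.0 x4 slot the card currently sits in tops out around 63 Gbps, which matches the "anemic" figure above.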
 
PCIe 2.0 is 16 GB/s; 3.0 is 32 GB/s; 4.0 is 64 GB/s; 5.0 is 128 GB/s. It doubles every generation. All full duplex means is that data can be transmitted in both directions at equal speed. So even with server-grade pipelines (twice PCIe 5.0) you can't reach the data transmission speeds these cards support. On a consumer board like his, this is folly. Why you would even be using an off-the-shelf motherboard like that if you had a $2,319.00 NIC is way beyond me... Sell the NIC, buy a better motherboard, get a NIC you can actually support like 10G, and pay your rent for a month too... Might even have money left over for a hooker for a couple of nights, depending where you live in the world.
 
Mind your units.
GB/s is not the same as Gb/s.
 
So the issue may be with the 5.0 NIC. It's running in the 4.0 slot... so maybe the interface logic on the NIC is faulty for 5.0? Also, what in the Sam Hill are you connecting to that needs that speed? A 40 Gbps switch or server?
Testing/troubleshooting purposes? While X670E boards are expensive for consumer hardware, they're far cheaper than the servers you'd normally put that sort of card in. Those servers are also unlikely to be the sort of system you'd want to take down for extended troubleshooting, so it makes sense to get a cheap alternative system to use for testing NICs, etc.
 
IIRC the OP was having an issue getting the 5.0 NIC to function in the board's main 5.0 slot, but it worked in the 4.0 slot.

No idea what they are doing; you may be correct. That's something I hadn't considered.
 
I'd reach out to ASUS support and see what they say. I had trouble with an ASUS P8P67 motherboard that ended up in a mass recall; SSD or SSD RAID problems in the Intel chipset, if I remember correctly. These early adopter problems can consume so many hours, and it might not be fixable. I'd loop support in ASAP.
 