Asus Z390-E PCIEx16_1 Not Functional Until PCIEx16_2 is Occupied. I've tried Everything!!

[H]eckel

Hey guys,

I am still fighting with this board. I've figured out a few more things since the thread I posted last week, and I have been working on it nonstop all weekend.

I am on my second Z390-E Gaming board from Asus. I had to RMA the first because PCIEX16_1&2 weren't working.

I have read about and tried so many things. The only thing that works, and I just found this out by accident, is putting an old PCIe video card in slot 2: the BIOS then comes up showing the card in slot 1 at x8 instead of x16, which it never did before. My guess is that since the board will do SLI at x8 per slot, it sees that the second slot has a card in it and drops the first one to x8, and something about that makes it work.

I have tried all of the possible BIOS options over and over again. I've been building computers for nearly 30 years and have never run into anything like this.

Since this particular chipset seems to use the CPU directly for the first two PCIe slots, could it be a damaged CPU?

I also have two PCIe NVMe drives taking up some of the bandwidth, but these are not supposed to affect slot 1. You can increase their speed at the cost of a couple of SATA ports, and you can configure a slot for a special multi-NVMe card, but none of that applies to me.

I have all of the latest firmware and BIOS as well.

Is it time to call Intel, or should I work with Asus some more?

Thanks,
 
Check the motherboard socket pins for any bent or broken/missing ones, as well as the contacts on the bottom of the CPU.

I assume you've tried without the NVMe devices anyway?
 
That is my next step. Thing is, I want to use both of my NVMe drives, and technically they should not be sharing lanes with PCIEx16_1.

Any ideas why putting a card into slot 2 causes the BIOS to report x8 3.0 in slot one and x0 3.0 in slot two? It sees both, but x0 obviously makes slot two non-functional.
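
One way to double-check what the link actually trained at, outside the BIOS screens: boot a Linux live USB and read the widths straight from sysfs. A minimal sketch (the paths are Linux-specific, and this is just a sanity check, not a fix):

```python
from pathlib import Path

# Compare negotiated vs. maximum link width for every PCI device.
# On a healthy board the GPU should show x16 of x16 when it is alone
# in PCIEX16_1.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_w = (dev / "current_link_width").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # this function doesn't expose PCIe link attributes
    note = "  <-- running below its max width" if cur_w != max_w else ""
    print(f"{dev.name}: x{cur_w} of x{max_w} @ {speed}{note}")
```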
 
Is it possible, since the 1070 is a previous-generation card, that it is having issues negotiating UEFI initialization? The 1070 has worked fine in all previous UEFI builds. It is x16 Gen 3. I also updated to the latest firmware from Nvidia.

I tried CSM as well to see if that helped. No difference whatsoever.
 
The other thing that has made this so frustrating to troubleshoot is that there is no way to manually disable onboard video. Even if I manually set the primary display to PCIe, it falls back to onboard if it doesn't detect a PCIe card, with or without a monitor plugged into the onboard DP or HDMI.
 
From my understanding, the CPU puts out 16 PCIe lanes and the chipset 24, for a total of 40. If you're correct that the top two slots are controlled by the CPU, and you're running an x16 card in the top slot, it is probably using all 16 lanes allocated to those two slots from the CPU, thus disabling the x1 slot above the x16 GPU slot. Seems like it should be documented better, though.
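
To make that arithmetic concrete, here is a toy model of where everything attaches. The slot-to-source mapping is my reading of the Z390-E manual, so treat it as an assumption rather than a spec dump:

```python
# Hypothetical Z390-E lane attachment map -- my reading of the manual,
# not an official table.
CPU_LANES = 16  # serve PCIEX16_1 / PCIEX16_2 only
PCH_LANES = 24  # behind the chipset (DMI); everything else hangs here

attach = {
    "PCIEX16_1": "cpu",
    "PCIEX16_2": "cpu",
    "PCIEX16_3": "pch",  # electrically x4 off the chipset
    "PCIEX1_1":  "pch",
    "PCIEX1_2":  "pch",
    "M2_1":      "pch",
    "M2_2":      "pch",
}

populated = ["PCIEX16_1", "M2_1", "M2_2"]  # the OP's configuration

print("Total platform lanes:", CPU_LANES + PCH_LANES)
print("Competing for CPU lanes:", [s for s in populated if attach[s] == "cpu"])
# -> only the GPU. If this mapping is right, the two M.2 drives cannot
#    starve PCIEX16_1 of lanes, and any interaction points at firmware.
```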
 
My guess is that something is trying to use too many PCIe lanes, or it's a BIOS config issue. Could be a bug, though.
 
From that thread, that's what it sounds like, but the issue is that it shouldn't affect anything at all, since the M.2 slots are connected to the chipset while PCIe x16 slots 1 and 2 are connected directly to the CPU. My guess is Asus screwed something up in their lane configuration, since it's not just the E series but also a few other Asus Z390 boards that suffer from this problem.
 
Probably a bug with the ASUS boards. Would love to know of any other Z390 manufacturers that might have something similar wrong (just to make ASUS feel better). Just shows you that any vendor can make a lemon.
 
So I found a four-page thread that is basically a ton of people having similar issues to yours with Asus Z390 boards. Do you by chance have both M.2 slots filled?

Either way, here's the thread:
https://rog.asus.com/forum/showthre...boot-with-Nvidia-GPU-installed-(2070-or-1070)
Yes, as mentioned above, I am running both. I have two Samsung 970 Evo Plus 1TB drives. I will certainly take a look at the thread, but as far as Asus and the manual are concerned, the NVMe drives, even in PCIe mode, should not take away from the 16 lanes the GPU should get.

.....hmm, check out this image I just found. It doesn't explain everything, but it could help me understand part of it.

Basically, it says I have three options:

a: 1x16 lanes PCIe 3 Graphics or SSD
b: 2x8 lanes PCIe 3 Graphics and SSD
c: 1x8 and 2x4 lanes PCIe 3 Graphics and SSD


This would explain why my first slot is only running at x8, but not why I have to install a second card to get it to detect anything at all. This is all very strange.
 

Attachment: Annotation 2020-03-16 211122.png (264.1 KB)
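
Mapping those three options onto what the BIOS reports, as a tiny sketch. The widths are my transcription of that diagram, and the closing comment is speculation, not a confirmed diagnosis:

```python
# The three CPU-lane configurations from the diagram above.
configs = {
    "a": [16],       # 1x16: all CPU lanes to PCIEX16_1
    "b": [8, 8],     # 2x8:  split across PCIEX16_1 / PCIEX16_2
    "c": [8, 4, 4],  # 1x8 + 2x4
}

def expected_width(config: str, slot: int) -> int:
    """Link width a card should train at in the given slot (0-indexed)."""
    widths = configs[config]
    return widths[slot] if slot < len(widths) else 0

for cfg in configs:
    print(cfg, [expected_width(cfg, s) for s in range(3)])
# a [16, 0, 0] / b [8, 8, 0] / c [8, 4, 4]
#
# Slot 1 alone should train at x16 (config a); with both slots filled,
# x8/x8 (config b). The observed x8/x0 matches neither, which looks
# like the BIOS bifurcates to 2x8 but never brings the second link up.
```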
Also, does anyone else ever look at the IRQ conflicts and sharing in System Information? I haven't had to worry about IRQs in decades, but this had me wondering. There are a ton of devices that share IRQs and memory space. All the more reason I'd love to be able to hard-disable the onboard graphics.

In the example below, you can see, however, that my onboard audio was "conflicting" with my video card, but it wasn't causing any trouble that I could tell. Also, disabling onboard devices didn't magically release any PCIe lanes to allow my GPU to POST properly.

Memory Address 0xA2000000-0xA2FFFFFF NVIDIA GeForce GTX 1070
Memory Address 0xA2000000-0xA2FFFFFF Intel(R) PCIe Controller (x16) - 1901

I/O Port 0x00003000-0x0000307F NVIDIA GeForce GTX 1070
I/O Port 0x00003000-0x0000307F Intel(R) PCIe Controller (x16) - 1901

Memory Address 0x90000000-0x9FFFFFFF NVIDIA GeForce GTX 1070
Memory Address 0x90000000-0x9FFFFFFF PCI Express Root Complex
Memory Address 0x90000000-0x9FFFFFFF Intel(R) PCIe Controller (x16) - 1901

IRQ 16 NVIDIA GeForce GTX 1070
IRQ 16 High Definition Audio Controller

Memory Address 0xA3300000-0xA3303FFF Samsung NVMe Controller
Memory Address 0xA3300000-0xA3303FFF Intel(R) PCI Express Root Port #21 - A32C

Memory Address 0xA0000-0xBFFFF PCI Express Root Complex
Memory Address 0xA0000-0xBFFFF Intel(R) PCIe Controller (x16) - 1901
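
For what it's worth, those entries group cleanly if you sort them by resource, and most of the "sharing" is a device sitting inside its parent root port's window, which is normal PCI behavior. A throwaway sketch over a few of the lines above (device strings copied verbatim; the grouping logic is just mine):

```python
from collections import defaultdict

# A few of the System Information lines from above, verbatim.
DUMP = [
    "Memory Address 0xA2000000-0xA2FFFFFF NVIDIA GeForce GTX 1070",
    "Memory Address 0xA2000000-0xA2FFFFFF Intel(R) PCIe Controller (x16) - 1901",
    "IRQ 16 NVIDIA GeForce GTX 1070",
    "IRQ 16 High Definition Audio Controller",
]

shared = defaultdict(list)
for line in DUMP:
    tokens = line.split()
    # "IRQ 16 <device>" vs. "Memory Address 0x...-0x... <device>"
    n = 2 if tokens[0] == "IRQ" else 3
    resource, device = " ".join(tokens[:n]), " ".join(tokens[n:])
    shared[resource].append(device)

for resource, devices in shared.items():
    if len(devices) > 1:
        print(f"{resource}: {' | '.join(devices)}")
# GPU-inside-root-port overlaps and shared interrupt lines are both
# expected; neither consumes PCIe lanes or blocks the card from POSTing.
```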
 

Yeah, I saw the same image as well. I was also thinking maybe an IRQ conflict, but usually that requires you to fill every slot, and even then it should be a non-issue. At least in that thread, the only people that got their systems working did it by removing the second SSD, and even that didn't seem like a 100% fix for some people, so I have no clue what the exact issue is, just that it worked for a couple of people in that thread. I also agree with cjcox; it might be worth researching other boards to replace it with, to make sure they don't suffer from the same problem.
 