PCIe Bifurcation

Hmm, that's odd. I thought I tested that with my Gigabyte X570 and the cards ended up in separate IOMMU groups. I will look into it as well.
 
Some thoughts on how to fix IOMMU grouping (a quick way to verify the grouping after each change is sketched below):

1. BIOS update
2. Kernel update
3. UEFI only! Try disabling CSM in the BIOS
4. ACS override patch (mind the security implications!)
5. There may be some BIOS settings concerning IOMMU/ACS
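
For reference, a common way to verify the grouping after each of these changes is to walk the standard sysfs tree (a minimal sketch; it assumes the IOMMU is enabled and pciutils is installed):

Code:
#!/bin/bash
# Print each IOMMU group and the PCI devices it contains.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci -nns <slot> prints the device name plus [vendor:device] IDs
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done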
 
Thanks C_Payne - I had to change ACS from 'auto' to 'enabled'. In separate groups now.

Code:
IOMMU Group 28:
        0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
        0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
IOMMU Group 29:
        0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)
        0b:00.1 Audio device [0403]: NVIDIA Corporation GP108 High Definition Audio Controller [10de:0fb8] (rev a1)
 
:) Great news, hope this is it now and everything works out.

I just wanted to call out publicly how impressed I've been with your help and your excellent service in sending a replacement riser, even when it turned out the problem was actually with my motherboard.
 
Hey guys,

I know this isn't SFF related but it seems to be the most popular thread on PCIe bifurcation so I'm just going to try my luck.

Anyone managed to get the Asus Hyper M.2 to work with X79? I currently have a C602 DP board (dual E5-2670s on an X9DRi-F) that has the x4x4x4x4 bifurcation option in the BIOS, but I am only seeing 1 drive in the OS.
 
I have worked with a similar board: the X9DRi-LN4F.
That one needed a BIOS update for bifurcation to work properly.
I assume you have a Linux system available. 'sudo lspci -vv' will provide more insight, especially the number of root ports and the link capabilities section of those ports.
 
I have worked with a similar board: the X9DRi-LN4F.
That one needed a BIOS update for bifurcation to work properly.
I assume you have a Linux system available. 'sudo lspci -vv' will provide more insight, especially the number of root ports and the link capabilities section of those ports.
What am I looking for in lspci -vv? I have the latest BIOS. Do you mean a custom BIOS?
 
You are welcome to post the output here. Have it set to x16 and check the root ports; they will show a LnkCap of 8GT/s, x16.
If you set it to x4x4x4x4 you should see this drop to x4, and there should also be 4 root ports instead of 1, each with a LnkCap of x4.
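
For example, a rough one-liner to pull out just the bridges and their link capabilities (a sketch; device naming varies by platform, so adjust the grep pattern as needed):

Code:
# Show PCI bridges / root ports together with their LnkCap lines
sudo lspci -vv | grep -E 'PCI bridge|Root Port|LnkCap:'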
 
You are welcome to post the output here. Have it set to x16 and check the root ports; they will show a LnkCap of 8GT/s, x16.
If you set it to x4x4x4x4 you should see this drop to x4, and there should also be 4 root ports instead of 1, each with a LnkCap of x4.
Toggling the bifurcation correctly adjusts it to x4 but it only shows the first drive. The rest of them go missing.
Code:
04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Samsung Electronics Co Ltd Device a801
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 32
        Region 0: Memory at dfa00000 (64-bit, non-prefetchable) [size=16K]
        Region 2: Memory at dfa04000 (32-bit, non-prefetchable) [size=256]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 25.000W
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM not supported, Exit Latency L0s <4us, L1 <64us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR+, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [b0] MSI-X: Enable+ Count=9 Masked-
                Vector table: BAR=0 offset=00003000
                PBA: BAR=0 offset=00002000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [158 v1] Power Budgeting <?>
        Capabilities: [168 v1] #19
        Capabilities: [188 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [190 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- L1_PM_Substates-
        Kernel driver in use: pciback
        Kernel modules: nvme
 
The interesting part is the root ports, not the device. You can PM me the complete output of lspci in x16 mode and in x4x4x4x4 mode and I will go through it. In any case, I assume the bifurcation setting is not implemented correctly.
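
If it helps, the tree view gives a quick overview of how many root ports there are and what hangs off each one:

Code:
# Tree view of the PCI hierarchy; root ports appear as the first level of bridges
sudo lspci -tv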
 
The interesting part is the root ports, not the device. You can PM me the complete output of lspci in x16 mode and in x4x4x4x4 mode and I will go through it. In any case, I assume the bifurcation setting is not implemented correctly.
I've looked around, and it seems the overwhelming consensus is that the LN4F works but the F will not, something about a faulty bifurcation implementation. Holding out hope for someone to contradict this.
 
Hello, I have an x8x8 bifurcation riser and an ASRock X399 Taichi motherboard on BIOS version 3.8, which offers an x8x8 mode. The problem is that one video card runs at x8 while the other drops to PCIe 2.0 x1; x4x4 mode works great. Does anyone know how to solve this?
 
I have a crazy request.
I have an X9DRi-LN4F+ motherboard and PCIe bifurcation seems to work just fine (tested with an ASUS Hyper M.2 adapter).
My problem is that the GPUs take 2 slots each and then I'm left without any room for other devices. So I'd like to cable out to another chassis using some riser extensions and use a "splitter" there.

@C_Payne: Would it be possible to have a PCB in the shape of an ATX motherboard (although not as deep, i.e. just the part below the IO shield) where one could just slide the cards in without hassle, i.e. without having to turn them 90° (all the PCIe slots, e.g. 8 slots, and then 2-3 entry points for the "uplink" extension cables)? Otherwise I'll have to do this using the RSC-R2U-2E8 or something similar, although I guess it's going to be limited to PCIe 2.0 and it's going to be quite a hack with drilling holes etc.
 
Hello, I have an x8x8 bifurcation riser and an ASRock X399 Taichi motherboard on BIOS version 3.8, which offers an x8x8 mode. The problem is that one video card runs at x8 while the other drops to PCIe 2.0 x1; x4x4 mode works great. Does anyone know how to solve this?
This sounds a bit like @marcosscrivens' issues with his B450; you could try to raise a support ticket with ASRock, they are quite helpful.
What kind of riser/cables are you using? Maybe it's a signal quality issue for the second card?
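
One quick thing to check for marginal signaling, assuming the platform has AER (Advanced Error Reporting) enabled, is whether the kernel log shows corrected PCIe errors:

Code:
# Corrected AER messages on the link often point to signal-integrity problems
sudo dmesg | grep -iE 'aer|pcieport'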

I have a crazy request.
@C_Payne: Would it be possible to have a PCB in the shape of an ATX motherboard (although not as deep, i.e. just the part below the IO shield) where one could just slide the cards in without hassle, i.e. without having to turn them 90° (all the PCIe slots, e.g. 8 slots, and then 2-3 entry points for the "uplink" extension cables)? Otherwise I'll have to do this using the RSC-R2U-2E8 or something similar, although I guess it's going to be limited to PCIe 2.0 and it's going to be quite a hack with drilling holes etc.

This sounds like a project I can help you with. There are still many variables (type of cabling, signal conditioning, etc.). Please drop me a PM or write an email to [email protected]
 
This sounds a bit like @marcosscrivens' issues with his B450; you could try to raise a support ticket with ASRock, they are quite helpful.
What kind of riser/cables are you using? Maybe it's a signal quality issue for the second card?
It seems the problem was the additional 8-pin power supply for the x8x8 bifurcation riser; I simply forgot to connect it. Apparently because of this, the first three pins burned on both the motherboard's PCIe x16 slot and the bifurcation riser, or could that not be the cause? The most interesting thing is that the bifurcation riser continues to work, only at a lower speed.
 
Does anyone here know of any options for attaching two or more M.2 PCIe SSDs to a single M.2 slot? Could be either through bifurcation (if any motherboards support M.2 bifurcation), or even better would be using a PCIe switch.
 
Does anyone here know of any options for attaching two or more M.2 PCIe SSDs to a single M.2 slot? Could be either through bifurcation (if any motherboards support M.2 bifurcation), or even better would be using a PCIe switch.

Not that I am aware of.

Ryzen does support x2x2 for the CPU NVMe lanes, or at least I seem to recall having seen a slide on it once. Chipset lanes on different platforms may support setups like that as well; however, BIOS support is a whole other story. I doubt any manufacturer has implemented this.

One option I see would be to use an M.2 to x4 adapter, a short x4 to x16 cable, and one of the available x16 or x8 to 4x M.2 cards with an onboard packet switch.
Like this one:

https://www.amazon.de/dp/B07KG253NQ/ref=cm_sw_r_cp_apa_i_q6L-Db0QFR581
 
Ah, that's a bit bulky for a small form factor. :/ I was hoping for something like an M.2-size card with two riser cables. Interesting to know that a switched solution exists for full-size PCIe though; perhaps as M.2 becomes more widespread, something will be released for that form factor too. Thank you for the info!
 
The low-profile PCIe x8 Broadcom HBA 9400-16i 'Tri-Mode' seems to support 4 NVMe drives at x4 (8 at x2, and up to 24 with expanders).
This card would normally be limited to PCIe x8 (there is a x16 version), but if connected to a PCIe x4 slot from M.2 I wonder if it still works, just limited to x4 speed.
I have just ordered one from eBay to test.

Then an M.2 to U.2 adapter, a U.2 cable, and a U.2 to PCIe x4 adapter,

or an M.2 to PCIe x4 adapter with a x4 flexible riser cable.

Quite pricey solutions though.
 
The low-profile PCIe x8 Broadcom HBA 9400-16i 'Tri-Mode' seems to support 4 NVMe drives at x4 (8 at x2, and up to 24 with expanders).
This card would normally be limited to PCIe x8 (there is a x16 version), but if connected to a PCIe x4 slot from M.2 I wonder if it still works, just limited to x4 speed.
I have just ordered one from eBay to test.

Then an M.2 to U.2 adapter, a U.2 cable, and a U.2 to PCIe x4 adapter,

or an M.2 to PCIe x4 adapter with a x4 flexible riser cable.

Quite pricey solutions though.

I have used several M.2 to PCIe x4 risers; they tend to work quite well in my experience. I'm curious how that whole setup will work. Please let us know.
 
ETA of the 9400-16i is end of next week (from US to UK).
Broadcom confirmed the HBA would work with a PCIe x4 electrical connection (x8 physical slot), but with max speed halved.
I've successfully connected a U.2 Optane 905P to an M.2 socket using that 'red PCB M.2-U.2 adapter' and the cable supplied with the drive.
I have the 'green PCB M.2 socket to PCIe x4 adapter' and x4 cable, but have not tried them yet, as I used one of C_Payne's x8x8 bifurcation risers instead.
 
ETA of the 9400-16i is end of next week (from US to UK).
Broadcom confirmed the HBA would work with a PCIe x4 electrical connection (x8 physical slot), but with max speed halved.
I've successfully connected a U.2 Optane 905P to an M.2 socket using that 'red PCB M.2-U.2 adapter' and the cable supplied with the drive.
I have the 'green PCB M.2 socket to PCIe x4 adapter' and x4 cable, but have not tried them yet, as I used one of C_Payne's x8x8 bifurcation risers instead.

By the way, if you want to skip a step going from M.2 to a ribboned x4 slot, you could get this: https://www.aliexpress.com/item/32860198563.html?spm=a2g0s.9042311.0.0.78db4c4djZarSS
 
Eagerly awaiting my x8x4x4 riser, but it looks like my ASRock board isn't showing multiple root ports, even though it's been set in the BIOS... :( Should they still show up even if no cards are installed?

Code:
$ lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:02.0 Display controller: Intel Corporation Device 3e96
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Device a368 (rev 10)
00:15.1 Serial bus controller [0c80]: Intel Corporation Device a369 (rev 10)
00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
00:16.1 Communication controller: Intel Corporation Device a361 (rev 10)
00:16.4 Communication controller: Intel Corporation Device a364 (rev 10)
00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
00:1b.0 PCI bridge: Intel Corporation Device a340 (rev f0)
00:1b.6 PCI bridge: Intel Corporation Device a32e (rev f0)
00:1c.0 PCI bridge: Intel Corporation Device a338 (rev f0)
00:1c.1 PCI bridge: Intel Corporation Device a339 (rev f0)
00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port 9 (rev f0)
00:1e.0 Communication controller: Intel Corporation Device a328 (rev 10)
00:1f.0 ISA bridge: Intel Corporation Device a309 (rev 10)
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
02:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
03:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
06:00.0 Non-Volatile memory controller: Device 1d97:1160 (rev b0)
 
It should still show up in lspci. Does the bandwidth in the LnkCap section drop for the root port?
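
Something like this would show it (00:01.0 is just a typical address for a CPU x16 root port; substitute your board's actual bridge address):

Code:
# Compare the advertised (LnkCap) vs. negotiated (LnkSta) link width on a root port
sudo lspci -vv -s 00:01.0 | grep -E 'LnkCap|LnkSta'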

Also I have good news:

My PLX x16->x8x8x8x8 board is finally in a state where I am willing to sell it. It's not perfect yet, and I will make a larger one with more spacing in the coming weeks.


https://peine-braun.net/shop/index.php?route=product/product&path=66&product_id=86

Please be aware that it's super time-consuming to test and assemble; also, the PCBs and parts are a lot more expensive compared to my bifurcation boards.
Hence the price.
 
Also I have good news:

My PLX x16->x8x8x8x8 board is finally in a state where I am willing to sell it. It's not perfect yet, and I will make a larger one with more spacing in the coming weeks.


https://peine-braun.net/shop/index.php?route=product/product&path=66&product_id=86

Please be aware that it's super time-consuming to test and assemble; also, the PCBs and parts are a lot more expensive compared to my bifurcation boards.
Hence the price.

That is amazing. What are the performance penalties of doing something like this instead of bifurcation? I assume everything is just going to have to share bandwidth, but if the devices aren't being hit simultaneously then each could receive its full bandwidth?
 
There is of course some small latency increase, but since a packet can be transferred twice as fast compared to x4, I guess this will be offset back to a lower latency.

Other than that, the uplink is x16, and each downstream device can get x8.
If all were to transmit simultaneously, each one would only get x4, but this will be a rare occurrence.
Also, cards can do direct transfers; I believe CrossFire uses this for example, and I am sure some other workloads do as well, but this is not really something I know much about.
 
There is of course some small latency increase, but since a packet can be transferred twice as fast compared to x4, I guess this will be offset back to a lower latency.

Other than that, the uplink is x16, and each downstream device can get x8.
If all were to transmit simultaneously, each one would only get x4, but this will be a rare occurrence.
Also, cards can do direct transfers; I believe CrossFire uses this for example, and I am sure some other workloads do as well, but this is not really something I know much about.

I have no idea what I'd do with it, but I want one lol. I want almost everything you make... but so far I've only bought one riser from you. Keep up the amazing work. Still working on finishing up the case I made that uses your other riser. I think you'd be proud.
 
It should still show up in lspci. Does the bandwidth in the LnkCap section drop for the root port?

On further investigation, that is the root port the M.2 slot is connected to. dmidecode indicates PCIE7 is PCIe bus 01:00.0, which isn't listed in lspci, possibly because the slot is empty?
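
For anyone wanting to do the same mapping, dmidecode's slot table lists the physical slot designations next to their PCI bus addresses (slot names are board-specific, and the firmware has to fill the table in):

Code:
# Map physical slot names to PCI bus addresses from the SMBIOS slot table
sudo dmidecode -t slot | grep -E 'Designation|Bus Address'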
 
Wanted to post one of ASRock's new ITX X570 boards that have been announced. Seems like the perfect fit for some interesting bifurcation builds. (Link)

4 SO-DIMMs (ECC support)
Dual 10GbE Intel ports
Up to 8 SATA 3 ports (via a breakout cable)
X570 chipset w/ PCIe 4.0 x16 slot (overclockable?)
 
Here's the link to the specs of the board: https://www.asrockrack.com/general/productdetail.asp?Model=X570D4I-2T#Specifications

And I love the board, but I wish it had a second M.2 on the back (back side for selfish sandwich-case-related reasons, even though it's impractical for server applications).
I hope someone has tried to get an x4 PCIe slot out of the OCuLink ports, because I have found conflicting info on whether those are PCIe- and SATA-compatible or just SATA-carrying plugs.

Also, that board is meant to have some serious airflow over it, as it has no fan on an X570 chipset. Either mod it to add some fans or make sure it has direct airflow.

Also, no 24-pin connector is super weird...
 
I hope someone has tried to get an x4 PCIe slot out of the OCuLink ports, because I have found conflicting info on whether those are PCIe- and SATA-compatible or just SATA-carrying plugs.
If it's like my ASRock Rack E3C246D2I, then there's a jumper to select whether the OCuLink port is PCIe x4 or SATA. Maybe they could do it in software on the X570D4I-2T.

Also, no 24-pin connector is super weird...
Again, my Intel board is the same: they provide a 24-pin to 4-pin adapter, which is basically to provide the soft-power function; the board has the 8-pin connector for the remaining power needs.
 
First off, a thank you to all who have been participating in this thread over the last couple of years. I've gone through this thread 3-4 times start to finish in the last year or so while working through my own setup.
I've successfully done bifurcation on my ASRock X470 Gaming-ITX/ac with a Ryzen 5 1600. It boots, it sees both cards, and if I run Folding@Home it will utilize both GPUs; that part is great.

What I have been struggling with is getting SLI to work. I'm using a pair of PNY GTX 760s (760s because I've had them since new, with waterblocks, and never actually used them in anything other than temporary builds and case review photos, but also because their size is right for what I'm doing).

The strange thing is that in the Nvidia control panel, if I boot the system up with an SLI bridge, I don't get any of the SLI options, and running a full-screen benchmark does not utilize the second card.
However, if I boot the system without an SLI bridge, it tells me "Connect an SLI bridge to enable blah blah blah", and I see the SLI options, but they're disabled (grayed out).

Is this something that anyone here has run into? If my Vega Nanos were shorter height-wise I'd use those instead, but that's not really an option for what I'm doing...
 
I have an extra (x16 to x8x8) Ameri-Rack ARC1-PELY423-C5V3 with a 5cm ribbon cable. I recently upgraded to an x16 to x8x4x4 board for my ASRock X99E-ITX/ac. Let me know if anyone needs it.
 
First off, a thank you to all who have been participating in this thread over the last couple of years. I've gone through this thread 3-4 times start to finish in the last year or so while working through my own setup.
I've successfully done bifurcation on my ASRock X470 Gaming-ITX/ac with a Ryzen 5 1600. It boots, it sees both cards, and if I run Folding@Home it will utilize both GPUs; that part is great.

What I have been struggling with is getting SLI to work. I'm using a pair of PNY GTX 760s (760s because I've had them since new, with waterblocks, and never actually used them in anything other than temporary builds and case review photos, but also because their size is right for what I'm doing).

The strange thing is that in the Nvidia control panel, if I boot the system up with an SLI bridge, I don't get any of the SLI options, and running a full-screen benchmark does not utilize the second card.
However, if I boot the system without an SLI bridge, it tells me "Connect an SLI bridge to enable blah blah blah", and I see the SLI options, but they're disabled (grayed out).

Is this something that anyone here has run into? If my Vega Nanos were shorter height-wise I'd use those instead, but that's not really an option for what I'm doing...

From what I recall, SLI has to be certified for the motherboard to work. Motherboards have to be submitted to Nvidia for certification before it can be enabled. It makes little to no sense to certify ITX boards, so they do NOT support SLI.

HOWEVER, there are instances of people modifying things to make SLI work. I just can't find any at the moment because I'm at work.
 
From what I recall, SLI has to be certified for the motherboard to work. Motherboards have to be submitted to Nvidia for certification before it can be enabled. It makes little to no sense to certify ITX boards, so they do NOT support SLI.

HOWEVER, there are instances of people modifying things to make SLI work. I just can't find any at the moment because I'm at work.

Yeah, that second part was more what I was curious about. I'll just have to keep searching harder :) I hadn't been able to find anything as of yet. At least I'm not the only one who thinks they remember seeing it done in the past (supporting SLI on non-SLI boards, that is).
 
Wanted to post one of ASRock's new ITX X570 boards that have been announced. Seems like the perfect fit for some interesting bifurcation builds. (Link)
....
Hi,
thanks for finding this board; for me it's a gold mine with those two 10Gig ports. I also just wanted to share that I sent ASRock a direct support ticket to get confirmation of bifurcation support for this board. As soon as I get a response, I will post it here.

This thread is awesome; it made me register to HF, and I'm a long-time lurker of this thread. So I would like to thank you all for your efforts and time spent on this bifurcation topic! Special thanks and a shout-out to C_Payne; I've already bought a riser card from him, and he has great support and excellent build quality!

My build will be a mini-ITX 2U custom-made server case with video-production-focused HW in it. I will share all the details (SketchUp file for the case) as soon as I have results and something tangible here. :)
 