PCIe Bifurcation

I am planning a new batch of DIY Risers.
Anyone interested in a particular design?
(Single/Dual Slot, left/right turn, M.2 Slots?)
 
I love the spirit though, it's what pushed me to explore bifurcation in the first place. Give the man a beer.

Agreed. Cheers.

Maybe some eGPUs even have a packet switch that could be modded to bifurcate (lots of them are actually configurable).
I haven't done much research in the eGPU field.
 
I am planning a new batch of DIY Risers.
Anyone interested in a particular design?
(Single/Dual Slot, left/right turn, M.2 Slots?)

I sold the boards I had for €50 each.
If it is your own special dimensions it will be €100; if there are at least two people with the same dimensions it will be €50.
 
I am planning a new batch of DIY Risers.
Anyone interested in a particular design?
(Single/Dual Slot, left/right turn, M.2 Slots?)
You mentioned M.2 slots.
I am looking for a solution to hook up a 10gig network card to a motherboard via an M.2 slot. Is this something you could achieve?
 
That exists: €2.58 delivered.
I doubt I could beat that price-wise.
Not sure about quality though.

TISHRIC 2017 New NGFF M2 M.2 to PCI-E 4x 1x Slot Riser Card Adapter Male To Female PCIE Multiplier For BTC Miner Mining Machine
http://s.aliexpress.com/fQRJVfQF
 
That adapter won't work. I tried it about 6 months ago; bought 2 off eBay, neither worked.
Only the Bplus Tech M.2 PCIe x4 adapter will work. I think they screwed something up, either with the signaling or the voltage.
 
That adapter won't work. I tried it about 6 months ago; bought 2 off eBay, neither worked.

Taking a closer look I see that it's a dual-layer board with no differential-signal routing... no wonder it's not working.

(Those are high-speed signal traces after all, despite what some Chinese "risers" might suggest.)

That's what I meant. Most of the time those guys get away with ignoring every design rule they can... maybe people won't even notice it running only at Gen2 or Gen1 speeds... but there is such a thing as physics...

The one you were referring to looks much more promising (take a look at how the PCIe traces are routed):
http://www.bplus.com.tw/ExtenderBoard/P4SM2.html
 
Yep, that bad boy works with "some" network adapters. I tried this with an Intel NUC. What was really interesting: when I got an Intel Ethernet adapter and then the exact same adapter but with HP firmware, only the Intel one worked. Not sure why. Compatibility was hit and miss. Mostly only the expensive Ethernet adapters, like Intel and Broadcom, worked; Realtek adapters for the most part didn't.
 
Compatibility was hit and miss.

My interpretation: cheap NIC -> cheap transceiver / cheap PCB / cheap routing on the NIC -> not working.
As for firmware, I think some might be more tolerant of transmission errors, tolerate more retransmits, or negotiate slower speeds or fewer lanes.
At least that is what I believe.

I had a similar situation where I put a cheap two-layer x8 -> x16 riser of 1 cm length in between my GPU and riser, and it stopped working at Gen3. Replacing it with an adequately routed one fixed it.

That is why I spent a considerable amount of time researching proper lane routing when I made my risers.
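
If you want to check what link a card actually trained to, Linux exposes it through sysfs. A minimal sketch (the device address 0000:01:00.0 is just an example; take yours from lspci):

    from pathlib import Path

    # Example PCIe device address; substitute your own card's (see lspci).
    dev = Path("/sys/bus/pci/devices/0000:01:00.0")

    # These sysfs attributes report the link as trained vs. its maximum.
    for attr in ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width"):
        node = dev / attr
        if node.exists():
            print(attr, "=", node.read_text().strip())

A Gen3 card degraded by a bad riser shows up as something like "current_link_speed = 5.0 GT/s PCIe" against "max_link_speed = 8.0 GT/s PCIe".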
 
My interpretation: cheap NIC -> cheap transceiver / cheap PCB / cheap routing on the NIC -> not working.
As for firmware, I think some might be more tolerant of transmission errors, tolerate more retransmits, or negotiate slower speeds or fewer lanes.
At least that is what I believe.

I had a similar situation where I put a cheap two-layer x8 -> x16 riser of 1 cm length in between my GPU and riser, and it stopped working at Gen3. Replacing it with an adequately routed one fixed it.

That is why I spent a considerable amount of time researching proper lane routing when I made my risers.
Hi, here are the dimensions. Thanks man, for uh... everything.
Cheers.
[Attachment: Download.png]
 
Hi! First post on hardforum, hope I'm not intruding. First, impressive work to all of you who have made this a reality, this opens up quite a number of possibilities!

I've read this thread thoroughly and I have questions.

1) I've taken a long hard look at C_Payne's PCIe risers as well as most other risers seen in this thread, and I'm at a loss as to how all the pins are connected. As I understand it, pin #14 on side B, then pins #15 through #82 on both sides, are connected directly to the lines of both slots on the riser. The REFCLK comes from the mobo to the multiplexer/buffer chip, then the copied signals go to the slots on the riser on side A pins #13 and #14. What do the "copied signals" look like? Is the mobo's REFCLK just divided, 50 MHz for each slot, or is there more to it?

2) I read that most other signals can be daisy-chained. Which ones? The chip used on most risers also has SMCLK/SMDAT pins; can these be daisy-chained or must they be multiplexed too?

3) The PCIe slot on the motherboard is "sacrificed" to the riser, making this kind of build incompatible with pretty much any dual-slot mini-ITX case on the market. I'm looking for a way to reclaim it. Just to be clear, I'm just putting this "out there"; I don't have the financial capability to front any costs.

- Looking at the underside of motherboards, the PCIe pins are readily accessible.
- Looking at this article http://physxinfo.com/news/880/dedicated-physx-gpu-perfomance-dependence-on-pci-e-bandwidth/ , we understand there is a bit of room between the PCIe slot and the card, enough for a tape to fit.

My idea is to add a PCIe slot for bifurcation connected to the underside of the mobo, with or without modding the mobo. I'm thinking about soldering or using a custom-ended flexible flat cable pressed against the pins.

Depending on the answers to the questions above, it may well be impossible. I'm very much concerned about the REFCLK signal in particular (and all the necessary signals that would eventually need multiplexing), as getting it from the mobo and sending it back could prove very difficult... unless there's a way to generate a new REFCLK, or to fit a flexible flat cable between the card and the mobo's PCIe slot. And that's before taking signal integrity and timings into account.

Thank you for your time.
 
My idea is to add a PCIe slot for bifurcation connected to the underside of the mobo, with or without modding the mobo. I'm thinking about soldering or using a custom-ended flexible flat cable pressed against the pins.

This is madness and I wish you the best of luck!
 
My idea is to add a PCIe slot for bifurcation connected to the underside of the mobo, with or without modding the mobo. I'm thinking about soldering or using a custom-ended flexible flat cable pressed against the pins.

I like the idea!

This is madness and I wish you the best of luck!

True, it would involve some serious soldering on your precious motherboard.

The only real issue is indeed the REFCLK signal. One would have to get it to the riser and clock buffer and then back up through or around the PCB.
In my imagination it could be done physically, but signal integrity could be an issue.

In any case it will involve desoldering and resoldering the complete slot, and leaving the REFCLK pins isolated from the mobo (maybe by bending them sideways, putting thin heatshrink around the pins, or, if the REFCLK signal is on the top or ideally the bottom layer of your mobo, delaminating the PTH pads).

Is the mobo's REFCLK just divided, 50MHz for each slot, or is there more to it?

This leads me to think you do not have the technical knowledge to pull this off. Sorry.
Why would the clock signal be "divided"? Every PCIe device needs its own 100 MHz REFCLK signal; the buffer simply fans out identical copies of the mobo's clock.
 
Interesting idea, but I think the connector for this would be pretty complex and expensive, or you'll end up soldering the wire/ribbon to the bottom of the slot.

Let's say you want to make a PCB with contacts printed on top, to press against the pins sticking out at the bottom of the board. What guarantee do you have that the pins are going to be of equal length? You'll most likely end up with small variations that might be critical for connectivity.

If you want to use some soft conducting material for the contacts, you will have to figure out how to connect it properly the first time, because after mounting it once the contact surface will already be deformed.

The expensive way would be something like the pins in an LGA socket, but well, that's not easy to make :)



My idea for tackling this would be a flexible ribbon that goes into the slot, with contacts on one side and isolation on the other, so it would isolate the x8 lanes from the slot; but it would need to be really thin.
 
Just for the sake of simplicity... how about a carefully considered case mod? Mess up the case instead; such a mistake would usually be less expensive than potentially having to replace the motherboard. Move the PCIe brackets out a bit or something.

But I understand the appeal of doing something just because it's a challenge.
 
Wow, thank you everyone for pitching in! This is very helpful.

In my imagination it could be done physically
You don't know how much it means to hear this from someone who made a working solution.

This leads me to think you do not have the technical knowledge to pull this off. Sorry.
Right on, no need to be sorry. That won't deter me from thinking outside the box though!

C_Payne, Saper_PL, I'm keeping in mind your pointers on signal integrity and the caveats of connecting from under the motherboard.

So here goes!

I've been taking a closer look at the PCIe slot, and I've noticed there are holes at the top of the connector with access to the pins. These contacts definitely have to meet a specification to ensure PCIe 3.0 throughput, unlike the solder joints under the motherboard.

Connecting a thin flexible flat cable between the slot and the card is the next possibility, since a tape fits. From what I gathered, flexible flat cables aren't that great with high-frequency signal integrity over a distance. I was looking at options able to carry the PCIe 3.0 differential pair speed (8 GT/s, or 984.6 MB/s of PCIe 3.0 throughput per lane in one direction), and Ethernet cables came to mind, most specifically Cat 6a: 500 MHz, double-shielded, capable of 10 Gbps, or 1.25 GB/s, over 100 m.
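
To make sure I'm comparing the right numbers, here's my own back-of-the-envelope math (treat it as a sanity check, nothing more):

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, carried on ONE
    # differential pair per direction.
    pcie3_payload_bps = 8e9 * 128 / 130
    print(pcie3_payload_bps / 8 / 1e6)   # ~984.6 MB/s per lane per direction

    # 10GBASE-T spreads its 10 Gb/s across FOUR pairs:
    print(10e9 / 4 / 1e9)                # 2.5 Gb/s per pair

So even though the cable is rated for 10 Gbps, no single pair carries more than 2.5 Gb/s in Ethernet use, while one PCIe 3.0 lane would put 8 Gb/s on a single pair.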

Here is the idea: several short Ethernet cables, carefully sized and stripped, soldered to short thin flexible flat cable strips to fit in the PCIe slot, using the holes in the slot for daisy-chaining the necessary signals. The REFCLK signal can be dealt with using both methods: getting it to the extra PCB through the holes, and sending it back to the slot with a small flexible flat cable, ensuring the card's contact pads are insulated from the pins on the mobo while still getting a REFCLK signal. Edit: maybe even avoid using all 4 pairs available in one cable, to further reduce the possibility of crosstalk.

This is madness and I wish you the best of luck!
Why thank you!

How about a carefully considered case mod?
The idea is ultimately to get PCIe bifurcation working for more people. It's not every day that you hear "mini-ITX" and "SLI/XF" in the same thread, except as a drawback of going mini-ITX!

The best already-available solution to "SLI/XFire in a mini-ITX enclosure" is mini-DTX. The standard didn't take off, and AFAIK there's only one motherboard with two physical PCIe x16 slots at mini-DTX width (fitting in most mini-ITX cases with two PCIe brackets), the Shuttle X79, which requires going with at least an i7-3930K on the Sandy Bridge-E architecture. Expect to give up on M.2, DDR4, or building with an AMD processor...

The solution found in this thread works, there's no denying it, but I don't believe it's perfect. This thread references three niche markets: "mini-ITX"/"SFF", "SLI/XFire", and "custom case design". I'm trying to move away from the "custom case design" niche to, at worst, the "water-cooling" niche. Let's face it: in the mini-ITX case market, either you go with an alternative (like a video capture card or a sound card or whatever you need) or you go SLI/XFire, which means the first GPU will need cooling within a single slot's depth. The ATX-laid-out Fractal Design Define Nano S (and the hopefully upcoming Meshify Nano S) is a great case for such an application: space for one 240 mm and one 280 mm radiator, two PCIe brackets, a mini-ITX motherboard, and an ATX PSU (the SilverStone ST85F-PT fits in the case and can deliver enough power for two 1080 Tis and a 7700K, all overclocked).

One step further: take the NCASE M1. It has three PCIe brackets. Forget SLI/XFire, and even water-cooling. Use a solution that doesn't "sacrifice" the PCIe slot to a riser, and there you go: a capture card in the first slot and a blower-style GPU in the second. Add the upcoming 6-core i7, and that could be an SFF enthusiast streamer's dream!
 
... there's only one motherboard with two physical PCIe x16 slots at mini-DTX width (fitting in most mini-ITX cases with two PCIe brackets), the Shuttle X79, which requires going with at least an i7-3930K on the Sandy Bridge-E architecture.

I actually bought this motherboard. It does not have an SLI certificate. Also, it won't fit in an NCASE M1 without mods to the power supply area, so it's useless for this. It does support 4xxx CPUs though.
 
The SLI "certificate" is just a fee paid to nVidia to say "yes, this mobo supports SLI". As we've seen in this very thread, non-certified mini-ITX boards support SLI through bifurcation. They didn't pay the fee for the Shuttle ;)

It is indeed useless then; I only pointed out that the width fits. It won't fit in a Nano S either, due to the cable management/hard drive recess area. Sorry, that was misleading of me. I've seen a water-cooled XFire build in a BitFenix Prodigy with this board, though.

I cited the 3930K as a baseline, since it's the earliest, least expensive option available. I agree going Ivy Bridge-E would help with power consumption or performance.

Edit: Back on topic. The idea of using the holes on top of the slot has one issue: the recently added reinforced slots. It won't work with them... Back to the drawing board on this one!
 
The SLI "certificate" is just a fee paid to nVidia to say "yes, this mobo supports SLI". As we've seen in this very thread, non-certified mini-ITX boards support SLI through bifurcation. They didn't pay the fee for the Shuttle ;)

The SLI certificate is required for the nVidia drivers to enable the "SLI enable" option in the Control Panel, so you do need it. The mobo is nice, and the 4xxx series produces decent gains over 3xxx series CPUs, especially in SLI.
I got SLI to work on non-SLI boards using bifurcation by injecting the required certificate. It didn't just magically work by itself :)
 
That's news to me. I found conflicting reports, so I'll take your experience over hearsay.

Back on topic. From my calculations, I'd need eight 4-pair Ethernet cables just to get all the pins from the back of the slot. I'm looking to avoid having that many. Simple question: can the ground pins of the additional PCIe slot on the daughterboard be connected to any ground, like directly to a PSU ground?
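
For reference, here's where my cable count comes from (rough math, counting only the high-speed pairs):

    lanes = 16            # full x16 slot
    pairs_per_lane = 2    # one TX pair + one RX pair per lane
    pairs_per_cable = 4   # a Cat 6a cable has four twisted pairs

    signal_pairs = lanes * pairs_per_lane
    print(signal_pairs)                    # 32 differential pairs
    print(signal_pairs / pairs_per_cable)  # 8.0 cables, and that's before
                                           # REFCLK, PERST#, SMBus and grounds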

Edit: I was looking into LVDS and twisted pairs, and Cat 6a may not be suitable. I'd need to select a cable that can handle 8 GT/s over a single twisted pair over a relatively short distance compared to the 100 m "standard", while not having too much shielding, to help with the bend radius... My current knowledge doesn't allow me to make a decision. So much to learn!

Edit2: wait, people use PCIe ribbon cable adapters. Untwisted. Am I overthinking this?
 
Last edited:
That's news to me. I found conflicting reports, so I'll take your experience over hearsay.

Back on topic. From my calculations, I'd need eight 4-pair Ethernet cables just to get all the pins from the back of the slot. I'm looking to avoid having that many. Simple question: can the ground pins of the additional PCIe slot on the daughterboard be connected to any ground, like directly to a PSU ground?

Edit: I was looking into LVDS and twisted pairs, and Cat 6a may not be suitable. I'd need to select a cable that can handle 8 GT/s over a single twisted pair over a relatively short distance compared to the 100 m "standard", while not having too much shielding, to help with the bend radius... My current knowledge doesn't allow me to make a decision. So much to learn!

Edit2: wait, people use PCIe ribbon cable adapters. Untwisted. Am I overthinking this?
All I can say is good luck!
 
All I can say is good luck!

Thank you!

C_Payne, if you happen to pass by, may I ask which IDT-9DBL02 chip you used, what the components around it are and what their roles are, and whether their spacing is important or they can be placed closer together?
 
C_Payne, if you happen to pass by, may I ask which IDT-9DBL02 chip you used, what the components around it are and what their roles are, and whether their spacing is important or they can be placed closer together?

9DBL0242 - default 100 ohm output.

The ballast is to be found in the datasheet; most of it is power rail filtering. It should be as close as possible. Are you at ease with impedance-controlled routing?

C.
 
Thank you very much! I'll look more into it. I've already been through it for the package dimensions and pinout, to recreate it in Eagle. Baby steps, but I'm getting... somewhere.

Are you at ease with impedance-controlled routing?

Not at all, but now that I know it's necessary, I'll learn about it!
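
As a first taste, here's a quick sketch I put together using the classic IPC-2141 microstrip approximation. The stackup numbers are made-up example values, definitely not a verified design; a real board would go through a proper stackup calculator or field solver:

    import math

    def microstrip_z0(er, h, w, t):
        # IPC-2141 single-ended microstrip approximation:
        # er = dielectric constant, h = dielectric height,
        # w = trace width, t = trace thickness (consistent units).
        return 87 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

    def edge_coupled_diff(z0, s, h):
        # Common approximation for edge-coupled differential impedance;
        # s = gap between the two traces of the pair.
        return 2 * z0 * (1 - 0.48 * math.exp(-0.96 * s / h))

    z0 = microstrip_z0(er=4.3, h=0.2, w=0.35, t=0.035)     # FR-4-ish, mm
    print(round(z0, 1))                                    # ~48.6 ohm
    print(round(edge_coupled_diff(z0, s=0.2, h=0.2), 1))   # ~79.3 ohm diff

If I understand correctly, PCIe add-in cards are commonly routed around 85 ohm differential, so width and spacing get tuned until the numbers land there.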

I'll write down a set of functional specifications soon. I need it now.
 
Thank you very much! I'll look more into it. I've already been through it for the package dimensions and pinout, to recreate it in Eagle. Baby steps, but I'm getting... somewhere.

Maybe thin flex PCBs are your way to go. I think they exist double-sided, for the common signals and the clock signal.

They could fit in between your card and the connector. The connection between the card and the flex PCB will be an issue though.

C.
 
I'm still looking into using the holes on top of the PCIe slot for the common signals. They are about 7.85 mm deep, and connections can be secured thanks to the card in the slot pushing the pins outward, plus adhesive tape on the side of the slot. I'm looking for 1 mm pitch ribbon cable to achieve this. This would completely avoid connectivity issues with the card in the original slot.

For the clock signal I'm thinking of using both solutions: the pin holes to get the signal out, and an FFC to bring it back. Again, connectivity issues could happen without proper contact.

A double-sided flex PCB looks interesting for the clock signal; it would be easier than the above, and the flexibility afforded by the thinner cable would help with connecting to the additional PCB. It just needs some additional thickness to ensure proper contact, and that is up for research. Wouldn't there be crosstalk though?

I'd like to announce that I finally got a job, so cost may not be an issue for me anymore!
 
I was reconsidering using flex PCB for the common signals. The problem at hand is that with each contact pad connected to the FPCB, connection quality would suffer. What if a cut were made between each pad, allowing each pin to press its pad against the pad on the card?
 
Hi... still new at this bifurcation thing.
Does anyone have a list of mobos that support this? Does the AsRock X99 Taichi support bifurcation?
 
Unfortunately, finding mobos that support bifurcation from publicly available information is not an easy task. Even when shuffling through several user manuals, there aren't many that specifically mention PCIe bifurcation configuration / riser card support in the BIOS/UEFI description, including AsRock's.

Have a look through AsRock's X99E-ITX/ac manual here, a mobo we know supports bifurcation: ftp://europe.asrock.com/Manual/X99E-ITXac.pdf
Or even the Fatal1ty X370 Gaming ITX/ac's manual: http://asrock.nl/downloadsite/Manual/Fatal1ty X370 Gaming-ITXac.pdf
EDIT: WHOOPS! It is actually mentioned on page 57. Deleted the rest to avoid confusion.

My point still stands: the X99E-ITX/ac manual hasn't been updated to reflect the bifurcation capability.
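
Once you do have a board in hand, one way to confirm bifurcation took effect (on Linux at least) is that each card on the riser enumerates as its own device with the reduced link width. A quick sysfs walk, assuming the standard /sys/bus/pci layout:

    from pathlib import Path

    # List every PCI device with its trained link width; with x8/x8
    # bifurcation active you should see two devices at "8" where a
    # single x16 card would have shown one device at "16".
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        node = dev / "current_link_width"
        if node.exists():
            print(dev.name, node.read_text().strip())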
 
I have had some requests and made the following additional bifurcated risers.
I have PCBs left for each of those designs.

PM if interested.

Chris

[Attached: photos of the riser PCB designs]
 
I have had some requests and made the following additional bifurcated risers.
I have PCBs left for each of those designs.

PM if interested.

Chris

Any chance you would share your design? I need to break out an x16 into four x4... I have Altium for my design tooling, and I promise to keep intact any licenses you want or need included in the final product.
 
C_Payne Or just the schematic surrounding the zero-delay buffer, or a hint to the application note you sourced? I am breaking out an x16 into four x4 OCuLink connectors for fly-over cables in my design.
 
I'm interested in the schematics around the zero-delay buffer too. Would you mind sending a PM my way?
 