PCIe Bifurcation

I believe I found an interesting way to split one slot into two -- at a rather high cost, I must warn: the Supermicro AOC-SLG3-2E4 uses a PLX PEX8718 to switch a PCIe 3.0 x8 link into two PCIe 3.0 x4 links, presented as two SFF-8643 connectors. While SFF-8643 is primarily a storage connector, it does carry four PCIe lanes, so a U.2 (SFF-8639) to PCIe x4 adapter converts it back. You will also need two SFF-8639 68-pin straight + power to SFF-8643 36-pin HD mini-SAS U.2 cables. The cables are quite flexible compared to the usual risers.

Another possibility is the PCI Express Carrier Board for M.2 SSD modules from Amfeltec, but I'm not sure it supports PCIe 3.0: the spec page says it's only Gen2, while M.2 is Gen3. Odd. It costs about 450 USD and is hard to find -- although if you do the math on the Amfeltec PCIe Storage Adapter page, the card should be 300 USD. Someone interested should ask Transitl whether they would sell a bare board for that price. If it really is 3.0 it would be reasonably competitive, given the Supermicro is 250 USD. As you've seen in another thread, Intel just released a cable to convert from M.2 to U.2 directly: "Intel bridges the U.2 gap with an M.2 cable for its 750 Series SSD".

Alternatively, the PE4C V4.1 (PCIe x16 adapter) breaks an M.2 out into a physical x16 slot (x4 electrical) directly. This one is apparently only PCIe 2.0 as well. Pity.
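For a rough sense of what the Gen2-only options give up, here is a back-of-envelope comparison of per-direction link bandwidth (a sketch; the generation figures per product are as stated above):

```python
# Usable per-direction PCIe bandwidth: transfer rate x lanes x encoding / 8 bits.
# Gen1/Gen2 use 8b/10b encoding; Gen3 uses 128b/130b.
GEN = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),     # PE4C V4.1, Amfeltec carrier (per its spec page)
    3: (8.0, 128 / 130),  # Supermicro AOC-SLG3-2E4 (PLX-switched)
}

def usable_gb_s(gen, lanes):
    gt_s, eff = GEN[gen]
    return gt_s * lanes * eff / 8

for gen, lanes in [(2, 4), (3, 4), (3, 8)]:
    print(f"Gen{gen} x{lanes}: {usable_gb_s(gen, lanes):.2f} GB/s")
# Gen2 x4: 2.00 GB/s, Gen3 x4: 3.94 GB/s, Gen3 x8: 7.88 GB/s
```

So each x4 leg of the Supermicro card carries nearly twice what a Gen2 x4 link would.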
 
jb1: yeah, the GTX 1070 is also making me very curious. Unfortunately I already bought the GTX 970, and the accompanying water blocks too.
 
I'm wondering whether two 1080s in SLI can be modded into single-slot cards and watercooled.

But not sure where to fit the rads...
 
Hi folks, what a wonderful thread this is. Just registered to ask a few questions:

1) I have read the entire thread but I'm still a bit confused about which splitters work. Is it true that the only splitter that is confirmed to work for PCIe Gen 3 devices is the Ameri-rack one?

2) I have the ASRock X99 ITX motherboard. The beta BIOSes have disappeared from the ASRock site. Does this mean that support for bifurcation has made it into the stable BIOS? I don't have a CPU yet so I can't check for myself.

3) Is the Ameri-rack splitter entirely passive? As in no components on it, just traces? If so, would it be possible to make up my own by buying 2 PCIe x16 sockets and a PCIe x16 edge connector and wiring them all up carefully by hand? It's possible to buy x16 PCIe ribbon cables which have a socket on one end and an edge connector on the other. If I buy two I could reuse the ends and rewire the middle part...

4) If it's possible to wire one up, what is the wiring diagram? There is an image in this thread of the pin-out for a Supermicro splitter. Can I just wire that up and expect it to work?

5) If it's not possible to make my own and I have to get the Ameri-rack one, what length of ribbon should I get to achieve a configuration where the two slots are in exactly the same position as a dual slot GPU would be without the splitter but raised vertically upwards to make space for the splitter? 3cm or 5cm? How much extra height would the splitter add in this configuration?

If you are wondering what I'm up to, I'm doing an insane Mac Classic beige mod. Hopefully with TECs. I am not allowed to post a link to it until I have 3 posts here but if you search for Mac Classic Forever on overclock.net you will find it.

Thanks for your help!
 

If you are planning on buying from the Ameri-rack folks, don't. I just finished talking to their sales people and they said that they only sell to system integrators now, which is a bummer since I was planning on buying a couple from them.
 
Schmov17: Yes, it worked with the new BIOS that supports bifurcation.

But it seems it's just a Gen2 riser?
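One way to check what a riser actually negotiated: on Linux, `lspci -vv` prints a `LnkSta:` line per device, and the speed field maps directly to a PCIe generation. A small parsing sketch (the sample line is illustrative, not captured from this riser):

```python
import re

# Negotiated speed -> PCIe generation: 2.5 GT/s = Gen1, 5 GT/s = Gen2, 8 GT/s = Gen3.
SPEED_TO_GEN = {"2.5GT/s": 1, "5GT/s": 2, "8GT/s": 3}

def link_gen(lnksta_line):
    """Return the PCIe generation from an `lspci -vv` LnkSta line, or None."""
    m = re.search(r"Speed (\S+?GT/s)", lnksta_line)
    return SPEED_TO_GEN.get(m.group(1)) if m else None

sample = "LnkSta: Speed 5GT/s, Width x16, TrErr- Train- SlotClk+"
print(link_gen(sample))  # -> 2, i.e. the link trained at Gen2
```

Note that a passive riser has no speed of its own; a Gen2 result can also mean the slot, the card, or the BIOS limited the link.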

(replying to the questions above)

1. No. Supermicro sells a few Gen3 ones too (RSC-G2FR-A66 and RSC-R2UG-A2E16-A).

2. No, just use the new beta ones. I had problems with the current one, but there is also an older beta.

3. Yes, the Ameri-rack seems to be passive. And the idea of making one yourself is interesting... try it and tell us!

4. Don't know about this one.

5. Depends on your configuration, but for dual slot it should be 5cm. Extra height should be around 15mm with a spacer (so that the contacts at the bottom don't touch the case).


BTW, thanks chemist_slime for figuring out how it works! I'm using this for a watercooled ITX build with two 980 Tis. (BTW, there is also a solution to use the cards without a bridge; I tested it and it works like a charm with no loss of performance, but I need a few people to test the instructions.)
 
BTW Thanks chemist_slime for figuring out how it works! I use this to make a watercooled build with ITX and two 980ti. (BTW there is also a solution to use the cards without a riser, tested it and works like a charm with no loss of performance. But need a few persons to test the instructions)

What exactly is this riser-less solution you speak of?
 
If you are planning on buying from Ameri-rack folks, don't. I just finished talking to their sales guys and they said that they just sell to system integrators now, which is a bummer since i was planning on buying a couple from them.

Hmm. I think this might have been my only option...
...since the supermicro ones won't fit.

Is there any other solution which will get me two cards into the space of a dual slot card with a bit of extra height?
 
What exactly is this riser-less solution you speak of?

Ah sorry, I meant "bridge"... too many risers :D

Is there any other solution which will get me two cards into the space of a dual slot card with a bit of extra height?

The Ameri-rack isn't for dual-slot cards... it's for single-slot ones.
 
Yes, I want two single slot cards in the space of one dual slot card but they will have to be a bit higher up to fit the riser between the bottom of the cards and the motherboard. I think the Ameri-rack riser can do this if you bend the ribbon cable round. I'm looking for any solution like that which I can actually purchase.
 
Great build on OC; I'll be monitoring it. And yes, for what you want (which is what I'm doing as well), the Ameri-rack seems to be the only solution. Though according to the person above, it's not clear they still sell to individuals.
 
heb1001: are you adding two graphics cards, or something else? I haven't read that part on OC yet.

Two GPUs is the current plan, but I guess in the future I might want one GPU plus something else.

Is there a chip to duplicate the reference clock signal on the Ameri-rack splitter?

I could probably get a few made if I could work out the wiring diagram. I visited the Shanghai electronics mall last weekend and found a few people offering PCB manufacturing. The mall is incredible: five football-pitch-sized floors full of stalls about the size of a single bedroom, selling everything you could possibly imagine.
 
Yes, there is. I'll send pics of the Ameri-rack splitter in a bit.
 
It uses this chip. It's the only one on the splitter.
 

Attachments

  • Photo Aug 17, 12 26 12.jpg (145.5 KB)
Too bad this can't be used on older chipsets. Imagine the quad-SLI powerhouse you could make out of any mATX setup...
 
heb1001 If you end up getting one of these made, can I buy one or two off you?

Ameri-rack didn't answer my email. I'm going to phone them. If they won't sell me one I may try to get some made. It looks borderline doable with the skills I have if I get lucky with the signal integrity. If it doesn't work first time though I will never be able to find out why as I don't have access to test equipment that would work at that speed. The last time I did something like this the bus was ISA and I was wire wrapping the wires individually. If I'm successful though it would be good to make them available to individuals somehow. So the answer is yes, I guess, but don't hold your breath. It may never happen.

Is the PI6C20400 wired up to the SMBUS? Are pins 13 and 14 of the PI6C20400 (counting anticlockwise from the indent in the package looking from the top of the chip) connected to pins 5 and 6 of side B (the component side) of the PCIe fingers? EDIT: nevermind, I can try it both ways and see which works.

I wonder if those little coax cables which are used for connecting WiFi antennas would work for the PCIe lane differential signals. It might be possible to do a single PCB with coax sockets that can be stacked to create a 2, 3 or 4 way splitter depending on how the coax cables were connected. The clock buffer supports 4 clock outputs so that would be enough.

EDIT: this twin axial cable looks like the right stuff: http://multimedia.3m.com/mws/media/673519O/3mtm-twin-axial-cable-sl8800-series.pdf&fn=Twin Ax Sales Sheet.pdf
 
Unfortunately I don't have the knowledge to answer your technical questions, but would more photos help? Are there specific ones you'd like me to take?
 
At the start of this thread I was also looking into this; making a PCB that takes the 100MHz reference clock and splits it into two isn't that big of a project. Combine that with a working flexible riser and you have a simple splitter that's even more flexible: you can easily have different lengths.
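For reference, a rough inventory of what a two-way x8/x8 splitter has to route. This is an assumption pieced together from this thread (passive data lanes, buffered reference clock), not a verified pinout:

```python
# Hypothetical signal map for a passive two-way bifurcation splitter.
SPLITTER = {
    # Data lanes pass straight through: slot lanes 0-7 go to card A,
    # slot lanes 8-15 become card B's lanes 0-7.
    "lanes": {"slot_a": list(range(0, 8)), "slot_b": list(range(8, 16))},
    "refclk": "buffered",   # 100 MHz REFCLK duplicated by a clock buffer (e.g. PI6C20400)
    "perst": "fanned out",  # reset sideband shared with both slots (assumption)
    "smbus": "uncertain",   # see the SMBus wiring question earlier in the thread
}

for signal, routing in SPLITTER.items():
    print(signal, "->", routing)
```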
 
Yes, I recall! I also signed up at the SFF forum; I didn't know you were a mod there! I'll be backing Cerberus once it's available as well!
 
No need. It's only the SMBus connections I'm not sure about, and it will be easy enough to try both possible implementations and see which works. It may all be documented anyway; I haven't even tried looking yet.

I've been thinking about the mechanical form factor, and I think I'd want to avoid using a cable altogether and aim for the minimum possible height for the GPUs above the motherboard (with two single-slot cards in the same horizontal location a dual-slot GPU would occupy without the splitter, as I said in a previous post). There are two obvious options:

1) vertical PCIe fingers with a right-angle connector to a horizontal PCB that sits on top, carrying two PCIe sockets and the clock buffer chip;

2) vertical PCIe fingers with a single PCIe socket on the top edge of the same PCB directly above, plus the clock buffer and a connector in the middle for a parallel PCB stacked one slot away, the second PCB also with a single PCIe socket on its top edge.

I think the first option would probably be lower profile, but I'm not sure.
 
Not sure if anyone mentioned this yet, but the Gigabyte GA-Z170N-WiFi got x8/x8 bifurcation on the new rev 2.0 models.

I don't suppose they're gonna manufacture riser cards themselves, or list compatible riser cards from third party suppliers.
You know, to avoid the hassle of someone buying a random splitter and it possibly not working, then wanting a refund.
 
This thread is pure gold, so much hard-to-find information. I also have a question that's a bit off topic, but I fear this is the only place in the Universe with people knowledgeable enough to answer it. :)

I built a system quite a while ago that I am still very happy with, and am trying to upgrade and keep alive until it becomes a true senior citizen and needs to retire. It's a Gigabyte GA-X58A-UD3R, a PCIe 2.0 motherboard with 2x x16 & 2x x8 slots, SLI & Crossfire certified. I am more of a software developer than a gamer, so it's not necessary to have the latest and greatest; my focus is maximizing I/O performance, which has always been a bottleneck for me.

Remarkably, the Samsung 950 Pro NVMe M.2 SSD is bootable on this pre-UEFI system, and I've installed Win10 on it as my boot drive via a PCIe M.2 adapter card. Unfortunately the SM951 I purchased before that is not bootable, but after getting the 950 I had an extra M.2 stick. I decided to try RAIDing them together instead of selling it, and with some convoluted Windows tinkering I got it to work, and am getting some insane I/O performance considering it's on a 6-year-old motherboard based on an 8-year-old chipset:
[Screenshot: m2raid-med.png]

Not shown: it also achieves ~100,000 IOPS. This is all great, except the M.2 drives are actually capable of much more than that on a PCIe 3.0 bus. So I have been trying to think of a way to tap that bandwidth without throwing out the motherboard.

From what I read above, this motherboard should support bifurcation, considering the tricks it already does with PCIe lane distribution. However that wouldn't help because all it would do is save a slot by allowing two x4 drives to connect via one x8 slot, but the connections would still be PCIe 2.0.

But also from what I read above, a PLX bridged riser should work as well (with M.2 -> PCIe x4 adapters plugged in to it). If the on-card M.2 slots are x4 3.0, and the card itself is x16, plugged in to an x16 2.0 slot, there should be enough bandwidth for the M.2 cards to communicate at full speed to the bridge chip, and in turn for the bridge chip to communicate with the motherboard.

However this hinges on a critical assumption -- that the bridge chip doesn't just act as a dumb lane switch, routing the 4 lanes from the M.2 devices directly to the CPU via four 2.0 lanes, but smartly buffers and distributes the actual data among all available lanes between it and the motherboard. So the riser would have to take x8 3.0 lanes on one side and use all x16 2.0 lanes on the other. Is this a pipe dream?
 
It's a Gigabyte GA-X58A-UD3R, PCI-e 2.0 MB with 2x x16 & 2x x8 slots, ...Samsung 950 Pro NVMe via a PCI-e M.2 adapter card....This is all great except the M.2 drives are actually capable of much more than that on a PCIe 3.0 bus.
... But also from what I read above, a PLX bridged riser should work as well (with M.2 -> PCIe x4 adapters plugged in to it). If the on-card M.2 slots are x4 3.0, and the card itself is x16, plugged in to an x16 2.0 slot, there should be enough bandwidth for the M.2 cards to communicate at full speed to the bridge chip, and in turn for the bridge chip to communicate with the motherboard.

However this hinges on a critical assumption -- that the bridge chip doesn't just act as a dumb lane switch, routing the 4 lanes from the M.2 devices directly to the CPU via four 2.0 lanes, but smartly buffers and distributes the actual data among all available lanes between it and the motherboard. So the riser would have to take x8 3.0 lanes on one side and use all x16 2.0 lanes on the other. Is this a pipe dream?

I would think this is a pipe dream. I believe they are dumb switches, to start with. Further to that, the byte encoding differs between generations, so a re-encode would kill latency.

See the PCI_Express article on Wikipedia; the History_and_revisions section shows the encodings and their bandwidth differences.

However, reading the PEX8717 description on Avago's site, they do mention mixed-generation environments, and the datasheet for the PEX8734 has some diagrams depicting an x16 host feeding four x4 slots. That doesn't show Gen3 clients on a Gen2 host, however.
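For what it's worth, the raw arithmetic behind the question does just about work out; whether any real switch repacks traffic across generations like this is exactly the open question:

```python
# Can a Gen2 x16 uplink carry two Gen3 x4 NVMe drives at full speed,
# assuming the switch buffers and re-distributes data across all lanes?
def usable_gb_s(gen, lanes):
    gt_s, eff = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}[gen]
    return gt_s * lanes * eff / 8  # per-direction GB/s after encoding overhead

drives = 2 * usable_gb_s(3, 4)  # two Gen3 x4 drives downstream: ~7.88 GB/s
uplink = usable_gb_s(2, 16)     # Gen2 x16 upstream to the X58 board: 8.00 GB/s
print(f"drives: {drives:.2f} GB/s, uplink: {uplink:.2f} GB/s")
```

So in pure bandwidth terms the uplink just barely covers the drives, before any protocol overhead.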

Thanks again for the wonderful thread. I hope to share some wonderful PCIe bifurcation news from Gigabyte soon.
 
Hi team, I was inspired by chemist_slime's success to roll my own mITX dual R9 Nano machine. I mostly do engineering simulation and rendering, not gaming, but this sounded like a cool way to build a cheap-ish parallel processing powerhouse in a really small enclosure. So I don't need Crossfire per se, but it would be fun to have as a bonus. I do want to at least be able to use one card for graphics and one for data, though- a poor man's Mac Pro.

I have an ASRock Fatal1ty mITX board (with bifurcation support enabled in the BIOS), a Supermicro RSC-R2UT-2E8R passive bifurcation riser, and an EZDIY x16 passive riser cable to reach the second Nano around the first. The power supply is a 650W Corsair CX650M, so I should have plenty of juice.

However, I have a problem. After removing the AMD drivers, updating the Intel onboard graphics driver, and reinstalling the AMD drivers, I can run the system on onboard graphics stably and have both cards show up in the display adapters:

[Screenshot: 20161008_subsequent_amd_full_drivers_intsall.PNG]


I can also run a single card at a time driving the display, so I know there's nothing wrong with either card (or the PCIE riser or splitter).

However, when I try to drive the display from one card with both cards plugged in, my system is unstable. Sometimes it will work in intermittent bursts, sometimes it will boot but then show graphical corruption then BSOD, and most of the time it won't boot at all.

During one period of relative stability, I was able to see that the AMD driver is seeing both cards:

[Screenshot: Capture2.JPG]


so there doesn't seem to be anything wrong with the PCIE connection per se. It's only that the system is unstable with both cards in and one card powering the display - everything else works.

The only thing I've been able to think of is that maybe the two Nanos are drawing too much power through the PCIe slot, overloading the slot and causing the instability and shutdowns. I wouldn't think this would be a problem: they aren't power-hungry cards, and they have separate 8-pin connectors. So before I go buy a powered riser to try to solve the issue, I wanted to see what people think the problem is.

What do you think could be wrong here? Has anyone else had similar problems? What did they do to solve them? Is it just the PCIE slot being overdrawn or is it some other issue I haven't yet found? Any ideas?
 
It could maybe be interference from one of the GPUs, picked up in the riser ribbon cable you are using.

Just a guess.
 
If the "EZDIY riser" is this one, then I'd also suspect it. Something like the HDPLEX riser is relatively cheap (compared to the gold-standard 3M riser) and works well.
 
Hi all,
I have been a long-time reader (lurker) here and in other related fora, and have spent an enormous amount of time on a very similar project inspired by this thread: an ASRock X99E-ITX/ac + two GTX 1080s (butchered into single-slot cards and crammed into an NCASE, with the EKWB Predator 240 all-in-one solution to watercool everything and the Silverstone 700W SFX-L PSU for power).
I have used Amerirack's ARC1-PERY423-C10, which is described as: 402/C10, PCIe x16 splitter, one PCIe x16 to dual PCIe x16 (x8 electrical) reversed flexible riser/splitter w/10cm ribbon, RoHS, Gen3 compatible. It can even be screwed onto the NCASE right next to the mini-ITX motherboard.
(link: ARC1-PELY423-C7V3) In other words, perfect for this build. So far so good, except I have EXACTLY the same instability issues described above in the 2x Nano setup!
Both cards are recognised in Windows Device Manager, but unless I disable one of them (by right-click > Disable), if I try to use both the machine will eventually (very quickly) lock up: the cursor freezes and only a hard reset restores things. In fact I think the display driver hangs while the rest of the OS underneath works normally; I can remote-login etc. I have tried endless combinations of Windows versions, nVidia driver versions etc., but it's always the same story: a cursor freeze that "feels" like a hardware issue.
ASRock technical support have proven extremely helpful, with multiple beta BIOS versions to make things work (Broadwell + bifurcation is broken in the current public BIOSes, at least for my particular setup, so they came back with a custom one), and I am still iterating with them trying to nail things down. If I get anywhere I will post here; I even started a case with nVidia blaming the drivers. Otherwise it's "interference", or the 75W limit per PCIe slot, or who knows...
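On the 75W-per-slot theory: a quick worst-case sketch. The per-card slot draw below is a guess for illustration, not a measurement:

```python
# PCIe CEM budget for an x16 slot: ~66 W at 12 V plus ~10 W at 3.3 V, 75 W total.
SLOT_LIMIT_W = 75
CARDS = 2                  # both cards hang off one physical slot via the splitter
PER_CARD_SLOT_DRAW_W = 40  # assumption: plausible slot-side draw per card

total = CARDS * PER_CARD_SLOT_DRAW_W
print(f"combined slot draw ~{total} W vs {SLOT_LIMIT_W} W budget")
# Each card can be individually in spec while the pair still exceeds
# the single physical slot's budget -- one argument for a powered riser.
```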
 