PCIe Bifurcation

Hi all,
I have been a long-time reader (lurker) here and in other related fora, and have spent an enormous amount of time on a very similar project inspired by this thread: an ASRock X99E-ITX/ac + 2x GTX 1080s (butchered into single-slot cards and crammed into an NCase, with the EKWB Predator 240 all-in-one to watercool everything and the Silverstone 700W SFX-L PSU for power).
I used Amerirack's ARC1-PERY423-C10, which is described as "402/C10, PCIe x16 splitter, one PCIe x16 to dual PCIe x16 (x8 electrical), reversed flexible riser/splitter w/10cm ribbon, RoHS, PCIe 3.0 (Gen3) compatible", and it can even be screwed onto the NCase right next to the mini-ITX motherboard.
(link: ARC1-PELY423-C7V3) In other words, perfect for this build. So far so good, except I have EXACTLY the same instability issues described above in the 2x Nano setup!
Both cards are recognised in Windows Device Manager, but unless I disable one of them (by right-click > Disable), trying to use both will eventually (very quickly) lock the machine up: the cursor freezes and only a hard reset restores things. In fact I think only the display driver hangs while the rest of the OS underneath keeps working normally - I can still remote-login, etc. I have tried endless combinations of Windows versions, NVIDIA driver versions, etc., but it's always the same story: a cursor freeze that "feels" like a hardware issue.
ASRock technical support have proven extremely helpful, supplying multiple beta BIOS versions to make things work (Broadwell + bifurcation is broken in the current public BIOSes, at least for my particular setup, so they came back with a custom one), and I am still iterating with them trying to nail things down. If I get anywhere I will post here - I even opened a case with NVIDIA blaming the drivers. Otherwise it's "interference", or the 75W limit per PCIe slot, or who knows..

Also working with the same board and splitter here - definitely post back if you get things working! Is the custom BIOS you're working with available in the beta section of the ASRock site? How is it different from the public ones that have bifurcation support?
 
Also working with the same board and splitter here - definitely post back if you get things working! Is the custom BIOS you're working with available in the beta section of the ASRock site? How is it different from the public ones that have bifurcation support?
Good to know we're in the same boat. BTW, the reason I don't buy the interference argument without proof is that, using the splitter with one card disabled and the other at x8 on the splitter, I have no issues whatsoever...
As for the BIOS, I needed it because once I switched over to a 6800K from my 5820K, the system wouldn't POST with any of the 3.1-3.6 UEFIs available on the website. They sent me a 3.63 beta straight from Taiwan that works. I'm waiting for more input about the random freezes; maybe they'll come up with something, who knows - they are impressively helpful, I have to say...
 
BTW, the reason I don't buy the interference argument without proof is that, using the splitter with one card disabled and the other at x8 on the splitter, I have no issues whatsoever...
The worry is less about 'noise' from external sources (like the PSU or other power circuitry) than about 'crosstalk' between PCIe links. With only one link active this isn't a problem, but with two links running right next to each other you can start to get 'valid-looking' signals 'leaking' between the PCIe links.
 
First things first... I love this thread! I have not found any other place with this kind of info on this topic, not anywhere. I was literally so excited when I found it that I stayed up until 4am reading the whole thing. :yawn: I'm shocked that no one here has a fully functional completed build based on this experiment yet. This thread started a year and a half ago! I was hoping that by the time I got to the end I would have seen tons of pictures of crazy builds. Oh well, let's get moving ;)

I thought I was the only one on the planet trying to cram two cards into a super small case and SLI them. Everywhere I looked, people were saying "it won't work", "why would you do that", and that type of stupid, unhelpful stuff. :mad:

I read this thread, PCIe Splitter/Sli?, which had me thinking about multiplying the PCIe x16 slot (initially I was only thinking in terms of multiplying the Mini PCIe or M.2 slot for the purpose of adding the additional GPU). I then came upon the term "bifurcation", and here I landed.

Prior to finding this thread yesterday, I had been researching this for weeks after getting myself the ASRock X99E-ITX/ac board, an NCase M1 (in the mail as we speak), an Intel Xeon E5-2683 v4, a GTX 1070, etc. I wanted a 64GB mini-ITX monster: to experiment and see if I could do SLI in it, cram absolutely everything else physically possible into it (extra add-on cards - USB 3.0, wireless, whatever), and make it look completely professional. I came across the various add-on boards that have been suggested here, but brushed them off for a reason that I will discuss later...

I was also looking into the M.2 adapters for the same purpose, just like you guys were/are. I found something that would most likely work for M.2-to-PCIe at near full speed (more on that later); however, I didn't want to lose the ability to have a super fast M.2 storage drive, and I need whatever option is chosen to be completely internal. I then looked at ways to multiply the M.2 slot into two slots, but no one on the planet who makes these types of things seems to think it worth creating one based on PCIe Gen3, which makes it a no-go for me - go figure.

I know that it has been mentioned a few times in passing, but I believe the main problem we are running into with these slot multipliers is most definitely a lack of power. Consider the multiplier I am using to split up the onboard Mini PCIe on this board:


[Images: Aflo3PS.jpg, Ep4eUKn.jpg, jgz2LfL.jpg, w9CD8lF.jpg, 1wZ7mmW.jpg - photos of the Mini PCIe multiplier]

Even these little cards require that the multiplier have moar powr. I actually didn't know why it wasn't working at first, until I realized I was an idiot and had forgotten to plug the SATA power into it. After that, boom, it works fine (although max speed seems to be limited to about 250MB/s, which is to be expected).
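
For anyone curious, that ~250MB/s lines up with single-lane PCIe math. A rough sketch, assuming (I haven't verified the negotiated link) that the multiplier brings each device up at one Gen1 lane:

```python
# Back-of-the-envelope usable bandwidth for one PCIe lane. The Gen1
# assumption below is mine, not something confirmed with this card.

def pcie_x1_mb_s(gt_per_s: float, encoding: float) -> float:
    """Usable MB/s for one lane: line rate (GT/s) times payload fraction.

    Gen1 = 2.5 GT/s, Gen2 = 5.0 GT/s; both use 8b/10b encoding (8/10).
    """
    return gt_per_s * 1e9 * encoding / 8 / 1e6

print(pcie_x1_mb_s(2.5, 8 / 10))  # Gen1 x1: 250.0 MB/s -- matches what I see
print(pcie_x1_mb_s(5.0, 8 / 10))  # Gen2 x1: 500.0 MB/s
```

If it were actually linking at Gen2 x1 the ceiling would be ~500MB/s, so either it's negotiating Gen1 or the switch/protocol overhead is eating half.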

In my search for the M.2 option of adding a graphics card, I came across this (which I'm sure you guys have already seen):

[NGFF Version] V8.0 EXP GDC Beast Laptop External Independent Video Card Dock


As you can see, although it's external (it can be done internally), it requires mad power. Reviews from people are mixed, but it DOES work, and at near full speed.

According to the manufacturer, it does comply with the Gen3 spec, but supposedly it won't make a big difference over Gen2: M.2 pci speeds supported? - Banggood Forum

After some research, this seems like too much of a hassle to make work completely internally, hence I find myself here.

My question is this:

If the Beast V8.0 requires tons of power to be provided to the slot, why would we think we can get away without providing nearly the same amount of power when using bifurcation? Isn't our best option to get a Gen3-spec Supermicro RSC-R2UT-2E8R with the PCIe power connector on it?


[Image: iGfPPCw.jpg - the Supermicro RSC-R2UT-2E8R]

If a PCIe slot gives a certain amount of power to these cards, aren't the cards designed to potentially take that much power from the slot before drawing from the power connector on the card? Or can a card "compensate" by just taking more from its power connector? Or is this a non-issue because supposedly PCIe 3.0 gives up to 300W per slot?


Fun fact... did you know that this board supports at least 128GB of RAM? I currently have 64GB working perfectly at 2400MHz. ASRock's support page just a few weeks ago listed this board as supporting a max of 32GB. This didn't make sense to me given that the board supports Xeons, so I reached out to ASRock support and asked. They quickly replied that it would work with 64GB, and updated the page a few days later! Amazing... Since then (a few days after that), the specs page now shows compatibility up to 128GB - seemingly all because I asked :) . I don't think it's a hard limit, just what they've been able to test. Anywho.... that's my story.

Fun fact 2... "DifferentSLIAuto" will enable SLI with this little board, and the info here shows you how to make it work natively without booting into a test environment.
 
If a PCIe slot gives a certain amount of power to these cards, aren't the cards designed to potentially take that much power from the slot before drawing from the power connector on the card? Or can a card "compensate" by just taking more from its power connector? Or is this a non-issue because supposedly PCIe 3.0 gives up to 300W per slot?
It depends on the card.
Most recent cards draw minimal power from the PCIe slot itself and draw almost all of their power from the PEG connector(s). The exceptions are the handful of bus-powered cards (no PEG connectors) and the RX 480 (and we've all seen the issues THAT resulted in). Overclocking may increase the power draw through the slot, but again, most cards will draw from the PEG connectors instead.

PCIe does not provide 300W per slot. There is a maximum of 75W per slot, of which at most 66W (5.5A) can come from the 12V rail. If more power is needed, it must be provided through PEG connectors.
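
To put rough numbers on that budget, here's a sketch using the rail limits just quoted (the 6-pin/8-pin wattages are the usual PEG connector ratings, added for comparison):

```python
# PCIe x16 slot power budget, per the rail limits above. The per-rail caps
# sum to ~76 W, but the spec's combined slot limit is 75 W.

RAILS = {
    "12V":  {"volts": 12.0, "max_amps": 5.5},  # 66 W, the bulk of the budget
    "3.3V": {"volts": 3.3,  "max_amps": 3.0},  # ~10 W
}

slot = sum(r["volts"] * r["max_amps"] for r in RAILS.values())
print(f"Rail caps sum: {slot:.1f} W (combined slot limit: 75 W)")

# Anything beyond that has to come in through PEG connectors:
PEG_6PIN, PEG_8PIN = 75, 150  # W per connector
print(f"Slot + one 6-pin: {75 + PEG_6PIN} W; slot + one 8-pin: {75 + PEG_8PIN} W")
```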
 
breathless, welcome to the thread - glad the information here has been helpful!

It depends on the card.
Most recent cards draw minimal power from the PCIe slot itself and draw almost all of their power from the PEG connector(s). The exceptions are the handful of bus-powered cards (no PEG connectors) and the RX 480 (and we've all seen the issues THAT resulted in). Overclocking may increase the power draw through the slot, but again, most cards will draw from the PEG connectors instead.

PCIe does not provide 300W per slot. There is a maximum of 75W per slot, of which at most 66W (5.5A) can come from the 12V rail. If more power is needed, it must be provided through PEG connectors.

I'm curious then why the RX 480 was designed to draw so much power over the PCIe slot instead of drawing more over the PEG connectors. Seems like an odd decision, but maybe it was to keep costs down by not adding an additional PEG connector?

My feeling is that a big problem is crosstalk, as mentioned a few posts up. It would be great if there were a bifurcated version of the 3M twin-axial cable to help us rule that possibility out. The Ameri-rack splitters we've been testing have shielding tape to isolate them from external interference, but there is little preventing crosstalk between individual PCIe lanes. However, I guess we can't rule out the 75W PCIe power issue until it's been tried, or until the normal PCIe power draw of the exact card you want to try has been measured.
 
But aren't these multipliers designed for exactly this purpose? If crosstalk were an issue, how would anyone use these things? It seems like the kind of thing they would iron out in the testing phase before release. I'm going to ask Supermicro and see what they say.

Update:

Here is SuperMicro's response:

"We do not have any certified riser cards to bifurcate the PCI-E slot on mini-ITX motherboards. RSC-R2UG-A2E16-A is an active riser card that is specifically designed for GPU motherboard and systems, not for standard min-ITX motherboards. It hard to know what can be the cause of signal instability. There are many factors that can impact that such as the PCI-E trace length being too long (mini-ITX systems do not support 2U form factor riser cards). The riser card is also an active riser card and there an additional latency for any active riser cards, so that can also be a cause for the instability. It could also be a power issue. Even the RSC-R2UT-2E8R is not designed or certified for min-ITX motherboards. Again, we do not have certified riser cards for 2U form factor for mini-ITX motherboards.

Technical support,

BZ"
 
But aren't these multipliers designed for exactly this purpose? If crosstalk were an issue, how would anyone use these things? It seems like the kind of thing they would iron out in the testing phase before release. I'm going to ask Supermicro and see what they say.

Update:

Here is SuperMicro's response:

"We do not have any certified riser cards to bifurcate the PCI-E slot on mini-ITX motherboards. RSC-R2UG-A2E16-A is an active riser card that is specifically designed for GPU motherboard and systems, not for standard min-ITX motherboards. It hard to know what can be the cause of signal instability. There are many factors that can impact that such as the PCI-E trace length being too long (mini-ITX systems do not support 2U form factor riser cards). The riser card is also an active riser card and there an additional latency for any active riser cards, so that can also be a cause for the instability. It could also be a power issue. Even the RSC-R2UT-2E8R is not designed or certified for min-ITX motherboards. Again, we do not have certified riser cards for 2U form factor for mini-ITX motherboards.

Technical support,

BZ"

Sounds like they don't know either! It could be signal instability due to cable length, or power issues.

Has anyone thought of trying the RSC-R2UT-2E8R with active power and no cabling, then using 3M twin-axial cables to route the PCIe signal to the GPUs? That solution would avoid both the possible power issues and the crosstalk issues.
 
Hi all,
I have been a long-time reader (lurker) here and in other related fora, and have spent an enormous amount of time on a very similar project inspired by this thread: an ASRock X99E-ITX/ac + 2x GTX 1080s (butchered into single-slot cards and crammed into an NCase, with the EKWB Predator 240 all-in-one to watercool everything and the Silverstone 700W SFX-L PSU for power).
I used Amerirack's ARC1-PERY423-C10, which is described as "402/C10, PCIe x16 splitter, one PCIe x16 to dual PCIe x16 (x8 electrical), reversed flexible riser/splitter w/10cm ribbon, RoHS, PCIe 3.0 (Gen3) compatible", and it can even be screwed onto the NCase right next to the mini-ITX motherboard.
(link: ARC1-PELY423-C7V3) In other words, perfect for this build. So far so good, except I have EXACTLY the same instability issues described above in the 2x Nano setup!
Both cards are recognised in Windows Device Manager, but unless I disable one of them (by right-click > Disable), trying to use both will eventually (very quickly) lock the machine up: the cursor freezes and only a hard reset restores things. In fact I think only the display driver hangs while the rest of the OS underneath keeps working normally - I can still remote-login, etc. I have tried endless combinations of Windows versions, NVIDIA driver versions, etc., but it's always the same story: a cursor freeze that "feels" like a hardware issue.
ASRock technical support have proven extremely helpful, supplying multiple beta BIOS versions to make things work (Broadwell + bifurcation is broken in the current public BIOSes, at least for my particular setup, so they came back with a custom one), and I am still iterating with them trying to nail things down. If I get anywhere I will post here - I even opened a case with NVIDIA blaming the drivers. Otherwise it's "interference", or the 75W limit per PCIe slot, or who knows..

Where'd you buy the Amerirack splitter? I was flat-out told that they don't sell it individually anymore.
 
Where'd you buy the Amerirack splitter? I was flat-out told that they don't sell it individually anymore.

Buy -RC1PELY423-C7 PCIe dual-lanes flexible splitter one PCIe x16 to 2-slots PCI

RC1-PELY423-C5V3 PCI-e Bifurcated Flexible Riser, One PCIe x16 to Dual PCIe x16 (OUT OF STOCK)


Has anyone thought of trying the RSC-R2UT-2E8R with active power and no cabling, then using 3M twin-axial cables to route the PCIe signal to the GPUs? That solution would avoid both the possible power issues and the crosstalk issues.

I just purchased the Supermicro RSC-R2UT-2E8R from wiredzone.com.

I'll let you know how it works... This should help significantly if the issue is due to a lack of power. It's only Gen2, but who cares for now while we're just trying to find something that works fully.

Typical extender cables tend to get terrible reviews, so I may have to spring for the 3M ones suggested here, just to make that a non-issue.

 
Hi team! Thanks to your help, my bifurcated dual-Nano system is up and running! It turned out EdZ was right - the riser cable was the problem.

I had tested it by itself prior to installing the dual Nanos, and it was fine. But in the course of unsuccessfully trying to cram the dual Nanos into the case, I must have broken a connection in the riser. I should have been suspicious - this particular riser comes with a printed page of .txt instructions on how you need to treat it with kid gloves or it will fail!

I figured this out before seeing EdZ's comment, though, so I now have a twice-as-expensive-as-HDPlex Lian Li riser installed. Oh well, it works! I'd post pictures but it's literally just bare components hanging out of the case supported on a stack of books right now, so I'll wait until I have something more presentable.

It's not out of the question that the cards' PCIe slot power draw is an issue, though. I noticed that running FurMark tends to crash the system unless I turn down the Nanos' power limit in the AMD driver software. This could be some weird FurMark-specific issue, though - I tried Unigine Valley and had no crashes. I'll let you know if it shows up in other benchmarks too once I have a chance to try them out.
 
Hi team! Thanks to your help, my bifurcated dual-Nano system is up and running! It turned out EdZ was right - the riser cable was the problem.

I had tested it by itself prior to installing the dual Nanos, and it was fine. But in the course of unsuccessfully trying to cram the dual Nanos into the case, I must have broken a connection in the riser. I should have been suspicious - this particular riser comes with a printed page of .txt instructions on how you need to treat it with kid gloves or it will fail!

I figured this out before seeing EdZ's comment, though, so I now have a twice-as-expensive-as-HDPlex Lian Li riser installed. Oh well, it works! I'd post pictures but it's literally just bare components hanging out of the case supported on a stack of books right now, so I'll wait until I have something more presentable.

It's not out of the question that the cards' PCIe slot power draw is an issue, though. I noticed that running FurMark tends to crash the system unless I turn down the Nanos' power limit in the AMD driver software. This could be some weird FurMark-specific issue, though - I tried Unigine Valley and had no crashes. I'll let you know if it shows up in other benchmarks too once I have a chance to try them out.

Great to hear you have it working! The HDPlex riser is not bifurcated, correct? What are you using to bifurcate your PCIe connection?

EDIT: Never mind, I see your previous post about using the RSC-R2UT-2E8R.

Congrats on getting it working!
 
so I now have a twice-as-expensive-as-HDPlex Lian Li riser installed.

Can you tell us exactly which Lian Li riser you got, and where you got it? I need one with a really low profile connector that doesn't extend very high after plugging into the motherboard.
 
So, I've received some hardware to test, including my 3M cables ($100 apiece - one 250mm and one 500mm) and my Supermicro RSC-R2UT-2E8R. Tests were with a GTX 970 and a 750 Ti.

I was disappointed to find that the RSC-R2UT-2E8R I received did not have the 6-pin power socket shown in the pictures, although it does have the holes for the socket, so I suppose one could be soldered in. I asked Supermicro about it, and they just got back to me with the following:

"The optional power connector was removed on the latest 1.01 PCB revision of this riser card due to hardware change on the twin motherboards that this riser card is used with. Since this is not designed for the type of motherboard that you are using, the risks, if any, are unknown."

I'm only getting a very limited degree of success thus far. It seems that I have NEAR PERFECT functionality, but only when the PCIe Link BIOS setting is set to "Gen 1" for both cards. Whichever card is set to Gen 2 gets no display. If both are set to Gen 2, all I get is motherboard beeps. If I try even just one card with the setting on "Gen 2", I get beeps. Sometimes they are not "no POST" beeps but rather just no display - the computer continues to boot, it just won't show anything on the screen.

The fact that it only works with the Gen 1 BIOS setting leads me to believe that it's not a power issue in my case, but rather a riser card issue or a BIOS issue. I can run 3DMark nearly without any issue with my 750 Ti set as the PhysX card and the 970 as the primary. Everything seems to function normally, except I did have an error on exiting 3DMark once.

I'm using BIOS 3.10, BTW, and it's pretty much the only BIOS thus far that has given me any real degree of success (i.e. I can get it to boot). If my memory serves me correctly, the BIOSes above 3.10 don't include the Gen 1 setting (hence no boot on those). I can't go below 3.10 because I have Broadwell.

MaximumBurrito...

Can you tell me a few things about your setup?

What BIOS version are you using?
What CPU do you have?


@ Anyone else that has had any success...

Please tell me what BIOS version you are using.
 
Just figured I'd give a little update for those still following this thread (though it seems to have died an unfortunately premature death).

-First things first: did you know that our board supports SLI natively, without the need for DifferentSLIAuto or HyperSLI?

[Image: IPaoJVb.jpg]

However, for me, performance is terrible... in fact it's worse than one card when SLI is enabled. It works, but when running benchmarks it stutters significantly (though sometimes it seems smooth).


-Bifurcation is indeed broken on Broadwell CPUs with the currently available BIOSes past 3.20, as alfaSZ previously reported.

-BIOS 3.63 (still unreleased, but available upon request) gives basic functionality back with Broadwell. You will be able to boot, and both cards will install and stay active in Device Manager.

-Ameri-rack had no problem selling me the ARC1-PERY423-C5 directly. It works too (see the next point, however).

-Even with basic functionality back, I have several different issues, and I'm not sure whether they exist because I have Broadwell (maybe there are still some underlying problems with the BIOS) or something else:

- Boot-up takes FOREVER with my GTX 1070 Minis installed. It's quicker when I swap one out for a GTX 970 Mini. Basically, it looks like it's booting up, then it stops and the screen goes black. If you wait several minutes, it comes back to life, continues to boot, and then appears to function normally.
- General performance is weird, with little glitches when loading webpages and such while SLI is enabled.
- With SLI DISABLED, and PhysX dedicated to one card (which happens by default), I get blue screens referencing the NVIDIA driver whenever I try to load any of the 3DMark applications. Strangely enough, I can sometimes load other things (like Crysis 2), but not the 3DMark benchmarks.
 
breathless, would you mind PMing me the beta BIOS? I'm also upgrading to Broadwell soon and have dual 970s. Were things working well when you had v3?
 
breathless, thanks for the update. Unfortunate that things aren't running smoothly. You seem to point to Broadwell-E as a possible culprit - have you tried with a Haswell-E or Haswell-EP CPU?
 
I have not... I don't have one to test with.

Got it, I was just curious why you mentioned that bifurcation is broken on the current BIOS with Broadwell-E... could it actually just be completely dysfunctional on the current, non-beta BIOS? I suppose from up above it seems like alfaSZ was able to get bifurcation working on the non-beta BIOS with Haswell-E, but not Broadwell-E.

Anyways, since you're using the 3M risers, it's pretty unlikely that interference is causing the issue. So it's either the riser card itself or the motherboard support. :\
 
Now I can imagine a compact gaming rig consisting of a Haswell-E CPU & dual Gigabyte GTX 1070 Mini GPUs in SLI…!

But then we would need an mITX version of the Project Orthrus chassis to house it all…!

Hmm…
 
Now I can imagine a compact gaming rig consisting of a Haswell-E CPU & dual Gigabyte GTX 1070 Mini GPUs in SLI…!

But then we would need an mITX version of the Project Orthrus chassis to house it all…!

Hmm…

You could conceivably house it all in an M1!
 
Exactly what I'm trying to do... gotta get the Minis working first, and then it's go time.

I think you're really close to getting this working. There are a few permutations left to try, it seems: different bifurcation cards and Haswell-E CPUs. Let us know if you hear anything back from ASRock about an updated BIOS as well!
 
I think you're really close to getting this working. There are a few permutations left to try, it seems: different bifurcation cards and Haswell-E CPUs. Let us know if you hear anything back from ASRock about an updated BIOS as well!

Right now I'm going to try a Supermicro RSC-R2UG-A2E16-A. I should receive it tomorrow, I think.


[Image: VDBq2H4.jpg - the Supermicro RSC-R2UG-A2E16-A]

Since this one is specifically listed as "active", I'm thinking it could potentially bypass the BIOS issues with bifurcation, since the card would be doing the signal splitting rather than the BIOS (at least that's my guess), and it is specifically designed for GPU support. Interestingly, it supposedly splits the signal into 2 x16 instead of 2 x8... I have no idea how it could do that.
 
Since this one is specifically listed as "active", I'm thinking it could potentially bypass the BIOS issues with bifurcation, since the card would be doing the signal splitting rather than the BIOS (at least that's my guess), and it is specifically designed for GPU support. Interestingly, it supposedly splits the signal into 2 x16 instead of 2 x8... I have no idea how it could do that.

Ah, that card has a PLX chip on it - that's why each card gets x16 lanes rather than x8! Sixteen lanes are fed from the chip to each of the cards. However, only 16 lanes come from the motherboard into the PLX chip, so the two cards share the original x16 from the motherboard - you're not magically getting more bandwidth. Since the PLX chip is expecting 16 lanes, you might try disabling bifurcation in the BIOS to get this splitter to work. These chips are what make dual-GPU boards (like the GTX 690 and R9 295X2) work over a single PCIe port on most motherboards (i.e. those that don't support bifurcation).
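
To make the sharing concrete, here's a quick sketch of the numbers involved (the "both busy" split is a simplifying assumption of mine; a PCIe switch arbitrates dynamically rather than giving each card a fixed half):

```python
# Why "2 x x16" off a PLX switch doesn't double bandwidth: both downstream
# x16 links funnel into one upstream x16 link to the CPU.

def pcie3_gb_s(lanes: int) -> float:
    """Usable GB/s for a PCIe 3.0 link: 8 GT/s per lane, 128b/130b encoding."""
    return 8e9 * (128 / 130) * lanes / 8 / 1e9

upstream = pcie3_gb_s(16)  # the single x16 back to the CPU: ~15.75 GB/s
print(f"Upstream x16: {upstream:.2f} GB/s")
print(f"One GPU bursting: {pcie3_gb_s(16):.2f} GB/s (full x16 to the switch)")
print(f"Both GPUs busy:  ~{upstream / 2:.2f} GB/s each (x8 equivalent)")
```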
 
Ah, that card has a PLX chip on it - that's why each card gets x16 lanes rather than x8! Sixteen lanes are fed from the chip to each of the cards. However, only 16 lanes come from the motherboard into the PLX chip, so the two cards share the original x16 from the motherboard - you're not magically getting more bandwidth. Since the PLX chip is expecting 16 lanes, you might try disabling bifurcation in the BIOS to get this splitter to work. These chips are what make dual-GPU boards (like the GTX 690 and R9 295X2) work over a single PCIe port on most motherboards (i.e. those that don't support bifurcation).

I tried both of those cards; even the active one requires BIOS support.
 
Success suckaaaaaaaaz ;):


[Images: 4jAEeIx.jpg, LcfbRs3.jpg, bQ5KFTr.jpg - the working dual GTX 1070 setup]

Can someone tell me that I'm the first known example of a fully working 64GB RAM, 2 x GTX 1070 mini-ITX SLI build? :whistle: Seriously though, a big shout-out to Chemist_Slime for being the initial guinea pig, getting all this going, and testing all these riser cards.


Anywho.... with this Supermicro RSC-R2UG-A2E16-A and BIOS 3.63 (insert standard "I'm not responsible if you blow up your system" blah blah), I didn't have to touch anything in the BIOS. It was set to x16, and it just worked. The effective "GT/s" is dynamic and goes up to 8.0 from what I've seen.
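
Side note: if anyone wants to double-check what their links actually negotiated without GPU-Z, nvidia-smi exposes it too. Something like this should work (standard nvidia-smi query fields, wrapped in Python for convenience; adjust to taste):

```python
# Print each GPU's negotiated PCIe generation and width. Run it while a
# benchmark is going, since the link downclocks (e.g. to Gen1) at idle.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # one line per card, e.g. "GeForce GTX 1070, 3, 16"
```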

Two minor issues:

-BOOT UP STILL TAKES FOREVER
-If you look at my GPU-Z, it only recognizes 4GB per card.

Other than that, the NVIDIA drivers recognized the cards, installed, rebooted, and enabled SLI on their own. I could tell it was looking good when the menus in 3DMark weren't stuttering, which they had been doing terribly before. The benchmark was very smooth, albeit with a few minor noticeable frame drops here and there.

Next step: fitting all this madness into my NCase M1, 2 x USB 3.0 cards included. It's gonna be fun :rage:
 
Success suckaaaaaaaaz ;):

Can someone tell me that I'm the first known example of a fully working 64GB RAM, 2 x GTX 1070 mini-ITX SLI build? :whistle: Seriously though, a big shout-out to Chemist_Slime for being the initial guinea pig, getting all this going, and testing all these riser cards.


Anywho.... with this Supermicro RSC-R2UG-A2E16-A and BIOS 3.63 (insert standard "I'm not responsible if you blow up your system" blah blah), I didn't have to touch anything in the BIOS. It was set to x16, and it just worked. The effective "GT/s" is dynamic and goes up to 8.0 from what I've seen.

Two minor issues:

-BOOT UP STILL TAKES FOREVER
-If you look at my GPU-Z, it only recognizes 4GB per card.

Other than that, the NVIDIA drivers recognized the cards, installed, rebooted, and enabled SLI on their own. I could tell it was looking good when the menus in 3DMark weren't stuttering, which they had been doing terribly before. The benchmark was very smooth, albeit with a few minor noticeable frame drops here and there.

Next step: fitting all this madness into my NCase M1, 2 x USB 3.0 cards included. It's gonna be fun :rage:

Awesome!! Nice work, breathless. You're the first reported example of fully working GTX 1070 SLI off of a consumer ITX motherboard.

- Unfortunate that the boot takes forever. At which point does it hang? In the BIOS initialization phase, or in Windows?
- Hey, as long as all 8GB are recognized elsewhere then I wouldn't worry about GPU-Z :)

Now that it's working, you can get on to the fun part: fitting it all in your M1! Have you thought about cutting a window in the side panel, so you can remind yourself of your achievement (two GTX 1070s sandwiched together)?
 
Awesome!! Nice work, breathless. You're the first reported example of fully working GTX 1070 SLI off of a consumer ITX motherboard.

- Unfortunate that the boot takes forever. At which point does it hang? In the BIOS initialization phase, or in Windows?
- Hey, as long as all 8GB are recognized elsewhere then I wouldn't worry about GPU-Z :)

Now that it's working, you can get on to the fun part: fitting it all in your M1! Have you thought about cutting a window in the side panel, so you can remind yourself of your achievement (two GTX 1070s sandwiched together)?

I have resolved the slow-boot issue completely by enabling the "Above 4G Decoding" setting in the BIOS, as well as the "Above 4G Decoding Patch" setting. If you don't enable the patch setting, you can't see the POST screen (even though it POSTs), and therefore you can't see the screen when you are inside the BIOS - though the system does boot, and eventually you can see the desktop. Enabling both settings immediately cut boot time by several minutes, and boot now appears completely normal.

I have no current plans for a case window because with what I have planned, not only will the precious millimeters taken up by the window be needed, but things are going to be right up against the side of the case - so you wouldn't necessarily see anything anyway :)
 
So there are two very minor issues I've noticed that are still there:

1) I get screen tearing in SLI when scrolling up and down webpages with the middle mouse button (I'm using Google Chrome). I never had this before. Changing the "Vertical Sync" setting to "Adaptive" helps some, but it still looks a little weird. Not horrific, but noticeable.

2) There are definitely massive frame-rate drops in certain games, and seemingly not in others. E.g. Battlefield 4 has them pretty badly, while Crysis 3 doesn't really seem to have them at all. I'm not sure if this is a typical SLI issue or due to how we are doing this. I do notice them somewhat in the Heaven benchmark as well at very moderate settings, though not nearly as bad as in Battlefield 4, and in Heaven they are more of a stutter than a "the whole game stops for 1 or 2 seconds" kind of thing.

I have another SLI bridge connector coming. For those that don't know: you can use two bridges, and it increases performance under certain conditions even without getting the HB bridge. We'll see how that goes.
 
Congrats!! I registered just to say that I've been following the progress here and can't wait to hear more.

Personally, I'm interested in the possibilities of multi-GPU in a small form factor for VM usage (unRAID).
 
Great work, breathless. Here's what I plan to be doing this weekend - still waiting on dual Titan X's, hehe.

2 x 128GB DIMMs + dual 970s + 24-core 2699 v4 <<--- NOT A TYPO, it's a Broadwell ES CPU, don't ask me where I got it. Here's a pic of the 128GB DIMMs to whet everyone's appetite.

[Image: DpPvpe6.jpg - the 128GB DIMMs]
 
Great work, breathless. Here's what I plan to be doing this weekend - still waiting on dual Titan X's, hehe.

2 x 128GB DIMMs + dual 970s + 24-core 2699 v4 <<--- NOT A TYPO, it's a Broadwell ES CPU, don't ask me where I got it. Here's a pic of the 128GB DIMMs to whet everyone's appetite.

You beat me to it! Though I was honestly going to try 2 x 64GB first. You've got guts - keep us updated!

I have to say, though, I'm not so sure you are going to get amazing performance out of those Titan X's. I'd love to be wrong, but my score seems really low for two 1070s in SLI, so there may be some kind of bandwidth ceiling that I'm hitting, and that we are all going to hit with powerful GPUs. Or maybe it's just that my Minis are not quite as powerful as their full-size counterparts, which I know is true to some extent.
 
You beat me to it! Though I was honestly going to try 2 x 64GB first. You've got guts - keep us updated!

I have to say, though, I'm not so sure you are going to get amazing performance out of those Titan X's. I'd love to be wrong, but my score seems really low for two 1070s in SLI, so there may be some kind of bandwidth ceiling that I'm hitting, and that we are all going to hit with powerful GPUs.

This rig will be used for machine learning, not necessarily gaming, which is where bandwidth is a concern, so I'm looking for raw computational power and CUDA cores. Hopefully it works! I have my fingers crossed. BTW, who did you email at ASRock for the 3.63 BIOS?
 
This rig will be used for machine learning, not necessarily gaming, which is where bandwidth is a concern, so I'm looking for raw computational power and CUDA cores. Hopefully it works! I have my fingers crossed. BTW, who did you email at ASRock for the 3.63 BIOS?

I've been in contact with John. Nice guy and quick response times.

So I guess my question would be the following: how can you get the raw computational power out of those monsters *if*, for argument's sake, the bandwidth is capped? Doesn't every application rely on the amount of bandwidth available?
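
Trying to reason through my own question, here's a toy comparison of time-to-transfer vs time-to-compute - all the numbers are invented purely for illustration:

```python
# If each batch of work is small relative to the GPU time it generates,
# a capped link barely matters; if you stream data constantly, it does.
# Every number here is an assumption for illustration, not a measurement.

batch_bytes = 32 * 3 * 224 * 224 * 4  # e.g. 32 fp32 images: ~19 MB
compute_s = 0.050                     # assume 50 ms of GPU math per batch

for label, gb_s in (("x8 ", 7.9), ("x16", 15.8)):  # approx PCIe 3.0 usable rates
    transfer_s = batch_bytes / (gb_s * 1e9)
    print(f"{label}: transfer {transfer_s * 1e3:.2f} ms vs compute {compute_s * 1e3:.0f} ms")

# Transfer is ~1-2 ms either way here, so halving the link width costs almost
# nothing -- which is why compute-bound ML cares less about bandwidth than
# SLI gaming, where the GPUs have to exchange frame data every frame.
```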
 