PCIe Bifurcation

Not entirely correct. Before his request on the ASRock forum there was another thread asking whether the Z97E-ITX/ac had bifurcation support. In that thread ASRock_TSD responded with no, and said the X99E-ITX/ac had bifurcation support, as would future 100-series chipsets.

Weirdly enough they deleted the thread, but Google's cache still has it; see a couple of posts back.

Edit: it seems the cached page isn't available anymore either.

Here is the cache I archived:
https://archive.is/C27BA

Several months ago I grabbed some rigid PCIe Gen 3.0 x16 2-way and 3-way splitters from IBM servers that someone was parting out, and I've had no use for them so far. Now this, plus the prohibitive price of Skylake CPUs, is gradually pushing me towards the X99E-ITX. (Though I'll probably wait till the full details of the M8I become available.)
 
We are talking here about SLI/CrossFire, for which there is no multithreading/multicore support in DX11. DX12 provides this with several modes of operation: Alternate Frame Rendering, Split Frame Rendering and Multiadapter. This means that frames can be drawn alternately by the GPUs, or multiple GPUs can draw a single frame, and graphics adapters from different manufacturers can operate together. All of this is related to multithreading jobs and distributing them to different cores.



This is not about the internal workings of graphics cards but about how the system manages operations when there are several graphics cards (SLI). In the present model the cards are managed sequentially by a single thread on a single core, which leads to bottlenecks. In DX12 the cards can be managed by multiple threads on multiple cores.

Maybe I'm not understanding, or you're using incorrect terminology, but DX12 allowing multiple threads to manage a resource doesn't magically increase performance.
What would magically increase performance (and this seems to be the case from how you're describing it) is the OS forwarding jobs to multiple GPUs itself and deciding for itself how those partial jobs get forwarded to the GPUs and how the workload is split.
This has nothing to do with it suddenly supporting multithreading or multicore, though, because those jobs were split to different cores on the GPU previously as well which all only did partial work. What seems to be happening now is that all cores of all GPUs are considered for this, not just the ones of the "currently managed" GPU.

Anyway, didn't the PLX-based splitters fail to work previously as well?
 
Maybe I'm not understanding, or you're using incorrect terminology, but DX12 allowing multiple threads to manage a resource doesn't magically increase performance. What would magically increase performance (and this seems to be the case from how you're describing it) is the OS forwarding jobs to multiple GPUs itself and deciding for itself how those partial jobs get forwarded to the GPUs and how the workload is split. This has nothing to do with it suddenly supporting multithreading or multicore, though, because those jobs were split to different cores on the GPU previously as well which all only did partial work. What seems to be happening now is that all cores of all GPUs are considered for this, not just the ones of the "currently managed" GPU.

I think there is a mix-up of GPU and CPU cores here. The talk was about handling the SLI jobs. This is done by the CPU; in the previous model a single core and a single thread were involved in this. Take texture loading as an example: that was done by one CPU core in a sequential manner (and all GPUs had to be loaded with the same data). In DX12 the GPUs can be loaded by multiple cores and threads (probably different data can be loaded too). Regarding the hand-off to GPU processing, there is a mode in DX12 called SFR (Split Frame Rendering) in which multiple GPUs render a single frame.
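For what it's worth, here is a minimal C++ sketch of what the CPU side of DX12 explicit multi-adapter looks like: the application enumerates every GPU itself and creates a separate ID3D12Device and command queue per adapter, after which each device can be fed command lists from its own CPU thread. This is only an illustrative skeleton under those assumptions (enumeration and queue creation, no actual rendering), not code from anyone in this thread.

// Build with: cl /EHsc multiadapter.cpp d3d12.lib dxgi.lib
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;      // one device per physical GPU
    std::vector<ComPtr<ID3D12CommandQueue>> queues; // one direct queue per device

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            continue; // adapter doesn't support D3D12

        D3D12_COMMAND_QUEUE_DESC qdesc = {};
        qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

        devices.push_back(device);
        queues.push_back(queue);
    }

    // At this point each devices[i]/queues[i] pair can be driven by its own CPU
    // thread, recording and submitting command lists independently; how the work
    // is split across GPUs (AFR, SFR, etc.) is entirely up to the application.
    return 0;
}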
 
In theory:
- No BIOS support for PCIe bifurcation: Only splitters with PLX chips will work
- BIOS support for PCIe bifurcation: Passive splitters will also work
In practice:
This is an edge case most manufacturers won't test for, and definitely not something consumer GPUs are tested with. What works may be complete pot-luck depending on components used.
Correct, though the PLX chip part should work in any system. Take any dual-GPU card such as the GTX 690/Titan Z/295X2: these are all PLX chip cards and they work without any fuss. It could be that that is because the GPUs are always present (some kind of handshake that is done), but these cards should work in any PCIe slot, regardless of any BIOS. If only someone could test one of these cards on an earlier BIOS for the X99E.
Short answer: Without a special BIOS, you'll need a PLX chip. Long answer: PCIe bifurcation (the ability to split an x16 into x8/x8 without the use of a PLX chip) is less of a BIOS feature, and more of a chipset feature. It is INTEL who enables the bifurcation through their chipsets, and it is the motherboard manufacturers who in turn must enable it in their BIOS. The Z87, Z97, X99, and Z170 (rumored) chipsets support bifurcation. It's built in. If you don't see a bifurcation option in your BIOS, then you may need to ask your manufacturer. Chemist_Slime did just that and it was a one-day turnaround for ASRock to enable it.
I stand by the accuracy of my statements. Bifurcation support starts at the chipset level, and Intel makes the chipsets, not ASRock. According to Intel, the Z87, Z97, X99, and Z170 chipsets support bifurcation. ASRock not choosing to support it in their BIOS is their decision; however, that decision doesn't condemn the entire platform. Intel ARK datasheets showing chipset bifurcation support: Z97 (page 52), Z170 (page 21).
To me, this looks like a BIOS support question, right?
Definitely seems like a BIOS issue to me as well, from my experience. The 1.20E BIOS + the Ameri-Rack Gen3 passive PCIe splitter + the ASRock X99E-ITX is currently the only confirmed reliable solution that I know of that will support Gen 3.0 graphics cards. The only exception to this is that the Supermicro active PLX splitter also works, but it requires the 1.20E BIOS as well. The other passive Supermicro splitter and the Ameri-Rack Gen2 only work with PCIe 2.0 cards; the GTX 970 wouldn't POST.

I have collected all the different positions here. Since experiment prevails over any theory, it seems that even an active splitter with a PLX chip requires BIOS support on motherboards without on-board PLX chip(s).

My question is what the situation is on motherboards which have on-board PLX chip(s). They should have PLX BIOS support enabled. The problem could be that the BIOS is limited to supporting only those PLX chips which are on-board (even if the chipset supports a kind of generic bifurcation). It would be enlightening if somebody with access to a PLX motherboard could test with an active PLX splitter.

Still, how the PCIe expander boxes seem to work with any motherboard remains unexplained.
 
I have collected all the different positions here. Since experiment prevails over any theory, it seems that even an active splitter with a PLX chip requires BIOS support on motherboards without on-board PLX chip(s).
The PLX chip is not on the motherboard*, but on the PCIe splitter. BIOS support is needed for PCIe bifurcation, but the whole point of a PLX chip is to make the lane splitting transparent to the motherboard.
The problem is, this doesn't necessarily work in practice. If the PCIe splitter board uses some non-standard pin assignments, or the PLX chip (or the GPU, or the motherboard) uses a technically-within-spec-but-non-standard implementation of PCIe that works perfectly fine in normal use but has some odd timing/capacitance/etc issue when lanes are split, the PLX splitter that should be 'transparent' may not necessarily work.


*Some large boards have PLX chips in order to produce 'extra' PCIe slots if for some reason you need 8 16x slots and/or a whole bunch of SATA connectors (whose controller also needs some PCIe lanes). In this case, because the motherboard manufacturers KNOW a specific PLX chip is going to be used, extra effort will be put into testing and validation to make sure that nothing (or as few things as possible) break.
 
The PLX chip is not on the motherboard*, but on the PCIe splitter. BIOS support is needed for PCIe bifurcation, but the whole point of a PLX chip is to make the lane splitting transparent to the motherboard. The problem is, this doesn't necessarily work in practice. If the PCIe splitter board uses some non-standard pin assignments, or the PLX chip (or the GPU, or the motherboard) uses a technically-within-spec-but-non-standard implementation of PCIe that works perfectly fine in normal use but has some odd timing/capacitance/etc issue when lanes are split, the PLX splitter that should be 'transparent' may not necessarily work. *Some large boards have PLX chips in order to produce 'extra' PCIe slots if for some reason you need 8 16x slots and/or a whole bunch of SATA connectors (whose controller also needs some PCIe lanes). In this case, because the motherboard manufacturers KNOW a specific PLX chip is going to be used, extra effort will be put into testing and validation to make sure that nothing (or as few things as possible) break.

From the above I rather doubt you have detailed knowledge of the issues here. Explaining why the PLX chips are used is good for less advanced people than those who are discussing here. Note that I have a motherboard with two PLX chips. This means they have to be supported in the BIOS. What I am looking for is whether, if I add a PCIe splitter with a PLX chip, it will be recognized and enabled. This is absolutely possible in principle, though it may not work due to some kind of corner-cutting by the BIOS implementers.
 
I extremely rarely do this but this needs to stop.
wirk, you shouldn't call other people "less advanced" when you still don't understand the difference between PLX chips and PCIe bifurcation, even though it has been explained to you multiple times. You asked this on page 2, it has been answered plenty of times and you keep ignoring the answer. EdZ and QinX are both very knowledgeable people in my book, so please respect what they have to say.
 
I extremely rarely do this but this needs to stop.
wirk, you shouldn't call other people "less advanced" when you still don't understand the difference between PLX chips and PCIe bifurcation, even though it has been explained to you multiple times. You asked this on page 2, it has been answered plenty of times and you keep ignoring the answer. EdZ and QinX are both very knowledgeable people in my book, so please respect what they have to say.

<3

Also wirk, if you have the ASRock X99 WS-E/10G or X79 Extreme11 and want to know if it works just buy the PLX card and if it doesn't work ask/complain to ASRock that your $650 motherboard doesn't work with it.

Secondly you could approach the retailer that sells the PLX card and ask if they would accept a return of the card because you are unsure if it works with your motherboard.

PS:
I've just sent an email to Avago with the question (simplified):
Do your PLX chips need any hardware or software/BIOS support from a motherboard in order to work?

If it is no, then we know that the X99E just had a bit of flimsy PCIe code in the BIOS. That could happen on any motherboard.
If it is yes, then let's hope we get a nice detailed answer as to what or why.
 
This is absolutely possible in principle, though it may not work due to some kind of corner-cutting by the BIOS implementers.
It's possible in principle, but may not work because PCIe is hard. It's an exceptionally high-speed bus, with multiple parallel lines and a rather high clock speed. GPUs are also some of the devices that push the bus closest to the limit, after dedicated data transfer controllers (storage controllers and high-speed network interfaces), so issues that may not be visible with a gigabit Ethernet or USB3 card that is only barely taxing a lane or two will rear their head when you've got two GPUs that could be hammering 16 lanes both sharing the same set of 16 lanes. Operating on the bus also means talking to multiple components at a rather low level to perform the proper handshaking, which is further complicated when you have two devices both on the 'same' lanes, because you now have a PLX chip talking to two devices and pretending to be the motherboard, and talking to the motherboard pretending to be one device when it actually has two attached.

That you can just drop a GPU into a motherboard and have it 'Just Work' (mostly; there are still plenty of times you can look through BIOS update notes and find mentions of fixes for PCIe issues) is due to motherboard vendors doing a lot of validation to iron out bugs. When a PLX chip is on the motherboard itself, that also gets folded into validation testing, and any quirks of that PLX chip are worked around. When you have a PLX chip on a PCIe splitter card, however, it hasn't been validated in a specific system as a whole. If any one of the PLX chip, the GPUs used, or the motherboard implementation has a quirk that causes an issue, that won't be caught until that specific combination of components is brought together. Just because a motherboard has a specific PLX chip in a certain location does not mean that same PLX chip transplanted onto a riser card on a different set of PCIe lanes (which could be from the CPU rather than the chipset) will work, let alone a different model of PLX chip from the same manufacturer, or even a PLX chip from a completely different manufacturer!
If you ever wonder why workstation and server boards publish lists of approved hardware, with warnings that Bad Things may happen when using unapproved hardware, this is why. If something hasn't been tested and validated, there's no guarantee it will work without issues (or at all). Most of the extra cost of workstation and server boards is validation, rather than extra component costs.

What QinX and others are doing with PCIe splitters and GPUs is far from normal usage. Motherboard manufacturers are not testing for this, both because it's always going to be a niche practice and because they probably haven't even considered someone might want to do it in the first place. The same goes for PCIe bifurcation. While things should 'Just Work' in theory, in practice implementing these sorts of things is hard, and we effectively never see all the work required to get things working nicely. We just see the end result that dropping a PCIe card into a slot Just Works (mostly).
Right now, we're seeing the sausage being made in real time; messing with different bits of hardware and seeing what works and what doesn't and watching ASRock release BIOS updates to fix issues that they never even considered testing for as they are found.

I have to take my hat off to QinX, he's acting as a one-man R&D section for a whole new usage case.
 
I extremely rarely do this but this needs to stop.
wirk, you shouldn't call other people "less advanced" when you still don't understand the difference between PLX chips and PCIe bifurcation, even though it has been explained to you multiple times. You asked this on page 2, it has been answered plenty of times and you keep ignoring the answer. EdZ and QinX are both very knowledgeable people in my book, so please respect what they have to say.

I understand the issue, but some people do not understand the relation between PLX and PCIe bifurcation, or in other words between passive and active splitters.

Also wirk, if you have the ASRock X99 WS-E/10G or X79 Extreme11 and want to know if it works just buy the PLX card and if it doesn't work ask/complain to ASRock that your $650 motherboard doesn't work with it.

That is unfortunately not that simple, since I have an ASUS X99-E WS motherboard. From what I am reading it looks like ASRock support is way, way better than Asus's. There is absolutely no chance Asus would do anything special; I tried to ask in the official forums.

Secondly you could approach the retailer that sells the PLX card and ask if they would accept a return of the card because you are unsure if it works with your motherboard.

Yes, that is a potential option, though asking around on the net should be exhausted first.


PS: I've just sent an email to Avago with the question (simplified): Do your PLX chips need any hardware or software/BIOS support from a motherboard in order to work? If it is no, then we know that the X99E just had a bit of flimsy PCIe code in the BIOS. That could happen on any motherboard. If it is yes, then let's hope we get a nice detailed answer as to what or why.

The problem is a bit different, since it is now quite clear from the ASRock case that bifurcation needs some BIOS support for both passive and active splitters on motherboards without on-board PLX chips (which implies that PLX chips need some BIOS support). The real question is thus about motherboards with on-board PLX chips: do they need separate BIOS support for an active PLX splitter or not?
 
I understand the issue, but some people do not understand the relation between PLX and PCIe bifurcation, or in other words between passive and active splitters.
We've been trying to explain this to you for some time: In theory, PLX chips on splitters should not require any work on the part of the motherboard manufacturer. In practice, it does require some work because PCIe is not a simple bus. Complaining that motherboard manufacturers are 'lazy' or 'cutting corners' helps no-one, and makes light of the amount of effort needed to implement PCIe.
There is absolutely no chance Asus would do anything special; I tried to ask in the official forums.
It's actually pretty rare for 'official forums' to have actual company representatives active on them.
If you have a hardware incompatibility issue, the best route would be to contact Asus directly as a fault ticket, and from there ask whatever frontline staff you end up in contact with to connect you with the engineering staff (or if there is a language barrier, act as a go-between). To be taken seriously, it would be best to document the steps you have taken extensively, and have on hand as much information as possible on the hardware you are using and detailed descriptions of the errors you are seeing (or how the board behaves if there is no explicit error message) for each configuration you have tried. Attaching an external PLX splitter to a board is a rare use-case, so an "it doesn't work, fix it!" with minimal investigation probably won't be given a high priority. Having a case with plenty of testing performed to start with makes it an easier job to tackle, and thus more likely to be looked into.
 
[Attached screenshots: excerpts from Intel chipset datasheets showing PCIe port bifurcation support]
 

Good find, but the support is expected; Z97 has it as well.
All motherboards out there that have SLI or CrossFire support have PCIe bifurcation code enabled in the BIOS, because the BIOS needs to split the regular x16 link into two x8 links when multiple cards are used.
The only problem is that almost no manufacturers expose the PCIe bifurcation option on ITX boards, which makes sense given that they only have one PCIe slot and possibly an M.2 slot, but the support could be retrofitted easily. If enough interest can be gathered, either through the use of PLX risers or the X99E board, other manufacturers might jump on board; a ROG Impact with dual-GPU support would be something ASUS would love to promote, I'd reckon.

Imagine a Ncase M1 sized case with 2x Fury X or 2x Fury X2 ;)
 
Imagine an A4 sized case with 2x Fury Nano's!

That might actually be a harder fit than you think.
The case has a length of 314 mm, so 12-something inches. The Nano is 6 inches (152.4 mm), so two of them are 304.8 mm; that leaves 9.2 mm for two PCIe 8-pins, which have a body length of 21.4 mm, netting 326.2 mm in length.

You can't have them behind each other because of this. Perhaps if you try to ghetto it even more, but you'd have to have the case to try it.

You could solder the PCIe cables to the back of the card if you really really really wanted to. I don't know if there is enough space for that.
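A quick sanity check of those numbers, written out as a throwaway C++ snippet (all dimensions are the ones quoted above; the 314 mm case length and 21.4 mm plug body are the figures from the post, not my own measurements):

#include <cstdio>

int main() {
    const double case_length = 314.0;   // quoted case length, mm
    const double nano_length = 152.4;   // R9 Nano, 6 inches, in mm
    const double plug_body   = 21.4;    // PCIe 8-pin plug body length, mm

    const double two_cards = 2.0 * nano_length;        // 304.8 mm of card end to end
    const double leftover  = case_length - two_cards;  // 9.2 mm of space left over
    const double needed    = two_cards + plug_body;    // 326.2 mm with even one plug in line

    printf("Two Nanos end to end: %.1f mm, leftover: %.1f mm\n", two_cards, leftover);
    printf("With an 8-pin plug in line: %.1f mm vs. %.1f mm of case length\n",
           needed, case_length);
    return 0;
}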
 
That might actually be a harder fit than you think.
The case has a length of 314 mm, so 12-something inches. The Nano is 6 inches (152.4 mm), so two of them are 304.8 mm; that leaves 9.2 mm for two PCIe 8-pins, which have a body length of 21.4 mm, netting 326.2 mm in length.

You can't have them behind each other because of this. Perhaps if you try to ghetto it even more, but you'd have to have the case to try it.

You could solder the PCIe cables to the back of the card if you really really really wanted to. I don't know if there is enough space for that.

Yeah, I realize it's a tight fit; I'm imagining placing one in the normal position, and the other pointed upward vertically. Not sure what the height of the R9 Nano is, but then you would just have to worry about the length of one card, the height of the other, the height of one PCI-e connection, and the height of one of the PCI-e power connectors.
 
If I'm reading that correctly:
- Lanes from the Z170 chipset don't support PCIe bifurcation, except for those dedicated for SATA Express (likely to enable the swap-between-being-two-SATA-ports-or-one-SATA-Express-port functionality)
- Splitting lanes from the CPU requires toggling CFG[5], but this may not be the only requirement

Do you have a full copy of the datasheet? It doesn't appear to be on Intel's website yet.
 
What QinX and others are doing with PCIe splitters and GPUs is far from normal usage. Motherboard manufacturers are not testing for this, both because it's always going to be a niche practice and because they probably haven't even considered someone might want to do it in the first place. The same goes for PCIe bifurcation. While things should 'Just Work' in theory, in practice implementing these sorts of things is hard, and we effectively never see all the work required to get things working nicely. We just see the end result that dropping a PCIe card into a slot Just Works (mostly). Right now, we're seeing the sausage being made in real time; messing with different bits of hardware and seeing what works and what doesn't and watching ASRock release BIOS updates to fix issues that they never even considered testing for as they are found. I have to take my hat off to QinX, he's acting as a one-man R&D section for a whole new usage case.

This indeed is enthusiast activity at its best. Now I wonder what "BIOS support for bifurcation" really means. Since the processors and chipsets support it in many cases and bifurcation is enabled for the PCIe slots, maybe this is just one setting, "enable extended bifurcation"? Then one would need to contact enthusiast BIOS hackers to do it for the community :D.
 
Yeah, I realize it's a tight fit; I'm imagining placing one in the normal position, and the other pointed upward vertically. Not sure what the height of the R9 Nano is, but then you would just have to worry about the length of one card, the height of the other, the height of one PCI-e connection, and the height of one of the PCI-e power connectors.

Standard-height PCIe cards are 130 mm tall (including the PCIe bracket);
the A4 is 114 mm wide, so that doesn't fit.
 
Standard-height PCIe cards are 130 mm tall (including the PCIe bracket);
the A4 is 114 mm wide, so that doesn't fit.

This is how I imagine it:

[Attached mock-up image: one R9 Nano in the normal position and a second rotated to stand vertically inside the A4 case]


Crude and not quite to scale, but I've tried to scale based off of the PCI-e pins.

The A4 is 12.52 inches long, 7.87 inches high. The R9 Nano is 6 inches long. Rotating one card like this would hopefully let you fit both in, with ~1.5 inches to fit your PCI-e power cables for the rotated card if you don't mount any SSDs below.
 
Ah I see, yeah that could work. But I think you would need 2x long flexible PCIe risers, and also the clock signal multiplexer chip. The 2x flexible risers might be fairly expensive, and you also need the PCIe clock signal splitter PCB. It would probably be cheaper and more efficient to go with a dual-GPU Fury X2 card at that point, IF they release an air-cooled version, because with a 375 W TDP card you can run dual Nanos on one PCB.
 
Ah I see, yeah that could work. But I think you would need 2x long flexible PCIe risers, and also the clock signal multiplexer chip. The 2x flexible risers might be fairly expensive, and you also need the PCIe clock signal splitter PCB. It would probably be cheaper and more efficient to go with a dual-GPU Fury X2 card at that point, IF they release an air-cooled version, because with a 375 W TDP card you can run dual Nanos on one PCB.

Oh yeah, it would definitely be better to go with a dual-GPU Fury X2; I just have a feeling it's going to be a liquid-cooled card, given the 295X2. I wonder if Nvidia will do a dual-Maxwell card? My only concern with that is their Titan Z is a triple-slot card... and extremely expensive; it seems like they would do something like that again.
 
Ah I see, yeah that could work. But I think you would need 2x long flexible PCIe risers, and also the clock signal multiplexer chip. The 2x flexible risers might be fairly expensive, and you also need the PCIe clock signal splitter PCB. It would probably be cheaper and more efficient to go with a dual-GPU Fury X2 card at that point, IF they release an air-cooled version, because with a 375 W TDP card you can run dual Nanos on one PCB.

I do not see the need for a clock multiplexer chip, just 2x flexible risers of good quality?

Oh yeah, it would definitely be better to go with a dual-GPU Fury X2; I just have a feeling it's going to be a liquid-cooled card, given the 295X2. I wonder if Nvidia will do a dual-Maxwell card? My only concern with that is their Titan Z is a triple-slot card... and extremely expensive; it seems like they would do something like that again.

Nvidia should bring a dual-Maxwell card sometime in the fall. Otherwise, when AMD brings the dual Fury, it will claim to have the speediest card in the Milky Way, damaging Nvidia's reputation. I can see Nvidia not bringing such a card only if the new Pascal chip is coming sooner than expected. The Titan Z was expensive, but maybe now one can get it cheaper second-hand?

Dual- or triple-slot cards can easily be converted to single-slot watercooling. One just has to remove the DVI connector by cutting it out. I recently tried it with a Titan X and it went fine; the connector is cut out and the card is OK, though I do not have it under water yet.

Big iron cards in a small case would be nice to have watercooled; that, however, requires custom radiators, I think.
 
I do not see the need for a clock multiplexer chip, just 2x flexible risers of good quality?

You NEED a clock multiplexer chip!

Because of the specification for the reference clock signal you can't just split it; you need to duplicate it using a chip that is designed to do that.
 
While researching some other stuff I realised that instead of bifurcating the PCIe slot we could go for an M.2 -> PCIe x4 converter like this:

[Attached image: an M.2-to-PCIe x4 adapter card]


Does this mean we could easily get two GPUs on an mITX board? Has anyone tested this?
 
There is only one mITX board that I know of where this would fit and it's the X99E-ITX board that already supports PCIe bifurcation :)
 
Looks like all Z170 boards have this slot at the back if they have M.2. Let's hope someone will start making M.2 -> PCIe ribbons like the PCIe x4 risers :)
 
Yeah, I've been looking for those too, no luck though. Or not with the desired specs, namely PCIe 3.0 x4.
 
Yeah, I've been looking for those too, no luck though. Or not with the desired specs, namely PCIe 3.0 x4.

Is the one pictured above not PCIe 3.0?

With one of these you could do a triple CrossFireX setup... 3 R9 Fury Nanos, 2 at x8 and 1 at x4 speeds on the X99 motherboard.
 
The M.2 port on most ITX boards is connected to the chipset, rather than the CPU. On all chipsets prior to the Z170 series, the chipset links to the CPU with DMI 2.0, which is essentially a PCIe 2.0 x4 link. This would be the bottleneck for any GPU attached to that M.2 port (as it would be competing on that x4 link with everything else in the system, like all your storage, all your I/O, etc).
Z170 has DMI 3.0, which is effectively a PCIe 3.0 x4 link. It could still end up being a bottleneck once you have storage etc. sharing that link.
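To put rough numbers on that, here is a quick back-of-the-envelope calculation in C++ (raw link rates only; the encoding overheads are the standard PCIe figures, and DMI protocol overhead is ignored):

#include <cstdio>

int main() {
    const double lanes = 4.0;  // DMI 2.0 and DMI 3.0 are both four-lane links

    // PCIe 2.0: 5 GT/s per lane with 8b/10b encoding  -> 4.0 Gbit/s usable per lane
    // PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~7.88 Gbit/s usable per lane
    const double gen2_per_lane_gbps = 5.0 * (8.0 / 10.0);
    const double gen3_per_lane_gbps = 8.0 * (128.0 / 130.0);

    // Divide by 8 to convert Gbit/s to GB/s.
    printf("DMI 2.0 (PCIe 2.0 x4): %.2f GB/s\n", lanes * gen2_per_lane_gbps / 8.0);  // ~2.00
    printf("DMI 3.0 (PCIe 3.0 x4): %.2f GB/s\n", lanes * gen3_per_lane_gbps / 8.0);  // ~3.94
    printf("PCIe 3.0 x16 (a full GPU slot): %.2f GB/s\n",
           16.0 * gen3_per_lane_gbps / 8.0);                                         // ~15.75
    return 0;
}

So a GPU hanging off a pre-Z170 chipset M.2 port is sharing roughly 2 GB/s with everything else behind the chipset, versus roughly 4 GB/s on Z170, and either way far less than a full x16 slot.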
 
The M.2 port on most ITX boards is connected to the chipset, rather than the CPU. On all chipsets prior to the Z170 series, the chipset links to the CPU with DMI 2.0, which is essentially a PCIe 2.0 x4 link. This would be the bottleneck for any GPU attached to that M.2 port (as it would be competing on that x4 link with everything else in the system, like all your storage, all your I/O, etc).
Z170 has DMI 3.0, which is effectively a PCIe 3.0 x4 link. It could still end up being a bottleneck once you have storage etc. sharing that link.

Right, this is as I would expect on the Z170 board, where the CPU only has 16 lanes of PCIe. However, on the X99 board the CPU can have up to 40 lanes of PCIe. Presumably in this case the M.2 port would draw lanes from the CPU?
 
I'm glad someone brought this up. I already have that M.2 -> PCIe x4 adapter and an x4 -> x16 PCIe splitter, hooked my graphics card up to it, and guess what? No display! However, this was prior to the 1.20E BIOS from ASRock. So when I get back home tomorrow, I'll give it another shot. I already know that you can do 4 x 4x from the bifurcation, but maybe I can add in another from the M.2! :)
 
What is this animal? Is it a splitter, or should it be called an 'adapter'?

I'm guessing chemist_slime means an x4 to x16 adapter.

@chemist_slime, looking forward to the results! Are you testing this in OS X? Which GPU are you using? Could this be similar to your problem with PCI-e 2.0 with the GTX 970?
 
Is CrossFire or SLI even possible over the DMI bus?

I don't think so? If you're using the X99 ITX board this wouldn't be an issue because the M.2 slot connects straight to the CPU.
 
I don't think so? If you're using the X99 ITX board this wouldn't be an issue because the M.2 slot connects straight to the CPU.

Hey jb1, I saw your post on the ASRock forum about the bifurcation on the Z170 Gaming-ITX. Did you manage to get a direct answer from them? I see the thread hasn't been answered...
 