1155 mobo with 16x2 lanes

harmattan · Supreme [H]ardness · Joined Feb 11, 2008 · Messages: 5,129
I'm looking to upgrade to SB at some point in the near future and will likely go Crossfire or SLI with high-end GPUs (580s, 6970s). Are there any 1155 mobos out there now that offer at least two full 16x PCI-E lanes?

The ASUS P8P67, which seems to be the highest-end board currently, only has one 16x and one 8x (and one 4x).
 
1155 isn't really meant for your usage. This is the mainstream chip and as such the needed PCI-E lanes simply are not supported.

Socket 2011 is what you are looking for.
 
It's true that the high-end P67 motherboards have two x16 slots; however, when you add two video cards they each run at x8. It's my understanding that this limitation applies to all P67 boards.
 
I will have to dig up the review I saw (I think it may have been an Overclock3d.net video) where the reviewer was talking about the NVIDIA chip on the P67-UD7 motherboard that boosted the second PCI-e slot to x16 with Crossfire or SLI. So you would have x16/x16 with two cards. I may have mistaken what he was saying though.
 
The cards both run at x16 into the NVIDIA bridge, but the bridge only has one x16 connection to the CPU (more marketing than anything useful).

Bigddybn was right in that you need to wait for LGA2011 if you want two true x16 links direct to the CPU. You will also get PCIe 3.0 with the upcoming LGA2011, so roughly four times the PCIe bandwidth in total to the CPU vs LGA1155.
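
A rough sanity check of that "roughly four times" figure, using the commonly quoted per-lane rates (~500 MB/s per direction for PCIe 2.0, ~985 MB/s for PCIe 3.0); the numbers are approximations for illustration, not vendor specifications:

```python
# Back-of-the-envelope comparison of CPU-attached PCIe bandwidth
# (per direction). Per-lane figures are the usual approximations.

PCIE2_PER_LANE_MBS = 500    # PCIe 2.0: 5 GT/s with 8b/10b encoding
PCIE3_PER_LANE_MBS = 985    # PCIe 3.0: 8 GT/s with 128b/130b encoding

lga1155 = 16 * PCIE2_PER_LANE_MBS        # one x16 PCIe 2.0 link off the CPU
lga2011 = 2 * 16 * PCIE3_PER_LANE_MBS    # two x16 PCIe 3.0 links off the CPU

print(f"LGA1155 CPU lanes: {lga1155 / 1000:.1f} GB/s")   # ~8.0 GB/s
print(f"LGA2011 CPU lanes: {lga2011 / 1000:.1f} GB/s")   # ~31.5 GB/s
print(f"Ratio: {lga2011 / lga1155:.1f}x")                # ~3.9x
```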
 
However, in real-world usage, [H] testing found little appreciable difference between x16/x16 and x8/x8, so it isn't like it is a deal-killer today.
 
However, in real-world usage, [H] testing found little appreciable difference between x16/x16 and x8/x8, so it isn't like it is a deal-killer today.

Can you reference some reviews on this? It's not that I don't believe it, but when I used to run 4850s in Crossfire I noticed a big difference when I stepped up to an x16/x16 mobo. Thanks!
 
What's the point of the UD7 having four x16 slots (two that run at x8) if in the end it's all x8? If there is only one x16 connection and you have four cards installed, would all four run at x4 instead of each pair (x16 and x8) sharing x8 like the specs say?
 
What's the point of the UD7 having four x16 slots (two that run at x8) if in the end it's all x8? If there is only one x16 connection and you have four cards installed, would all four run at x4 instead of each pair (x16 and x8) sharing x8 like the specs say?

I believe the intended benefit is that the cards can all talk to each other at x16 (through the NF200 chip), but then all the traffic for all the cards is funneled into the connection between the NF200 and the chipset (whatever speed that may be). So it improves inter-card communication, but doesn't really improve card-to-chipset communication. Testing seems to show that the added latency ends up making the whole thing a wash anyway, so at best it is a half-measure to add some functionality that otherwise wouldn't exist.
 
I'm looking to upgrade to SB at some point in the near future and will likely go Crossfire or SLI with high-end GPUs (580s, 6970s). Are there any 1155 mobos out there now that offer at least two full 16x PCI-E lanes?

The ASUS P8P67, which seems to be the highest-end board currently, only has one 16x and one 8x (and one 4x).

Some boards may claim support for that but in reality it is untrue.

That is not even close to correct. ASUS has about 5 or 6 motherboards higher than the P8P67.

Here's a P67 board that does 16x2 lanes:

http://www.hardocp.com/article/2011/01/05/asus_p8p67_ws_revolution_motherboard_review

It's true that the P8P67 isn't ASUS' high-end board. The P8P67 WS Revolution and EVO boards are higher-end products. There is also a Sabertooth P67 board coming. As for the whole 16x2 thing, not really. They use an nForce 200MCP to multiplex the lane configuration.

I do believe the Gigabyte P67-UD7 has 2 full x16 PCI-e lanes for video. I think it runs about $350.00(ish) U.S.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128465

Negative. The P67A-UD7 has an nForce 200MCP onboard. This again multiplexes the lanes. Your bottleneck is still 16 PCI-Express 2.0 lanes which the nForce 200MCP uses for communication. This also adds latency.
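
A quick sketch of why that uplink stays the bottleneck: even if each slot negotiates x16 behind the switch, two busy cards have to share the single x16 PCIe 2.0 link back to the CPU. This is a toy model using the usual ~500 MB/s-per-lane approximation, not a measurement:

```python
# Toy model of the NF200 arrangement on P67 boards: each card gets an
# x16 PCIe 2.0 link to the switch, but all traffic funnels through one
# x16 uplink to the CPU.

PER_LANE_MBS = 500                      # PCIe 2.0, per direction (approx.)

uplink = 16 * PER_LANE_MBS              # NF200 -> CPU: ~8000 MB/s
per_card_link = 16 * PER_LANE_MBS       # card -> NF200: ~8000 MB/s each

cards = 2
worst_case_share = min(per_card_link, uplink / cards)
print(f"Link each card negotiates:  {per_card_link} MB/s")
print(f"Worst-case share of uplink: {worst_case_share:.0f} MB/s (about x8 worth)")
```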

It's true that the high-end P67 motherboards have two x16 slots; however, when you add two video cards they each run at x8. It's my understanding that this limitation applies to all P67 boards.

More or less. Some use an nForce 200MCP to "get around this," but the fact is you're still bottlenecked by the PCI-Express link the nForce 200MCP uses to communicate with the system.

I will have to dig up the review I saw (I think it may have been an Overclock3d.net video) where the reviewer was talking about the NVIDIA chip on the P67-UD7 motherboard that boosted the second PCI-e slot to x16 with Crossfire or SLI. So you would have x16/x16 with two cards. I may have mistaken what he was saying though.

You're not mistaken; that is what the specifications for the P67A-UD7 say. The truth is that they use an nForce 200MCP to multiplex the lane configuration. So you end up with essentially the same bottleneck you would have without the chip, but with added latency on top of that.

The cards both run at x16 into the NVIDIA bridge, but the bridge only has one x16 connection to the CPU (more marketing than anything useful).

Bigddybn was right in that you need to wait for LGA2011 if you want two true x16 links direct to the CPU. You will also get PCIe 3.0 with the upcoming LGA2011, so roughly four times the PCIe bandwidth in total to the CPU vs LGA1155.

Exactly.

However, in real-world usage, [H] testing found little appreciable difference between x16/x16 and x8/x8, so it isn't like it is a deal-killer today.

Quite true. In many cases the x8/x8 setup performed better because it didn't have the added latency of the nForce 200MCP.

I believe the intended benefit is that the cards can all talk to each other at x16 (through the NF200 chip), but then all the traffic for all the cards is funneled into the connection between the NF200 and the chipset (whatever speed that may be). So it improves inter-card communication, but doesn't really improve card-to-chipset communication. Testing seems to show that the added latency ends up making the whole thing a wash anyway, so at best it is a half-measure to add some functionality that otherwise wouldn't exist.

Negative. Communication between multiple graphics cards is handled through their SLI or Crossfire bridges. On lower-end cards without such connections, communication does go through the PCI-Express bus and the nForce 200MCP might help there, but for most cards this is not the case.

The benefit of the nForce 200MCP is that it allows dynamic lane allocation, so that switch cards do not have to be used to reallocate PCI-Express lanes to various slots. Additionally it overcomes the two-device limit that Lynnfield and Westmere CPUs have on the P55 and similar chipsets. The same limitations apply to Sandy Bridge as well on the P67 chipset.

These processors have a built-in PCIe controller with a maximum of 16 PCI-Express lanes. Unfortunately they are restricted to two devices for connectivity, regardless of how many PCI-Express lanes a given device actually needs. The PCIe controller in the chipset actually handles the onboard devices, while the 16 lanes in the processor's controller are used either for a single PCI-Express x16 device or for two PCI-Express x8 devices. Again, the limit is two, no matter what the lane configuration. So without the nForce 200MCP you are limited to a single PCI-Express x16 slot for single-GPU systems or dual PCI-Express x8 slots for SLI or Crossfire, no matter what.
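
A minimal sketch of that two-device restriction on the CPU's 16 lanes (purely illustrative; the function name and structure are my own, not anything Intel or the board vendors expose):

```python
def cpu_lane_split(num_gpus):
    """How the 16 CPU lanes can be split without a switch chip:
    at most two devices, either x16 or x8/x8."""
    if num_gpus == 1:
        return [16]             # single card: full x16
    if num_gpus == 2:
        return [8, 8]           # two cards: x8/x8
    raise ValueError("The CPU's PCIe controller only supports two devices; "
                     "a switch such as the nForce 200 is needed for more")

print(cpu_lane_split(1))   # [16]
print(cpu_lane_split(2))   # [8, 8]
```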

The block diagram for P67 Express can be viewed here.

The advantage of using the nForce 200MCP is that it presents itself to the CPU as but one device, regardless of what's connected to it. Routing of commands that go across the PCI-Express bus is handled by the nForce 200MCP. So in essence the nForce 200MCP takes all the built-in PCIe lanes in the CPU and gives back 32 in return. There is still the choke point of the PCIe lanes in the processor, which the nForce 200MCP communicates to the CPU through, but the benefit is that instead of connecting one or two devices, you can connect any combination of devices via the nForce 200MCP. You could conceivably connect 32 PCI-Express x1 devices, or three x8 devices and two x4 devices, or any combination you can think of. Again, it's a multiplexer and little more.

In older boards without the switching technology the nForce 200MCP provides, you physically had to switch between one PCI-Express x16 slot plus a second slot in PCI-Express x4 mode, and a dual x8/x8 mode. This was done by reversing a switch card or jumpers on the PCB of the motherboard. The nForce 200MCP does this sort of thing automatically. So with one card installed you get 16 lanes to the primary slot. With two cards installed you get x16/x8. With three installed you usually get x8/x8/x8. So again, you get all that without the BS switch cards or jumpers.

Physically the nForce 200MCP takes up less room than switch cards do, and PCB real estate these days is valuable. There are other switch chips that do what the nForce 200MCP does in terms of dynamic lane allocation, but they don't multiplex PCI-Express lanes. Well, maybe the Lucid Hydra does; I'd have to re-familiarize myself with it to know for sure.
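
And a sketch of the automatic slot allocation described above; the table just encodes the x16 / x16+x8 / x8+x8+x8 behaviour from the post, and the names and structure are illustrative only:

```python
# Illustrative model of NF200-style dynamic lane allocation replacing
# the old switch cards and jumpers. Allocation table follows the post
# above; actual behaviour is board-specific.

NF200_ALLOCATION = {
    1: [16],        # one card: full x16 to the primary slot
    2: [16, 8],     # two cards: x16/x8
    3: [8, 8, 8],   # three cards: usually x8/x8/x8
}

def slot_widths(cards_installed):
    """Lane width each populated slot negotiates behind the switch."""
    return NF200_ALLOCATION.get(cards_installed, "board-specific")

for n in (1, 2, 3):
    print(n, "card(s):", slot_widths(n))
```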
 
Thanks for the clarification, Dan. Interesting... so if I went Crossfire or SLI I would be getting x16 on the first card and x8 on the second. In real-world usage, do you think we're getting any appreciable difference by using x16/x8 with the P67 over x16/x16 with the X58 mobos?
 
Thanks for the clarification, Dan. Interesting... so if I went Crossfire or SLI I would be getting x16 on the first card and x8 on the second. In real-world usage, do you think we're getting any appreciable difference by using x16/x8 with the P67 over x16/x16 with the X58 mobos?

No. Testing on the subject typically reveals no significant difference between an x16/x8 configuration and an x8/x8 or x16/x16 PCIe lane configuration.
 