Intel Z170 Chipset Summary @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,598
Intel Z170 Chipset Summary - The Intel Z170 Chipset will be seen on most motherboard reviews that we publish here at HardOCP for the coming months. Before you dig through those however, what does the Z170 Chipset actually bring to desktop PC users? We give you a quick write up in hopes of covering what will be most important to you.
 
Searched, and no mention of NVMe. Since this is the latest and greatest, it should be fully supported, right?
 
For someone requiring at least 10 SATA ports, would going X99 be better, or the new Z170 board from Gigabyte?
 
For someone requiring at least 10 SATA ports, would going X99 be better, or the new Z170 board from Gigabyte?

If you want that many SATA ports without adding a separate controller, then X99 is the better choice. There are/will be Z170-based boards with 10 SATA ports, but these will have to incorporate a third-party controller. Those are never as flexible as the Intel controllers are, and they also do not cross-RAID with the Intel controller.
 
I wonder if Intel will release the 750 SSD in an M.2 form factor for the mATX crowd.
 
The DMI needs to go. Intel offers 20 PCIe 3.0 lanes off the chipset, but you are limited by the DMI link. There could be 100 PCIe 3.0 lanes and it would still only be as fast as the DMI link.

I also find it interesting that there are no hard numbers on the speed of the DMI 3.0 link. Is Intel hiding something?
 
The DMI needs to go. Intel offers 20 PCIe 3.0 lanes off the chipset, but you are limited by the DMI link. There could be 100 PCIe 3.0 lanes and it would still only be as fast as the DMI link.

I also find it interesting that there are no hard numbers on the speed of the DMI 3.0 link. Is Intel hiding something?

There are hard numbers on the DMI 3.0 link speed. It is an 8GT/s link or when translated amounts to just under 40Gb/s. 3930MB/s to be exact or 3.93GB/s. As I pointed out in the article, it's fine until you start talking about RAID striping M.2 drives.
 
There are hard numbers on the DMI 3.0 link speed. It is an 8GT/s link or when translated amounts to just under 40Gb/s. 3930MB/s to be exact or 3.93GB/s. As I pointed out in the article, it's fine until you start talking about RAID striping M.2 drives.

so you have 20 lanes with 4 lanes of bandwidth?
 
Here is my big question.

Between X99 and Z170, which chipset is better for 2 or 3 way SLI?

With the right CPU, X99 has 40 lanes which is 16X, 16X and 8X for 3 way SLI and 16X, 16X for 2 way SLI.
What is the best that Z170 can do now, and will it get better somehow in the future?

If Z170 can only do 16X, 8X for 2 way SLI, what is the in game performance difference when compared to a system that has two 16X PCI-E 3.0 lanes?
 
so you have 20 lanes with 4 lanes of bandwidth?
Basically. Again, that isn't as bad as it sounds. Most of the integrated devices that use those links don't need or use nearly that much bandwidth. It's with M.2 drives in RAID that we start seeing issues. Even then, you probably won't saturate the bus unless you're benchmarking or doing large file transfers regularly. With a single drive it's fine. We got by with a lot less on Z97 and earlier chipsets. If it bothers you, X99 offers 40 lanes direct to the CPU.
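To put rough numbers on where that ceiling bites, here's a quick back-of-the-envelope sketch in Python. The 2200MB/s per-drive figure is just an assumed value for a fast PCIe 3.0 x4 NVMe drive, not a measurement from any particular model:

```python
# Does striping two fast M.2 drives in RAID 0 run into the DMI 3.0 ceiling?
# Drive throughput below is an illustrative assumption, not a benchmark result.

DMI3_CEILING_MB_S = 3930        # usable DMI 3.0 bandwidth in MB/s (x4 link at 8 GT/s)

drive_seq_read_MB_s = 2200      # assumed sequential read of one fast PCIe 3.0 x4 NVMe drive
drives_in_raid0 = 2

aggregate = drive_seq_read_MB_s * drives_in_raid0
print(f"RAID 0 aggregate: {aggregate} MB/s vs DMI 3.0 ceiling: {DMI3_CEILING_MB_S} MB/s")
print("DMI-limited" if aggregate > DMI3_CEILING_MB_S else "not DMI-limited")
```

So a single drive fits comfortably under the link, but two striped drives doing large sequential transfers can bump into it.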
Here is my big question.

Between X99 and Z170, which chipset is better for 2 or 3 way SLI?

With the right CPU, X99 has 40 lanes which is 16X, 16X and 8X for 3 way SLI and 16X, 16X for 2 way SLI.
What is the best that Z170 can do now, and will it get better somehow in the future?

If Z170 can only do 16X, 8X for 2 way SLI, what is the in game performance difference when compared to a system that has two 16X PCI-E 3.0 lanes?

It should be 8x8 for 2-way SLI. Only 16 lanes come off the CPU. A third card will be limited to x4 unless a PLX switch is implemented. This is due to the sheer number of devices which all use the PCIe bus. 3x8 is theoretically possible, but it would be a very lean board feature-wise.

This isn't going to get better. Intel won't increase the DMI bandwidth. I don't think they can at this point. A revised X99 PCH or a 100 series PCH for Broadwell-E would be your only hope anytime soon. We have no idea if either scenario is going to happen. My money says no. The reality is, it's enough. There is no difference most of the time comparing 2x8 to 2x16 configurations with 2-Way SLI.
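To make the lane arithmetic concrete, here's a tiny sketch in Python. The split function just illustrates the usual bifurcation outcomes described above; it is not how any particular board actually wires its slots:

```python
# Rough illustration of how the 16 CPU lanes on Z170 get divided between GPUs.
# Boards use fixed bifurcation (x16, or x8/x8); a third card typically falls
# back to x4 (often off the chipset) unless a PLX switch sits behind the CPU's x16.

CPU_LANES = 16

def z170_lane_split(gpu_count):
    if gpu_count == 1:
        return [16]
    if gpu_count == 2:
        return [8, 8]       # standard 2-way SLI arrangement on Z170
    return [8, 8, 4]        # third card limited to x4 without a PLX switch

for n in (1, 2, 3):
    print(n, "GPU(s):", z170_lane_split(n))
```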
 
There is no difference most of the time comparing 2x8 to 2x16 configurations with 2-Way SLI.

Hmm...

Maybe I should go with Z170 instead of X99 then. X99 only has 3 processors and I have a feeling that if I go with Z170 there will
be more CPUs launched which will give me more upgrade options in the future.
 
Hmm...

Maybe I should go with Z170 instead of X99 then. X99 only has 3 processors and I have a feeling that if I go with Z170 there will
be more CPUs launched which will give me more upgrade options in the future.

X99 will have Broadwell-E sometime next year. Possibly Q1.
 
I'm not sure how you got these numbers. Also, I fixed your Bytes to bits.

First of all that 64Gbps is only bidirectional. This is still an x4 link running at 8GT/s. You should clarify that in your article.

Second, where do you get this claim from?

Unfortunately for users, you will find limited value in using multiple M.2 devices in RAID. DMI 3.0 has a maximum transfer rate of 8GT/s as stated earlier. Roughly translated, this should allow for up to 64Gbits/s of bandwidth for DMI 3.0 but you would be wrong. While the performance penalty of DMI 3.0 is extremely low, around 1.5% overhead, you are still limited by PCI-Express on the backend. In other words after overhead is accounted for you are going to see an actual limit of 40Gbits/s of bandwidth across the DMI bus.

How does this overhead happen? That chipset diagram shows a direct path between the processor and the chipset, using 128b/130b encoding. The PCIe 3.0 lanes in the chipset use this same encoding. You mention a 1.5% overhead, which is in-line with 128b/130b encoding.

Where is this transition documented, and how does it have 40% overhead on a bus that was designed to be much more efficient?

Just a little confused here :D
 
I'm not sure how you got these numbers. Also, I fixed your Bytes to bits.

First of all that 64Gbps is only bidirectional. This is still an x4 link running at 8GT/s. You should clarify that in your article.

Second, where do you get this claim from?



How does this overhead happen? That chipset diagram shows a direct path between the processor and the chipset, using 128b/130b encoding. The PCIe 3.0 lanes in the chipset use this same encoding. You mention a 1.5% overhead, which is in-line with 128b/130b encoding.

Where is this transition documented, and how does it have 40% overhead on a bus that was designed to be much more efficient?

Just a little confused here :D

I don't recall saying anything about 40% overhead. The clarification comes from both ASUS and Intel, who told me that DMI 3.0 could only handle 3930MB/s. I could have worded that part of the article better I suppose. Perhaps one of the ASUS guys can explain better.
 
Hmm...

Maybe I should go with Z170 instead of X99 then. X99 only has 3 processors and I have a feeling that if I go with Z170 there will
be more CPUs launched which will give me more upgrade options in the future.

In over 20 years of building my own systems, I've never ever upgraded CPUs while keeping the same motherboard... Maybe your habits are different, but with Intel changing sockets every two generations these days I don't see why most people would do this either.

Just choose based on what's out now. If you have a Microcenter nearby the price difference is like $100-150 for the most economical setup on either platform...

Do you want the 2+ extra cores and potentially more PCI-E lanes with the more expensive CPU options (tho fewer 3.0 lanes than Skylake with the 5820K), or do you want the slightly higher OC headroom and IPC plus a slightly more modern chipset? Core count's the thing really IMO, either you want/need more or you don't.

Intel knows this too, that's why they only have one hexa core SKU and they keep most 6+ core parts on the HEDT/X##, they know they can milk those that really need/want it. :eek:
 
X99 will have Broadwell-E sometime next year. Possibly Q1.

Sounds good to me. I will do some searching for Broadwell-E rumors and whispers.

In over 20 years of building my own systems, I've never ever upgraded CPUs while keeping the same motherboard...

Yeah, that's true. I've never just slapped in a new CPU.

If you have a Microcenter nearby the price difference is like $100-150 for the most economical setup on either platform...

I do have a MC nearby. I went there to check it out once, and I look forward to actually buying something from them.

Do you want the 2+ extra cores and potentially more PCI-E lanes with the more expensive CPU options (tho fewer 3.0 lanes than Skylake with the 5820K), or do you want the slightly higher OC headroom and IPC plus a slightly more modern chipset? Core count's the thing really IMO, either you want/need more or you don't.

I am just a PC gamer so I don't need 6 cores, but I do like me some PCI-E lanes.


Right now I am thinking I might go with X99 and a 5820K or 5930K.
 
Dan, I know your answer to this will probably be something like "none, M.2 to a U.2 adapter" but still, thought I'd ask... What do you think is the best location for the sole or primary M.2 slot? (especially for SLI/CF users, which probably takes me back to "none")

I've noticed some boards have it next to the top-most x1 PCI-E slot, seems like that would be one of the hottest locations possible (sandwiched between/below CPU and GPU)... Others have it along the bottom PCI-E slots, those with multiple might have them at either location obviously...

Don't think I've seen any Z170 boards with vertical slots jutting out like some of ASUS' X99 boards, think I've read of M.2 behind the motherboard but I don't think I've seen that at all. I'm surprised no one built that M.2/U.2 adapter into their board, tho I guess having the option to use either is more flexible.
 
Grammatical error in the first paragraph of page 2:
"Z170 is only moderately more exiting by itself."

"exiting" needs to be changed to "exciting".
 
Dan, I know your answer to this will probably be something like "none, M.2 to a U.2 adapter" but still, thought I'd ask... What do you think is the best location for the sole or primary M.2 slot? (especially for SLI/CF users, which probably takes me back to "none")

I've noticed some boards have it next to the top-most x1 PCI-E slot, seems like that would be one of the hottest locations possible (sandwiched between/below CPU and GPU)... Others have it along the bottom PCI-E slots, those with multiple might have them at either location obviously...

Don't think I've seen any Z170 boards with vertical slots jutting out like some of ASUS' X99 boards, think I've read of M.2 behind the motherboard but I don't think I've seen that at all. I'm surprised no one built that M.2/U.2 adapter into their board, tho I guess having the option to use either is more flexible.

You will be seeing U.2 as a built-in connector soon. And yes, a U.2 adapter is definitely the way to go for SLI/CF. Aside from that, it's fine where they typically reside so long as they aren't under a GPU. The length of M.2 drives is why they usually end up where they are. Vertical ports wouldn't be ideal for 110mm drives either. The form factor is great for some applications but not ideal for the desktop. I think we will start seeing heat spreaders/sinks on M.2 drives before too long to help deal with the heat in enthusiast machines. Just wait, some asshole company will make a red and black M.2 drive with a heat sink "optimized" for gaming. They'll call it the M.2 Killer Fatal1ty drive or some other ridiculous name that gives some out-of-touch marketing weasel a chubby.

Broadwell-E will probably be crap - the whole magic of that CPU comes from the L4 cache in the iGPU.
I have concerns about that too. Skylake-E would make more sense.
 
I think the whole platform is best described as, "meh".

Some nice points, some not so nice, and only modest gains.

Intel is giving AMD a chance to get back in this.

Granted, rumors right now are that Zen will be on par with Haswell, leaving Intel a generation ahead, but the gains are so modest that if AMD can deliver Haswell speeds and overclocks on a similar or superior chipset to the 170 series for a cheaper cost, AMD could conceivably finally jump back into this thing.

Not going to hold my breath, but there's a window here for them now.
 
I don't recall saying anything about 40% overhead. The clarification comes from both ASUS and Intel, who told me that DMI 3.0 could only handle 3930MB/s. I could have worded that part of the article better I suppose. Perhaps one of the ASUS guys can explain better.

No, you can't do math Dan. Let me fix your mistake :D

DMI 2.0 bandwidth per-lane, bidirectional = 5.0 GT/s per-lane, 4.0 Gb/s real transfer rate after 8b/10b encoding overhead.

DMI 3.0 bandwidth per-lane, bidirectional = 8.0 GT/s per-lane, 7.87 Gb/s real transfer rate after 128b/130b encoding overhead.

Your lane width is x4, and each lane can send ONE BIT per transfer.

So you get 4 x 4Gbps = 16 Gbps bidirectional with DMI 2.0,
and you get 4 x 7.87Gbps = 31.5 Gbps bidirectional with DMI 3.0.

31.5 Gbps / 8 bits per byte = 3.938 GigaBytes per second.

3.938 Gigabytes per second * 1000 Megabytes per Gigabyte = 3938MB/s. Close enough for engineering!

That is where they get the 3930MB/s figure from. It's not overhead, just a realistic transfer rate from the new bus. It's twice the speed of DMI 2.0! :D

1. Dan, please update your article. You shouldn't spread misinformation like that, if there is no actual "overhead."

2. Dan, please reconsider bitching about throughput that high "not being enough" when you actually have no frame of reference. You keep laying into this chipset only because "there's only 4 dedicated lanes of bandwidth."

If you could actually buy an SSD that could sustain transfer speeds at 4GIGABYTES PER SECOND in REALISTIC WORKLOADS, you wouldn't be able to tell the difference from it, and accessing main memory. Only a server needs more bandwidth than that for local storage, and this chipset is made for the mainstream.
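If anyone wants to sanity-check that arithmetic, here it is as a few lines of Python; it just reproduces the figures above (transfer rate times encoding efficiency times lane count, then bits to bytes):

```python
# Reproduces the DMI math above: per-lane rate x encoding efficiency x lanes,
# then Gb/s -> GB/s.

def link_GB_per_s(gt_per_s, payload_bits, total_bits, lanes):
    gbps_per_lane = gt_per_s * payload_bits / total_bits
    return gbps_per_lane * lanes / 8            # bits to bytes

dmi2 = link_GB_per_s(5.0, 8, 10, 4)             # DMI 2.0, 8b/10b encoding   -> ~2.00 GB/s
dmi3 = link_GB_per_s(8.0, 128, 130, 4)          # DMI 3.0, 128b/130b encoding -> ~3.94 GB/s
print(f"DMI 2.0: {dmi2:.2f} GB/s, DMI 3.0: {dmi3:.2f} GB/s")
```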
 
Just wait, some asshole company will make a red and black M.2 drive with a heat sink "optimized" for gaming. They'll call it the M.2 Killer Fatal1ty drive or some other ridiculous name that gives some out-of-touch marketing weasel a chubby.

Like this?
 
Well, SATA Express is dead already. U.2 just makes a lot more sense, it solves the same problem but with none of the downsides.
 
I am just a PC gamer so I don't need 6 cores, but I do like me some PCI-E lanes.

Right now I am thinking I might go with X99 and a 5820K or 5930K.

Depending on what games you play, you may see a noticeable boost in min/avg FPS in certain games, and that gap will surely only grow larger as more games are developed with console core counts in mind.

Personally, I noticed a large gain in FPS in BF4 when going from 4 to 6 cores. I'd rather build an X99 rig, or X79 if the price is right, than ever go back to 4 cores. If only more games saw the same gains.
 
I think the whole platform is best described as, "meh".

Some nice points, some not so nice, and only modest gains.

Intel is giving AMD a chance to get back in this.

Granted, rumors right now are that Zen will be on par with Haswell, leaving Intel a generation ahead, but the gains are so modest that if AMD can deliver Haswell speeds and overclocks on a similar or superior chipset to the 170 series for a cheaper cost, AMD could conceivably finally jump back into this thing.

Not going to hold my breath, but there's a window here for them now.

That's exactly what I've been thinking.

AMD has a shot at getting back into this.
 
Broadwell-E will probably be crap - the whole magic of that CPU comes from the L4 cache in the iGPU.

The 5775c is certainly all about the L4 cache.

I see two possible scenarios for Broadwell.
1) The FIVR at 14nm really limits clock speed, so Broadwell simply can't hit Haswell clock speeds, and a Broadwell-E makes no sense at all since we'd get downclocked versions of the same Haswell architecture.

2) It's basically just Haswell on a different process: clocks are similar, power consumption is slightly down, but it's basically a wash from a performance perspective.
 
The 5775c is certainly all about the L4 cache.

I see two possible scenarios for Broadwell.
1) The FIVR at 14nm really limits clock speed, so Broadwell simply can't hit Haswell clock speeds, and a Broadwell-E makes no sense at all since we'd get downclocked versions of the same Haswell architecture.

2) It's basically just Haswell on a different process: clocks are similar, power consumption is slightly down, but it's basically a wash from a performance perspective.

What is taking advantage of even the performance gains available with Haswell, outside of the bleeding edge? Also, the even larger question is how much pushback there is toward even re-tooling/re-engineering software to take advantage of the possible gains available (again, even with Haswell, or Sandy Bridge/Ivy Bridge for that matter, which are even older than Haswell)?

What is the majority Intel CPU architecture out there among users? (Those numbers should be in Valve's continuing hardware surveys.) I would wager that it is, for the most part, STILL Core 2 (as opposed to Core-i of any generation). That's what the REAL disappointment is: Core 2 is still ruling the existing hardware base despite its age. Core 2 is still viable, however much we may wish it weren't; and as long as it is seen to BE viable, why would developers move forward? Gamers may be driving the wishlists; however, accountants and budgets are driving what actually gets built or bought (everywhere else but (H), of course). For all too much of what gets done on a daily basis (and that even includes gaming), Sandy Bridge/Ivy Bridge and Z8x/Z9x is overkill, let alone Haswell or anything later. The reason I'm looking at Skylake is an admittedly edge case: development/virtualization. (However, I actually know it's an edge case.)
 
Maybe someone can clarify this for me. If I can get a PCIe NVMe SSD like this Intel 750 that just plugs into a PCIe x4 slot why do we need all this M.2 and U.2 nonsense? Is this just about needing more connections?

My thought is that I only need 1 boot SSD and anything else is just large storage so is there some other advantage besides not taking up a physical PCIe slot? They still use PCIe lanes so what does it matter?

Just trying to understand.
 
The Steam survey doesn't break it down.

I'm not so sure most people are still using core 2 chips. My wife for instance had a core 2 duo laptop we bought in 2008 (in 2009 we were already on to core i laptops).

That laptop had a GPU failure, which I replaced.
A glass of water spilled into it and killed the mobo, which I replaced.
The backlight on the LCD failed, which I replaced.
The HD failed, which I replaced with an SSD.
The keyboard got stuck keys (that's kids), which I replaced twice!

Finally last year it started getting too weird to continue using.

I think a vast majority of users would've pitched that laptop and bought a new one sometime sooner than 6 years. Most users get laptops and most laptops are going to die from physical wear and tear.
 
My thought is that I only need 1 boot SSD and anything else is just large storage so is there some other advantage besides not taking up a physical PCIe slot? They still use PCIe lanes so what does it matter?

I think it's all about form factor.

Something that will work for laptops AND desktops.

Also cooling can be an issue with multi-gpu setups and having additional cards.

We could also consider the mini-ITX form factor.
 
You will be seeing U.2 as a built-in connector soon. And yes, a U.2 adapter is definitely the way to go for SLI/CF. Aside from that, it's fine where they typically reside so long as they aren't under a GPU. The length of M.2 drives is why they usually end up where they are. Vertical ports wouldn't be ideal for 110mm drives either. The form factor is great for some applications but not ideal for the desktop. I think we will start seeing heat spreaders/sinks on M.2 drives before too long to help deal with the heat in enthusiast machines. Just wait, some asshole company will make a red and black M.2 drive with a heat sink "optimized" for gaming. They'll call it the M.2 Killer Fatal1ty drive or some other ridiculous name that gives some out-of-touch marketing weasel a chubby.


I have concerns about that too. Skylake-E would make more sense.

Oh yeah, that's inevitable, your very post might've just spurred it... There's someone working on the spaceship or the babe for the box design already. :p I agree M.2 is a kludge on the desktop, I'm just surprised they didn't figure out a better alternative sooner after SATA Express was basically stillborn.

I still bought a 256GB SM951 just because it fits my needs/budget the best... Might very well try placing some low profile heatsinks on the controller to see if it makes any difference, I doubt the NAND chips themselves need it, could be wrong.

Mine will likely sit under a second card (with an open cooler no less) but any situation that stresses it probably doesn't involve the second card, and there's plenty of airflow, so between that and Anandtech's review (where the slight throttling didn't make much difference) I'm cautiously optimistic.
 
Maybe someone can clarify this for me. If I can get a PCIe NVMe SSD like this Intel 750 that just plugs into a PCIe x4 slot why do we need all this M.2 and U.2 nonsense? Is this just about needing more connections?

My thought is that I only need 1 boot SSD and anything else is just large storage so is there some other advantage besides not taking up a physical PCIe slot? They still use PCIe lanes so what does it matter?

Just trying to understand.

Yeah, pretty much. In my case I wouldn't even have anywhere to plug it in (with two GPUs and a sound card) unless I sandwiched it in a really bad spot; I'm a niche within a niche tho. It's quite possible most PCI-E/NVMe/next gen drives just end up being, well, actual PCI-E cards.

That's what they'll use in the enterprise space anyway and what workstations will favor I imagine. Still, even Intel isn't sure, since they also offer the 750 in a 2.5" form factor with a U.2 connector adapter... When even Intel isn't sure what to get behind, you know things are a mess. :p
 
The Steam survey doesn't break it down.

I'm not so sure most people are still using core 2 chips. My wife for instance had a core 2 duo laptop we bought in 2008 (in 2009 we were already on to core i laptops).

That laptop had a GPU failure, which I replaced.
A glass of water spilled into it and killed the mobo, which I replaced.
The backlight on the LCD failed, which I replaced.
The HD failed, which I replaced with an SSD.
The keyboard got stuck keys (that's kids), which I replaced twice!

Finally last year it started getting too weird to continue using.

I think a vast majority of users would've pitched that laptop and bought a new one sometime sooner than 6 years. Most users get laptops and most laptops are going to die from physical wear and tear.

Steam users are already not exactly most people tho... My sister's an engineer with a fast workstation at her workplace (working at APL, which subcontracts work for NASA)... Yet she still seems content with an old Core 2 Duo laptop at home, and she uses Lr and other decently stressful photography stuff.

That system does have a 500GB SSD I gifted her, and this is well beyond anecdotal, but she's clearly aware she could have a much faster system yet hasn't felt the need to upgrade. The average user has even less need...

I think we're in an era where upgrades are driven as much by calamity and break downs as by need, for laptop users anyway, who are now the majority. Phones are already there and they're basically a microcosm of the desktop evolution...
 
No, you can't do math Dan. Let me fix your mistake :D

DMI 2.0 bandwidth per-lane, bidirectional = 5.0 GT/s per-lane, 4.0 Gb/s real transfer rate after 8b/10b encoding overhead.

DMI 3.0 bandwidth per-lane, bidirectional = 8.0 GT/s per-lane, 7.87 Gb/s real transfer rate after 128b/130b encoding overhead.

Your lane width is x4, and each lane can send ONE BIT per transfer.

So you get 4 x 4Gbps = 16 Gbps bidirectional with DMI 2.0,
and you get 4 x 7.87Gbps = 31.5 Gbps bidirectional with DMI 3.0.

31.5 Gbps / 8 bits per byte = 3.938 GigaBytes per second.

3.938 Gigabytes per second * 1000 Megabytes per Gigabyte = 3938MB/s. Close enough for engineering!

That is where they get the 3930MB/s figure from. It's not overhead, just a realistic transfer rate from the new bus. It's twice the speed of DMI 2.0! :D

1. Dan, please update your article. You shouldn't spread misinformation like that, if there is no actual "overhead."

2. Dan, please reconsider bitching about throughput that high "not being enough" when you actually have no frame of reference. You keep laying into this chipset only because "there's only 4 dedicated lanes of bandwidth."

If you could actually buy an SSD that could sustain transfer speeds at 4GIGABYTES PER SECOND in REALISTIC WORKLOADS, you wouldn't be able to tell the difference from it, and accessing main memory. Only a server needs more bandwidth than that for local storage, and this chipset is made for the mainstream.


This, and the easiest way to think about it is that DMI is basically a PCIe link, and it is an x4 link. DMI 2.0 uses the PCIe 2.0 PHY, and DMI 3.0 uses the 3.0 PHY -- so DMI 3.0 is the same bandwidth as PCIe 3.0 x4 (32Gbit) -- which is the same as ONE M.2 drive (if it maxes out the interface).

DMI 2.0 was 20Gbit.
 
On another topic entirely, I wanted to point this out: the BCLK is no longer linked to PCIe and such... so this means we are back to how it was with Nehalem -- where you can overclock multiplier-locked CPUs!

This means we can overclock i3s and such, not just K series; we can easily overclock everything again!

That is, unless Intel changes things with the non-K series CPUs and makes the BCLK tied to PCIe again or something -- although I have a feeling they will not be doing this...
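Just to sketch why the decoupled BCLK matters (the 37x multiplier and 120MHz BCLK below are made-up illustrative values, not settings anyone has validated):

```python
# Effective core clock = BCLK x multiplier. With BCLK decoupled from PCIe/DMI
# on Skylake, raising BCLK overclocks even multiplier-locked chips.

def core_clock_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

stock = core_clock_mhz(100, 37)     # hypothetical locked chip at stock: 3700 MHz
oc = core_clock_mhz(120, 37)        # same locked multiplier, 120 MHz BCLK: 4440 MHz
print(f"stock: {stock} MHz, BCLK overclock: {oc} MHz")
```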
 
On another topic entirely, I wanted to point this out: the BCLK is no longer linked to PCIe and such... so this means we are back to how it was with Nehalem -- where you can overclock multiplier-locked CPUs!

This means we can overclock i3s and such, not just K series; we can easily overclock everything again!

That is, unless Intel changes things with the non-K series CPUs and makes the BCLK tied to PCIe again or something -- although I have a feeling they will not be doing this...

Unless the BCLK gets disabled in the BIOS on non-K chips.

But you are probably right!
 
If you want that many SATA ports without adding a separate controller, then X99 is the better choice. There are/will be Z170-based boards with 10 SATA ports, but these will have to incorporate a third-party controller. Those are never as flexible as the Intel controllers are, and they also do not cross-RAID with the Intel controller.
So why does Intel still insist on limiting the number of SATA ports on their chipsets?
 
I'm still planning to get two M.2 drives and run them in RAID0. Maybe in that MSI XPower Titanium board. If you're not saturating the bus, you're just not trying hard enough, is how I see it.
 