Intel Z170 Chipset Summary @ [H]

unless the BCLK gets disabled in the BIOS for non-K chips,

But you are probably right!

Depends on the mobo manufacturers. Remember that H81 and H87/B85 weren't supposed to have OC functionality. And yet most boards do.
 
I'm still planning to get two M.2 drives and run them in RAID0. Maybe in that MSI XPower Titanium board. If you're not saturating the bus, you're just not trying hard enough, is how I see it.

Well, that was my point: two of the top-end consumer NVMe drives you can buy today will BARELY max out that bus if you run them in RAID0, and then only if you do a block read. For your average enthusiast, this will be more than enough bandwidth for the next 5 years. This goes double because most people who have multiple drives don't run them in RAID, so simultaneous transfers in the same direction will be unlikely. Copies from one device to another will REALLY benefit from the bidirectional bandwidth!

Also, I challenge you to find anything noticeably faster when you run NVMe drives in RAID0. NOT a benchmark result, a real-world thing you're working with or loading.
 
The Alpine Ridge chip is also the only way (for now) you'll be able to get HDMI 2.0 with HDCP 2.2 from a Skylake CPU.
 
Yeah, pretty much. In my case I wouldn't even have anywhere to plug it in (with two GPUs and a sound card) unless I sandwiched it in a really bad spot, but I'm a niche within a niche. It's quite possible most PCI-E/NVMe/next-gen drives just end up being, well, actual PCI-E cards.

That's what they'll use in the enterprise space anyway, and what workstations will favor, I imagine. Still, even Intel isn't sure, since they also offer the 750 in a 2.5" form factor with a U.2 connector adapter... When even Intel isn't sure what to get behind, you know things are a mess. :p

Not necessarily. Many OEMs like Supermicro are already making prototype boards with a plethora of M.2 slots to get mass storage density in 1U. Datacenter folks don't think like consumers/gamers.
 
I'm loving the x4 PCIe 3.0 M.2 slot on nearly every Z170 board. I'm hoping that by the time NVMe versions of the Samsung 951 become more readily available, prices on the 6700K will have settled a bit.
 
I've read several of these Z170 previews and none have said if the DDR4 is dual or quad channel on these boards.
 
This, and the easiest way to think about it is that DMI is basically a PCIe link, and it is an x4 link. DMI 2.0 uses the PCIe 2.0 PHY and DMI 3.0 uses the 3.0 PHY, so DMI 3.0 has the same bandwidth as PCIe 3.0 x4 (32Gbit), which is the same as ONE M.2 drive (if it maxes out the interface).

DMI 2.0 was 20Gbit
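
To put rough numbers on that, here's a quick back-of-the-envelope sketch (plain Python; the transfer rates and encoding overheads are the standard PCIe figures, nothing board-specific, and the helper is just a throwaway for this post):

```python
# Effective one-way bandwidth of a PCIe-style link, after encoding overhead.
def link_bandwidth_gbs(gt_per_s, lanes, encoding):
    payload = {"8b/10b": 8 / 10, "128b/130b": 128 / 130}[encoding]
    return gt_per_s * lanes * payload / 8  # GT/s per lane -> GB/s total

# DMI 2.0 uses the PCIe 2.0 PHY: 5 GT/s per lane, 8b/10b encoding
print(link_bandwidth_gbs(5, 4, "8b/10b"))     # ~2.0 GB/s  (20 Gbit/s raw)

# DMI 3.0 uses the PCIe 3.0 PHY: 8 GT/s per lane, 128b/130b encoding
print(link_bandwidth_gbs(8, 4, "128b/130b"))  # ~3.94 GB/s (32 Gbit/s raw)
```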

That was my point.

So why does Intel still insist on limiting the number of SATA ports on their chipsets?

It's market segmentation. Most users on P67 through Z97 do not need more than 6x SATA ports. If you do, Intel believes you'll be willing to buy X99 (or whatever the equivalent was at the time). It's the same as keeping processors with more than four cores exclusive to LGA2011 platforms.

I'm still planning to get two M.2 drives and run them in RAID0. Maybe in that MSI XPower Titanium board. If you're not saturating the bus, you're just not trying hard enough, is how I see it.

And as I said before, you won't be going flat out on all your devices all the time; DMI 3.0 bandwidth is sufficient most of the time. Again, the limitations here are artificial. And though X99 still uses DMI 2.0, you have more PCIe lanes that don't connect through the PCH.

The Alpine Ridge chip is also the only way (for now) you'll be able to get HDMI 2.0 with HDCP 2.2 from a Skylake CPU.

Like I said, only if Thunderbolt 3.0 and other features are utilized in "alternate mode." Aside from that, the ASM1142 should handle USB 3.1 just as well. I believe MSI and ASUS are using ASM1142s, and Alpine Ridge is on all the GIGABYTE boards this time around.
 
Trying to compare this to the X99 chipset (where all PCIe lanes go to the processor), so here it is:

On Z170 motherboards that have three PCIe x16 slots:
  • PCIe x16_1 and _2 go directly to the processor (x16 or x8/x8).
  • PCIe x16_3 goes through the PCH at x4 mode.

On most Z170 motherboards that have two PCIe x16 slots:
  • PCIe x16_1 goes directly to the processor (x16).
  • PCIe x16_2 goes through the PCH at x4 mode. *higher priced small boards can do (x8/x8) to the processor.

Everything else including M.2 goes through the PCH. Some standard SATA ports may be disabled when using certain PCIe x16 or M.2 configurations depending on the board. In conclusion, if you wanted one NVMe SSD to go directly to the processor you'd have to put it in PCIe x16_1 or _2 right up against the gpu on a motherboard with 3 slots (or higher priced boards with 2 slots). It'd probably be a better idea to go through the PCH in the last slot to lower SSD temperatures.
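
If you ever want to verify which side a given drive actually landed on, a quick Linux-only sketch like the one below works. It assumes the usual Intel client layout where the CPU's x16 root port shows up as PCI device 00:01.x and the PCH root ports sit at 00:1b/1c/1d; those addresses are an assumption, so check your own board's lspci output before trusting it.

```python
# Minimal sketch: see whether an NVMe controller hangs off the CPU's PEG
# root port (00:01.x on typical Intel client boards) or a PCH root port
# (00:1b.x / 00:1c.x / 00:1d.x). The device numbers are assumptions about
# the usual layout, not guaranteed for every board.
import os

def nvme_upstream_port(ctrl="nvme0"):
    # /sys/class/nvme/nvme0/device points at the SSD's PCI function;
    # the directory above it is the root port the drive is plugged into.
    pci_dev = os.path.realpath(f"/sys/class/nvme/{ctrl}/device")
    return os.path.basename(os.path.dirname(pci_dev))  # e.g. '0000:00:01.0'

port = nvme_upstream_port()
devfn = port.split(":")[-1]  # '01.0', '1d.0', ...
print(port, "-> CPU slot" if devfn.startswith("01.") else "-> probably PCH")
```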
 
Intel knows this too; that's why they only have one hexa-core SKU and keep most 6+ core parts on the HEDT/X## platforms. They know they can milk those that really need/want it. :eek:

I know I got milked...:D

Personally, I am waiting to see what Broadwell-E will offer for me.
I am running a Xeon E5-2680 v3 ES though, so I might stick with this for a while.
 
Trying to compare this to the X99 chipset (where all PCIe lanes go to the processor), so here it is:

On Z170 motherboards that have three PCIe x16 slots:
  • PCIe x16_1 and _2 go directly to the processor (x16 or x8/x8).
  • PCIe x16_3 goes through the PCH at x4 mode.

On most Z170 motherboards that have two PCIe x16 slots:
  • PCIe x16_1 goes directly to the processor (x16).
  • PCIe x16_2 goes through the PCH at x4 mode. *higher priced small boards can do (x8/x8) to the processor.

Everything else including M.2 goes through the PCH. Some standard SATA ports may be disabled when using certain PCIe x16 or M.2 configurations depending on the board. In conclusion, if you wanted one NVMe SSD to go directly to the processor you'd have to put it in PCIe x16_1 or _2 right up against the gpu on a motherboard with 3 slots (or higher priced boards with 2 slots). It'd probably be a better idea to go through the PCH in the last slot to lower SSD temperatures.

I agree that the best choice seems to be the x4 slot for your SSD, but since it goes through the PCH and not direct to the CPU, can you say for sure there is any meaningful impact from doing that?

I feel like there is this negative there about not being direct to the CPU, but I have yet to see a real world impact.
 
So what is DDR3L? It looks like DDR3; is this just marketing BS? I also noticed that all of Corsair's DDR3L RAM is 1600MHz.

It's like some gimmick to make us think DDR4 is faster than DDR3?
 
So what is DDR3L? It looks like DDR3; is this just marketing BS? I also noticed that all of Corsair's DDR3L RAM is 1600MHz.

It's like some gimmick to make us think DDR4 is faster than DDR3?

L stands for Low Voltage.

Correct. You will not find any DDR3L that uses more than 1.35v. For DDR3, 1.65v and greater modules are out there.
 
Also, I challenge you to find anything noticeably faster when you run NVMe drives in RAID0. NOT a benchmark result, a real-world thing you're working with or loading.

GTA V? The load times are long enough that it should make a difference. That's assuming that it's actually just loading the game stuff and not talking to Rockstar and shooting the shit for 5 mins every time you start up.
 
I want something with a warranty, Gigabyte or ASRock.

All of them offer the same 3-year warranty. I assume you've had to RMA something with ASUS and had an issue?

If I only cared about customer service, I would choose EVGA.
 
What is the next X-Series chipset after X99? Should I expect to see an X170 chipset anytime soon?
 
I'm still planning to get two M.2 drives and run them in RAID0. Maybe in that MSI XPower Titanium board. If you're not saturating the bus, you're just not trying hard enough, is how I see it.

Well, it will be faster than a single drive in those rare cases where you can actually benefit from RAID 0 on the desktop; that by itself is debatable. Read speeds will suffer compared to theoretical numbers in some tests, of course. According to Samsung's specifications on the 950 Pro NVMe, it can sustain sequential read speeds of 2,500MB/s. Two of those in RAID 0, assuming decent scaling, would saturate the DMI 3.0 link, and thus performance will suffer. Writes, on the other hand, should be fine given that those are typically slower than reads on SSDs. According to the specifications, the 950 Pro NVMe can only sustain sequential writes of 1,500MB/s, which wouldn't saturate the bus fully.
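
Just to put numbers on that, a tiny sketch of how those published specs stack up against the DMI 3.0 ceiling (assuming ideal two-drive RAID 0 scaling and ignoring everything else sharing the link):

```python
# Samsung's published 950 Pro sequential specs vs. the ~3.94 GB/s effective
# ceiling of a PCIe 3.0 x4-style DMI link (ideal two-drive RAID 0 scaling).
DMI3_CEILING_MBS = 3940
SPECS_MBS = {"read": 2500, "write": 1500}  # per drive
DRIVES = 2

for op, per_drive in SPECS_MBS.items():
    demand = per_drive * DRIVES
    verdict = "bottlenecked by DMI" if demand > DMI3_CEILING_MBS else "fits"
    print(f"{op}: {demand} MB/s wanted, "
          f"{min(demand, DMI3_CEILING_MBS)} MB/s possible ({verdict})")
```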

I'm not, nor would I ever suggest that people shouldn't go for it.
 
There are hard numbers on the DMI 3.0 link speed. It is an 8GT/s x4 link, which after encoding overhead works out to just under 32Gb/s: 3,930MB/s to be exact, or 3.93GB/s. As I pointed out in the article, it's fine until you start talking about RAID striping M.2 drives.

Looks like I'll be building a PC with a pretty big SSD RAID and an M.2 drive, so which chip do I want? I don't mind going server grade...
 
Is there an M.2 converter for the boards without the M.2 slot and only room for the Intel thing (can't remember the name of it...)?
 
There are M.2 to PCI-Express adapter cards out there. Unfortunately, using such a drive as an OS drive requires the BIOS to support booting from NVMe. It's hit and miss on Z97, but all Z170 motherboards can do that.
 
Is there any difference in speeds for the M.2 socket on the Z97-based chips?

Thanks
 
Is there any difference in speeds for the M.2 socket on the Z97-based chips?

Thanks

The M.2 slot on Z97 boards is connected via PCIe 2.0, and the DMI interface is PCIe 2.0 x4. With Z170, the M.2 slot can be connected via PCIe 3.0, and the DMI uses PCIe 3.0 x4. DMI is the connection between the CPU and the chipset, so Z170 has a large speed advantage over Z97 in that regard.
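
For a rough sense of the gap, the same back-of-the-envelope math as earlier in the thread (link ceilings only, not real drive speeds; note many Z97 boards actually wire the M.2 slot at only x2, which would halve the Z97 figure):

```python
# Effective M.2 link ceilings after encoding overhead, assuming x4 links.
z97_m2  = 5 * 4 * (8 / 10) / 8       # PCIe 2.0 x4:  ~2.0  GB/s
z170_m2 = 8 * 4 * (128 / 130) / 8    # PCIe 3.0 x4:  ~3.94 GB/s
print(f"Z97 M.2 slot:  ~{z97_m2:.2f} GB/s")
print(f"Z170 M.2 slot: ~{z170_m2:.2f} GB/s ({z170_m2 / z97_m2:.1f}x)")
```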
 
Hope you can load up all 6 SATA ports and it won't choke. I had this issue with my Z87: 4 drives and a BD burner, and if I added a 5th drive it would hang. Remove the burner and it works.
Could also be my two RAID sets on the Intel controller.
Not a deal breaker by any means; I still have 4 ports on the Marvell.
 