GIGABYTE Aorus Z270X Gaming 9 Motherboard Review

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,533
GIGABYTE’s Z270X Gaming 9 is one of the most feature-rich, ultra-high-end offerings you’ll see for the Z270 chipset this year. We were super fond of last year’s similar offering, and as a result the Z270X Gaming 9 has very large shoes to fill. With its massive feature set and overclocking prowess, it is poised to be one of the best motherboards of the year.
 
Interesting review, and really a rather stuffed motherboard. Not my cup of tea, but it makes some great points on cost: audio of the same quality purchased separately would cost you a good chunk extra. The extra NIC and included WiFi can also be useful beyond gaming (I use WiFi, for example, to communicate with my iPhone as a web camera, plus tablet and phone communication and even streaming to them).

As for PLX, does it help particularly with AMD CFX configurations, where the cards use the PCIe bus with XDMA? x16 PCIe between the cards should help, or will the added latency kill it? If true, I think that would be a good selling point. Maybe when Vega launches, test CFX with a PLX x16/x16 configuration against x8/x8 on a non-PLX board, then an x16/x8/x8 PLX setup against an X99 board, checking for scaling as well. Without a clear, convincing result, the PLX solution seems of limited usefulness to me. It does add flexibility for add-in cards that otherwise only X99 could deliver, so that can be good if you use it.

I am glad GIGABYTE, and hopefully others, push the envelope each generation. I just wish Intel had a more powerful CPU for this motherboard beyond 4 cores. So for the quality, the very hard engineering of making it all work together, and pushing it to the extreme, I do understand the Gold award for GIGABYTE. I am sure there may be some great discounts as time goes on for this motherboard, or some combos.
 
Feature wise I wanted to consider this for my next build but the price was hard to work with. Especially with Ryzen now just days away.
 
The added latency of the PLX chip is minimal. You can see it in benchmarks, but it only ever amounts to 2 or 3 FPS or something like that. As I said, the PLX's real benefit is lane flexibility, not lane availability; it doesn't change the bandwidth to the CPU's PCIe controller. As for discounts, there won't be any. GIGABYTE never discounted the Z170X Gaming G1; retailers dropped them to around the $450 mark at the lowest. If you want the Z270X Gaming 9 at a price like that or better, you'll have to use discount codes or coupons, or buy it as part of a bundle, to get the price down. Stacking discounts is the only way to make the price of this thing more palatable.
 
Holy christ, RAID M.2?

I do sort of wonder: where does M.2 go from here? As mentioned, you were unhappy with the placement on an enthusiast board that would likely be running multi-GPU. Do you think manufacturers are, or should be, acknowledging M.2 in future designs? Deprecating external junkware features to squeeze in better M.2 placement?

As someone else mentioned, if Ryzen weren't on the horizon and I hadn't impulse-bought my current setup, I would seriously consider this.
 

M.2 was a laptop / mobile format first. SFF-8639 (now known as U.2) was a standard for SSDs in servers. Motherboard manufacturers have tried to lead SSD makers toward the U.2 form factor, but consumer SSD manufacturers are having none of it; Intel is the only SSD manufacturer that uses U.2 for consumer drives. SSD makers have probably embraced M.2 because it's a mobile standard that the desktop also has access to, so they would rather just keep to the form factor that reaches the widest possible market. Essentially, motherboard manufacturers are stuck supporting the SSD makers, and SSD makers are going to keep using the M.2 form factor because it hits both the mobile and desktop markets. It's now a vicious cycle that won't get broken anytime soon. The motherboard manufacturers are just responding to what they feel the public and industry want. Right now, that's M.2, even though the form factor is less than ideal for desktop systems. There are some designs that do place M.2 slots in better areas, but problems arise when you need to support two or three M.2 slots.
 
Lately I find myself skipping right to the conclusion section to find out the price. This will ultimately determine if the reviewed item is even something I'd consider.

Thanks for putting the price in the opening paragraph. I'll find my way out.
 
I prefer the M.2 drive to be on the back of the board, which may sound really hard to get to, but cutting a hole or access point in the case is not too hard. The M.2 drive is then shielded from most heat sources, and getting a fan on it is easier as well. Some cases make this easier than others. Also, once an M.2 drive is installed it almost becomes part of the motherboard anyway, unless it breaks.

As for CFX, since there is no bridge other than the PCIe bus, available bandwidth is used to communicate directly between the cards, which is more than just framebuffer information, I would think. I just don't know, nor have I seen, tests comparing limited-bandwidth conditions. Since dual cards seem less popular today than before, maybe it's a moot point. I could see VR potentially making very good use of dual cards, so that may keep dual solutions useful if VR content pushes past single-card abilities.
 
I'd save $120 and get the Asrock Supercarrier.

I haven't actually tested the Super Carrier so I can't say with 100% certainty which way I'd go but I'm inclined to go with the GIGABYTE. The GIGABYTE gives you better MOSFET cooling and much better audio. I prefer Intel NICs except for the wireless which I have to give credit to the AC-1535 for being faster than the Intel wireless controllers. From the look of it, GIGABYTE has a much better and beefier electrical design. I'm just looking at pictures and specs rather than getting down and dirty with it so that's conjecture on my part. I agree it's not for everyone but I could justify the cost increase for the GIGABYTE. Granted I wouldn't pay anything over $350 for a non-HEDT motherboard but that's just me.
 
$500 and Killer NICs? Why?!?!

It's my one beef with the design, but the wireless controller kicks ass. The Killer NICs also work pretty well in Windows 10. Honestly, unless you want to throw server OSes on this thing, I doubt you'd ever know the difference.
 
Still, it seems silly to go uber extra ultra on everything and then go with Killer for LAN.
 
I ended up purchasing the Gaming 7 board and a SoundBlaster ZxR. All in all I think I saved $100 or so. However, the board or the soundcard hiccuped at first - freezing system, failing to load the OS - until I changed slots on the soundcard. It's been a few weeks now and it seems to be stable for the most part.

Looking back, I think I should've taken the plunge with the gaming 9. More money to spend, but... Worth it, it seems.
 
I've never really been sold on the value of PLX chips. I could better understand it if the chip solely facilitated GPUs talking to each other, but the bottleneck is still the number of PCIe lanes that flow to/from the CPU. Isn't that still a hard limit, and a rather choked-off limit if multiple M.2 drives are using x4 PCIe 3.0 lanes apiece?
 

No. M.2 goes over the DMI 3.0 bus which is a different path to the CPU. It does not go over the 16 lanes dedicated to the CPU. PLX chips do multiplex the 16 PCIe lanes that are provided by the CPU to 32. The benefit is not that they provide more bandwidth, but that they allow for more flexibility in the PCIe lane allocation to the expansion slots.
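To make that distinction concrete, here's a toy Python sketch (the function name and lane figures are mine, based on the Z270 numbers discussed in this thread, not a real API): a PLX-style switch widens the downstream links, but aggregate bandwidth to the CPU is still capped by the 16 upstream lanes.

```python
# Toy model of a PCIe switch (PLX-style): it multiplies downstream lane
# *connectivity*, not upstream *bandwidth* to the CPU.

CPU_UPSTREAM_LANES = 16      # Z270 CPUs expose 16 PCIe 3.0 lanes
GBPS_PER_GEN3_LANE = 0.985   # ~985 MB/s usable per PCIe 3.0 lane

def upstream_bandwidth_gbps(slot_widths):
    """Peak aggregate bandwidth to the CPU for a set of populated slots.

    Behind a PLX, slots can each negotiate wide links (even x16/x16),
    but CPU-bound traffic still funnels through the 16 upstream lanes,
    so the aggregate is capped there.
    """
    downstream = sum(slot_widths)
    effective = min(downstream, CPU_UPSTREAM_LANES)
    return effective * GBPS_PER_GEN3_LANE

# Two cards at x16/x16 behind a PLX: 32 downstream lanes...
print(upstream_bandwidth_gbps([16, 16]))  # ...still ~15.8 GB/s to the CPU
# Two cards at x8/x8 straight off the CPU: same ceiling.
print(upstream_bandwidth_gbps([8, 8]))
```

Card-to-card traffic (peer-to-peer through the switch) is the one case that doesn't cross the upstream link, which is where the flexibility argument comes from.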
 
Why do laptops have more USB Type-C ports than these mobos? One port, really?

Some motherboards have two USB 3.1 Gen 2 Type-C ports. The reality is, you'll probably see mobile-oriented devices use the technology first, such as external storage. While some desktop users will take advantage of this, Thunderbolt has historically done far better on laptops, so I'm not surprised that mobile systems have more ports than desktops do at the moment. Also keep in mind that in Thunderbolt 3 mode devices can be daisy-chained, making the lack of ports a minor issue. For USB 3.1 Type-C, it may be possible to adapt to Type-A ports and get the full performance so long as it's a Gen 2 Type-A port, in which case you are golden on the desktop, although I haven't looked into this myself. Proliferation of USB 3.1 Type-C and Thunderbolt 3 devices is slow at best.
 

If it is all DMI, then isn't the bandwidth still more than fully saturated when using a pair of M.2 drives? I know it's a wiki quote, but DMI 3.0 is limited to 3.93GB/s, so with a single Samsung Pro reading at 3500MB/s, having two, plus SATA ports, plus USB ports, is going to get more than a little crowded.

Was that a change from Z97 and Z170? I just hopped over to Newegg, and for instance a board like the MSI Z97 Gaming 5 would have a populated M.2 replace two SATA ports, whereas the Asus Z97 PRO Gamer would either eat two SATA ports in SATA mode or borrow bandwidth from two of the PCIe slots.

Lots of Z170 boards were advertised as having M.2 that uses x4 PCIe 3.0. The Asus Z170 Hero uses a mix of PCIe lane bandwidth and also disables SATA ports.
 

Keep in mind that the bandwidth limitation of DMI 3.0 isn't as big a deal as we generally make it out to be. You won't be saturating the bus full time with M.2 devices; you won't push them that hard outside of benchmarking very often. It's the same deal with networking, USB, etc. Normally the limitation is more of a theoretical concern than an actual one. Still, it is certainly possible to saturate this bus, which makes two or more M.2 drives a pointless configuration option. DMI 3.0 was introduced with Z170. DMI 2.0 had roughly half the bandwidth of DMI 3.0 and a great deal more overhead. What's really changed is that all our SATA ports are actually SATA Express ports, which use more lanes than the older SATA controllers did. Technically, SATA Express can support up to 10Gb/s or even 16Gb/s depending on how the lanes are allocated.

Z97 also rarely allocated x4 PCIe lanes to M.2, and it was even rarer for those designs to allocate Gen3 lanes to M.2. Every Z170 Express motherboard I've looked at had x4 Gen3 lanes allocated to M.2. As a result, you always lose SATA ports when using multiple M.2 devices.
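The saturation point above is easy to put numbers on. A quick back-of-the-envelope check in Python, using only the figures quoted in this thread (DMI 3.0 at roughly 3.93 GB/s, a fast NVMe drive reading around 3.5 GB/s sequentially; the function name is mine):

```python
# Rough DMI 3.0 saturation check. Figures are the ones quoted in the
# thread, not measurements: DMI 3.0 ~3.93 GB/s, fast NVMe ~3.5 GB/s.

DMI3_GBPS = 3.93
NVME_READ_GBPS = 3.5

def dmi_utilization(n_drives, per_drive_gbps=NVME_READ_GBPS):
    """Fraction of DMI 3.0 demanded by n drives reading flat-out."""
    return n_drives * per_drive_gbps / DMI3_GBPS

print(f"{dmi_utilization(1):.0%}")  # one drive: ~89% of the bus
print(f"{dmi_utilization(2):.0%}")  # two in RAID 0: ~178%, well past saturation
```

So one fast drive alone nearly fills the link, and an M.2 RAID 0 pair can only show its benchmark numbers in short bursts; sustained sequential reads are capped at the DMI ceiling regardless.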
 
But putting M.2 in RAID 0 is absolutely crucial for timely loading and saving of .docx files! /snark

There are enough prosumer uses that I can see it as valuable.

I know most of the Z97 boards ran M.2 at SATA Express speeds by disabling two SATA III ports; the rest ran at x2 or x4 PCIe 2.0 or 3.0.

Yeah, the Z170 boards would only have a single M.2 on PCIe because there are so few lanes, but it still seems like it's an issue for Ryzen systems as well as a potential issue for Z270 (certainly with dual video cards). So while I guess the PLX can help with managing communication between devices over PCIe, it still seems like some real degree of bottlenecking is likely.
 

Not really. Z97 was all over the place in how it handled M.2. Many Z97 motherboards didn't support any flavor of M.2; in some cases they supported only shortened M.2 drives, or SATA-based M.2 drives only. Most of the time Z97 motherboards supported only x2 PCIe Gen2 lanes. That's it. Sometimes this disabled SATA ports, sometimes not; with x4 Gen3 implementations it was pretty much a certainty, if memory serves. There were also plenty of Z170 motherboards that supported two and even three M.2 slots. Again, the PLX has nothing to do with the performance of M.2. Any lanes the M.2 drives / slots need are mapped to the PCH and go through the DMI bus anyway, so the PLX is largely irrelevant even when it's present. Having a PLX really only benefits your expansion slots, allowing them to be configured in dynamic ways. Oftentimes manufacturers still physically limit the slots to x8 lanes outside of the two designated for x16, but theoretically you could allocate 16 lanes' worth of connections to them and dynamically assign any combination of lane configurations between them: x4/x16/x0/x8 or anything you want. The most common are x16/x16 or x8/x8/x8/x8. Without a PLX you are basically limited to x16/x0 or x8/x8/x4, where the last 4 lanes come from the chipset.
 
But that's the thing I never really got: unless Intel specified it, why would the cards only use x16/x0, x8/x8, or x8/x8/x4 regardless of how many slots (2, 4, 6 or 7+) have full x16 electrical connections?
 

With a PLX chip you can actually have two cards running at x16/x16, but they still have to go through the CPU's existing 16 lanes and the PLX chip. That does introduce some latency. It's not generally worth it for two cards, but it's essential for 3-Way and 4-Way configurations. While I question the wisdom of spending money on 3-Way+ SLI configurations at all, and it makes zero sense to do so on Z270, if you are going to do it, the benefit is that you don't limit that third card to x4 lanes. (SLI doesn't support this; Crossfire does.) All your cards have the same bottleneck and all your cards have the same amount of bandwidth. Fortunately, even x4 vs. x8 lane configurations for graphics cards sometimes show minimal performance differences.

Basically it's not ideal in theory, but in practice it works better than you might think when 3-Way or 4-Way GPU scaling actually works worth a shit. Again, being able to put controllers or graphics cards wherever you like in the x16 slots and get their full performance makes it worthwhile to pay for the PLX chip in some cases.
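The "same bottleneck, same bandwidth" point can be sketched numerically. Assuming a simple proportional split of the 16 upstream lanes under full contention (a deliberate simplification; the function name and model are mine, not how a real switch arbitrates), the asymmetry of a non-PLX 3-way layout versus a PLX layout looks like this:

```python
# Why a PLX helps 3-way setups: without one, the third Z270 slot runs
# at only x4 (from the chipset), while behind a PLX every card
# negotiates a wide link and shares the 16 upstream lanes evenly.
# Proportional sharing under full contention is a simplification.

def per_card_share(widths, upstream=16):
    """Each card's share of the upstream lanes, proportional to link width."""
    total = sum(widths)
    return [upstream * w / total for w in widths]

print(per_card_share([8, 8, 4]))     # no PLX: third card stuck on a narrow link
print(per_card_share([16, 16, 16]))  # PLX: symmetric shares for all three cards
```

The totals are identical either way; what the PLX buys is that no single card is pinned behind a narrow x4 link while the others idle.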
 