MSI AM4 pron

I like the MSI X370 Xpower Gaming Titanium. Not mainly for looks, but it has more control over its VRMs with 8+ power phases. Asus has yet to show a high-end X370 board, so if they are slow and drop the ball I am likely to go for this MSI. I believe it will give top-notch overclocks.

What's with all those weird shrouds and covers, though? Are they functional, or just for looks?

Personally I'd take a basic green old-school-looking board, though. I liked this hobby better back before it turned all "pimp my ride".
 
All I do is game, BF1 being the main game. So I'm looking to replace my 3570K. Being mostly an MSI fan, the Titanium looks great.
 
I'm going to buy (hopefully) an Asus Impact-grade ITX AM4 board and build a badass SFF system, going from 2 GPUs to 1. SLI support just ain't what she used to be, ain't what she used to be...
 
You are going to tell me if the platform is stable. I'm going to read your review before I purchase anything.

I'm of the same opinion on these 2 points. Got to admit though, I find it quite amusing that AMD will be reliant on Kyle's reviews for purchases; hopefully this time they release a product that isn't impacted by their thin skin.
 
I'm interested in Ryzen because, at 8c/16t, if it follows previous AMD designs and supports IOMMU, it would be a great replacement for my home ESXi server (currently an FX-8350 w/ 32GB RAM).

Doubling the cores *AND* the RAM would give me a (hopefully) powerful enough server for all my VMs without breaking the bank compared to the Broadwell-E/Skylake-E CPUs.

As nice as dedicated server designs are, with important features like IPMI, I find I don't need that kind of performance, and I like that I can repurpose the server as a desktop without major re-engineering to make it more user-friendly.
 
With I/O as severely limited as it is on the AM4 platform, you are going to see creative measures to add slots: PCIe switches, PLX bridges, etc.
 
With I/O as severely limited as it is on the AM4 platform, you are going to see creative measures to add slots: PCIe switches, PLX bridges, etc.


"Severely limited" seems like an exaggeration.

It may not have 40 lanes like I am used to with Intel's -E parts, but it still has more than the non-E parts.
 
"Severely limited" seems like an exaggeration.

It may not have 40 lanes like I am used to with Intel's -E parts, but it still has more than the non-E parts.

The X370 chipset has 8 PCIe 2.0 lanes. The CPU (not the APUs) has 16+2 PCIe 3.0 lanes, or 16+4 if you are willing to sacrifice 2 SATA ports.
 
The X370 chipset has 8 PCIe 2.0 lanes. The CPU (not the APUs) has 16+2 PCIe 3.0 lanes, or 16+4 if you are willing to sacrifice 2 SATA ports.
So what are the M.2 drives going to use for PCIe, 2.0 or 3.0? If 3.0, that would limit GPU bandwidth; if 2.0, the M.2 drives would be limited (though in reality, real-world performance with PCIe drives doesn't seem to improve that much).

So maybe sacrifice 2 SATA ports for one x4 PCIe 3.0 M.2 drive option. Looks like Intel's -E processors will have a clear advantage in PCIe lanes.
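
For reference, a rough back-of-the-envelope comparison for an x4 M.2 link, assuming the standard line rates and encodings (PCIe 2.0: 5 GT/s with 8b/10b, PCIe 3.0: 8 GT/s with 128b/130b); just a sketch, not official numbers:

Code:
# x4 PCIe 2.0: 4 lanes * 5000 MT/s * 8/10 encoding efficiency / 8 bits per byte
echo "4 * 5000 * 8/10 / 8" | bc -l      # ~2000 MB/s
# x4 PCIe 3.0: 4 lanes * 8000 MT/s * 128/130 encoding efficiency / 8 bits per byte
echo "4 * 8000 * 128/130 / 8" | bc -l   # ~3938 MB/s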
 
So what are the M.2 drives going to use for PCIe, 2.0 or 3.0? If 3.0, that would limit GPU bandwidth; if 2.0, the M.2 drives would be limited (though in reality, real-world performance with PCIe drives doesn't seem to improve that much).

So maybe sacrifice 2 SATA ports for one x4 PCIe 3.0 M.2 drive option. Looks like Intel's -E processors will have a clear advantage in PCIe lanes.

I think mobo manufacturers will either run the graphics lanes at "half speed" or use PLX chips to create more 3.0 lanes. I don't get what went wrong; these chipsets should have been PCIe 3.0 and obviously had more than 8 lanes. The SATA part isn't as important since SATA is dying, so you can go for the extra 3.0 lanes.

We are at the dawn of 2.5-10 Gbit Ethernet and multiple M.2/U.2 drives with NVMe.
 
I think mobo manufacturers will either run the graphics lanes at "half speed" or use PLX chips to create more 3.0 lanes. I don't get what went wrong; these chipsets should have been PCIe 3.0 and obviously had more than 8 lanes.

We are at the dawn of 2.5-10 Gbit Ethernet and multiple M.2/U.2.
Well, our favorite (not!) source indicated two x16 PCIe links for the GPUs and additional lanes via PLX chips:
http://wccftech.com/amd-ryzen-am4-x370-motherboards-ces/

The MSI has three PCIe x16 slots, one wired for x16 and the other two for x8 each, which lines up with 32 PCIe lanes for the GPUs.

Sounds like much more than 20 PCIe 3.0 lanes.
 
Well, our favorite (not!) source indicated two x16 PCIe links for the GPUs and additional lanes via PLX chips:
http://wccftech.com/amd-ryzen-am4-x370-motherboards-ces/

The MSI has three PCIe x16 slots, one wired for x16 and the other two for x8 each, which lines up with 32 PCIe lanes for the GPUs.

Wccftech was wrong (what a surprise); the chipset and CPU configs got revealed:
https://hardforum.com/threads/a320-b350-x370-chipset-configs.1921663/

It's best compared to LGA1151 with a slightly better H110, or thereabouts.

[Image: am4_platform_cpu.jpg]

[Image: am4_platform_chipset.jpg]
 
Wccftech was wrong (what a surprise); the chipset and CPU configs got revealed:
https://hardforum.com/threads/a320-b350-x370-chipset-configs.1921663/

It's best compared to LGA1151 with a slightly better H110, or thereabouts.

[Image: am4_platform_cpu.jpg]

[Image: am4_platform_chipset.jpg]


I don't see how this can possibly be accurate when AMD has claimed x16/x16 SLI compatibility and the board that started this thread seems to have at least enough lanes for x16/x8/x8 configurations.

I guess we will just have to wait until final launch of parts to know for sure.

I mean, AMD is claiming support for tri-CrossFire (or whatever it is called). You can't do that with only 16 lanes, and you typically shouldn't even use chipset lanes for GPUs.

Something seems amiss here.
 
Well, a computer from 30 years ago would be a wonder to behold in some countries. Got to remember that there are a lot of places where you still have to have a satellite phone to make a call. Many places don't even have electricity or running water. As they become more modernized, stuff that we threw out ages ago will be very new to them.
 
I don't see how this can possibly be accurate when AMD has claimed x16/x16 SLI compatibility and the board that started this thread seems to have at least enough lanes for x16/x8/x8 configurations.

I guess we will just have to wait until final launch of parts to know for sure.

I mean, AMD is claiming support for tri-CrossFire (or whatever it is called). You can't do that with only 16 lanes, and you typically shouldn't even use chipset lanes for GPUs.

Something seems amiss here.

LGA1151 boards have this too. Do they have more than 16 lanes for graphics from the CPU? No, you just add a PLX chip. A single PLX 8747 can turn 16 (or fewer) lanes into 32.

[Image: 50217547_snaphsot0012.png]
 
LGA1151 boards have this too. Do they have more than 16 lanes for graphics from the CPU? No, you just add a PLX chip. A single PLX 8747 can turn 16 (or fewer) lanes into 32.

[Image: 50217547_snaphsot0012.png]

Yeah, I forgot about these. They've been around forever, but if I recall correctly they were pretty rare until Skylake or so? I wonder, did they maybe get cheaper in recent years?


There should definitely be a lot of promise in using these, as traditional statically connected PCIe lanes are typically idle most of the time, so there is plenty of opportunity for sharing them effectively upstream, since not all expansion cards are going to be hitting their full bandwidth at the same time.

I wonder what kind of latency a PLX chip introduces to the bus.
 
Yeah, I forgot about these. They've been around forever, but if I recall correctly they were pretty rare until Skylake or so? I wonder, did they maybe get cheaper in recent years?


There should definitely be a lot of promise in using these, as traditional statically connected PCIe lanes are typically idle most of the time, so there is plenty of opportunity for sharing them effectively upstream, since not all expansion cards are going to be hitting their full bandwidth at the same time.

I wonder what kind of latency a PLX chip introduces to the bus.

They are actually very common, and real-world usage goes back to 2012 or so. The product itself goes back to 2010.

Up to 150ns latency added.
https://www.broadcom.com/products/pcie-switches-bridges/pcie-switches/
 
They are actually very common, and real-world usage goes back to 2012 or so. The product itself goes back to 2010.

Up to 150ns latency added.
https://www.broadcom.com/products/pcie-switches-bridges/pcie-switches/


Ahh, well. The last serious motherboard I bought was in late 2011 (I've bought some low-end Mini-ITX boards for HTPCs since), so I can see how I missed it.

150ns seems pretty minor, but I don't have enough specific PCIe bus experience to say whether or not it is significant. Anyone?
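
Not an authoritative answer, but for a rough sense of scale (same standard PCIe 3.0 assumptions as above): 150ns is about the time it takes to push one 4 KiB payload over a full x16 Gen3 link, and it is tiny next to the tens of microseconds a typical NVMe read takes.

Code:
# x16 PCIe 3.0 throughput in MB/s (= bytes per microsecond)
echo "16 * 8000 * 128/130 / 8" | bc -l   # ~15754
# time to move a 4 KiB payload at that rate, in microseconds
echo "4096 / 15754" | bc -l              # ~0.26 (i.e. ~260ns)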
 
I'm amused that in 2017 they are still putting legacy PCI slots on that one on the left. Who still uses those?

On a mainstream or mid-range board, I have no issue with those existing. Sometimes you need a PCI slot for an older piece of hardware that cannot be replaced. Also, many gamers still prefer a PS/2 port and an old-style IBM mechanical keyboard.
 
For my Linux-based PVR I still have a PCI-based tuner for analog recording, so having one PCI slot will be helpful to me. Hopefully Zen will support ECC on its consumer boards as well and have an option for 8+ SATA ports, although I could easily solve that with an LSI card; it would just add ~10W of power to an always-on system.
 
On a mainstream or mid-range board, I have no issue with those existing. Sometimes you need a PCI slot for an older piece of hardware that cannot be replaced. Also, many gamers still prefer a PS/2 port and an old-style IBM mechanical keyboard.

Yeah, I don't mind PS/2. I used to use a black Model M13 TrackPoint II (damn, I loved that keyboard) and used the PS/2 ports. In many ways they work better than USB. I have since switched to new-production Unicomp keyboards, and while they are good to type on, they have nowhere near the quality and fit and finish of the old ones (no one is willing to pay for that kind of quality anymore; keyboards have to be cheaper). For a while I was having trouble getting into the BIOS because the USB keyboards would initialize too slowly. I'd have to try several times, repeatedly tapping the Del key and hoping it would finish initializing before I ran out of POST screen.

I only got rid of the M13 because I wore all the lettering clean off the keys. When I bought replacement keycaps for it, Unicomp informed me I had gotten the last set they had and they weren't planning on making any more, and I never found any aftermarket manufacturers.

Since these keyboards are getting pretty rare, and some people collect them, I decided to clean up the M13, install the new caps, make it look like new, and then put it away in its box. I didn't want to wear out a collector's item :p

I took a picture in my white box after cleaning it up, before putting it away. I think it looks brand new. Hoping it appreciates :p

[Image: 5091177536_61b3c181bd_o.jpg]
 
For my Linux-based PVR I still have a PCI-based tuner for analog recording, so having one PCI slot will be helpful to me. Hopefully Zen will support ECC on its consumer boards as well and have an option for 8+ SATA ports, although I could easily solve that with an LSI card; it would just add ~10W of power to an always-on system.

Since Zen is essentially an SoC, I believe the SATA controller is on-die and not part of the chipset like we are used to.

Not sure how many ports it supports.

Judging by the pics of the "Tomahawk" board, I'm thinking it looks like 6 SATA ports and 2 U.2 ports. Who knows if they used all of them, though.
 
Since Zen is essentially an SoC, I believe the SATA controller is on-die and not part of the chipset like we are used to.

Not sure how many ports it supports.

Judging by the pics of the "Tomahawk" board, I'm thinking it looks like 6 SATA ports and 2 U.2 ports. Who knows if they used all of them, though.

The CPU has 0-2 SATA ports, depending on whether you use 2 or 4 lanes for an M.2 slot. The chipset has up to 4 SATA ports on X370 and 2 SATA ports on A320/B350.
 
It is great to see all these boards getting ready to be released. Looks like everything is on schedule but I sure wish I could get it NOW! :D
 
Yeah, I don't mind PS/2. I used to use a black Model M13 TrackPoint II (damn, I loved that keyboard) and used the PS/2 ports. In many ways they work better than USB. I have since switched to new-production Unicomp keyboards, and while they are good to type on, they have nowhere near the quality and fit and finish of the old ones (no one is willing to pay for that kind of quality anymore; keyboards have to be cheaper). For a while I was having trouble getting into the BIOS because the USB keyboards would initialize too slowly. I'd have to try several times, repeatedly tapping the Del key and hoping it would finish initializing before I ran out of POST screen.

I only got rid of the M13 because I wore all the lettering clean off the keys. When I bought replacement keycaps for it, Unicomp informed me I had gotten the last set they had and they weren't planning on making any more, and I never found any aftermarket manufacturers.

Since these keyboards are getting pretty rare, and some people collect them, I decided to clean up the M13, install the new caps, make it look like new, and then put it away in its box. I didn't want to wear out a collector's item :p

I took a picture in my white box after cleaning it up, before putting it away. I think it looks brand new. Hoping it appreciates :p

[Image: 5091177536_61b3c181bd_o.jpg]


My address is in North Carolina. How much is shipping? j/k :)

Congrats, that keyboard looks spiffy! I love it!
 
I miss the days when AMD excited me :(

Hell, the old original Athlon rig I have at home that I would like to get going this weekend excites me more...
 
It is great to see all these boards getting ready to be released. Looks like everything is on schedule but I sure wish I could get it NOW! :D

Man, you should have seen Reddit's /r/AMD after the CES "announcement". If the internet had a physical address, they would have burned down that entire block. Never promise an announcement and then release marketing videos on YouTube. LOL!

Nvidia was almost as bad with their car announcements. The only thing I remember from that is that it's $25 for 20 hours of gaming on the cloud with the Shield. :) On Twitch, after they let the hordes of Nvidia fans speak, they cursed Nvidia's CEO to hell.

In short, never tease people with B.S.. It doesn't turn out well.
 
Kyle, you are right on, once again. Permit me to repeat my focus on storage performance.

What bothers me most about Intel's latest chipsets is the max bandwidth of the DMI 3.0 link,
which is exactly equal to the max bandwidth of a single M.2 NVMe slot.

This observation motivated me to recommend to AMD's CEO that AMD might OEM Highpoint's model 3840A NVMe RAID controller,
because that 3840A is one of the few (that I can find) with a full x16 edge connector and PCIe 3.0 support.

As benchmarks are now showing, fast NVMe SSDs like Samsung's 960 Pro are already bottle-necked by the upstream DMI 3.0 link.

The math is straightforward: x16 PCIe 3.0 lanes @ 8 GHz / 8.125 bits per byte = 15,753.6 MB/second,
and that calculation is for only one x16 NVMe RAID controller.
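
As a quick sanity check of that arithmetic (a sketch using the same assumptions: 8 GT/s per lane and 128b/130b encoding, i.e. 8.125 transferred bits per data byte):

Code:
# 16 lanes * 8000 MT/s / 8.125 bits per byte
echo "16 * 8000 / 8.125" | bc -l   # ~15753.8 MB/s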

I am sure that all of your readers already appreciate that x16 bandwidth has been available to video cards for many years.

It's time we all faced the music and did something to give storage the same upstream bandwidth -- and stop throttling storage progress.

(My 2 cents.)
You are 100% correct and I could not agree with you more. Onboard M.2 RAID is worthless for speed right now for all intents and purposes.
 
My address is in North Carolina. How much is shipping? j/k :)

Congrats that keyboard looks spiffy! I love it!

Lol. I won it on eBay for $120 back in 2006. At the time I thought it was an insane amount to spend on a keyboard, especially a used one. I'd never spent more than $30 on a new keyboard before, but I really loved Model Ms, and the M13 TrackPoint II was the only one ever made that wasn't that communist grey color. I had to have a black one to go with my setup.

Apparently they are going for $300 on eBay now. I'll just hold on to it and hope it continues appreciating :p
 
I saw some photos at the links above, which show the patterns of solder points on the underside:
the second and third x16 PCIe slots only show x8 solder points. Hope this helps.

EDIT: here you go:
http://media.gamersnexus.net/images/media/2017/CES/msi/msi-x370-backside.jpg

That's good info, thanks.

I don't do SLI anymore (I've experienced multi-GPU from both AMD and Nvidia, and I am just fed up with it), so my demands on PCIe expansion are less than they used to be.

Right now I have 4 PCIe cards in my machine.

- Pascal Titan X (Gen3 x16)
- Sound Blaster X-Fi Titanium HD (Gen1? x1)
- Intel 10 Gigabit Ethernet (Gen 2 x8)
- Intel SSD 750 400GB (Gen 3 x4)

As long as I can fit these at these speeds into a Zen board I'll be happy.

It also depends on what on board Ethernet chip they have (or is the Ethernet on die as part of the SoC?)

If it's Realtek, I'll have to install an Intel gigabit NIC as well, which will take another x4 at Gen 1 (I think?). I don't do Realtek; they have pissed me off too much over the years. I stick to Intel NICs and sometimes Broadcom (NetXtreme), but never Realtek. I use my 10-gig Ethernet for a direct link to my NAS server. Everything else goes over traditional gigabit to my switch. I always disable the onboard Realtek sound as well. I'll be holding on to my EMU20k2 chip until it dies!

Actually, does anyone know if the Ethernet is onboard in the SoC? If it is, is it AMD's design or licensed from someone else?
 
I've been doing some rough estimates, using our limited hardware here + some available Internet stories.
For now, my rough assumptions predict about 20% aggregate controller overhead for SATA-III RAID-0 arrays,
and about 10% aggregate controller overhead for NVMe RAID-0 arrays, with a focus on READ speed.
Now, as a "worst case" prediction, assume AT MOST 25% aggregate controller overhead
for a RAID-0 array of 4 x Samsung 960 Pro M.2 SSDs, thus: 15,753.6 x 0.75 = 11,815.2 MB/second.
That performance should run circles around Intel's DMI 3.0 link = a single M.2 NVMe SSD.
On this same point, Highpoint's estimate of max x16 bandwidth is almost exactly the same:
http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_09.pdf
i.e. 15,760 MB/second !! And, the bonus is that the 3840A is reportedly designed
to inter-operate with multiples installed in motherboards having two or more full x16 PCIe 3.0 slots.
How long would it take for AMD to finish the QC (quality control testing) of a single 3840A
in one of their latest AM4 chipsets? I see a big WIN-WIN for AMD and Highpoint.
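
Putting those estimates side by side (same assumptions as above, plus DMI 3.0 being equivalent to an x4 PCIe 3.0 link):

Code:
# 4-drive NVMe RAID-0 in a full x16 slot, assuming 25% worst-case aggregate overhead
echo "16 * 8000 / 8.125 * 0.75" | bc -l   # ~11815 MB/s
# ceiling of a DMI 3.0 (x4 PCIe 3.0 equivalent) uplink
echo "4 * 8000 / 8.125" | bc -l           # ~3938 MB/s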

Kyle, maybe you should get on the telephone to Dr. Su, AMD's CEO, and replay these numbers for her??

Yuck... Highpoint. I don't trust their products at all. I've had nothing but problems over the years.

For my NAS server I went with two LSI 9220-8i SAS cards instead, flashed to IT (JBOD/HBA) mode - of course - so I could use them with ZFS. They have been perfectly reliable for years now.

My Highpoint controllers have all gone in the trash. They are absolute junk as far as I'm concerned.
 
M.2 with a cable

That way you can get future, larger SSDs and still have the bandwidth without any heat issues. However, I believe only Intel has created drives for this. I guess it's better than SATA Express; not sure why those are even being used anymore.
 
I understand why you have written that. I've stayed with RAID-0 arrays, for raw performance,
and I've learned to avoid the pitfalls that do come with the Highpoint 2720SGL.
One necessary step is to STUDY the README file that must be downloaded from their website.
The latest driver must be installed when that card is NOT wired to any storage devices.
Then, the BIOS must be flashed to ensure 6G transmission speeds.
Lastly, because INT13 is ENABLED at the factory, this setting can result
in DISABLING motherboard RAID support. Thus, the solution to that problem
has been to DISABLE INT13 when flashing the latest BIOS. Then, the 2720SGL
will allow a system to BOOT without interfering with other storage controllers.
If one is doing a fresh install of Windows to SSDs wired to the 2720SGL,
it's OK to leave the factory defaults unchanged: INT13 ENABLED because
that's the proper setting to BOOT from the 2720SGL.
And, it is typically necessary to change the motherboard BIOS setting
to "Standard IDE" or "AHCI" if the 2720SGL conflicts with a "RAID" option.
By observing all of the above, all 6 of our 2720SGLs have been running
perfectly for several years now. Just yesterday, I installed another 2720SGL
in an aging PCIe 1.0 motherboard -- as a backup storage server --
and it's working perfectly with 4 x Samsung 750 SSDs (4 integrated caches
@ 250MB = 1.0GB of cumulative cache). But, on a practical level,
you can forget calling Highpoint Tech Support: as far as I can tell,
it doesn't exist. That's why I bit the bullet and graduated with
honors from Hard Knocks University (see above).

I've had issues with multiple different Highpoint controllers over the years, but it's good to know this model has workarounds.

I didn't realize Intel 750's came in 250GB. I thought I had the smallest one at 400GB.

I too have a good amount of cache in my server, but mine is set up differently:

I run Proxmox for my virtualization server, and have the following pools:

Code:
# zpool status

  pool: rpool

    NAME        
    rpool      
     mirror-0  
       Samsung 850 EVO 512GB
       Samsung 850 EVO 512GB


  pool: zfshome

    NAME                                            
    zfshome                                          
     raidz2-0                                      
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
     raidz2-1
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
       WD Red 4TB
    logs
     mirror-2
       Intel SSD S3700 100GB
       Intel SSD S3700 100GB
    cache
     Samsung 850 Pro 512GB
     Samsung 850 Pro 512GB

In other words: two mirrored 512GB Samsung 850 EVOs for the boot drive and VM storage.

Then I have my main 48TB pool for storage, with two mirrored Intel S3700 as ZIL/SLOG devices, and two striped 512GB Samsung 850 Pro drives as cache, so a TB of SSD cache.
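
For anyone curious, a minimal sketch of how a pool with that layout would be assembled (the device names here are placeholders, not my actual drives):

Code:
# two 6-disk raidz2 vdevs, a mirrored SLOG, and two striped cache devices
zpool create zfshome \
    raidz2 disk1 disk2 disk3 disk4 disk5 disk6 \
    raidz2 disk7 disk8 disk9 disk10 disk11 disk12 \
    log mirror slog1 slog2 \
    cache cache1 cache2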

The server also contains 2x 128GB Samsung 850 Pro drives, one for a Swap drive and one for a Live TV buffer for my MythTV PVR backend, as well as a 1TB Samsung 850 EVO as a scheduled recording drive. MythTV records new recordings to the 1TB drive, and as space is needed a cron script moves the oldest remaining recordings to my ZFS storage pool.
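
The mover script itself is nothing fancy; roughly this idea (paths, file extension, and the free-space threshold here are made up for illustration, and both directories sit in the same MythTV storage group so the backend can still find the files):

Code:
#!/bin/sh
# When the fast recording drive runs low on space, move the oldest recordings
# over to the big ZFS pool.
SRC=/recordings/fast        # assumed mount point of the 1TB 850 EVO
DST=/zfshome/recordings     # assumed dataset on the 48TB pool
MIN_FREE_GB=100             # assumed free-space threshold

free_gb() { df -BG --output=avail "$SRC" | tail -n1 | tr -dc '0-9'; }

while [ "$(free_gb)" -lt "$MIN_FREE_GB" ]; do
    oldest=$(ls -1tr "$SRC"/*.ts 2>/dev/null | head -n1)
    [ -n "$oldest" ] || break
    mv "$oldest" "$DST"/
done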

The mirrored boot/VM drives are connected to onboard SATA, as are the two 128GB drives and the 1TB drive. The big pool, with its 12 hard disks and 4 SSDs, is connected entirely to my LSI controllers.

It's a solution that has worked pretty well for me. Some think I am crazy to have used EVO drives in a server, but I did the math on write cycles before I decided on them, and determined it would be fine, and they have actually performed much better than my calculations, with the wear level indicator barely having budged at all.


Anyway, now I have gotten us thoroughly off topic. Sorry everyone. I tend to get carried away sometimes. This server stuff is one of the few things I get excited about these days :p
 
> I didn't realize Intel 750's came in 250GB. I thought I had the smallest one at 400GB.

Not Intel 750.

Samsung 750 e.g.:
http://www.newegg.com/Product/Product.aspx?Item=9SIA12K4285519&Tpk=9SIA12K4285519
(cache = 256MB per SSD even on the 120GB model)

I've preferred to rely on the power available from an idle CPU core,
rather than to spend lots more on a hardware RAID controller.

I watch Windows Task Manager often, and our quad-core Intel CPUs
are almost NEVER at 100% utilization.

My major focus has been to identify upstream bottlenecks
in storage subsystems.

Even with older LGA775 Intel CPUs, like the Q9550,
one of its idle cores runs at > 3.0 GHz with SpeedStep,
and its Level 2 cache is 12MB. That much power
is more than sufficient to push a RAID-0 array
without needing to spend a lot more on a
hardware RAID controller.

(These are just my idiosyncratic preferences.)

No, it makes sense. For my server I think ZFS is actually a better solution than hardware RAID. I bought the LSI SAS adapters not because of their hardware RAID capabilities (I don't use those at all) but because of their reliability and performance.
 
You might want to take a look at the SanDisk Extreme Pro models:
we've had ZERO problems with those SSDs, and they come
with a 10-YEAR factory warranty, like Samsung's 850 Pro SSDs.
Unfortunately, the reviews of the SanDisk Extreme Pro models
were so positive, their popularity resulted in a very large
price increase recently.


Interesting.

The school of hard lessons has, over the years, taught me to trust nothing but Intel or Samsung SSDs, but maybe this means I have a third option to consider next time!
 
X370 for home, B350 for work. :) Now, I may have to buy online at the Microcenter website before I do a 3 hour drive to pick things up. I do not want to show up and everything be sold out.
 
Well, I'm hoping for a good workstation board. None of this gaming nonsense. I'm happy to accept plain looks: no fancy shrouds or heatsinks, no color themes, no special sound chipsets or amps, no LEDs or LED headers, or any junk like that.

I just want a basic, down-to-earth motherboard with good overclocking capabilities and a metric truckload of PCIe slots, in eATX or CEB form factor, with nothing more on board than it absolutely needs.

The limited number of PCIe lanes is concerning me, though. I was hoping they would copy the 40 lanes of Intel's -E parts, or even one-up them. 16x is kind of a letdown.

This is what I'm coming from, and what I'll be judging it against:

[Image: P9X79WS.jpg]
 