MSI AM4 pron

Are there boards that go above 2667 or 3000MHz on memory? Or is that simply the limit for Ryzen, just as 2400MHz previously was on the APUs?

MSI, for example, lists 2667MHz+ as the OC max, while it's 4133MHz+ on their Z270 boards.

Is there even much practical benefit in clocking the RAM that high?

I was planning on buying non-OC RAM and sticking to the highest stock clocks.

I haven't bothered overclocking RAM in forever. My 3930k is using 1866 DDR3 and it's not overclocked.
 

There is quite a large benefit, yes. Even more so with more cores. Your 3930K has quad-channel memory; AM4 and LGA1151 are dual-channel.

You get roughly ~60GB/sec of bandwidth.
AM4/LGA1151 with 2667MHz is ~42.5GB/sec.
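
If you want to check the math, here's a quick back-of-the-envelope calc (theoretical peak only, assuming standard 64-bit channels; real-world throughput is lower):

```python
# Theoretical peak DRAM bandwidth = transfer rate (MT/s) x 8 bytes per 64-bit channel x channel count
def peak_bw_gb_s(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # MB/s -> GB/s

print(peak_bw_gb_s(1866, 4))  # X79 quad-channel DDR3-1866 -> ~59.7 GB/s
print(peak_bw_gb_s(2667, 2))  # dual-channel DDR4-2667     -> ~42.7 GB/s
print(peak_bw_gb_s(3600, 2))  # dual-channel DDR4-3600     -> ~57.6 GB/s
```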
 

Yeah, but as we all know for typical workloads quad channel offers absolutely no benefit over dual channel, regardless of the core count.

http://www.pcworld.com/article/2982...e-shocking-truth-about-their-performance.html

So the question is if RAM bandwidth matters enough at all today to make it worthwhile to overclock it, or if it is just for bragging rights in synthetic benchmarks like SiSoft Sandra.
 

Skylake/Kaby Lake eats up memory at 3600MHz and beyond.
 

Granted, this is not the best review (they should have tested more demanding titles, and also included some rendering/encoding workloads to see how they did), but it was the first one I found when I googled the topic, and as I suspected it shows only marginal performance improvement in actual titles moving from DDR4-2133 all the way up to DDR4-3733.

Synthetic and canned benchmarks showed more of an improvement, but no one plays synthetic benchmarks.

This seems like the type of review the H could do so much better.
 

HardOCP already tested memory.
http://www.hardocp.com/article/2015...76700k_ipc_overclocking_review/6#.WHKSYYWcEuU
 


Ahh, thanks for that link. I completely missed that first time around.

Looks to me like in order to see a benefit from faster RAM, you need to turn settings down to ones that no one plays at anymore, in order to get frame rates above those that any monitor on the market can properly display :p

So, it seems like faster RAM is still mostly a waste. At least that is my conclusion.
 

Not really. It's just to exclude other options, including scripting. I can show you the same results in 1440p and high settings if you want.
http://www.in.techspot.com/features...-a-difference/articleshow/52395281.cms?page=3
 

Now, this is good data showing benefits to faster RAM.

If Zen winds up being worth buying, and this holds up on Zen (different architectures have different sensitivities to RAM speed) I'll have to make some decisions.

I like having lots of RAM for VM's and RAMdisks and the like.

I've had 64GB in my 3930k system for years now. 64GB of high-speed DDR4 RAM will be pricey...
 
The article has some errors.

But for comparison, MSI has 106 LGA1151 boards. A lot of the boards are actually the same with tiny extras.

They will need 6 boards just to use each of the AM4 chipsets once.
 


Well, I hope at least one of them is a good workstation board, a spartan X370 board with very little in the way of on-board devices, instead repurposing the chipset PCIe lanes for added expansion. A basic subtle color scheme, and none of those ridiculous non-functional decorative covers and heatsinks would be nice too.

Essentially, cut out all the junk (fancy audio, extra SATA ports, the two in the SoC are more than enough, Killer NICs, etc.) to keep down the added costs for stuff I don't need. Give me one or two on-board Intel gigabit NICs (heck, even better, two on-board Intel NICs: one gigabit and one 10GBase-T).

Recipe for the PERFECT function over form motherboard:
  • SSI CEB or eATX form factor
  • basic black, dark green or dark blue color of board. Only functional heatsinks/shrouds, and keep them pedestrian looking. Black anodized both looks nice and subtle, and is the most effective for heat transfer. No artwork on the board. Everything that adds to the cost should add function, not appearance.
  • Integrate USB 3.1, but don't go nuts. No one needs eleventy billion USB ports, and if they do, let them buy hubs, or decide so themselves by getting PCIe expansion cards. Headers for 4 USB ports on the front panel, and maybe an additional 6 ports on the back I/O panel is plenty
  • Use the two built-in SATA controllers in the SoC, but don't add any more than that. Don't add cost if it's not needed. Most of these will have one M.2 SSD attached to them, and nothing else. People who want more can use an expansion card with the massive number of PCIe slots you'll be giving us :p
  • Include a basic Realtek audio chip, but don't waste money on fancy amps or better audio chips. Those of us who care about audio will be using external DAC's and amp's anyway, so fancy on board shit is just wasted. That, and every GPU on the market these days includes audio as well... If possible, make it switchable. Allow the PCIe lane used by on board audio to be available to a PCIe slot if on board audio is disabled in bios.
  • As far as NIC's go, please whatever you do stay away from Realtek (Or Killer). Put one or two Intel gigabit NIC's on there, or better yet, one Intel Gigabit NIC and one Intel 10G BaseT NIC. If possible make these switchable too, like the audio above, such that the lanes they use can be otherwise utilized if disabled in BIOS.
  • In the end, give us as many PCIe lanes as possible routed to actual PCIe slots, and for the love of god, make all of those slots x16, even if they aren't x16 electrically, so we can fit whatever boards we want in them.
  • Lose the VGA/DVI/HDMI/DP ports on the I/O Shield. We're not going to use this motherboard for APU's anyway
  • Good overclocking features are a must.
  • Good fully customizable fan control. Make all fan headers 4-pin with full PWM control, and include fully custom fan profiles in BIOS. I mean fully custom, like 20% PWM at this temp, 40% PWM at this temp, etc., with 10+ points per fan header and a little chart (see the sketch below). Come on guys, UEFI BIOSes are graphical now. Take advantage of that for something useful!
  • Oh, and for the love of god, please make the graphical POST splash screen professional looking! I don't need to look at angry jet planes, "MILITARY GRADE" etc. every time my machine POSTs. I'm not a hormone-raging 13 year old. Make it nice, subtle and professional looking.

Now, someone, please run off and make me this perfect motherboard!
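
Since nobody ships that today, here's roughly what I mean by a multi-point fan profile, as a small Python sketch (the temperature/PWM points are made up, with plain linear interpolation between them):

```python
# Hypothetical multi-point fan curve: (temperature in C, PWM duty in %) pairs, linearly interpolated.
FAN_CURVE = [(30, 20), (45, 40), (55, 60), (65, 80), (75, 100)]

def pwm_for_temp(temp_c, curve=FAN_CURVE):
    """Return the PWM duty cycle (%) for a given temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

print(pwm_for_temp(50))  # -> 50.0 (% PWM, halfway between the 45 C and 55 C points)
```

That's all the firmware would have to do per header; the rest is just UI.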
 
Does anyone know how the M.2 and U.2 PCIe lanes work?

I mean, I know it is too early to know for these unlaunched boards, but how do they typically work for existing boards, like the Z170's?

Here's where I am going with this:

So this MSI board - as an example - has 2x M.2 ports and 1x U.2 port, each using 4 PCIe lanes.

What happens when these slots are unused? Are they typically statically linked, or can the lanes - if not in use - be re-routed to PCIe slots?

Just like on my Asus P9X79 WS, where if I insert a card in slot two, it suddenly becomes an 8x slot, and slot 1 goes from a 16x slot to an 8x slot, could there be PCIe slots that share lanes with m.2 slots or u.2 ports?

Reason I'm asking is, I currently use a PCIe 400GB Intel SSD 750. It is more than fast enough and more than large enough for me, but if Zen winds up being worth buying, its relatively limited number of PCIe lanes has me wondering if I'd be better off ditching the Intel PCIe SSD in favor of an M.2 drive, if that will allow me to save PCIe lanes.
 

You can't "double use" the lanes. At best you can share the lanes via a PLX chip.

You can think of chipsets as a PLX chip too. The lanes the chipset provides end up behind a x4 PCIe 3.0 interface to the CPU.

You cannot get full speed on the 750 if it's attached to the X370 chipset in any way, because the chipset only provides PCIe 2.0 lanes. It needs to be attached to the CPU lanes.
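
Rough numbers, if you want to check them (per-lane figures are approximate, after encoding overhead; the 750's read speed is the spec-sheet number):

```python
# Approximate usable bandwidth per PCIe lane, after encoding overhead
PCIE2_LANE_GB_S = 0.5    # 5 GT/s with 8b/10b    -> ~0.50 GB/s per lane
PCIE3_LANE_GB_S = 0.985  # 8 GT/s with 128b/130b -> ~0.985 GB/s per lane

SSD750_READ_GB_S = 2.2   # Intel SSD 750 400GB rated sequential read (approximate)

print(4 * PCIE2_LANE_GB_S)  # x4 PCIe 2.0 (chipset lanes): ~2.0 GB/s -- below the drive's rating
print(4 * PCIE3_LANE_GB_S)  # x4 PCIe 3.0 (CPU lanes):     ~3.9 GB/s -- plenty of headroom
```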
 


Are you sure you can't do the same thing with those as you can with regular lanes going to PCIe slots?

In other words, you could have a PCIe slot on the motherboard that is 8x if no M.2 ports are populated, 4x if one M.2 slot is populated, and 0x if both M.2 slots are populated, just like my motherboard has a 16x slot that becomes an 8x slot if a nearby slot is populated?

Seeing that M.2 slots are really just PCIe slots, electrically, in a different shape, I would think this would be very possible.

Now, whether anyone has done this yet is a completely different question.
 

It requires split functionality and it can only be done with predetermined multipliers. For example the AM4/LGA1151 CPUs can only do 1x16 or 2x8. You are not going to get something like 8x for graphics and 2x4 for M.2.

The solution to your issue is PLX chips.
 

Hmm. Looks like I might be shopping for an m.2 drive as well then...

Between needing 64GB of DDR4 RAM, an M.2 drive, a motherboard and a CPU, this is shaping up to be an expensive upgrade.

It had better be worth it, or I guess I'll just have to pass.
 

Your issue is that you are trying to fit workstation requirements into the mainstream platform: high memory capacity, high I/O connectivity, etc. Maybe what you need is Skylake-X, if you are actually going to upgrade, or the SP3 Zen server socket platform.
 


Yeah, you are probably right. It's just that, this being an 8c/16t part, it's weird that it is in the mainstream segment.

Downside with the server parts, though, is that they often don't have motherboards with overclocking support, and server CPUs are typically clocked way lower and cost way more than mainstream parts.
 

Well, you have to wait and see how it performs in benchmarks anyway. 8c/16t as such doesn't mean anything in itself. Naples loses vs. Skylake-EP while using far more cores; AMD's own statement there is that 32 cores is close to 18 cores.
 
It requires split functionality and it can only be done with predetermined multipliers. For example the AM4/LGA1151 CPUs can only do 1x16 or 2x8. You are not going to get something like 8x for graphics and 2x4 for M.2.

The solution to your issue is PLX chips.


Going back to this for a moment.

How come I can buy passive risers that will split one x16 slot into 4 x4 slots?

I guess based on stuff like this, I got the impression that PCIe lanes are very configurable out of the box.

Is it that it is much easier to split a slot than to join two of them?
 

Could you link it?

Usually it's something like this for miners, and it's all PLX based.
http://amfeltec.com/splitters-gpu-oriented/?view=list
amfeltec-pci-e-gpu-splitters.jpg
 


Hmm. My bad, I guess. I had always thought these things were completely passive, judging by how barren the boards of many I have seen are, but I guess some form of microcontroller must always be needed for the two control pins.


I'm learning a lot from this conversation. Thanks.
 
I think someone was discussing PCIe lanes in the thread. Anyway, this video says 24 lanes.

 

Specifically, it is 24 lanes: 16x dedicated for the GPU, 4x for NVMe, 4x for the chipset.

Now, does that mean that we will see boards with 2x8 (16 physical, 8 electrical), or 1x8 + 2x4 CPU lanes that can be repurposed for storage/networking? I would assume so.

Considering PCIe 3.0 is 8Gbps/lane, that only gives 32Gbps (~3.8GB/sec real world) for the entire chipset.

A 10GbE NIC is going to want 2 lanes to operate at full speed.

Gosh, looking at what a mess this can be, I see why a PLX chip might be a popular option. Stack 32 lanes (4x 8 lane electrical) of physical slots into a shared 8 lane bus and call it a day. Let the end user figure out the best division of resources.

Now if only bifurcation was a standard thing across all chipsets...
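
To put rough numbers on that (assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding, before protocol overhead):

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> roughly 0.985 GB/s usable per lane
def pcie3_gb_s(lanes):
    return lanes * 8e9 * (128 / 130) / 8 / 1e9  # bits/s -> GB/s

print(pcie3_gb_s(4))   # x4 chipset uplink -> ~3.9 GB/s ceiling for everything behind it
print(pcie3_gb_s(2))   # x2                -> ~2.0 GB/s, enough for a ~1.25 GB/s 10GbE NIC
print(pcie3_gb_s(16))  # x16 GPU slot      -> ~15.8 GB/s
```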
 


Hmm. Are there any good M.2-slot-to-PCIe-slot flexible risers? Maybe one could use one of the M.2 slots for a 10GbE card.
 
Keep going:
what happens if I install an NVMe RAID controller
with x16 edge connector in the slot normally reserved for an x16 GPU?

Will it be allocated all x16 lanes, electrically speaking?

p.s. That Titanium might work well as a "thin client"
that archives large data sets on a NAS, but
if it can't support a modern NVMe RAID controller
with x16 edge connector at full speed, I think I'll pass
on this generation of AM4 motherboards.

Looking at that board, I'm guessing we are looking at 2x8 (16 slot) PCIe off the CPU, the 3 x1 slots off the chipset, the bottom x16 is probably an x1 or x4, also off the chipset. The board may have intelligent switching for the 2x8 and make it 1x16 if you only have one card installed. You could then install a low-end GPU in the bottom x16 slot.

Either way, if you were looking at all that, I'm not quite sure what the point would be in having ~9GB/sec local reads with limited network IO. Better to sacrifice the PCIe x16 down to x8 so you can put a 40GbE NIC in there. You'll still hit nearly the same 9GB/sec.

Do Ryzen CPUs have on-chip video? I thought I saw 2 integrated video ports
on the rear I/O panel of that Titanium motherboard.

Ryzen, no, AFAIK. Only the forthcoming AM4 APUs will have IGPs. So those ports will be dead unless you stick an APU in there.
 
> 4x for NVMe

And, I saw 2 x M.2 ports on that Titanium motherboard (see video above).

So, 2 x M.2 slots must share x4 PCIe 3.0 lanes??

Sounds like the same upstream bandwidth ceiling as Intel's DMI 3.0 link
and I mean EXACTLY THE SAME!

Therefore, expect no benefit from a RAID-0 array of 2 very fast NVMe M.2 SSDs
like the Samsung 960 Pro.
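
A quick back-of-the-envelope illustration of why that ceiling defeats a 2-drive NVMe RAID-0 behind it (the 960 Pro read figure is the approximate vendor spec; the x4 number is theoretical, before protocol overhead):

```python
# Two fast NVMe drives striped behind a shared x4 PCIe 3.0 uplink
DRIVE_SEQ_READ_GB_S = 3.5   # Samsung 960 Pro sequential read, approximate vendor spec
UPLINK_CEILING_GB_S = 3.94  # x4 PCIe 3.0 with 128b/130b, theoretical ceiling

raid0_on_paper = 2 * DRIVE_SEQ_READ_GB_S                      # ~7.0 GB/s if nothing were in the way
raid0_behind_uplink = min(raid0_on_paper, UPLINK_CEILING_GB_S)

print(raid0_on_paper)       # 7.0  GB/s theoretical stripe
print(raid0_behind_uplink)  # 3.94 GB/s actual ceiling -- barely better than a single drive
```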

Keep going: what happens if I install an NVMe RAID controller
with x16 edge connector in the slot normally reserved for an x16 GPU?

Will it be allocated all x16 lanes, electrically speaking?

Do Ryzen CPUs have on-chip video? I thought I saw 2 integrated video ports
on the rear I/O panel of that Titanium motherboard.

Stop at 1:06 / 3:16 in the "Top 5 Features" video above.

Frankly, I'm disappointed so far, but I'll wait until we know more
about these AM4 chipsets.


p.s. That Titanium might work well as a "thin client"
that archives large data sets on a NAS, but
if it can't support a modern NVMe RAID controller
with x16 edge connector at full speed, I think I'll pass
on this generation of AM4 motherboards.


EDIT: I want to assemble four of these in a RAID-0 array:

http://www.newegg.com/Product/Product.aspx?Item=9SIA24G55Z6985&Tpk=9SIA24G55Z6985

... housed in four of these Syba 2.5" M.2 to U.2 enclosures:

http://www.sybausa.com/index.php?route=product/product&product_id=884&search=SY-ADA40112
(they come with a nice thermal pad that transfers heat from the SSD to the enclosure housing)

... wired with four U.2 cables like this one:

http://www.newegg.com/Product/Product.aspx?Item=9SIAA6W3YY8665&Tpk=9SIAA6W3YY8665


There is very little benefit in using RAID to stripe multiple SSDs anyway; they are already massively parallelized internally.

Going from a hard drive to any SSD is a huge improvement. Going from a slow SSD to a faster one is marginal.

When I moved from my SATA Samsung 850 Pro to a PCIe Intel 750, while the 750 certainly scores higher in drive bench tests, the actual practical performance difference (boot times, load times, system responsiveness, etc.) has been undetectably small.

Besides, striping drives gains you sequential performance at the cost of seek times, and seek times are the big reason why SSDs are such a huge improvement.

If you have an odd workload where sequential speeds are unusually important to you, I guess striping makes sense, but for most workloads you are just giving up reliability for no benefit at all.
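
To put the seek-time argument in rough numbers (purely illustrative figures: a boot/app-load style workload dominated by small random reads at low queue depth, where striping doesn't help, versus one big sequential copy, where it does):

```python
# Illustrative assumptions, not measurements
RANDOM_4K_IOPS_QD1 = 12000      # typical-ish NVMe SSD at QD1; RAID-0 does not improve QD1 latency
SEQ_SINGLE_GB_S = 2.5           # one fast NVMe drive, sequential read
SEQ_RAID0_GB_S = 2 * 2.5        # two drives striped, ideal scaling

small_reads = 100000            # e.g. an OS/app load made of 100k random 4 KiB reads
t_random = small_reads / RANDOM_4K_IOPS_QD1   # ~8.3 s, the same with or without striping

big_file_gb = 50                # e.g. copying a 50 GB file
t_seq_single = big_file_gb / SEQ_SINGLE_GB_S  # ~20 s
t_seq_raid0 = big_file_gb / SEQ_RAID0_GB_S    # ~10 s

print(t_random, t_seq_single, t_seq_raid0)
```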
 
> There is very little benefit in using RAID to stripe multiple SSD's anyway

???


I honestly don't understand: I constantly read about this CPU speed difference and
that CPU speed difference (5%, 10% etc.), but a C: partition that operates almost FOUR TIMES
faster than a single SSD has "very little benefit".

Believe me when I say that routine maintenance tasks, like writing a drive image,
are noticeably -- and measurably -- faster. Then, there is a separate task
to read that entire drive image, to verify its integrity. And, then another read
while writing a backup archive copy of same.

I also notice, and enjoy, all the other tasks which are much faster
e.g. navigating a large NTFS database with lots of discrete filenames.
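
For concreteness, this is the kind of arithmetic I mean for one image-maintenance cycle (sizes and speeds are illustrative round numbers, not measurements):

```python
# One maintenance cycle: write the image, read it back to verify, read it again for the archive copy
IMAGE_SIZE_GB = 500
PASSES = 3

def cycle_minutes(throughput_gb_s):
    return IMAGE_SIZE_GB * PASSES / throughput_gb_s / 60

print(cycle_minutes(0.5))  # single SATA SSD (~0.5 GB/s): ~50 minutes
print(cycle_minutes(1.9))  # 4-drive RAID-0  (~1.9 GB/s): ~13 minutes
```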

So you have a rather enterprisey use case, not a consumer/prosumer use case.

It is still unclear from your description what you are looking for as far as this machine goes. Do you want local storage IO and not care about network IO? Do you want GPU IO as well?

If we are talking about the consumer/prosumer use case, going from 500MB/sec to 5GB/sec is not going to make much of a difference unless you are slinging around large files quite often, and is going to make just about zero difference in gaming.
 
> There is very little benefit in using RAID to stripe multiple SSD's anyway

???

ATTO with 4 x Samsung 840 Pro in RAID-0, PCIe 2.0 chipset, 2720SGL controller:

4xSamsung.840.Pro.SSD.RR2720.P5Q.Premium.Direct.IO.2.jpg

Well, as I was saying, you can get great benchmark numbers, but they rarely if ever result in improvements in practical use. Your boot times, load times and system responsiveness are going to be almost exactly the same. Only time you'll ever notice is with large sequential transfers to another fast source. (RAMdisk?)
 
So you have a rather enterprisey use case, not a consumer/prosumer use case.

That's been made uncomfortably clear. Nobody else on this forum cares about running SQL databases in RAMdisks and wanting to move that down to RAID M.2.

And anyone that does is certainly not purchasing AMD Ryzen at launch. What we have here is an enterprise customer on a consumer budget.
 
It's a sad day when the scientific method gets squashed
by corporate propaganda and marketing hype.

If they won't manufacture what we wish to purchase,
then we won't be spending money on what we don't want.

Without having a "Titanium" in hand, the only way
to learn whether its video ports will function
was to ask that question (the easy way)
or purchase one and find out the hard way.

Beware of allowing theories to "morph" into facts :)

Why wouldn't they function? I'm sure they'll work fine, with an APU. Ryzen is a CPU with no graphics capabilities, I'm 99% certain (only 1% uncertain because there is a remote possibility that they decided to turn it into an APU without saying anything). There will be APUs released later (as has been said by AMD) for the same socket. I'm not sure about chipset compatibility, but I think they should work with both CPUs and APUs.
 
There are current Bristol Ridge AM4 CPUs that should function with the video ports. Next year there will be Zen based APUs that will use the video ports as well. The Zen parts released in Q1 will not use the video.
 
Yeah, I don't mind PS/2. I used to use a black Model M13 TrackPoint II (damn, I loved that keyboard) and used the PS/2 ports. In many ways they work better than USB. I have since switched to new-production Unicomp keyboards, and while they are good to type on, they have nowhere near the quality and fit and finish of the old ones (no one is willing to pay for that kind of quality anymore; keyboards have to be cheaper). For a while I was having trouble getting into the BIOS because the USB on the keyboards would initialize too slowly. I'd have to try several times, repeatedly tapping the Del key, hoping that it would finish initializing before I ran out of POST screen.

I only got rid of the M13 because I wore all the lettering clean off the keys. When I bought replacement keycaps for it, Unicomp informed me I got the last set they had, and weren't planning on making anymore, and I never found any aftermarket manufacturers.

Since these keyboards are getting pretty rare, and some people collect them, I decided to clean up the M13, install the new caps, make it look like new and then put it away in its box. I didn't want to wear out a collector's item :p

I took a picture in my white box after cleaning it up, before putting it away. I think it looks brand new. Hoping it appreciates :p

5091177536_61b3c181bd_o.jpg

A few years back, some folks at work were crowing about these new-fangled mechanical keyboards.
I brought in the OG and shut 'em up :)

ibm_keyboard.jpg~original


Mint condition with the original box, except the rubber on the cable has started to disintegrate :(
Bought it way back when for use with an IBM Thinkpad 760ED when it launched.
 