Ryzen 3000 hype and review oddities

I was referring to the statement you made where you said that no one tested overclocked performance, which is untrue. All the reviews I've seen, including the one I wrote, had overclocked values. I had a 4.3GHz all-core overclock in the data set. Sure, you could try manually clocking a single core up to the boost clocks, but there really isn't a need for this. So the figures where we saw 4.4GHz and later 4.6GHz boost clocks in the update pretty well have this covered on both ends of the spectrum. The data is there. When you look at the Intel data it's presented the same way: stock speeds, which include boost clocks, and all-core overclocks. That's exactly what I (and others) did.
My basis for that was that no one I saw even reached boost clocks as an OC. If someone tested a 9900K and it did not reach 4.7GHz single-core, or someone published a test with a 9900K@4.7GHz all-core OC, I would not call that overclocking either. I would need to see 4.8GHz.
Perhaps holding AMD to Intel's turbo standards is wrong, but I also feel it is wrong to call anything between base clock and boost clock an OC.



I've used a wide range of processors and while I agree that more is better, you're still primarily GPU bound. Tests have been done in the past showing virtually no difference between many CPUs when using a high-end graphics card at higher resolutions. This is basically well known and generally accepted.
This really is just a myth perpetuated by ultra benchmarks and 60Hz screens. If you choose ultra and then play, you will hard-bottleneck the gpu, and reach the conclusion that there's virtually no difference between cpus. Some have even included old sandy bridge 2600k in such benchmarks and gone "see, almost no difference!" but again, it is because ultra 4k will hard bottleneck the gpu with overkill settings. It is hard to quantify, so I'll pull numbers out of my ass and say ultra generally looks 10% better than high, but 120 vs 60 fps is 100% more motion resolution.
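
To put the fps half of that in numbers, a quick back-of-the-envelope sketch in Python (the 60/120 figures are just the refresh rates mentioned above, nothing measured):

# Going from 60 to 120 fps doubles the temporal samples per second and halves per-frame display time.
for fps in (60, 120):
    frame_time_ms = 1000.0 / fps
    print(f"{fps} fps -> {frame_time_ms:.1f} ms per frame, {fps} motion samples per second")
# (120 - 60) / 60 = 100% more motion samples, while "ultra vs high" image quality is subjective.
print(f"extra motion samples going 60 -> 120 fps: {(120 - 60) / 60:.0%}")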

Trying to reach ~120fps 4k, both gpu and cpu will bottleneck at various settings and situations, and cpu and gpu hungry settings will need to be tweaked separately.
It is the same with slightly weaker hardware in 1440p 144fps+, or 240fps 1080p.
It has been the same since I started running 5760x1080@120Hz sometime around 2011-12.

If you saw my graphs above, you can see that, briefly, the CPU spiked and GPU performance dropped, since the CPU bottlenecked. And this is with as perfectly tweaked a balance as I can manage - I could increase all the CPU-hungry settings and have the CPU bottleneck more. If I bought a more powerful CPU, I could probably increase some settings - but maybe not, as things like shadows don't always offer the granularity needed (low/med/high shadows, for example).

For people tweaking for higher fps, a 5% increase in CPU power can be a 5% increase in fps, or high shadows instead of medium. Or not. Now, 5% I agree is in the range where it is acceptable: gain 4 more cores for a tiny fps loss - I'll take it. But if it approaches 10-20%, not me. Adding optional overclocking to the CPU you compare against makes matters worse. Add possibly flaky 99th percentile min fps (which, IF true in the LTT test, probably gives clearly visible stutters and needs to be cleared up).
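
As a rough sketch of when that 5% actually shows up, here's a toy model I'm assuming purely for illustration (whichever of the CPU or GPU takes longer per frame sets the frame rate; it ignores pipelining and frame-to-frame variance, and every millisecond value is made up):

# Toy bottleneck model: effective frame time = the slower of the CPU or GPU per-frame time.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

baseline_cpu_ms, light_gpu_ms = 8.0, 7.0     # hypothetical CPU-bound case (~125 fps)
faster_cpu_ms = baseline_cpu_ms / 1.05       # a 5% faster CPU finishes its work 5% sooner

print(f"CPU-bound: {fps(baseline_cpu_ms, light_gpu_ms):.0f} -> {fps(faster_cpu_ms, light_gpu_ms):.0f} fps (~5% gain)")
print(f"GPU-bound: {fps(baseline_cpu_ms, 16.7):.0f} -> {fps(faster_cpu_ms, 16.7):.0f} fps (no gain)")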

Personally, I think I will just wait and see if it turns out to be nothing, and re-evaluate after the release problems are cleared up. (There are so many release problems when looking around; in addition, Destiny 2 can't even be started on the 3900, so I'd need to wait for that anyway, etc.)
 
There's a user that's hitting all-core *at* or above the boost clock. See here.
It's certainly opened my eyes to there being another 150-200MHz available beyond what we were originally expecting. Like Dan, I saw that many reviewers (e.g. GN, AT) were including 4.3GHz all-core as their published Ryzen overclock.
But most users here have the luxury of time (that reviewers don't) to get higher overclocks than published reviews - that's nothing new.
 
My basis for that was that no one I saw even reached boost clocks as an OC. If someone tested a 9900K and it did not reach 4.7GHz single-core, or someone published a test with a 9900K@4.7GHz all-core OC, I would not call that overclocking either. I would need to see 4.8GHz.
Perhaps holding AMD to Intel's turbo standards is wrong, but I also feel it is wrong to call anything between base clock and boost clock an OC.

The chips simply don't overclock well at all. It doesn't really matter; using all cores, Ryzens simply cannot clock nearly as high as they can on a single core. Their all-core overclock was included in the tests. You don't like the results, fine. But that's all those chips have to give; it was overclocking and it is in the data set.

This really is just a myth perpetuated by ultra benchmarks and 60Hz screens. If you choose ultra and then play, you will hard-bottleneck the gpu, and reach the conclusion that there's virtually no difference between cpus. Some have even included old sandy bridge 2600k in such benchmarks and gone "see, almost no difference!" but again, it is because ultra 4k will hard bottleneck the gpu with overkill settings. It is hard to quantify, so I'll pull numbers out of my ass and say ultra generally looks 10% better than high, but 120 vs 60 fps is 100% more motion resolution.

You are pretty much pulling numbers out of your ass and generalizing. High vs. ultra will heavily depend on the specific game. You can't just say that ultra quality looks 10% better than high quality and be done with it. Not everyone cares about FPS specifically. As long as I get 60FPS consistently, I'll take the eye candy over raw frame rate.

Trying to reach ~120fps 4k, both gpu and cpu will bottleneck at various settings and situations, and cpu and gpu hungry settings will need to be tweaked separately.
It is the same with slightly weaker hardware in 1440p 144fps+, or 240fps 1080p.
It has been the same since I started running 5760x1080@120Hz sometime around 2011-12.

I'm going to disagree with you somewhat on this. You want your hardware to be as fast as possible to remove any potential bottlenecks. There isn't that much to tweaking a modern CPU for the most part. There are some additional opportunities with Zen-based CPUs, which includes Zen+ and Zen 2, but they also don't overclock all that well, as you are so fond of pointing out. I've always run higher resolution displays or display arrays and I've always simply gone for the fastest CPU, RAM and GPUs I could afford. Then I would overclock those things as far as I could take them. I didn't need to "tune" them specifically for various situations. Make something faster and it's good for all situations. Again, I think you are overthinking this.

If you saw my graphs above, you can see that, briefly, the CPU spiked and GPU performance dropped, since the CPU bottlenecked. And this is with as perfectly tweaked a balance as I can manage - I could increase all the CPU-hungry settings and have the CPU bottleneck more. If I bought a more powerful CPU, I could probably increase some settings - but maybe not, as things like shadows don't always offer the granularity needed (low/med/high shadows, for example).

I'm not sure what your point is. If you run into a bottleneck, you try and eliminate the bottleneck as much as you can. If that means a reduction in visual settings in your game, then that's a personal choice. However, it doesn't change what you can do with the CPU and GPU. Specific situations also don't change the general rule that higher resolution gaming is far more GPU dependent than CPU dependent. I've seen it for years where a faster CPU still equaled greater performance in specific situations, but most of the time those specific situations occurred in higher resolution and multi-GPU scenarios. Again, when you start tweaking things and dialing settings down, you are putting things more on the CPU.

Look at it this way: whether you are shooting for higher FPS or higher in-game settings, generally speaking a faster GPU will benefit you more than a faster CPU. Ideally, you want the fastest CPU and GPU money can buy if you can afford it, or are willing to pay for it. Then you want to take those components as far as you can go with them. That doesn't really change whether you are at 1920x1080 or 3840x2160. The difference is what you can get by with. At the lower resolution, you're more CPU bound than GPU bound. You probably wouldn't see much if any difference between an RTX 2080 and a 2080 Ti at that resolution. However, you will see a difference between a Core i5 2500K at stock settings vs. a Ryzen 9 3900X at stock settings.
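
The pixel math behind that general rule, as a quick sketch (assuming, as a simplification, that GPU cost scales with pixels drawn while the CPU's work per frame stays roughly constant across resolutions):

# Per-frame GPU work scales very roughly with pixel count; CPU work (game logic, draw calls) largely does not.
resolutions = {"1920x1080": 1920 * 1080, "2560x1440": 2560 * 1440, "3840x2160": 3840 * 2160}
base = resolutions["1920x1080"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / base:.2f}x the pixels of 1080p")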

For people tweaking for higher fps, a 5% increase in CPU power can be a 5% increase in fps, or high shadows instead of medium. Or not. Now, 5% I agree is in the range where it is acceptable: gain 4 more cores for a tiny fps loss - I'll take it. But if it approaches 10-20%, not me. Adding optional overclocking to the CPU you compare against makes matters worse. Add possibly flaky 99th percentile min fps (which, IF true in the LTT test, probably gives clearly visible stutters and needs to be cleared up).

Again, I can't speak to their testing experiences. However, you are generalizing again. The math you throw out won't always track. But let's say you're at 100FPS on one CPU and 95 on another. Do you really think that makes that big a difference? If you do, buy the Intel. It's that simple. If all you do is play games on your computer, and you do nothing else with it, then buy Intel. I said as much at the end of my review.

Personally, I think I will just wait and see if it turns out to be nothing, and re-evaluate after the release problems are cleared up. (There are so many release problems when looking around; in addition, Destiny 2 can't even be started on the 3900, so I'd need to wait for that anyway, etc.)

On this we can agree. If you buy now, you will have to deal with early adopter problems going AMD. The AGESA code is incredibly complex and these platforms have issues that have to be dealt with. It's the price of going AMD unfortunately. Say what you like about Intel, generally speaking, when it comes to platforms Intel has its shit together far more than AMD does.
 
On this we can agree. If you buy now, you will have to deal with early adopter problems going AMD. The AGESA code is incredibly complex and these platforms have issues that have to be dealt with. It's the price of going AMD unfortunately. Say what you like about Intel, generally speaking, when it comes to platforms Intel has its shit together far more than AMD does.

Funny you should mention this. I went with the B350 instead of the X370 because of how badly the launch boards were doing, which seemed to never improve until they released the 400 series. I picked up another cheap B450 and it's great.
 
One thing you have to understand is that it isn't that Ryzen isn't efficient at 7nm. It's that AMD took the gains from the process node shrink and reinvested them in performance.


Source Me:
https://www.thefpsreview.com/2019/07/07/amd-ryzen-9-3900x-cpu-review/
https://www.thefpsreview.com/2019/07/15/amd-ryzen-9-3900x-cpu-review-new-bios-performance-tested/

I, for one, am eternally grateful they did this. The whole efficiency chasing has been a thorn in the side of desktop PC enthusiasts for far too long. I mean, the only time efficiency is really a concern is if you don't have the performance to back it up, as was the case with the FX line of processors. I'm hoping Kyle can successfully relay this information during his tenure at Intel.
 
I, for one, am eternally grateful they did this. The whole efficiency chasing has been a thorn in the side of desktop PC enthusiasts for far too long. I mean, the only time efficiency is really a concern is if you don't have the performance to back it up, as was the case with the FX line of processors. I'm hoping Kyle can successfully relay this information during his tenure at Intel.

Kyle doesn't work at Intel anymore. He hasn't for a while. Your point is well taken though. AMD saw an opportunity to extend performance since the efficiency they gained allowed them to do it, and that doesn't seem like something Intel would choose to do on the desktop side without AMD provoking them into it. Without AMD, we'd probably still be using quad-core CPUs in the mainstream and nothing beyond 16c/32t on the HEDT side.
 
Kyle doesn't work at Intel anymore. He hasn't for a while. Your point is well taken though. AMD saw an opportunity to extend performance since the efficiency they gained allowed them to do it, and that doesn't seem like something Intel would choose to do on the desktop side without AMD provoking them into it. Without AMD, we'd probably still be using quad-core CPUs in the mainstream and nothing beyond 16c/32t on the HEDT side.

At most we'd see an increase to 6-core.

edit: kyle's announcement

https://hardforum.com/threads/leaving-intel.1982266/
 
At most we'd see an increase to 6-core.

edit: kyle's announcement

https://hardforum.com/threads/leaving-intel.1982266/

Agreed. Just to add, the only reason you'd see an increase in core count on the HEDT side comes down to two things. HEDT CPU's are simply repurposed Xeons. So there is a trickle down effect. Secondly, Intel has learned over the years that they sell more high dollar processors in that bracket if they add cores. The Core i7 980X sold extremely well. The Core i7 3960X and subsequent 4960X did not. Why? No advantages over regular CPU's as core counts didn't go up and there were minimal overclocking advantages. Basically, the 3930K and 4930K were better buys. This is why they increased the core count on the 6950X over the 5960X. It was objectively worse in some ways, but it probably sold better because it added core count.

Something Intel didn't really want to do on the mainstream side as those CPU's were largely derived from mobile parts.
 
I got Intel at Ryzen launch. Only a few reviewers said that Intel was better at gaming than Ryzen; everyone else was buy, buy, buy Ryzen.
AMD is great, but because of the RAM compatibility issues and the lack of mATX boards, I got Intel. Still regret it a little, but overall I got a better build this way.
 
"I'll pull numbers out of my ass and say"
You are pretty much pulling numbers out of your ass
:) :)



I'm not sure what your point is. If you run into a bottleneck, you try and eliminate the bottleneck as much as you can. If that means a reduction in visual settings in your game, then that's a personal choice. However, it doesn't change what you can do with the CPU and GPU. Specific situations also don't change the general rule that higher resolution gaming is far more GPU dependent than CPU dependent. I've seen it for years where a faster CPU still equaled greater performance in specific situations, but most of the time those specific situations occurred in higher resolution and multi-GPU scenarios. Again, when you start tweaking things and dialing settings down, you are putting things more on the CPU.

Look at it this way: whether you are shooting for higher FPS or higher in-game settings, generally speaking a faster GPU will benefit you more than a faster CPU. Ideally, you want the fastest CPU and GPU money can buy if you can afford it, or are willing to pay for it. Then you want to take those components as far as you can go with them. That doesn't really change whether you are at 1920x1080 or 3840x2160. The difference is what you can get by with. At the lower resolution, you're more CPU bound than GPU bound. You probably wouldn't see much if any difference between an RTX 2080 and a 2080 Ti at that resolution. However, you will see a difference between a Core i5 2500K at stock settings vs. a Ryzen 9 3900X at stock settings.
"If that means reduction in visual settings in your game, then that's a personal choice." - It does mean small reduction in visual quality in stills and slow moving content - but an objective increase in visual quality for reasonably fast moving content. It is everyone's personal choice of course, and depends on the pace of the game in question, but in general not making that choice (if owning a high refresh monitor) is just saying "I choose lower overall motion quality and to throw my cpu performance in the garbage and bottleneck my gpu, for better stills and scenery".
Twice as much visual information and a large reduction in blur. Check in the displays section and they will spam you about it until next Sunday. I did, as I said, pull the ultra quality number out of my ass, but in so many games "ultra" has a couple of useless, extremely GPU-costly settings that improve next to nothing and completely gut the fps.
The increase in visual information (100% from 60-120 fps) is a fact, and the large reduction in blur as well, so that part did not originate in my bowels.

My rather strong interest in the CPU performance is because I know with 100% certainty I can benefit from a decent increase, but again, IF it can stay within 5% of the 9900K while giving 4 more cores, I'll take it. (Obviously both the 3900X and 9900K would be a decent 10-15% upgrade over my 6700K OC.) I also know that 10-15% is a touch on the low side (though many games can now benefit from >4 cores), which has made me hold off on the 9900K and feel no rush about the 3900X, but I am interested.
Being able to keep a cpu when pci-e 4 becomes useful is also a nice bonus.


This is all expanding on what I originally said: the 3000 series is not interesting for me for gaming except for the 3900X, which is so close to the 9900K in performance. A 5% tradeoff for 4 more cores, great. But there can't be other performance issues if I'm to accept this "deal", and since it is so close, it seems worth discussing and finding out, as it is also technically interesting how it turns out. The LTT issues need to be explained or dismissed, etc. Again, I lean towards waiting. When AMD chose to actually comment on the clearly documented issue, and say it is not an issue and does not need optimizing, I find it odd - even if only 1 review saw this.




Again, I can't speak to their testing experiences. However, you are generalizing again. The math you throw out won't always track. But let's say you're at 100FPS on one CPU and 95 on another. Do you really think that makes that big a difference? If you do, buy the Intel. It's that simple. If all you do is play games on your computer, and you do nothing else with it, then buy Intel. I said as much at the end of my review.
The answer here was already in the text you quoted: "Now, 5% I agree is in the range where it is acceptable, gain 4 more cores for a tiny fps loss - I'll take it." If the 3900X did not have more cores that I can sometimes have a use for, then yes, the entire 3000 lineup would be pretty pointless (to me). It's still great that AMD caught up, and hopefully they'll increase their market share to the ideal 50% where we can have true performance wars again.
 
I am shocked this is still going.

Morkai, chiplets are the future. I don't care what settings you're gaming at above 1080p: none of the current AMD or Intel mid-range and up chips is going to be any faster than the other. They just won't be. If you are still gaming at 1080p, a difference of a few % points at frame rates already exceeding refresh ranges won't be noticed. The bottleneck is still going to be your GPU even at 1440, and even if you're running an overclocked 2080 Ti. It will still be bottlenecked next year with a 3080 Ti or a big Navi or an Intel Xe part.

AMD hands Intel their ass hard in every single non-game-related category. So for games you can see a very slim difference in performance that still slightly favors Intel if we reduce resolution or drop image quality far enough for it to show a low single-digit % on a bar chart. Still, there is no way in hell you or anyone else would be able to perceive that difference today, or at any time over the next few years, as GPUs are just not going to progress that fast.

So as has been said plenty... if you have a hard, entrenched Intel bias, that is fine; buy the Intel part. It is the last and fastest example of a non-chiplet part. From here on out it's chiplets for all. Intel's Sunny Cove is going to be chiplet-based, likely using Intel's 3D chip-stacking tech that they have been working on for a long time. So yes, the last and best example of a single-silicon-etched consumer CPU is out there today... pick one up and be happy. Everyone else is probably going to be buying Zen 2 for all the reasons this thread has touched on 20x over.
 
This is all expanding on what I originally said: the 3000 series is not interesting for me for gaming except for the 3900X, which is so close to the 9900K in performance. A 5% tradeoff for 4 more cores, great. But there can't be other performance issues if I'm to accept this "deal", and since it is so close, it seems worth discussing and finding out, as it is also technically interesting how it turns out. The LTT issues need to be explained or dismissed, etc. Again, I lean towards waiting. When AMD chose to actually comment on the clearly documented issue, and say it is not an issue and does not need optimizing, I find it odd - even if only 1 review saw this.

If only 1 review out of say 20 saw this, then it's quite reasonable to conclude the reviewer did something wrong/it was an artificially induced situation and move on.
 
"I'll pull numbers out of my ass and say"

:) :)

At no point in the post you are addressing did I use any numbers beyond model numbers, a specific resolution and/or refresh rate. I didn't once use any arbitrary performance metrics. I simply spoke in general terms. You are the one saying that ultra settings look 10% better than high, which isn't necessarily true.

"If that means reduction in visual settings in your game, then that's a personal choice." - It does mean small reduction in visual quality in stills and slow moving content - but an objective increase in visual quality for reasonably fast moving content. It is everyone's personal choice of course, and depends on the pace of the game in question, but in general not making that choice (if owning a high refresh monitor) is just saying "I choose lower overall motion quality and to throw my cpu performance in the garbage and bottleneck my gpu, for better stills and scenery".
Twice as much visual information and a large reduction in blur. Check in the displays section and they will spam you about it until next Sunday. I did, as I said, pull the ultra quality number out of my ass, but in so many games "ultra" has a couple of useless, extremely GPU-costly settings that improve next to nothing and completely gut the fps.
The increase in visual information (100% from 60-120 fps) is a fact, and the large reduction in blur as well, so that part did not originate in my bowels.

I never meant to address your math about visual information. I'm simply talking about your statement about ultra settings looking 10% better than high, which isn't necessarily true. I'll agree that games often have one or two settings in their Ultra presets that could be turned down to provide considerably better performance without much if any loss in fidelity. I don't think anyone questioned that. I'm not arguing about visual information either. I'm simply saying that some people prefer having higher visual fidelity than they do raw FPS. I could turn my games down to low settings at 4K and get far better frame rates. That doesn't make for an enjoyable or immersive experience for me. The balance between frame rates and quality settings is a personal choice. Yours seems to lean more towards the frame rate side and mine the quality side. Where I'm at, the GPU makes far more difference than the CPU. In your scenario, I believe this is still largely true, but CPU impact will be slightly larger depending on how much you offload from the GPU by altering those quality settings.

My rather strong interest in the CPU performance is because I know with 100% certainty I can benefit from a decent increase, but again, IF it can stay within 5% of the 9900K while giving 4 more cores, I'll take it. (Obviously both the 3900X and 9900K would be a decent 10-15% upgrade over my 6700K OC.) I also know that 10-15% is a touch on the low side (though many games can now benefit from >4 cores), which has made me hold off on the 9900K and feel no rush about the 3900X, but I am interested.
Being able to keep a cpu when pci-e 4 becomes useful is also a nice bonus.

Of course you could benefit from a performance increase. For the most part, any of us could. As for specifics on Ryzen performance vs. Intel, I saw some games with a much larger gap between them and some with a much smaller one. If you look at aggregate data (and some people have), you end up with a figure of around 6% or something like that. I recall someone saying that, so I could be wrong on that. I just know from my own tests that the difference depends heavily on the games you're using specifically. I think someone mentioned a review where 30 titles were tested and it was 6% aggregate. Obviously, as with my tests, you'll have some gaps larger or smaller than that. Since you play Destiny 2 as well, you'd probably want to look at the performance gap there and in any other specific titles you play, or upcoming titles you are interested in using engines available today, before making an actual decision. That would be the prudent thing to do.

This is all expanding on what I originally said: the 3000 series is not interesting for me for gaming except for the 3900X, which is so close to the 9900K in performance. A 5% tradeoff for 4 more cores, great. But there can't be other performance issues if I'm to accept this "deal", and since it is so close, it seems worth discussing and finding out, as it is also technically interesting how it turns out. The LTT issues need to be explained or dismissed, etc. Again, I lean towards waiting. When AMD chose to actually comment on the clearly documented issue, and say it is not an issue and does not need optimizing, I find it odd - even if only 1 review saw this.

I want to get a 3800X on the bench because, as you're fond of pointing out with the latency issues in the architecture, it has only one CCD, and therefore could offer better performance than the 3900X in games. If you want performance elsewhere besides gaming, I think that the gaming performance is good enough to take the hit there, as you pick up so much more performance everywhere else. This is especially true of higher resolution gaming, but when I said that, I didn't account for people dropping their quality settings while still remaining at 4K or close to it. Frankly, I think such cases are largely outliers and I could only speculate as to what the data would look like there. However, one of the reasons why we test at settings to isolate CPU differences is to show which CPUs are stronger at a given game. You could infer that the CPU that wins at 1080P should always be faster at any resolution. Again, other sites have tested CPUs time and time again at 4K and seen little difference between them. So take that for whatever it's worth.

The answer here was already in the text you quoted: "Now, 5% I agree is in the range where it is acceptable, gain 4 more cores for a tiny fps loss - I'll take it." If the 3900X did not have more cores that I can sometimes have a use for, then yes, the entire 3000 lineup would be pretty pointless (to me). It's still great that AMD caught up, and hopefully they'll increase their market share to the ideal 50% where we can have true performance wars again.

Two things. 1.) The performance difference is greater than 5% from what I've seen in some cases. You need to understand that an aggregate number doesn't represent specific scenarios. You will encounter situations where you see bigger hits than 5% and ones where you see a much smaller difference, or even cases favoring Ryzen. Destiny 2 is a game you play, so right now I wouldn't buy until you find out what the gap is, because no one right now knows what it is. I'm going to test that when the BIOS updates are available, but until then we have no idea. That game could show us a 15% difference in favor of Intel for all we know.
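
To make the aggregate-versus-specific point concrete, here is a tiny sketch with made-up per-game numbers (illustrative only, not benchmark results):

# Hypothetical per-game % leads for one CPU over another; an aggregate of a few percent
# can hide much larger swings in individual titles.
deltas = [15, 9, 7, 5, 4, 2, 0, -1, -3, -8]
aggregate = sum(deltas) / len(deltas)
print(f"aggregate: {aggregate:.0f}%  per-game range: {min(deltas)}% to {max(deltas)}%")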

The second thing is, you do not need a perfect 50/50 market share split to have true performance wars. Intel's been cutting prices and releasing chips like mad to try and hold on to market share they've been losing to AMD since Ryzen came out.
 
Agreed. Just to add, the only reason you'd see an increase in core count on the HEDT side comes down to two things. HEDT CPU's are simply repurposed Xeons. So there is a trickle down effect. Secondly, Intel has learned over the years that they sell more high dollar processors in that bracket if they add cores. The Core i7 980X sold extremely well. The Core i7 3960X and subsequent 4960X did not. Why? No advantages over regular CPU's as core counts didn't go up and there were minimal overclocking advantages. Basically, the 3930K and 4930K were better buys. This is why they increased the core count on the 6950X over the 5960X. It was objectively worse in some ways, but it probably sold better because it added core count.

Something Intel didn't really want to do on the mainstream side as those CPU's were largely derived from mobile parts.

Hah! you are right about the mobile parts, I forgot about that.

I think Intel should cannibalize some of their lower tiered parts and further increase performance on Celeron parts. There is still a big market for simple, strong and cheap CPU's, AMD is also not really strong in this market at all and Intel could keep them out.
 
I got a Ryzen 3700x and have experienced some oddities as well.

I have a kit of 4x 8GB Corsair Vengeance DDR4-3200. I simply cannot run all 4 DIMMs at DDR4-3200 speeds with a Crosshair VII Hero. They ran at DDR4-3200 with a Z270 board without issue.
 
I got a Ryzen 3700x and have experienced some oddities as well.

I have a kit of 4x 8GB Corsair Vengeance DDR4-3200. I simply cannot run all 4 DIMMs at DDR4-3200 speeds with a Crosshair VII Hero. They ran at DDR4-3200 with a Z270 board without issue.

It's pretty simple. Here it is from the horse's mouth:

[attached image: memory support table from the AMD Ryzen Reviewer's Guide]


This is taken from the AMD Ryzen Reviewer's Guide. It's pretty clear what's going on. Like the previous two generations of Ryzen CPUs, you simply cannot expect the same clock speeds out of a configuration using all four DIMM slots. The maximum supported speeds are listed in the table as 2933MHz for 4x single-ranked DIMMs. With 4 dual-ranked DIMMs, the maximum supported speed is DDR4 2667MHz. Now, this is what's officially supported. So naturally, the reason why this is stated as being the case is because adding two more DIMMs affects memory clocking on Ryzen systems. That's not to say it isn't possible to run a 4x8GB DIMM configuration at DDR4 3200MHz, but it will likely take the right modules and the right settings to achieve this. You'll likely have to spend quite a bit of time tuning this and even then, it might not be possible with a given memory kit. Most of the quad-channel kits are generally for Threadripper and Intel's HEDT systems, so even buying four at the same time won't guarantee you anything. Buying two 16GB (2x8GB) kits isn't going to help matters. If all four aren't tested together, your odds of making them work diminish. That said, I was able to make this work on a Ryzen 7 2700X system using two G.Skill FlareX DDR4 3200MHz memory kits.
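
For reference, the population rules described above boil down to a simple lookup. This is only a sketch of the two figures quoted from the guide; the 2-DIMM rows are placeholders you'd want to confirm against the guide or your board's QVL:

# Only the two 4-DIMM figures are taken from the text above; the 2-DIMM entries are unknowns here.
officially_supported = {
    (4, "single rank"): "DDR4-2933",
    (4, "dual rank"): "DDR4-2667",
    (2, "single rank"): None,   # not quoted above; check the guide/QVL
    (2, "dual rank"): None,     # not quoted above; check the guide/QVL
}

def max_supported(dimms, rank):
    return officially_supported.get((dimms, rank)) or "check the guide/QVL"

print(max_supported(4, "single rank"))  # DDR4-2933
print(max_supported(4, "dual rank"))    # DDR4-2667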

However, the user I built this for screwed it up with BIOS updates to support a Ryzen 9 3900X CPU. It doesn't work anymore. The takeaway from this is that AMD's AGESA code can deeply impact memory compatibility. Keep in mind that Intel motherboards are a totally different beast. Just because you could do something on an Intel chipset based system does not mean that such success will translate over to Ryzen / Socket AM4 systems.
 
It's pretty simple. Here it is from the horse's mouth:

[attached image: the memory support table from the AMD Ryzen Reviewer's Guide]

This is taken from the AMD Ryzen Reviewer's Guide. It's pretty clear what's going on. Like the previous two generations of Ryzen CPUs, you simply cannot expect the same clock speeds out of a configuration using all four DIMM slots. The maximum supported speeds are listed in the table as 2933MHz for 4x single-ranked DIMMs. With 4 dual-ranked DIMMs, the maximum supported speed is DDR4 2667MHz. Now, this is what's officially supported. So naturally, the reason why this is stated as being the case is because adding two more DIMMs affects memory clocking on Ryzen systems. That's not to say it isn't possible to run a 4x8GB DIMM configuration at DDR4 3200MHz, but it will likely take the right modules and the right settings to achieve this. You'll likely have to spend quite a bit of time tuning this and even then, it might not be possible with a given memory kit. Most of the quad-channel kits are generally for Threadripper and Intel's HEDT systems, so even buying four at the same time won't guarantee you anything. Buying two 16GB (2x8GB) kits isn't going to help matters. If all four aren't tested together, your odds of making them work diminish. That said, I was able to make this work on a Ryzen 7 2700X system using two G.Skill FlareX DDR4 3200MHz memory kits.

However, the user I built this for screwed it up with BIOS updates to support a Ryzen 9 3900X CPU. It doesn't work anymore. The takeaway from this is that AMD's AGESA code can deeply impact memory compatibility. Keep in mind that Intel motherboards are a totally different beast. Just because you could do something on an Intel chipset based system does not mean that such success will translate over to Ryzen / Socket AM4 systems.
Not to butt in, but reading that chart, it says 2 of 2 or 2 of 4 for single-ranked memory when running at DDR4-3200. I have 4 of 4 DIMMs of single-ranked memory running at DDR4-3200 speeds on an X470 with no issues. Just curious if I'm reading this chart right.
 
Just an FYI, MSI boards have 4 DIMMs @ 3600MHz listed on their QVL for at least one of their boards. I haven't seen it myself; I was just watching one of BZ's rambling videos about MSI X570 boards.
 
Not to butt in, but reading that chart, it says 2 of 2 or 2 of 4 for single-ranked memory when running at DDR4-3200. I have 4 of 4 DIMMs of single-ranked memory running at DDR4-3200 speeds on an X470 with no issues. Just curious if I'm reading this chart right.

"Officially supported" vs. actual results. I mean you can run DDR4 4000 RAM, but it's not "officially supported." I ran 4x8GB of 3200Mhz memory in my CH7/2700X build and I had zero issues. But it was not "officially supported."
 
Not to butt in, but reading that chart, it says 2 of 2 or 2 of 4 for single-ranked memory when running at DDR4-3200. I have 4 of 4 DIMMs of single-ranked memory running at DDR4-3200 speeds on an X470 with no issues. Just curious if I'm reading this chart right.

Look again. It shows 4 of 4 twice. For single and dual ranked modules. In both cases it shows the maximum supported RAM speed as being below DDR4 3000MHz. It doesn't mean you can't run 4 DIMMs at a given speed, but it isn't guaranteed, nor supported by AMD.

I've run 4 modules at DDR4 3600MHz on my Threadripper system, but it was far from supported.
 
It's pretty simple. Here it is from the horse's mouth:

This is taken from the AMD Ryzen Reviewer's Guide. It's pretty clear what's going on. Like the previous two generations of Ryzen CPUs, you simply cannot expect the same clock speeds out of a configuration using all four DIMM slots. The maximum supported speeds are listed in the table as 2933MHz for 4x single-ranked DIMMs. With 4 dual-ranked DIMMs, the maximum supported speed is DDR4 2667MHz.

How do I tell if Vengeance Pro DDR4-3200 is single or dual ranked?

https://www.corsair.com/us/en/Categ...GB-White/p/CMW16GX4M2C3200C16W#tab-tech-specs

Also, in my CHVII bios, the auto DIMM voltage was 1.2V. I assume I have to turn it up to 1.35V?

Edit: Must be another issue. I removed two sticks of memory, and I am still unable to get DDR4-3200, whereas it was possible with a Ryzen 1700X. Memory frequency seems to be stuck at DDR4-2133.
 
All I know is it would be nice for them to be in stock soon. I want to upgrade my 2600X to a 3700X.
 
How do I tell if Vengeance Pro DDR4-3200 is single or dual ranked?

https://www.corsair.com/us/en/Categ...GB-White/p/CMW16GX4M2C3200C16W#tab-tech-specs

Also, in my CHVII bios, the auto DIMM voltage was 1.2V. I assume I have to turn it up to 1.35V?

Edit: Must be another issue. I removed two sticks of memory, and I am still unable to get DDR4-3200, whereas it was possible with a Ryzen 1700X. Memory frequency seems to be stuck at DDR4-2133.

CPU-Z will tell you if your RAM is single or dual ranked.

[attached image: CPU-Z screenshot showing the module's rank]


Secondly, you will have to turn up your voltage to at least 1.35v. Corsair RAM generally needs to be run somewhat higher than that. I usually use 1.36v or 1.365v. And again, just because a previous CPU did it, or a previous BIOS did it really means nothing on Ryzen. The memory controllers are different as is the AGESA code in the BIOS.
 
Secondly, you will have to turn up your voltage to at least 1.35v. Corsair RAM generally needs to be run somewhat higher than that. I usually use 1.36v or 1.365v. And again, just because a previous CPU did it, or a previous BIOS did it really means nothing on Ryzen. The memory controllers are different as is the AGESA code in the BIOS.

It seems that ASUS hasn't released a BIOS with a newer AGESA version.

https://rog.asus.com/forum/showthre...ils-to-post-if-3200-MHz-Ram-set-over-2400-MHz

Searching on the Internet, it looks like a bunch of people with B450/X470 boards have the same issue.
 
UPDATE on my Ryzen adventure:
==========================

Bought a 3700X, 16GB of 3200MHz CL14 RAM and an MSI X570-A PRO board.

It worked well until I wanted more single core performance out of it.

Bought 3600MHz RAM and, like many others on X570 boards, you just can't get past 3200MHz, no matter what you try.

Overclocking plain sucks on Ryzen. Manual PBO got me 4250MHz all cores and that's it. Nothing to write home about, extra few points in Cinebench, meh. Manual overclocking with PBO disabled means you are giving up single core boost.

Then I had weird issues with my motherboard. Lost RAID1; luckily I had a backup elsewhere. Can't recover the original RAID setup though, the utility no longer sees any SATA disks, while Windows does. I spent enough time troubleshooting it to call it a day, I've got a life.

CONCLUSION:
============
I'm giving up on Ryzen. I've had enough of troubleshooting weird things and losing my RAID; I'm disappointed.

To get the best out of Ryzen, you need to get A-grade gear: a £200+ mobo so that your highly clocked and expensive RAM will run as intended. If you can actually buy it, since all the decent stuff is out of stock. The stock cooler is pretty, but as useful as a chocolate teapot if you value having a silent system. I'm kind of traumatised by its whine. Anyway, all this adds up to a rather hefty sum of money.

I spoke to a support engineer from Scan and he said that they're getting a tremendous amount of support calls from Ryzen adopters. Nearly everyone has issues with RAM not being able to clock higher. And there's nothing you can do about it, since this may not even be a BIOS thing. The engineer actually called it the silicon lottery, meaning that not every board is capable of hitting higher clocks. For all I know, there could be an issue with the CPU itself.

Now, for the price of a decent Ryzen 3700X system you could get a 9900K with a solid midrange board, whatever cheap RAM (since Intel isn't bothered by it) and a Dark Rock Pro 4 cooler. Yeah, you can get a 9900K from Amazon now for £404!

And so I did. I'm returning this mess of a tech. Ryzens might be awesome and all, but the implementation is not yet mature enough. Maybe 4700x will change my mind.
 
UPDATE on my Ryzen adventure:
==========================

Bought a 3700X, 16GB of 3200MHz CL14 RAM and an MSI X570-A PRO board.

It worked well until I wanted more single core performance out of it.

Bought 3600MHz RAM and, like many others on X570 boards, you just can't get past 3200MHz, no matter what you try.

Overclocking plain sucks on Ryzen. Manual PBO got me 4250MHz all cores and that's it. Nothing to write home about, extra few points in Cinebench, meh. Manual overclocking with PBO disabled means you are giving up single core boost.

Then I had weird issues with my motherboard. Lost RAID1; luckily I had a backup elsewhere. Can't recover the original RAID setup though, the utility no longer sees any SATA disks, while Windows does. I spent enough time troubleshooting it to call it a day, I've got a life.

CONCLUSION:
============
I'm giving up on Ryzen. I've had enough of troubleshooting weird things and losing my RAID; I'm disappointed.

To get the best out of Ryzen, you need to get A-grade gear: a £200+ mobo so that your highly clocked and expensive RAM will run as intended. If you can actually buy it, since all the decent stuff is out of stock. The stock cooler is pretty, but as useful as a chocolate teapot if you value having a silent system. I'm kind of traumatised by its whine. Anyway, all this adds up to a rather hefty sum of money.

I spoke to a support engineer from Scan and he said that they're getting a tremendous amount of support calls from Ryzen adopters. Nearly everyone has issues with RAM not being able to clock higher. And there's nothing you can do about it, since this may not even be a BIOS thing. The engineer actually called it the silicon lottery, meaning that not every board is capable of hitting higher clocks. For all I know, there could be an issue with the CPU itself.

Now, for the price of a decent Ryzen 3700X system you could get a 9900K with a solid midrange board, whatever cheap RAM (since Intel isn't bothered by it) and a Dark Rock Pro 4 cooler. Yeah, you can get a 9900K from Amazon now for £404!

And so I did. I'm returning this mess of a tech. Ryzens might be awesome and all, but the implementation is not yet mature enough. Maybe 4700x will change my mind.

I'm not surprised by your experiences. If you had read reviews, you'd have realized that Ryzen doesn't overclock much. PBO doesn't force you to give up your boost clocks either. It's just like PB2, but it uses different PPT, EDC and TDC values than PB2 does. Now, it's clear that there is something else limiting Ryzen boost clocks, as the change to PBO, even with the manual +200MHz offset, either does very little or does worse than the CPU-based PB2 values. I think these chips are thermally limited and AMD's running them at the edge of their capabilities at stock values. Ryzen overclocking seems to get more and more pointless with each generation. All of the reviews have suggested this is pretty much the case.

I hate to be that guy, but what did you expect from the stock cooler? Factory CPU coolers have basically always sucked. They are tiny, lack surface area and have loud ass fans. I can't think of a single one in the last 15 years that was really worth a shit. I can't speak to your RAID issues without more information. Next, you can get past 3200MHz on Ryzen 3000 series processors. I have one on the bench at DDR4 3600MHz. Lastly, don't kid yourself. I test these things all the time. There is plenty of RAM that doesn't behave right on some Intel motherboards too. Generally you can make it work, but XMP is a crapshoot sometimes.
 
^^Call me old-fashioned, but I still prefer to set timings manually. Or is that just old habits dying hard?
 
I'm not surprised by your experiences. If you had read reviews, you'd have realized that Ryzen doesn't overclock much. PBO doesn't force you to give up your boost clocks either. It's just like PB2, but it uses different PPT, EDC and TDC values than PB2 does. Now, it's clear that there is something else limiting Ryzen boost clocks, as the change to PBO, even with the manual +200MHz offset, either does very little or does worse than the CPU-based PB2 values. I think these chips are thermally limited and AMD's running them at the edge of their capabilities at stock values. Ryzen overclocking seems to get more and more pointless with each generation. All of the reviews have suggested this is pretty much the case.

I hate to be that guy, but what did you expect from the stock cooler? Factory CPU coolers have basically always sucked. They are tiny, lack surface area and have loud ass fans. I can't think of a single one in the last 15 years that was really worth a shit. I can't speak to your RAID issues without more information. Next, you can get past 3200MHz on Ryzen 3000 series processors. I have one on the bench at DDR4 3600MHz. Lastly, don't kid yourself. I test these things all the time. There is plenty of RAM that doesn't behave right on some Intel motherboards too. Generally you can make it work, but XMP is a crapshoot sometimes.

I made a mistake of being an early adopter and I bought into the hype. Should’ve waited.
I could have stayed with Ryzen, just slapping a better cooler on it, sure. I could have tinkered, but don’t have the patience for crap, glitchy tech.

I guess what rubbed me the wrong way most is that I lost 5 FPS in one of the games I play a lot. That's down to 55 FPS, which is worse than my old 4790K @4.8GHz. That's why I overclocked Ryzen, which closed that FPS gap. But the question that followed was: yeah, the extra cores are fantastic (yeah!), but really, I kind of want just a tad of headroom in single core performance, not just extra parallelism.

The issue with RAID just pushed me over the edge. I was using RAID on my old board for four years without a single issue. It's the kind of thing that needs to be more bombproof than that. It was just glitchy from the start. Also, why was it running a web server open to my network by default, seriously?!
 
^^Call me old-fashioned, but I still prefer to set timings manually. Or is that just old habits dying hard?
Same here, but on this board you have to short the CMOS pins every time to reset it, as it won't POST. It stays locked up; it won't auto-detect the lockup and recover. After the 50th reset you'll be reaching for tranquillisers.

Also, Ryzen is a different beast. The closest comparison I can make is to an NVIDIA graphics card. With PBO all you need to set are your thermal and voltage limits, and its BIOS will do the rest. The 200MHz limit is a pain though, and it screams for more optimisation, since these early BIOSes like to overvolt everything.

Be aware that a manual overclock will stop individual core boosting and you'll end up with worse results than in the stock configuration.
 
^^Call me old-fashioned, but I still prefer to set timings manually. Or is that just old habits dying hard?

That's you being stubborn. There hasn't been a reason to do this in years. XMP not only sets the timings and voltages for you, but XMP II can do all the sub-timings as well. These are the timings you aren't going to know without RAMMon or something like that.

I made a mistake of being an early adopter and I bought into the hype. Should’ve waited.
I could have stayed with Ryzen, just slapping a better cooler on it, sure. I could have tinkered, but don’t have the patience for crap, glitchy tech.

I guess what rubbed me the wrong way most is that I lost 5 FPS in one of the games I play a lot. That's down to 55 FPS, which is worse than my old 4790K @4.8GHz. That's why I overclocked Ryzen, which closed that FPS gap. But the question that followed was: yeah, the extra cores are fantastic (yeah!), but really, I kind of want just a tad of headroom in single core performance, not just extra parallelism.

The issue with RAID just pushed me over the edge. I was using RAID on my old board for four years without a single issue. It's the kind of thing that needs to be more bombproof than that. It was just glitchy from the start. Also, why was it running a web server open to my network by default, seriously?!

The stock coolers suck. They pretty much always have. I've never used one as anything other than a stop gap until I could get something better. Even at stock speeds, your ears will typically thank you. On the RAID front, you should do all your configuration stuff in the BIOS. It sounds like you were using the RAID XPert software in Windows which I agree is bullshit. On the subject of CPU performance, I get where you are coming from. I never did benchmark between the two of them, but there are times I swear my X99 / Core i7 5960X was faster in games than my Threadripper 2920X. I've tested against the Core i9 9900K and at 4K, the experience on Intel is simply better. That said, the game I play the most is Destiny 2, and I haven't a clue how Ryzen 3000 does with that.

Same here, but on this board you have to short the CMOS pins every time to reset it, as it won't POST. It stays locked up; it won't auto-detect the lockup and recover. After the 50th reset you'll be reaching for tranquillisers.

Also, Ryzen is a different beast. The closest comparison I can make is to an NVIDIA graphics card. With PBO all you need to set are your thermal and voltage limits, and its BIOS will do the rest. The 200MHz limit is a pain though, and it screams for more optimisation, since these early BIOSes like to overvolt everything.

Be aware that a manual overclock will stop individual core boosting and you'll end up with worse results than in the stock configuration.

What board is this? You're spot on about PBO, but with Ryzen 3000, you needn't bother with it. Stick to PB2. However, a manual overclock (all cores) can still be faster at heavily threaded tasks. You can probably take the manual clocks slightly higher than the boost clocks you get under PB2 or PBO. I did in my case. However, if you're running tasks that benefit from this often enough to sacrifice that single threaded performance, you are probably better served by Threadripper than Ryzen.
 
To the OP... buy what you like man.

Re: gaming performance. The 3900X is ~5% behind the 9900k in 1080p gaming with a high-end GPU. This holds even when both CPUs are overclocked. This has been confirmed by multiple reviewers across many games (36 games in one particular reviewer's case). It's merely that overclocking a 9900k is just a nice, smooth, all-core OC. Whereas PBO/Auto-OC is the way to go with Zen 2, if you even bother at all (I probably won't). Since the 3900X has a boatload more threads, the gain for Zen 2 is in single and light-threaded workloads, areas where PBO/Auto-OC can help a little. All-core OC on Zen 2 is a waste of time, usually, except for maybe the lower-end parts like the 3600. Side note: the 3800X offers similar gaming performance to the 3900X for less money. You can game on a 3900X, but don't buy it just for that.

Re: efficiency. Zen 2 is flat-out more efficient, confirmed by The_Stilt's analysis. The 12 core part may consume more power than the 9900k on all-core workloads, but it is doing 50% more work (roughly). Efficiency isn't power draw at the wall, it's power draw vs. performance. 9% is a worst case estimate. Depending on workload, it can be much higher. Idle, Zen 2 sips power. der8auer confirmed this in a recent test.
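
A tiny worked example of the perf-per-watt point, with hypothetical numbers of my own (not The_Stilt's or der8auer's measurements):

# Efficiency is work done per watt, not wall power alone: a part that draws 15% more power
# while doing 50% more work in the same time is still well ahead on efficiency.
work_ratio = 1.50    # assumed: 50% more all-core throughput
power_ratio = 1.15   # assumed: 15% more package power
print(f"relative efficiency: {work_ratio / power_ratio:.2f}x the work per watt")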

Re: temperature. Yes, this is a known thing. The chiplets get hot. The chip will not clock as high automatically with poor cooling. Not really a huge deal, but it's a known thing.

TLDR; if you're gaming only, buy a 9700k or a 9900k, if your budget for the CPU is in the $400-$500 range.

If your budget is less than that, a 3600 or 3700X may be a better buy. Gamers Nexus noted that the 3600 offered better frametime performance than the 9600k despite the latter having higher average FPS. You're more likely to notice the former, than the latter.
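
A minimal sketch of why frametime consistency can matter more than the average, with invented numbers (nearest-rank percentile for simplicity):

# Mostly ~120 fps frames plus a few hitches: the average still looks high, but the
# 99th-percentile frame time exposes the stutters you actually notice.
frame_times_ms = [8.3] * 97 + [25.0, 30.0, 33.0]
avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
print(f"average: {avg_fps:.0f} fps, 99th percentile frame time: {p99_ms:.0f} ms (~{1000 / p99_ms:.0f} fps)")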

3900X is for mixed use, where you may game, but also touch on multithreaded workloads. 3D Graphics, Video Editing, Compiling, running VMs, etc... Edit: Worth noting that for a few of these tasks, the 9900k still sometimes wins, because not all applications are equally well-threaded. But the 3900X wins a majority of these, and the 3800X more or less ties the 9900k here.
 
To the OP... buy what you like man.

Re: gaming performance. The 3900X is ~5% behind the 9900k in 1080p gaming with a high-end GPU. This holds even when both CPUs are overclocked. This has been confirmed by multiple reviewers across many games (36 games in one particular reviewer's case). It's merely that overclocking a 9900k is just a nice, smooth, all-core OC. Whereas PBO/Auto-OC is the way to go with Zen 2, if you even bother at all (I probably won't). Since the 3900X has a boatload more threads, the gain for Zen 2 is in single and light-threaded workloads, areas where PBO/Auto-OC can help a little. All-core OC on Zen 2 is a waste of time, usually, except for maybe the lower-end parts like the 3600. Side note: the 3800X offers similar gaming performance to the 3900X for less money. You can game on a 3900X, but don't buy it just for that.

Re: efficiency. Zen 2 is flat-out more efficient, confirmed by The_Stilt's analysis. The 12 core part may consume more power than the 9900k on all-core workloads, but it is doing 50% more work (roughly). Efficiency isn't power draw at the wall, it's power draw vs. performance. 9% is a worst case estimate. Depending on workload, it can be much higher. Idle, Zen 2 sips power. der8auer confirmed this in a recent test.

Re: temperature. Yes, this is a known thing. The chiplets get hot. The chip will not clock as high automatically with poor cooling. Not really a huge deal, but it's a known thing.

TLDR; if you're gaming only, buy a 9700k or a 9900k, if your budget for the CPU is in the $400-$500 range.

If your budget is less than that, a 3600 or 3700X may be a better buy. Gamers Nexus noted that the 3600 offered better frametime performance than the 9600k despite the latter having higher average FPS. You're more likely to notice the former, than the latter.

3900X is for mixed use, where you may game, but also touch on multithreaded workloads. 3D Graphics, Video Editing, Compiling, running VMs, etc... Edit: Worth noting that for a few of these tasks, the 9900k still sometimes wins, because not all applications are equally well-threaded. But the 3900X wins a majority of these, and the 3800X more or less ties the 9900k here.

Depending on what graphics card you use (1080ti in my case), you will get slight CPU bottleneck on 1440p as well with 3700x. Some game engines are still so single thread bound (I'm looking at you, Ghost Recon: Wildlands).
 
TLDR; if you're gaming only, buy a 9700k or a 9900k, if your budget for the CPU is in the $400-$500 range.

If your budget is less than that, a 3600 or 3700X may be a better buy. Gamers Nexus noted that the 3600 offered better frametime performance than the 9600k despite the latter having higher average FPS. You're more likely to notice the former, than the latter.

3900X is for mixed use, where you may game, but also touch on multithreaded workloads. 3D Graphics, Video Editing, Compiling, running VMs, etc... Edit: Worth noting that for a few of these tasks, the 9900k still sometimes wins, because not all applications are equally well-threaded. But the 3900X wins a majority of these, and the 3800X more or less ties the 9900k here.

For gaming I don't really see any reason to buy anything but the 3600 at the moment. Especially at more GPU bound resolutions like 1440p and 4K it gets so close to the 3700X as well as the 9700K/9900K depending on the game that anything else is a waste of money unless you do something to warrant the extra cores and threads. The 3600 is about half the cost of 9700K and roughly 40% cheaper than 3700X in my country based on lowest price for each.

It's not even worth going for the higher end models for "future proofing". By the time games get more benefit from 8 cores you can put the money saved towards upgrading to whatever 8+ core is a thing at the time or buy a used 3700X (or Ryzen 4000 equivalent if it's still AM4 compatible).
 
Depending on what graphics card you use (1080ti in my case), you will get slight CPU bottleneck on 1440p as well with 3700x. Some game engines are still so single thread bound (I'm looking at you, Ghost Recon: Wildlands).

That is true, but the reverse is also true - in some games the 3700X and/or 3900X outright win even in 1080. The average favors Intel, and from what I've seen if you were to take 10 games at random, the 9900k/9700k would win 8 of them, and 3700X/3800X/3900X would win 2.

Something else I noticed after perusing way too many benchmarks. The 3900X overpowers the 8700k on average in gaming, and is well ahead of the 7700k, again on average. This means Zen is less than one gen behind Intel in raw gaming/latency-sensitive performance. And it is ahead on throughput tasks even at a raw IPC level. Intel's clockspeed advantage here isn't enough to overcome in many cases, and the 3700X or 3800X are about as fast in throughput tasks as the 9900k. 3900X, of course, demolishes on account of raw core count. Average per-core performance is approaching a dead heat.
 
For gaming I don't really see any reason to buy anything but the 3600 at the moment.

It all depends on use case and budget - as does everything. With each generation of Zen, AMD has been chipping away at remaining reasons to favor Intel products in the enthusiast/power-user spaces. The reasons still exist with this generation, but they are fewer.

It's not all good for AMD. They need to get their mobile house in order. Zen has had a pretty paltry showing there. But in servers, APU builds, budget, and enthusiast/workstation, AMD has made compelling cases that get more compelling with each revision.
 
That is true, but the reverse is also true - in some games the 3700X and/or 3900X outright win even in 1080. The average favors Intel, and from what I've seen if you were to take 10 games at random, the 9900k/9700k would win 8 of them, and 3700X/3800X/3900X would win 2.

Something else I noticed after perusing way too many benchmarks. The 3900X overpowers the 8700k on average in gaming, and is well ahead of the 7700k, again on average. This means Zen is less than one gen behind Intel in raw gaming/latency-sensitive performance. And it is ahead on throughput tasks even at a raw IPC level. Intel's clockspeed advantage here isn't enough to overcome in many cases, and the 3700X or 3800X are about as fast in throughput tasks as the 9900k. 3900X, of course, demolishes on account of raw core count. Average per-core performance is approaching a dead heat.

Super close indeed. For non-gaming workloads, AMD all the way. For gaming on 1080p with average graphics, get AMD. To get the most out of your flagship graphics card, get Intel CPU.

36 Games Benchmarked: Ryzen 9 3900X vs. Core i9-9900K
The summary is at the 7:27 mark.


The most balanced review I found in regards to gaming is in this video from Digital Foundry. They even cover those moments mentioned here in the forum where one "could swear that the Intel experience is just smoother"; see the 22:10 mark.
It also shows the effect of memory speed, 3000MHz vs 3600MHz, which results in roughly -7% on AMD and -3% on Intel.
 
That is something I have noticed. The Intel systems seem slightly smoother. It was the same way back in the Pentium 4 vs. Athlon 64 days. We always attributed it to Hyperthreading, but that may not be the whole story. To be honest, it's not the sort of thing you notice very often. I only really noticed it with an Intel and an AMD system literally side by side on the test bench. I have two testing setups next to each other with the same monitor and everything.

This is regardless of the CPU used. I've done this test with the 9900K vs. the 3900X and the 3600X up against the 9600K. It's a tiny difference. As I said, you really have to have each system right there and do the same things on both of them, like I do in benchmarking them, to feel it.
 
That is something I have noticed. The Intel systems seem slightly smoother. It was the same way back in the Pentium 4 vs. Athlon 64 days. We always attributed it to Hyperthreading, but that may not be the whole story.

I’d love to know what the root cause of that is.
 