Ryzen 3600X Tested: Tight RAM Timings at 2133/2400/2933/3200/3733/4000/4200

gerardfraser

[H]ard|Gawd
Ryzen 3600X tested with tight RAM timings at 2133MHz, 2400MHz, 2933MHz, 3200MHz, 3733MHz, 4000MHz, and 4200MHz, at 2560x1440 with Ultra settings.

There are plenty of 1080p and lower tests out there; these tests are aimed at people who blew their cash on $500+ graphics cards and are not quite sure if they should upgrade their RAM. I say no if you game at higher resolutions.

In Battlefield V I noticed that above DDR4 4000MHz the average FPS started to climb. Which is weird, because at those speeds the memory clock/fabric clock/memory controller setup switches from 1:1:1 to 2:1 on the memory controller, so the memory controller clock gets cut in half.
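To make that divider concrete, here is a minimal sketch (my own illustration of the behavior described above, using the clocks from the timing list further down, not anything from the original post):

```python
# Simplified model of the Zen 2 memory controller (UCLK) divider:
# UCLK matches the real memory clock (MCLK) up to ~DDR4-3733,
# then drops to MCLK/2, matching the numbers listed in this post.
for ddr in (3733, 4000, 4200):
    mclk = ddr / 2                             # real memory clock in MHz
    uclk = mclk if ddr <= 3733 else mclk / 2   # 2:1 divider above ~3733
    print(f"DDR4-{ddr}: MCLK {mclk:.0f}MHz, UCLK {uclk:.0f}MHz")
```

Which is why a small frequency bump past that point usually isn't expected to help much: the controller is suddenly running at half speed.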

A Plague Tale Innocence had two areas where FPS lows dropped to 16 FPS and 7 FPS in the same spot on every test

All games gave the exact same game play every test.

Video of the game areas tested:




[Memory-Chart.png - FPS results chart]


RAM TIMINGS USED (FULL MANUAL, NOTHING LEFT ON AUTO)
[Timing screenshots: 2133.png, 2400.png, 2933.png, 3200.png, 3733.png, 4000.png, 4200.png]
♦ 2133MHz (16GB) CL10-10-10-10-21
♦ 2400MHz (16GB) CL10-11-11-11-21
♦ 2933MHz (16GB) CL12-14-13-13-26
♦ 3200MHz (16GB) CL14-14-14-14-28
♦ 3733MHz (16GB) CL16-17-16-16-34
♦ 4000MHz (16GB) CL16-18-17-17-36 (memory clock 2000 x2 / fabric clock 1800 x2 / memory controller 1000 x2)
♦ 4200MHz (16GB) CL16-18-17-17-36 (memory clock 2100 x2 / fabric clock 1800 x2 / memory controller 1050 x2)
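For perspective on why all these kits perform so similarly, first-word CAS latency in nanoseconds works out to roughly CL × 2000 / MT/s (the 2000 appears because DDR transfers twice per clock). A quick sketch of the math (my own illustration, not part of the original post):

```python
# First-word CAS latency in nanoseconds for some of the tested kits:
# clock period (ns) = 2000 / MT/s because DDR transfers twice per clock.
kits = [("DDR4-2133 CL10", 2133, 10),
        ("DDR4-3200 CL14", 3200, 14),
        ("DDR4-3733 CL16", 3733, 16),
        ("DDR4-4200 CL16", 4200, 16)]
for name, mts, cl in kits:
    print(f"{name}: {cl * 2000 / mts:.2f} ns")
```

That comes out to about 9.4ns, 8.8ns, 8.6ns, and 7.6ns respectively; with tightened timings, every kit lands within roughly 2ns of the others, which fits the near-identical FPS results.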

COMPUTER USED
♦ CPU - AMD 3600X With MasterLiquid Lite ML240L RGB AIO (Fans 55%)
♦ GPU - Nvidia RTX 2080
♦ RAM - G.Skill Trident Z 16GB DDR4 (F4-4000C18D-16GTZ) (2x8GB)
♦ Mobo - MSI X470 - Gaming Plus
♦ SSD - M.2 2280 WD Blue 3D NAND 500GB
♦ Display - LG 27" 4K UHD FreeSync Gaming Monitor (27UD59P-B.AUS)
♦ PSU - Antec High Current Pro 1200W

VIDEO INFORMATION
► FPS Monitoring : MSI Afterburner/RTSS
► Gameplay Recorder : Nvidia Shadowplay
► Edit Videos : VSDC Free Video Editor http://www.videosoftdev.com/
 
I wasn't going to upgrade my RAM regardless.
Nice to see some charts focused on a normal gaming resolution.
 
While my testing probably hasn't been as extensive, this is pretty much what I've already concluded. I've done these tests on Ryzen 2000, 3000, Threadripper, and so on, and pretty much found that around 3000MHz you're golden. At lower resolutions and in synthetic tests, you'll see the differences as you ramp up clock speed. AMD pretty much already said that going above 3733MHz isn't really advised unless you go way beyond that point. In other words, going to 4000MHz isn't going to be worth it, but in theory, if you hit DDR4 6000MHz it might be a different story.
 
I wasn't going to upgrade my RAM regardless.
Nice to see some charts focused on a normal gaming resolution.

I agree.

While my testing probably hasn't been as extensive, this is pretty much what I've already concluded. I've done these tests on Ryzen 2000, 3000, Threadripper, and so on, and pretty much found that around 3000MHz you're golden. At lower resolutions and in synthetic tests, you'll see the differences as you ramp up clock speed. AMD pretty much already said that going above 3733MHz isn't really advised unless you go way beyond that point. In other words, going to 4000MHz isn't going to be worth it, but in theory, if you hit DDR4 6000MHz it might be a different story.

Same here with my testing in the past. Every bit of information may help someone out and save them a couple of bucks if they want.
 
Easy-mode 1440p gaming build: 2600 under $100 shipped, B450, some cheap brand-name RAM.

Monitor and gpu are what I focus on.

I'm cranking out 155+ fps right now.
 
Can most DDR4 kits hit those latencies, particularly 2133c10 or 2400c10?
I had to do a double-take as the 2133 setting is not that different from my current DDR3 kit lmao.
 
Well, I do not own any 2133, 2400, etc. kits, so I do not see why they could not hit those timings, and I do not think there is a DRAM calculator for RAM that slow, but I can tell you I went off the timings of some DDR3 RAM I have in Intel machines.
Best I could do, and you do not see anyone posting timings like the ones I posted.
For example, here are the timings TechPowerUp used in their memory scaling benchmarks:
[relative-performance-games-1920-1080.png - TechPowerUp relative performance chart]

https://www.techpowerup.com/review/amd-zen-2-memory-performance-scaling-benchmark/4.html
 
Do not get too hung up on a couple of weird numbers; it is just what I recorded with MSI Afterburner.
Grain-of-salt stuff when weird things happen. I can say there was no actual difference in gameplay, and you would never be able to tell a difference.
I have no clue why BF5 got about 10 FPS more at the higher RAM speed; it could have been that I took a break, came back, and it tested higher. For now, I did a lot of runs and I will not be testing it again for a bit.
 
Do not get too hung up on a couple of weird numbers; it is just what I recorded with MSI Afterburner.
Grain-of-salt stuff when weird things happen. I can say there was no actual difference in gameplay, and you would never be able to tell a difference.
I have no clue why BF5 got about 10 FPS more at the higher RAM speed; it could have been that I took a break, came back, and it tested higher. For now, I did a lot of runs and I will not be testing it again for a bit.

The Tomb Raider test result is what we should expect; BF:V shows just how hard consistent testing really is. :)

KC Deliverance is one that I'd expect inconsistent results from, if only due to the developer being a bit of a small shop and the game type not being overly dependent on performance consistency like BF:V would be, for instance.

And I'm primarily posting to add my voice to the above thanking you for the hard work!
 
Thanks. No hard work, just messing around with the new CPU, and I thought I would share.
 
Nice work! Any chance of seeing at least some of the more common speeds like 3200 and 3600 with higher latencies?

At least over here, if you want CL14 3200 or CL16 3600, the price for a 2x8GB kit jumps like 50% vs. buying 3200 CL16 or CL18/19 3600. While that cheaper RAM can perhaps be tightened in latency, it's not guaranteed, so it would be nice to see how "normal" RAM behaves with these CPUs.
 
I assume these tests are more for people upgrading only the CPU to a Ryzen 3000 series... Would there be much expected difference on an X570 board, or is this generally expected to hold consistent?
 
I assume these tests are more for people upgrading only the CPU to a Ryzen 3000 series... Would there be much expected difference on an X570 board, or is this generally expected to hold consistent?
I would expect it would be more of the same; just a guess.

Nice work! Any chance of seeing at least some of the more common speeds like 3200 and 3600 with higher latencies?

At least over here, if you want CL14 3200 or CL16 3600, the price for a 2x8GB kit jumps like 50% vs. buying 3200 CL16 or CL18/19 3600. While that cheaper RAM can perhaps be tightened in latency, it's not guaranteed, so it would be nice to see how "normal" RAM behaves with these CPUs.

You know, with relaxed normal timings the games might run at higher FPS. I was not going for FPS in these tests; I was showing that with tweaking you can get the same results as more expensive RAM. I had some DDR4 CL16 3200 RAM that worked fine when tweaked, if you added some voltage.

EDIT: I should say that if I were going to add normal timings, I would have to do all the tests over again to make sure everything is the same on the computer. I do not think I will do that. The tests I did are so close that there really is no point in doing them again; even with normal timings there is just no real difference at higher resolutions.
 
It could be something specific to that engine. You'd have to try other Frostbite titles to see if that's the case.
I would say it was nothing, really. It could have been that no bombs went off in the 3-4 runs I did, and thus FPS went a little higher. Maybe I turned off Shadowplay on the runs with the higher RAM speed and forgot I did, and that was the higher FPS. Like I said, you cannot get caught up in the actual numbers and should look at the overall tests. But hey, A Plague Tale: Innocence went up 1 FPS on the higher RAM, whaaat, might be something there. J/K. They were all the same gaming experience; that's all I can say.
 
I assume these tests are more for people upgrading only the CPU to a Ryzen 3000 series... Would there be much expected difference on an X570 board, or is this generally expected to hold consistent?

The memory controller is on the processor itself, so the performance delta between X570 and X470 at common RAM speeds should be within the margin of error. X570, with its improved PCB designs, MIGHT be able to support higher RAM frequencies, but at speeds they are both able to hit, and that the vast majority of RAM sticks will be able to hit, there really shouldn't be any performance difference.
 
I would say it was nothing, really. It could have been that no bombs went off in the 3-4 runs I did, and thus FPS went a little higher. Maybe I turned off Shadowplay on the runs with the higher RAM speed and forgot I did, and that was the higher FPS. Like I said, you cannot get caught up in the actual numbers and should look at the overall tests. But hey, A Plague Tale: Innocence went up 1 FPS on the higher RAM, whaaat, might be something there. J/K. They were all the same gaming experience; that's all I can say.

Agreed. I've tested this sort of thing as well. My Threadripper 2920X always benchmarks better with DDR4 3000MHz RAM with lower latencies, but I can't tell the difference between that set of RAM and the DDR4 3600MHz modules I have in it right this second at 3200MHz using worse timings.
 
OMG, someone on the internet agreed with me twice in one thread. WTF, is the world going to end? Did I miss something? Thanks for the reply and help, Dan.
 
So I've been a little out of the loop on the nitty-gritty details of AMD stuff for the last decade or so, but I've gotten the impression that a lot of hand-wringing was going on with Ryzen and RAM timings/speeds, etc. I've been looking at Ryzen 2 as a possible upgrade, and it seems like the TL;DR from the OP's testing (which is excellent, btw, well done) is that RAM speed and timings are overrated. I think this has been the case for a long time on Intel, and even AMD, until it flared up again with Ryzen, so I just want to confirm my suspicion that shelling out big bucks for headline speeds and super-tight timings is actually a thing of the past and not making some odd comeback in the age of Ryzen.
 
So I've been a little out of the loop on the nitty-gritty details of AMD stuff for the last decade or so, but I've gotten the impression that a lot of hand-wringing was going on with Ryzen and RAM timings/speeds, etc. I've been looking at Ryzen 2 as a possible upgrade, and it seems like the TL;DR from the OP's testing (which is excellent, btw, well done) is that RAM speed and timings are overrated. I think this has been the case for a long time on Intel, and even AMD, until it flared up again with Ryzen, so I just want to confirm my suspicion that shelling out big bucks for headline speeds and super-tight timings is actually a thing of the past and not making some odd comeback in the age of Ryzen.

I would agree with you 100% that RAM speed and timings are overrated, especially at higher gaming resolutions, say 2560x1440. I am only talking gaming, to be clear. The only reason I did this particular test is to let people know, and maybe help some kid wanting the best RAM realize it is unnecessary, and save them a couple of bucks.
If you are a benchmark guy, then by all means go nuts; benchmarking is more fun than gaming at times.
 
Nice info! 3200c16 kits are going for $80 and less on sale right now (US), and that is plenty fine for gaming even on Ryzen, as you show here. I have been using the Ripjaws V C16 3200 on all my Ryzen builds for family, friends, and clients for a while. Even first-gen CPUs that wouldn't hit 3200 all hit 2933 at XMP on various boards. Not once did anyone complain about stability or say they had a bad gaming experience on their systems.
 
Nice info! 3200c16 kits are going for $80 and less on sale right now (US), and that is plenty fine for gaming even on Ryzen, as you show here. I have been using the Ripjaws V C16 3200 on all my Ryzen builds for family, friends, and clients for a while. Even first-gen CPUs that wouldn't hit 3200 all hit 2933 at XMP on various boards. Not once did anyone complain about stability or say they had a bad gaming experience on their systems.

The Ripjaws C16 3200 is uber-good silicon. The Hynix C-die is very good. I'd almost say on the level of B-die, if not better than B-die, for AMD processors.
 
I'm tempted to return my 3600c16 Ballistix after reading some Reddit threads. Maybe much ado about nothing, but I've already had to RMA my DDR1 and DDR2 Ballistix kits in the past.
 
You know, with relaxed normal timings the games might run at higher FPS. I was not going for FPS in these tests; I was showing that with tweaking you can get the same results as more expensive RAM. I had some DDR4 CL16 3200 RAM that worked fine when tweaked, if you added some voltage.

EDIT: I should say that if I were going to add normal timings, I would have to do all the tests over again to make sure everything is the same on the computer. I do not think I will do that. The tests I did are so close that there really is no point in doing them again; even with normal timings there is just no real difference at higher resolutions.

Thanks, that's pretty much what I wanted to know. I game at 1440p and 4K, and if I could get a noticeable performance improvement by choosing a better RAM kit, then that might be worth the extra expense, but not if it's a few FPS for a much higher price. I've been running a G.Skill Ripjaws V DDR-3000 CL15 kit with my 6600K for years, so going for DDR-3200 CL16 is probably going to work just fine when I upgrade to a Ryzen 3600.
 
More of this scientific, reproducible, transparent user testing. The tech sites are great and all but I trust you guys more.
 
I'm tempted to return my 3600c16 Ballistix after reading some Reddit threads. Maybe much ado about nothing, but I've already had to RMA my DDR1 and DDR2 Ballistix kits in the past.

Me too...definitely turned me off to them. Although, the low profile DDR3 stuff was decent for me.
 
These tests don't show anything except that higher CPU speed can't make games run better when GPU-bottlenecked.
Underclock the CPU by 500MHz with the RAM at 3733c16 and I bet the results will be much the same, except the system will use less power.

So Ryzen 2600X vs 3600 vs 3600X would likely see a similar performance variance to 3000c16 vs 3733c16 in these tests.
While overclocking the RTX 2080 would see a much larger % change than anything done to the CPU/RAM.
 
These tests don't show anything except that higher CPU speed can't make games run better when GPU-bottlenecked.
Underclock the CPU by 500MHz with the RAM at 3733c16 and I bet the results will be much the same, except the system will use less power.

So Ryzen 2600X vs 3600 vs 3600X would likely see a similar performance variance to 3000c16 vs 3733c16 in these tests.
While overclocking the RTX 2080 would see a much larger % change than anything done to the CPU/RAM.

The point is that memory speed has very little impact on gaming once you cross a certain threshold. The RAM being at 3733MHz probably wouldn't make up for the CPU being underclocked that much. The biggest difference you see across CPUs at high resolution will generally be their minimum frame rates. My Threadripper 2920X can't hold a candle to the Core i9 9900K. Using Destiny 2 as an example, my minimums were 26FPS (manual overclock) and 36FPS (PBO). In contrast, the 9900K gets 54FPS at 4.8GHz and 56FPS at 5.0GHz. I've yet to see RAM have anywhere close to that kind of impact on gaming.
 
In contrast, the 9900K gets 54FPS at 4.8GHz and 56FPS at 5.0GHz. I've yet to see RAM have anywhere close to that kind of impact on gaming.
Overclocking my 6700K from 4.2GHz to 4.7GHz increases performance in most games less than overclocking the RAM from 3200c14 to 3866c16.
There is the odd game, like Crysis 3, that sees a larger performance gain from the CPU OC than the RAM OC.
The number of games that see a larger gain with the new Ryzen CPUs may have decreased due to their larger cache.

[Arma III CPU vs RAM chart]

But once you're GPU-limited, this happens:
[Rainbow Six Siege CPU vs RAM chart]

https://www.overclock.net/forum/180...0c11-2133c9-ddr4-2133c15-3000c12-4000c17.html
 
The biggest difference you see across CPUs at high resolution will generally be their minimum frame rates.

Pulling this out because this should really be the focus of performance improvements. I wouldn't pay more for better memory for higher average framerates, but if 1% and 0.1% framerates went up noticeably? That's meaningful.
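As an aside, for readers wondering how those 1% and 0.1% figures are usually produced: one common approach (definitions vary between tools, so treat this as a sketch rather than what RTSS specifically does) is to average the slowest 1% or 0.1% of frametimes in a run and convert back to FPS:

```python
import statistics

def percentile_low(frametimes_ms, pct):
    """Average FPS over the slowest pct% of frames (one common 'low' metric)."""
    slowest = sorted(frametimes_ms, reverse=True)     # longest frames first
    n = max(1, int(len(slowest) * pct / 100))
    return 1000 / statistics.mean(slowest[:n])

# 990 smooth frames at ~60 FPS plus 10 hitches at ~30 FPS:
frametimes = [16.7] * 990 + [33.3] * 10
print(f"average FPS: {1000 / statistics.mean(frametimes):.1f}")  # ~59.3
print(f"1% low:      {percentile_low(frametimes, 1):.1f}")       # 30.0
```

A handful of hitches barely moves the average but drags the 1% low down hard, which is exactly why the lows are the number worth paying for.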
 
Overclocking my 6700K from 4.2GHz to 4.7GHz increases performance in most games less than overclocking the RAM from 3200c14 to 3866c16.
There is the odd game, like Crysis 3, that sees a larger performance gain from the CPU OC than the RAM OC.
The number of games that see a larger gain with the new Ryzen CPUs may have decreased due to their larger cache.

[Arma III CPU vs RAM chart]
But once you're GPU-limited, this happens:
[Rainbow Six Siege CPU vs RAM chart]
https://www.overclock.net/forum/180...0c11-2133c9-ddr4-2133c15-3000c12-4000c17.html

There is also a key point I made that's being missed. I said this earlier (somewhere). Once you go over DDR4 3000MHz (roughly), you won't gain anything from memory overclocking. Your example backs this up. Going from DDR4 2133MHz to DDR4 3200MHz, or 3000MHz, you see a massive improvement. So that is the largest increase we see in Arma III. However, the second-largest increase we see in those charts comes from clocking the 6700K to 4.7GHz. We see an increase from 71.8FPS to 76.4FPS. At DDR4 3866MHz, we see an increase from 77.3FPS to 81.7FPS, again just from the CPU clock speed. The RAM was the same in both of those examples. It's also interesting to note that we see less than 1FPS of difference from DDR4 3000MHz to 3866MHz.

Meaning, this particular game is more sensitive to memory latency than clock speed. Again, I say the clock speed of the RAM is important up to around 3000MHz; past that, your gains are minimal. You get more here from overclocking your CPU than you do from increasing your memory clocks. That said, if you can only do one or the other, going up from DDR4 2133MHz is more valuable than overclocking the CPU in this particular game. If you are already at DDR4 3000MHz or more, the CPU will provide the larger difference.

Pulling this out because this should really be the focus of performance improvements. I wouldn't pay more for better memory for higher average framerates, but if 1% and 0.1% framerates went up noticeably? That's meaningful.

And that's something that surprised me a bit when testing Threadripper against the Core i9 9900K. The maximum frame rates were actually higher on the Threadripper CPU in Destiny 2, and the averages were very close. The minimums, however, were half those of the Core i9 9900K in some cases. Looking at the 90th, 95th, and 99th percentiles, the Threadripper also fluctuated far more than the Core i9 9900K did. These were tested at 4K. I didn't get around to testing the Ryzen 7 2700X in this game, as I couldn't get the game to run on the Ryzen 9 3900X, so I aborted the rest of the testing. My guess is that it would be a bit better than Threadripper because of the CCX latency issues, but I don't know that for sure.
 
dasa, good charts and all. I remember when you posted them back in September 2016 on an Intel machine. Since there is no real overclocking headroom with Ryzen, do you think these tests apply to Ryzen in 2019?
Do you think Ryzen owners should not only downclock their CPU but also lower their settings and resolution? Thanks for sharing. I guess I could just downclock the Ryzen by 500MHz to base clock, lower the settings/resolution, and see how that works out, but it would not be a real gameplay scenario for me.
https://www.overclock.net/forum/180...0c11-2133c9-ddr4-2133c15-3000c12-4000c17.html
 
I pretty much aim for CL16 3200 or CL15 3000. CL14 @ 3200 is generally really expensive RAM.
 
dasa
Do you think Ryzen owners should not only downclock their CPU but also lower their settings and resolution? Thanks for sharing. I guess I could just downclock the Ryzen by 500MHz to base clock, lower the settings/resolution, and see how that works out, but it would not be a real gameplay scenario for me.
It is not so much about underclocking/overclocking as about showing whether money is better spent on RAM, CPU, or GPU.
For example, does it matter if somebody spends $100 extra on a 3600X over a 3600, or would they be better off putting that $100 into buying 3600c16 over 3200c16?
Or does only the GPU matter at higher resolution, so they could save $200 by getting a 2600, another $100 with cheap RAM, and spend that $300 extra on the GPU?
The cache and memory interface on Ryzen could make the results come out very differently from what one would experience with an Intel 1151 system.

This is why I would like to see a user like yourself do a test with different CPU/GPU speeds vs. the RAM speed changes.
The ideal would of course be to use different hardware, but these being user reviews, we have to work with what we have, which means changing clock speeds.

Some test at lower resolutions to reduce the impact of the GPU on the results, to show as much as possible the potential increase in CPU performance from higher RAM speeds.
If you wish to test real-world settings, I believe it is important to chart the FPS and GPU usage together so people can see where the different speeds meet and just how much of the test is CPU- or GPU-limited.
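A minimal sketch of that kind of chart (my illustration; `run_log.csv` and its column names are hypothetical stand-ins for whatever your logging tool exports, not a real Afterburner format):

```python
# Plot FPS and GPU usage over time on twin axes so CPU- vs GPU-limited
# stretches of a run stand out. Assumes a CSV with columns:
# time_s, fps, gpu_usage_pct (hypothetical log format).
import csv
import matplotlib.pyplot as plt

time_s, fps, gpu = [], [], []
with open("run_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        time_s.append(float(row["time_s"]))
        fps.append(float(row["fps"]))
        gpu.append(float(row["gpu_usage_pct"]))

fig, ax1 = plt.subplots()
ax1.plot(time_s, fps, label="FPS")
ax1.set_xlabel("time (s)")
ax1.set_ylabel("FPS")
ax2 = ax1.twinx()                      # second y-axis for GPU usage
ax2.plot(time_s, gpu, color="tab:red", label="GPU usage (%)")
ax2.set_ylabel("GPU usage (%)")
ax2.set_ylim(0, 100)
fig.legend(loc="upper right")
plt.show()
```

Stretches where GPU usage pins at ~99% are GPU-limited and RAM/CPU changes won't show up there; the dips below that ceiling are where CPU/RAM differences can surface.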

Although I have found that while GPU usage was a useful tool with my GTX 1070, it is useless with the Radeon VII, as it maintains 99% usage, then downclocks the GPU and seems to buffer a bit.

However, the second-largest increase we see in those charts comes from clocking the 6700K to 4.7GHz. We see an increase from 71.8FPS to 76.4FPS. At DDR4 3866MHz, we see an increase from 77.3FPS to 81.7FPS, again just from the CPU clock speed. The RAM was the same in both of those examples. It's also interesting to note that we see less than 1FPS of difference from DDR4 3000MHz to 3866MHz.
3000MHz was heavily overclocked, needing the most voltage, and was the least stable of all the settings tested. I ran that speed to see whether it was bandwidth or latency that mattered, as 3000c12 with tweaked subtimings had the same final latency as 3866c16 with tweaked subtimings (as can be seen in the AIDA64 tests) but significantly less bandwidth.
The stock speed for the RAM, which is faster than most cheap Hynix kits will handle, is the 3200c14 XMP setting, and this is what you should be comparing against the OC speeds 3000c12/3866c16/4000c17.
3200c14 XMP 71.8FPS to 3866c16 77.3FPS
6700K 4.2GHz 71.8FPS to 4.7GHz 76.4FPS
CPU/RAM OC combined: 81.7FPS
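In percentage terms (my arithmetic from the three lines above, using the 3200c14 XMP run as the baseline):

```python
# Percent uplift over the 3200c14 XMP baseline, from the numbers above.
baseline = 71.8
for label, fps in [("RAM OC to 3866c16", 77.3),
                   ("CPU OC to 4.7GHz", 76.4),
                   ("both combined", 81.7)]:
    print(f"{label}: +{(fps / baseline - 1) * 100:.1f}%")
```

So roughly +7.7% from the RAM OC, +6.4% from the CPU OC, and +13.8% combined in this CPU-limited Arma III test.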

Minimum FPS/99th percentile is an important metric, but it won't always be down to the CPU/RAM; in fact, this can vary from one part of the game to the next.
In some games the minimum FPS can be completely random. I found this to be the case in ARMA, while Fallout minimum FPS were extremely consistent.
Depending on the part of the game you choose to test at high res, the minimum FPS may be CPU-limited or GPU-limited.
 
Thanks for the great effort put forth. These Zen 2 CPUs don't seem nearly as reliant on high-speed RAM as Zen/Zen+ CPUs, so it's great to know we don't have to overspend on RAM for the system to perform close to its highest capacity. Would love to see some of these tests run with other benchmarks besides just games as well, but I understand the amount of effort this takes and appreciate you putting this together for everyone!
 
It is not so much about underclocking/overclocking as about showing whether money is better spent on RAM, CPU, or GPU.
For example, does it matter if somebody spends $100 extra on a 3600X over a 3600, or would they be better off putting that $100 into buying 3600c16 over 3200c16?
Or does only the GPU matter at higher resolution, so they could save $200 by getting a 2600, another $100 with cheap RAM, and spend that $300 extra on the GPU?
The cache and memory interface on Ryzen could make the results come out very differently from what one would experience with an Intel 1151 system.

This is why I would like to see a user like yourself do a test with different CPU/GPU speeds vs. the RAM speed changes.
The ideal would of course be to use different hardware, but these being user reviews, we have to work with what we have, which means changing clock speeds.

Some test at lower resolutions to reduce the impact of the GPU on the results, to show as much as possible the potential increase in CPU performance from higher RAM speeds.
If you wish to test real-world settings, I believe it is important to chart the FPS and GPU usage together so people can see where the different speeds meet and just how much of the test is CPU- or GPU-limited.

Although I have found that while GPU usage was a useful tool with my GTX 1070, it is useless with the Radeon VII, as it maintains 99% usage, then downclocks the GPU and seems to buffer a bit.

I do not disagree with you; I just shared my little part. Anyone can interpret it as they want to. If I can save someone a couple of bucks by letting them know their new shiny Ryzen and new 2070 Super/5700 XT does not need a new X570 motherboard or a RAM upgrade, and that what they have will supply the same gaming experience, then yep, I did my part in sharing some information.
I have Intel gear and other GPUs, but I do not care to test them or share anything about them, as it has all been done before. I can say, though, that how I tested is not a common test.


Thanks for the great effort put forth. These Zen 2 CPUs don't seem nearly as reliant on high-speed RAM as Zen/Zen+ CPUs, so it's great to know we don't have to overspend on RAM for the system to perform close to its highest capacity. Would love to see some of these tests run with other benchmarks besides just games as well, but I understand the amount of effort this takes and appreciate you putting this together for everyone!
TechPowerUp did memory scaling in applications, just not with tight timings like the ones I posted for gaming, but you may find it interesting:
https://www.techpowerup.com/review/amd-zen-2-memory-performance-scaling-benchmark/2.html
 