3200 or 3733 for upcoming Zen 2?

TechPowerUp shows a 5.7% improvement in Far Cry 5 from 3000C16 to 3200C14; that is not bad.

At 1080p, Far Cry 5's improvement from 3000 CL16 to 3200 CL14 is 3.7%, and 3600 CL17 was virtually the same performance as 3200 CL14. And considering 3200 is actually the standard spec, 3000 is technically below the maximum standard spec anyway.

And that's using the fastest graphics card on the market.

Zen 2 seems a whole lot less memory sensitive.
 
is also using Precision Boost Overdrive
Yes, but looking at other data suggests that PBO makes very little difference, far less than their results show, so most of the change must be from RAM.
Still, we do need better tests than what is currently available.

At 1080p, Far Cry 5's improvement from 3000 CL16 to 3200 CL14 is 3.7%
Looks like there is a ~2% margin of error in these tests, as the FPS at 3000C16 increased at 1080p over 720p.

and 3600 CL17 was virtually the same performance as 3200 CL14.
Honestly, I am surprised 3600C17 is not slower than 3200C14 with such slack timings.
 
Honestly, I am surprised 3600C17 is not slower than 3200C14 with such slack timings.

Those aren't slack timings; CL16 3600MHz is expensive RAM. The industry standard for total latency is 13.5-14 nanoseconds. DDR4-3600 at CL17 is 9.4 nanoseconds of total latency; DDR4-3200 at CL14 is 8.75 nanoseconds.

So DDR4-3600 CL17 is not even a whole nanosecond slower in total latency, and it's running 400MT/s faster. So yeah, it should give the same or better overall performance.
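
For anyone who wants to check those numbers, the math is just CAS cycles divided by the actual clock, which is half the data rate. A minimal sketch in Python (the DDR4-2133 CL15 line is my own JEDEC-spec reference point for that 13.5-14ns figure):

Code:
# First-word (CAS) latency in nanoseconds.
# data_rate is in MT/s; the actual clock is data_rate / 2 MHz,
# so one clock cycle takes 2000 / data_rate ns.
def cas_latency_ns(cl, data_rate):
    return cl * 2000 / data_rate

print(cas_latency_ns(17, 3600))  # DDR4-3600 CL17 -> ~9.44 ns
print(cas_latency_ns(14, 3200))  # DDR4-3200 CL14 -> 8.75 ns
print(cas_latency_ns(15, 2133))  # JEDEC DDR4-2133 CL15 -> ~14.07 ns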
 
dasa

3600 at CAS 16: 16 ÷ 3600 × 2000 = 8.89ns

3200 at CAS 16: 16 ÷ 3200 × 2000 = 10ns

3200 at CAS 14: 14 ÷ 3200 × 2000 = 8.75ns

3000 at CAS 14: 14 ÷ 3000 × 2000 = 9.33ns


So for pure gaming and cost savings, 3200 at CAS 14 is the best latency.
 
dasa


So for pure gaming and cost savings, 3200 at CAS 14 is the best latency.
Of the tests I have seen, speed is overall better than low timings, and at the same total latency, speed wins even more often. People put a lot of emphasis on low timings, but it's not as important as it seems. Of course, low timings are nice if you can get them.
https://www.tomshardware.com/reviews/best-ram-speed,5951-5.html

However, Zen 2 just isn't that sensitive. Even Zen+ doesn't scale a lot with RAM:
https://www.tomshardware.com/reviews/best-ram-speed-x470-pinnacle-ridge,6064-5.html
 
Looks like G.Skill is intending to release the 3600 14-15-15 1.4V kit they showed off a while back.
https://www.guru3d.com/news-story/g...mory-series-for-ryzen-3000-x570-platform.html


Maybe things have changed, with newer CPUs and more cores becoming more starved for bandwidth. I know my 6700K doesn't really care about increased bandwidth in games, only the final latency.
Plenty of tests out there show Skylake loves more megahertz on your RAM. Digital Foundry in particular did a bunch of good test videos and also published some text articles on Eurogamer.
Here's an interesting test on Skylake with dual 980 Tis:
https://www.techspot.com/article/1171-ddr4-4000-mhz-performance/
 
Plenty of tests out there show Skylake loves more megahertz on your RAM
But they don't show the change in latency. Is that gain just coming from reduced final latency at higher MHz? From my testing, it is mostly from the final latency.
 
Either way, just make sure you get a decent board.

With the same 2700X CPU tested (to rule out the CPU IMC):

My X470 Ultra Gaming and my B450 Strix E won't do higher than 2933, even with 2x8GB DIMMs.

The Crosshair 7, however, happily does 3200 with 4x16GB.
 
Of the tests I have seen, speed is overall better than low timings, and at the same total latency, speed wins even more often. People put a lot of emphasis on low timings, but it's not as important as it seems. Of course, low timings are nice if you can get them.
https://www.tomshardware.com/reviews/best-ram-speed,5951-5.html

However, Zen 2 just isn't that sensitive. Even Zen+ doesn't scale a lot with RAM:
https://www.tomshardware.com/reviews/best-ram-speed-x470-pinnacle-ridge,6064-5.html

That article just shows that bandwidth alone does not have much impact on Zen+ above XMP 3200 speeds.

However, with a low-latency setup you can get 20% more in the same game on Zen+ compared to what they were getting (you can check the TPU article I linked to earlier in the thread).

However, it seems Zen 2 is different, and gains like we had with Zen+ are not on the table anymore.
 
That article just shows that bandwidth alone does not have much impact on Zen+ above XMP 3200 speeds.

However, with a low-latency setup you can get 20% more in the same game on Zen+ compared to what they were getting (you can check the TPU article I linked to earlier in the thread).

However, it seems Zen 2 is different, and gains like we had with Zen+ are not on the table anymore.
Man, I wish they would have split up that data better. I spent a long time looking at it, and it's not consistent. For each game, a different speed and rank of RAM wins max/avg/min framerate. It's tough to draw really specific conclusions, because the results aren't well presented. That is especially annoying, because this was seemingly the most timings-sensitive set of data I have ever seen. Yet I'm not sure what to make of it, because it doesn't seem consistent.

However, my broad takeaways are these three things.

1. Optimize sub-timings before you worry about the main timings. Unfortunately, sub-timings are not understood by most users, and there isn't a lot of easily accessible info about them. Indeed, it seems that XMP's sub-timings are not good for Ryzen.

2. Keep your RAM speed at a good divider for the Infinity Fabric (see the sketch after this list).

3. Don't use dual-rank RAM unless you need more than 16GB. However, flip a coin on multi-rank vs. single-rank.
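
To illustrate point 2, here's a minimal sketch of the divider logic as I understand it: on Zen 2 the fabric clock (FCLK) runs 1:1 with the memory clock (half the data rate) up to a chip-dependent cap, past which the controller drops to a 2:1 divider and adds latency. The 1800MHz cap below is an assumption; real chips vary, commonly somewhere around 1800-1900MHz.

Code:
# Hypothetical Zen 2 fabric-divider check (simplified).
FCLK_CAP_MHZ = 1800  # assumed max stable FCLK; varies per chip

def fabric_ratio(data_rate):
    mclk = data_rate / 2  # memory clock in MHz (half the MT/s data rate)
    return "1:1" if mclk <= FCLK_CAP_MHZ else "2:1 (latency penalty)"

for rate in (3200, 3600, 3733, 4000):
    print(rate, fabric_ratio(rate))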

----------

Indeed, lowering the main timings is always good. But I think the hyper-focus on the main timings is a bit overblown, at least in terms of spending money.

I also have to wonder if gaming data at 720p should even matter.
 
I also have to wonder if gaming data at 720p should even matter.
It is a lot more relevant than 1080p low-to-medium detail tests, as resolution at least has no impact on CPU performance, unlike detail settings.
In my mind it is a more consistent look at what the minimum FPS could be if you hit a section of gameplay where the CPU is the bottleneck instead of the GPU.
 
In my mind it is a more consistent look at what the minimum FPS could be if you hit a section of gameplay where the CPU is the bottleneck instead of the GPU.

We should be looking at minimum FPS in terms of maximum frametimes, but yes, this is where a difference will actually be felt, if it can be.
 
Man, I wish they would have split up that data better. ... I also have to wonder if gaming data at 720p should even matter.


Having not dabbled with DDR4 or Ryzen before, I find this kind of frustrating.

My old DDR3 X79 system will seemingly take whatever RAM I throw at it, at any timings, in any quantity, regardless of it being dual or single rank, and it just works, and works well.

It's frustrating that these new designs are so sensitive to RAM. Zen 2 does seem like a huge improvement over Zen and Zen+, though.
 
Having not dabbled with DDR4 or Ryzen before, I find this kind of frustrating.

My old DDR3 X79 system will seemingly take whatever RAM I throw at it, at any timings, in any quantity, regardless of it being dual or single rank, and it just works, and works well.

It's frustrating that these new designs are so sensitive to RAM. Zen 2 does seem like a huge improvement over Zen and Zen+, though.
Well, who knows, your X79 may benefit from customized sub-timings as well. It's not that RAM doesn't work on Ryzen; it's apparently that some brands have stock/XMP timings which are not optimized for Ryzen. And simply optimizing those, even before tweaking the main timings, has a rather large benefit.
 
So, has anyone tested yet if they are able to get 3200+ speeds with all four slots populated with RAM?
 
Ya, I'm in the same boat here. Can someone please post a link for 3733 16GB RAM at CL16 or LOWER? I'm so confused by all this RAM stuff. Thanks in advance :D
 
We should be looking at minimum FPS in terms of maximum frametimes, but yes, this is where a difference will actually be felt, if it can be.
Absolutely, and this works great for some games, but others are horrendously random in their minimum FPS.
 
Ya, I'm in the same boat here. Can someone please post a link for 3733 16GB RAM at CL16 or LOWER? I'm so confused by all this RAM stuff. Thanks in advance :D

Based on TPU's testing, though, it looks like on Zen 2, 3200 CL14 is actually performing better in most games. In creative/professional/productivity workloads it bounces around all over the place, sometimes coming out ahead, sometimes falling behind.

Based on this, I kind of changed my mind and am planning on going with 3200 CL14 instead.
 
Looks like G.Skill is intending to release the 3600 14-15-15 1.4V kit they showed off a while back.
https://www.guru3d.com/news-story/g...mory-series-for-ryzen-3000-x570-platform.html

Looks nice. It says nothing about pricing and availability dates, though.

And always with that damned RGB. Why do they always have to put lights on everything? It's a computer, not a Christmas tree!

Oh, and the kit with the good timings only comes in 8GB sticks. That means it's out for me.
 
Of the tests I have seen, speed is overall better than low timings, and at the same total latency, speed wins even more often. People put a lot of emphasis on low timings, but it's not as important as it seems. Of course, low timings are nice if you can get them.
https://www.tomshardware.com/reviews/best-ram-speed,5951-5.html

However, Zen 2 just isn't that sensitive. Even Zen+ doesn't scale a lot with RAM:
https://www.tomshardware.com/reviews/best-ram-speed-x470-pinnacle-ridge,6064-5.html

The impact of RAM speeds and timings appears to be very different from architecture to architecture.

You can't use Skylake-based testing to determine what to buy for Ryzen. You have to read a test performed on the exact architecture you are going to use it for. The TPU test does just that, and it suggests 3200 CL14 is best for most games, and sometimes best, sometimes mid-pack for productivity/creative software.
 
3200 CL14 only won 2 of the tests at 1080p, and only 1 of those 2 was more than a 2 FPS difference.

Which has nothing to do with the RAM itself, and simply to do with the fact that the higher you set the resolution, the more work the GPU does, shifting the load away from the CPU. At 1080p we wouldn't typically consider a system to be GPU-limited today, but it doesn't have to be visibly limited to start having an impact.

You can tell this by how the range of the results shrinks as you increase the resolution to 1080p.

Up the resolution enough and the only difference you'll see between the different RAM will be due to random measurement error.

The exception to these results seems to be Battlefield, which for whatever reason appears to like high-clocked RAM more than other titles do on Zen 2.
 
Having not dabbled with DDR4 or Ryzen before, I find this kind of frustrating.

My old DDR3 X79 system will seemingly take whatever RAM I throw at it, at any timings, in any quantity, regardless of it being dual or single rank, and it just works, and works well.

It's frustrating that these new designs are so sensitive to RAM. Zen 2 does seem like a huge improvement over Zen and Zen+, though.

Yeah, it's funny -- my X79 system has four totally different dual-channel kits: two different G.Skill kits, one set of the old Samsung Wonder RAM, and a fourth kit I can't remember. 8x4GB sticks, and it runs fine!
 
Which has nothing to do with the RAM itself, and simply to do with the fact that the higher you set the resolution, the more work the GPU does, shifting the load away from the CPU. At 1080p we wouldn't typically consider a system to be GPU-limited today, but it doesn't have to be visibly limited to start having an impact.

You can tell this by how the range of the results shrinks as you increase the resolution to 1080p.

Up the resolution enough and the only difference you'll see between the different RAM will be due to random measurement error.

The exception to these results seems to be Battlefield, which for whatever reason appears to like high-clocked RAM more than other titles do on Zen 2.
My point is that, while 720p is a fun test, it's not practical and tells us almost zero about real gaming loads.

Almost no one with the hardware used in this test is gaming at 720p.

And now Zen 2 seems to be even more agnostic about the RAM you feed it. I wonder if TPU will revisit it with a more in-depth tweaking article. But so far, it seems pretty much a wash.

Which brings me back to what I've been questioning in multiple threads here: is spending a bunch of extra money to get a couple of points lower on latency, and/or to get sky-high RAM MHz, as important as it sometimes seems? I don't think it is. Even here at [H], I don't think it's important to spend double to get special RAM sticks for 5% or less in average gains. There's lots of solid RAM available at very affordable prices. I would much rather buy 32GB of "slack" 3600MHz, for example, than 16GB of whatever the good stuff is. Or go budget with some E-die or something, tweak for what I can get, and not worry if I miss the tightest marks.

Also, if a 2080 Ti is GPU-limited at 1080p, we better hang it all up.
 
My point is that, while 720p is a fun test, it's not practical and tells us almost zero about real gaming loads. ... Also, if a 2080 Ti is GPU-limited at 1080p, we better hang it all up.

Yeah, but you are missing the point.

Testing at 720p tells us more about how it will behave when it is CPU-limited, which is the only time you'll ever care about CPU performance on a gaming machine, and is thus the most relevant test.

As you raise the resolution, you are mostly testing the GPU, not the CPU or the RAM.

If it were up to me, I'd test at 1024x768, or even lower if possible.
 
Yeah, but you are missing the point.

Testing at 720p tells us more about how it will behave when it is CPU-limited, which is the only time you'll ever care about CPU performance on a gaming machine, and is thus the most relevant test.

As you raise the resolution, you are mostly testing the GPU, not the CPU or the RAM.

If it were up to me, I'd test at 1024x768, or even lower if possible.
I think it's much better to test the real use cases and actually see when the extra performance comes into play, if ever, rather than guessing about it with data which matters to almost nobody with that hardware.

You can show all the performance difference you want at 1024x768. But if there's never a difference in the real use case, then those 1024x768 tests virtually do not matter, aside from fun side-project data.
 
I think it's much better to test the real use cases and actually see when the extra performance comes into play, if ever, rather than guessing about it with data which matters to almost nobody with that hardware.

You can show all the performance difference you want at 1024x768. But if there's never a difference in the real use case, then those 1024x768 tests virtually do not matter, aside from fun side-project data.

You should be looking at both. Testing at a lower resolution exaggerates the differences so they are easier to see; there is definitely a use for that. No kidding it's not a real-world scenario, it's not supposed to be. Just like synthetic benchmarks, they are data points.
 
I think it's much better to test the real use cases and actually see when the extra performance comes into play, if ever, rather than guessing about it with data which matters to almost nobody with that hardware.

You can show all the performance difference you want at 1024x768. But if there's never a difference in the real use case, then those 1024x768 tests virtually do not matter, aside from fun side-project data.


I disagree. Testing at higher resolutions tells you more about the GPU than it does the CPU. It can complement the test results, but you absolutely need subsystem isolation to understand what is going on.

Part of the problem is that not everyone uses the same GPU, so as soon as the GPU becomes part of the equation, the test results are irrelevant for everyone who doesn't have the exact GPU under test. In the case of that TPU test, that is apparently an EVGA GeForce RTX 2080 Ti FTW3 Ultra.
 
I disagree. Testing at higher resolutions tells you more about the GPU than it does the CPU. It can complement the test results, but you absolutely need subsystem isolation to understand what is going on.

Part of the problem is that not everyone uses the same GPU, so as soon as the GPU becomes part of the equation, the test results are irrelevant for everyone who doesn't have the exact GPU under test. In the case of that TPU test, that is apparently an EVGA GeForce RTX 2080 Ti FTW3 Ultra.


Indeed, most people don't have a 2080 Ti. But if a 2080 Ti is GPU-limited at 1080p, then at which GPU do we cross the line where 720p, or even 1024x768, becomes GPU-limited? The goalposts keep moving. It's like saying a GPU can render 5 trillion flat-shaded polygons. OK, but what happens when we actually have them textured and shaded, etc.?

As you said, low-resolution tests can be an interesting data point to see what's going on, or, as I said, a fun thing. But they're not actually useful to me for actual gaming. Without the GPU limit, there might be some large performance differences. I might see that and think, "Oh, I guess I should spend the extra money on that RAM." But then I actually go to play at 1080p or 1440p or whatever I'm realistically gonna play at, and now that difference has shrunk to 3% or less. Those low-resolution tests are interesting, but they don't tell me how to spend my money, and I have a hard time pointing at them and saying this or that is "better", because it might not matter when I'm actually gaming.
 
But then I actually go to play at 1080p or 1440p or whatever I'm realistically gonna play at, and now that difference has shrunk to 3% or less.
It may have shrunk to 3% or less on average, but if there is still a 3% difference on average when, say, 95% of the test shows a 0% difference due to a GPU bottleneck, then for the short duration of CPU-limited gameplay the difference will be up to the amount seen in a 720p test.
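
A quick back-of-the-envelope version of that argument (hypothetical numbers, and treating the overall average as a simple time-weighted blend of the GPU-bound and CPU-bound phases):

Code:
# Hypothetical: 95% of the run is GPU-bound with a 0% RAM difference,
# yet the overall average still shows 3%. The CPU-bound 5% of the run
# must then carry the entire gap.
gpu_bound_share = 0.95  # fraction of the run that is GPU-bound
avg_diff = 0.03         # 3% average difference over the whole run

cpu_bound_diff = avg_diff / (1 - gpu_bound_share)
print(f"{cpu_bound_diff:.0%}")  # -> 60% during the CPU-limited moments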
 
Looks like G.Skill is intending to release...

Looks nice. It says nothing about pricing and availability dates, though.

And always with that damned RGB. Why do they always have to put lights on everything? It's a computer, not a Christmas tree!

Oh, and the kit with the good timings only comes in 8GB sticks. That means it's out for me.

And even though AMD made a point of saying 32GB DIMMs would now be supported, and G.Skill is making this new Trident Z Neo RAM "Optimized for Ryzen 3000 & X570 Platform", I see no 2x32GB kits from G.Skill...
 
Absolutely, and this works great for some games, but others are horrendously random in their minimum FPS.

To be clear, if minimum FPS is horrendously random, this will be revealed in detail by looking at maximum frametimes. Frametime analysis is where less specific metrics like "minimum FPS" come from.
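
A rough sketch of the relationship (made-up frametimes; "minimum FPS" is just the worst frametime inverted, and "1% lows" average the worst slice of frames):

Code:
# Hypothetical frametimes in milliseconds for a short capture.
frametimes_ms = [16.7, 16.9, 16.5, 33.4, 16.8, 17.0, 16.6, 45.1, 16.7, 16.8]

worst = max(frametimes_ms)
print(f"min FPS: {1000 / worst:.1f}")  # worst single frame -> ~22.2 FPS

n = max(1, len(frametimes_ms) // 100)  # the worst 1% of frames
low_1pct = sorted(frametimes_ms, reverse=True)[:n]
print(f"1% low FPS: {1000 / (sum(low_1pct) / n):.1f}")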
 
I guess with Ryzen 2, Samsung B-die doesn't matter. Like Intel, I guess anything now works great.

Gonna order some Corsair Dominator Platinum 3600 C18 today, I guess. Probably Hynix C-die, but who cares, it's Ryzen 2, woohoo!
 
Probably 3200, man... you might be able to get a kit that will boot 3433 or whatever, but that's playing with timings, man... this kit, no... it's definitely 3200, and drop to CL15-17, etc...
 
lab501 is working on a RAM speed test where they tweak sub-timings, after finding 3733C14 was slower than 3733C16 due to poorly configured sub-timings, which could explain why reviews are seeing such a small improvement from higher RAM speeds.
They managed to get latency down from 72ns at 3200C14 to 63ns at 3800C15 1:1.

 
I've been agonizing over RAM for my incoming 3900X and Asus Crosshair VIII. I found this thread and now I have more questions than answers. I want 32GB, and it seems the QVL tested a lot of stuff, but most of it is unobtainium or ridiculously expensive. There are a lot of good points in this thread. This is my first real upgrade since 2011 besides graphics cards, so I hold on to stuff for a while. I'm thinking I'll be best off with a 2x16GB kit instead of 4x8GB; I know that the QVL is not all-encompassing.

I'm trying to decide between 3200 CAS 14 B-die and 3600 CAS 16. I know a lot of people seem to have no problem getting 3200 B-die to 3600 CAS 16. Man, I really don't know where to go here; there are compelling arguments both ways. I'll game some and hopefully do some video editing, but gaming will be the main use. Also, I want to get some RGB if possible. The upcoming G.Skill Neo looks cool, but I'll bet it's going to be priced through the roof.

Those weirdo 32GB Samsung DIMMs look pretty interesting, but ugly ;-). I don't know if I will have the time to do massive amounts of tweaking.
 