3200 or 3733 for upcoming Zen 2?

Discussion in 'Memory' started by tangoseal, Jun 12, 2019.

  1. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Honestly, it's difficult to use the data from that review, because the 3733 RAM configuration is also using Precision Boost Overdrive, and they are comparing that to stock clocks with 3200 RAM. So, RAM isn't the only thing changing performance.

    In Techpowerup's Zen 2 memory scaling article, RAM is the only variable they changed.
     
    Last edited: Jul 8, 2019
    IdiotInCharge likes this.
  2. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    At 1080p, Far Cry 5's improvement from 3000 CL16 to 3200 CL14 is 3.7%, and 3600 CL17 was virtually the same performance as 3200 CL14. And considering 3200 is actually the standard spec, 3000 is technically under the max standard spec anyway.

    And that's using the fastest graphics card on the market.

    Zen 2 seems a whole lot less memory sensitive.
     
    Last edited: Jul 8, 2019
    IdiotInCharge likes this.
  3. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    Yes, but looking at other data suggests that PBO makes very little difference, far less than their results, so most of the change must be from the RAM.
    Still, we need some better tests than what is currently available.

    Looks like there is a ~2% margin of error in these tests, as the FPS at 3000 CL16 increased at 1080p over 720p.

    Honestly, I am surprised 3600 CL17 is not slower than 3200 CL14, with such slack timings.
     
    Last edited: Jul 8, 2019
  4. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Those aren't slack timings; 3600MHz CL16 is expensive RAM. The industry standard for total latency is 13.5-14 nanoseconds. DDR4-3600 at CL17 is 9.4 nanoseconds of total latency. DDR4-3200 at CL14 is 8.75 nanoseconds.

    So, DDR4-3600 CL17 is not even a whole nanosecond slower in total latency, and it's running 400MHz faster on clock rate. So yeah, it should be the same or better overall performance.
     
    Last edited: Jul 8, 2019
    IdiotInCharge likes this.
  5. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    The below tests try to answer the following questions:
    • How high memory frequency can be expected with Ryzen 3000 on X370/X470/X570?
    • How high fabric clock can be expected?
    • Is there a benefit to high memory frequency and high fabric clock?
    • How big is the latency penalty when running MCLK != 2*FCLK?
     
  6. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,305
    Joined:
    Dec 18, 2010
    dasa

    3600 at CAS 16 is:

    (16 ÷ 3600) × 2000 = 8.89ns

    3200 CAS 16:

    (16 ÷ 3200) × 2000 = 10ns

    3200 CAS 14:

    (14 ÷ 3200) × 2000 = 8.75ns

    3000 CAS 14:

    (14 ÷ 3000) × 2000 = 9.33ns


    So for pure gaming and cost savings, 3200 at CAS 14 is the best latency.
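    The arithmetic above is just first-word latency: CAS latency divided by the actual clock, which is half the MT/s rating. A quick script (the function name is my own, for illustration) reproduces those numbers:

```python
# First-word latency in nanoseconds: CL cycles divided by the real clock
# (half the DDR transfer rate), i.e. CL * 2000 / (MT/s).
def first_word_latency_ns(cl: int, mts: int) -> float:
    """cl: CAS latency in cycles; mts: DDR4 rating in MT/s (e.g. 3200)."""
    return cl * 2000 / mts

for cl, mts in [(16, 3600), (16, 3200), (14, 3200), (14, 3000)]:
    print(f"DDR4-{mts} CL{cl}: {first_word_latency_ns(cl, mts):.2f} ns")
```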
     
    IdiotInCharge likes this.
  7. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Of the tests I have seen, speed is overall better than low timings. And at the same total latency, speed wins even more often. People put a lot of emphasis on low timings, but it's not as important as it seems. Of course, low timings are nice if you can get them.
    https://www.tomshardware.com/reviews/best-ram-speed,5951-5.html

    However, Zen 2 just isn't that sensitive. Even Zen+ doesn't scale a lot with RAM.
    https://www.tomshardware.com/reviews/best-ram-speed-x470-pinnacle-ridge,6064-5.html
     
    IdiotInCharge likes this.
  8. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    Looks like G.Skill is intending to release the 3600 14-15-15 1.4v kit they showed off a while back.
    https://www.guru3d.com/news-story/g...mory-series-for-ryzen-3000-x570-platform.html

    Maybe things have changed, with newer CPUs and more cores becoming more starved for bandwidth. I know my 6700K doesn't really care about increased bandwidth in games, only the final latency.
     
  9. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Plenty of tests out there show Skylake loves more megahertz on your RAM. Digital Foundry in particular did a bunch of good test videos and also published some text articles on Eurogamer.
    Here's an interesting test on Skylake, with dual 980ti
    https://www.techspot.com/article/1171-ddr4-4000-mhz-performance/
     
  10. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    But they don't show the change in latency. Is that gain just coming from reduced final latency at higher MHz? From my testing, it is mostly from the final latency.
     
    Last edited: Jul 9, 2019
  11. mda

    mda [H]ard|Gawd

    Messages:
    1,557
    Joined:
    Mar 23, 2011
    Either which way, just make sure you get a decent board.

    With the same 2700X CPU tested (to rule out the CPU IMC):

    My X470 Ultra Gaming and my B450 Strix E won't do higher than 2933, even on 2x8GB DIMMs.

    The Crosshair 7 however, happily does 3200 with 4x16GB.
     
    Last edited: Jul 9, 2019
  12. OnceSetThisCannotChange

    OnceSetThisCannotChange Limp Gawd

    Messages:
    135
    Joined:
    Sep 15, 2017
    That article just shows that bandwidth alone does not have much impact on Zen+ above XMP 3200 speeds.

    However, with a low-latency setup you can get 20% more in the same game on Zen+, compared to what they were getting (you can check the TPU article I linked earlier in the thread).

    However, it seems Zen 2 is different, and gains like we had with Zen+ are not on the table anymore.
     
  13. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Man, I wish they would have split up that data better. I spent a long time looking at it, and it's not consistent. For each game, a different speed and rank of RAM wins Max/Avg/Min framerate. It's tough to draw really specific conclusions, because the results aren't well presented. Especially annoying, because this was seemingly the most timings-sensitive set of data I have ever seen. Yet I'm not sure what to make of it, because it doesn't seem consistent.

    However, my broad takeaways are these 3 things:

    1. Optimize sub timings before you worry about the main timings. Unfortunately, sub timings are not understood by most users and there isn't a lot of easily accessible info about them.
    Indeed, it seems that XMP's sub timings are not good for Ryzen.

    2. Keep your RAM speed at a good divider for the Infinity Fabric.

    3. Don't use Dual Rank RAM unless you need more than 16GB. However, flip a coin on Multi-rank vs. Single rank.
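    Point 2, the Infinity Fabric divider, can be sketched roughly. This is a simplified illustration, not a spec: the FCLK ceiling (commonly around 1800MHz on Zen 2) and the exact decoupled behavior vary by CPU sample and board.

```python
# On Zen 2, the memory clock (MT/s / 2) ideally runs 1:1 with the fabric
# and memory-controller clocks. Past the FCLK ceiling, the controller
# clock drops to MEMCLK/2 (2:1 mode), which adds latency.
def uclk_mode(mts: int, fclk_ceiling_mhz: int = 1800):
    """mts: DDR4 rating in MT/s. Returns (mode description, UCLK in MHz)."""
    memclk = mts // 2
    if memclk <= fclk_ceiling_mhz:
        return ("coupled (UCLK=MEMCLK=FCLK)", memclk)
    return ("decoupled (UCLK=MEMCLK/2)", memclk // 2)

for mts in (3200, 3600, 3733, 4000):
    mode, uclk = uclk_mode(mts)
    print(f"DDR4-{mts}: MEMCLK {mts // 2} MHz -> {mode}, UCLK {uclk} MHz")
```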

    ----------

    Indeed, lowering the main timings is always good. But I think the hyper-focus on the main timings is a bit overblown, in terms of spending money I mean.

    I also have to wonder if gaming data at 720p should even matter.
     
    Last edited: Jul 9, 2019
  14. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    It is a lot more relevant than 1080p low-to-medium detail tests, as at least resolution has no impact on CPU performance, unlike detail settings.
    In my mind it is a more consistent look into what the minimum FPS could be if you hit a section of gameplay where the CPU is the bottleneck instead of the GPU.
     
    Last edited: Jul 9, 2019
    IdiotInCharge likes this.
  15. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Should be looking at minimum FPS in terms of maximum frametimes, but yes, this is where a difference will actually be felt, if it can be.
     
  16. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000

    Having not dabbled with DDR4 or Ryzen before, I find this kind of frustrating.

    My old DDR3 X79 system will seemingly take whatever RAM I throw at it, at any timings, in any quantity, regardless of it being dual or single rank, and just work, and work well.

    It's frustrating that these new designs are so sensitive to RAM. Zen 2 does seem like a huge improvement over Zen and Zen+, though.
     
  17. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    Well, who knows, your X79 may benefit from customized sub timings as well. It's not that RAM doesn't work on Ryzen. It's apparently that some brands have stock/XMP timings which are not optimized for Ryzen, and simply optimizing those, even before tweaking the main timings, has a rather large benefit.
     
  18. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    So, has anyone tested yet if they are able to get 3200+ speeds with all four slots populated with RAM?
     
  19. fightingfi

    fightingfi Look at Me! I need the attention.

    Messages:
    2,512
    Joined:
    Oct 9, 2008
    Ya, I'm in the same boat here. Can someone please post a link for 3733 16GB RAM at CL16 or LOWER? I'm so confused by all this RAM stuff. Thanks in advance :D
     
  20. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    Absolutely, and this works great for some games, but others are horrendously random in their minimum FPS.
     
  21. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    Based on TPU's testing, it looks like on Zen 2, 3200 CL14 is actually performing better in most games, though. In creative/professional/productivity workloads it looks to bounce around all over the place, sometimes coming out ahead, sometimes falling behind.

    Based on this I kind of changed my mind and was planning on going with 3200 CL14 instead.
     
    Last edited: Jul 9, 2019
    IdiotInCharge likes this.
  22. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    Looks nice. Says nothing about pricing and availability dates though.

    And always with that damned RGB. Why do they always have to put lights on everything? It's a computer, not a Christmas tree!

    Oh, and the kit with good timings is only for 8GB sticks, though. That means it's out for me.
     
    Last edited: Jul 9, 2019
  23. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    The impact of RAM speeds and timings appears to be very different from architecture to architecture.

    You can't use Skylake-based testing to determine what to buy for Ryzen. You have to read a test performed on the exact architecture you are going to use it for. The TPU test does just that, and it suggests 3200 CL14 is best for most games, and sometimes best, sometimes mid-pack for productivity/creative software.
     
    IdiotInCharge likes this.
  24. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    3200 CL14 only won 2 of the tests at 1080p. And only 1 of those 2, was more than 2 FPS difference for the win.
     
  25. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    Which has nothing to do with the RAM itself, and simply with the fact that the higher you set the resolution, the more work the GPU does, taking load off the CPU. At 1080p, we wouldn't typically consider a system to be GPU limited today, but it doesn't have to be visibly limited to start having an impact.

    You can tell this by how the range of the results shrinks as you increase the resolution to 1080p.

    Up the resolution enough and the only difference you'll see between the different RAM will be due to random measurement error.

    The exception to these results seems to be Battlefield, which for whatever reason appears to like high-clocked RAM more than other titles on Zen 2.
     
    IdiotInCharge likes this.
  26. extide

    extide 2[H]4U

    Messages:
    3,366
    Joined:
    Dec 19, 2008
    Yeah, it's funny -- My X79 system has four different dual channel kits that are totally different. Two different GSkill kits, one set of the old Samsung Wonder RAM, and then the fourth kit I can't remember. 8 x 4GB sticks, runs fine!
     
  27. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    My point is that, while 720p is a fun test, it's not practical and tells us almost nothing about real gaming loads.

    Almost no one with the hardware used in this test is gaming at 720p.

    And now Zen 2 seems to be even more agnostic about the RAM you feed it. I wonder if TPU will revisit it with a more in-depth tweaking article. But so far, it seems pretty much a wash.

    Which brings me back to what I've been questioning in multiple threads here: Is spending a bunch of extra money to get a couple of points lower on latency, and/or to get sky-high RAM MHz, as important as it sometimes seems? I don't think it is. Even here at [H], I don't think it's important to spend double on special RAM sticks for 5% or less in average gains. There's lots of solid RAM available at very affordable prices. I would much rather buy 32GB of "slack" 3600MHz, for example, than 16GB of whatever good stuff. Or go budget with some E-die or something and just tweak for what I can get, and not worry if I miss the tightest marks.

    Also, if a 2080 Ti is GPU limited at 1080p, we'd better hang it all up.
     
  28. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000
    Yeah, but you are missing the point.

    Testing at 720p tells us more about how it will behave when it is CPU limited, which is the only time you'll ever care about CPU performance on a gaming machine, and is thus the most relevant test.

    As you raise the resolution, you are mostly testing the GPU, not the CPU or the RAM.

    If it were up to me, I'd test at 1024x768, or even lower if possible.
     
    IdiotInCharge likes this.
  29. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005
    I think it's much better to test the real use cases and actually see the times when the extra performance comes into play, if ever, rather than guessing about it with data which matters to almost nobody with that hardware.

    You can show all the performance difference you want at 1024x768. But if there's never a difference in the real use case, then those 1024x768 tests virtually do not matter, aside from fun side-project data.
     
  30. extide

    extide 2[H]4U

    Messages:
    3,366
    Joined:
    Dec 19, 2008
    You should be looking at both. Testing at lower resolution exaggerates the differences so they are easier to see. There is definitely a use for that. No kidding it's not a real world scenario, it's not supposed to be. Just like synthetic benchmarks, they are data points.
     
    IdiotInCharge and dasa like this.
  31. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,023
    Joined:
    Oct 29, 2000

    I disagree. Testing at higher resolutions tells you more about the GPU than it does the CPU. It can complement the test results, but you absolutely need subsystem isolation to understand what is going on.

    Part of the problem is that everyone doesn't use the same GPU, so as soon as the GPU becomes part of the equation, the test results are irrelevant for everyone who doesn't have the exact GPU under test. In the case of that TPU test, that is apparently an EVGA GeForce RTX 2080 Ti FTW3 Ultra.
     
    IdiotInCharge and dasa like this.
  32. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    2,919
    Joined:
    Aug 15, 2005

    Indeed, most people don't have a 2080 Ti. But if a 2080 Ti is GPU limited at 1080p, then at which GPU tier does 720p, or 1024x768, become GPU limited? The goalposts keep moving. It's like saying a GPU can render 5 trillion flat-shaded polygons. OK, but what happens when we actually have them textured and shaded, etc.?

    As you said, low-resolution tests can be an interesting data point to see what's going on, or as I said, a fun thing. But they aren't actually useful to me for actual gaming. Without the GPU limit, there might be some large performance differences. I might see that and think "oh, guess I should spend the extra money on that RAM". But then I actually go to play at 1080p or 1440p or whatever I'm realistically gonna play at, and now that difference has shrunk to 3% or less. Those low-resolution tests are interesting, but they don't tell me how to spend my money, and I have a hard time pointing at them and saying this or that is "better", because it might not matter when I'm actually gaming.
     
    N4CR likes this.
  33. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    It may have shrunk to 3% or less on average, but if there is still a 3% difference on average when, say, 95% of the test shows 0% difference due to a GPU bottleneck, then for the short duration of CPU-limited gameplay the difference will be up to the amount seen in a 720p test.
     
    IdiotInCharge and Zarathustra[H] like this.
  34. Boil

    Boil [H]ard|Gawd

    Messages:
    1,375
    Joined:
    Sep 19, 2015
    And, even though AMD made a point of saying 32GB DIMMs would now be supported, and G.Skill is making this new Trident Z Neo RAM "Optimized for Ryzen 3000 & X570 Platform", I see no 2x32GB kits from G.Skill...
     
  35. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    To be clear, if there are horrendously random minimum FPS, this will be revealed in detail by looking at maximum frametimes. Frametime analysis is where less specific metrics like 'minimum FPS' come from.
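    As a toy illustration of how a "minimum FPS" style metric falls out of frametime data — this sketches one common definition of the 1% low; exact methodologies vary by reviewer:

```python
# The 1% low is the average FPS over the worst 1% of frames, computed
# from captured per-frame times rather than an instantaneous minimum.
def one_percent_low_fps(frametimes_ms):
    """frametimes_ms: list of per-frame render times in milliseconds."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)       # the worst 1% of frames
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# 99 smooth 10 ms frames (100 FPS) plus one 50 ms spike:
times = [10.0] * 99 + [50.0]
print(f"average FPS: {1000 * len(times) / sum(times):.1f}")
print(f"1% low: {one_percent_low_fps(times):.1f} FPS")
```

    The average barely moves, but the 1% low drops to 20 FPS — which is why frametime-derived metrics reveal the "horrendously random" minimums that averages hide.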
     
  36. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    When doing empirical testing using the scientific method, there are many goalposts.

    They aren't moving. They're all needed.
     
  37. ccityinstaller

    ccityinstaller 2[H]4U

    Messages:
    4,048
    Joined:
    Feb 23, 2007
    chameleoneel likes this.
  38. tangoseal

    tangoseal [H]ardness Supreme

    Messages:
    7,305
    Joined:
    Dec 18, 2010
    I guess with Ryzen 2, Samsung B-die doesn't matter. Like Intel, I guess anything now works great.

    Gonna order some Corsair Dominator Platinum 3600 C18 today, I guess. Probably Hynix C-die; who cares, it's Ryzen 2, woohoo!
     
    chameleoneel and IdiotInCharge like this.
  39. JRZoid

    JRZoid [H]Lite

    Messages:
    71
    Joined:
    Feb 1, 2019
    Probably 3200, man... you might be able to get a kit that will boot 3433 or whatever... that's playing with timings though, man... this kit, no... it's definitely 3200, and drop to CL15-17, etc...
     
  40. dasa

    dasa Limp Gawd

    Messages:
    315
    Joined:
    Feb 19, 2012
    lab501 is working on a RAM speed test where they tweak sub timings, after finding 3733 CL14 was slower than 3733 CL16 due to poorly configured sub timings, which could explain why reviews are seeing such a small improvement from higher RAM speeds.
    They managed to get latency down from 72ns at 3200 CL14 to 63ns at 3800 CL15 1:1.

     
    IdiotInCharge likes this.