New Ryzen 2 (Pinnacle Ridge) gets only 200 MHz boost according to a leak

No, the numbers were just out of thin air to illustrate my point. Now, this highly depends on the game, I think...

I know, but that's as bad as juanrga saying that Intel is 1000% better than AMD because of his made-up numbers :p. It is dependent on the game, sure. But during the reasonable lifespan of CPUs for gaming enthusiasts (2-3 years in a lot of cases), I doubt you'll see any Ryzen CPU be unplayable even if it is slightly slower than the Intel CPU.
 
Different resolutions stress different things. Low resolutions (or what you call a CPU test) basically just test geometry setup, which Intel is usually faster at. At higher resolutions, geometry is no longer the limiting factor on the CPU side; the load shifts to other aspects, which AMD is sometimes faster at, and that is why AMD sometimes pulls ahead at higher resolutions.

That's some nice speculation, feel like backing it up with data?
 
AnandTech did an interview with GlobalFoundries' CTO; it looks pretty in-depth to me, and they even discuss 14nm and the 12nm process some of you have been mentioning.
 
I doubt you'll see any Ryzen CPU be unplayable even if it is slightly slower than the Intel CPU.

Unplayable, no.

But slower/choppier? The future of high-end gaming is likely VR; here, PCs can push even further ahead, and VR is not only heavily resolution dependent, it's also highly sensitive to long frametimes. If we're going to make comparisons, that's what we should be using.
 
I know, but that's as bad as juanrga saying that Intel is 1000% better than AMD because of his made-up numbers :p. It is dependent on the game, sure. But during the reasonable lifespan of CPUs for gaming enthusiasts (2-3 years in a lot of cases), I doubt you'll see any Ryzen CPU be unplayable even if it is slightly slower than the Intel CPU.
But people said it is a pointless test; I don't think it is. You won't buy slow RAM for Ryzen either, so why test RAM speed for Ryzen? By that logic, it's pointless too!
 
That's some nice speculation, feel like backing it up with data?

Sure:


Draw call performance is ~33% slower on Ryzen. It doesn't matter at higher resolutions, but it limits framerates at low resolutions. By dropping the resolution to where the GPU isn't doing much work, you're testing CPU draw call performance, not how well the CPU does the other computation the GPU needs to feed it at higher resolutions. The question is: does it matter in any current real-world situations, and will it matter in the future? In multi-threaded DX12 engines that split draw call submission among the cores instead of a single thread, it won't; even in single-threaded, draw-call-limited DX11 games it only shows up at low resolutions, where draw call performance is 10-20x lower than multi-threaded DX12 performance. Game engines are trending toward DX12 and more threading, so it's, IMO, a red herring.
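For what it's worth, here's a minimal C++ sketch of the threading half of that argument. It is not real D3D code, and the per-draw cost is an artificial busy loop; it just simulates recording N draw commands on one thread versus splitting them across eight, which is roughly what DX12/Vulkan command lists allow and DX11 immediate-context submission does not.

```cpp
// Toy model only: the per-draw "cost" is an artificial busy loop, not a real API call.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for the CPU work of validating/encoding one draw call.
static void record_draw(std::atomic<long long>& sink) {
    long long acc = 0;
    for (int i = 0; i < 2000; ++i) acc += i * i;  // fixed, artificial cost
    sink += acc;                                  // keep the work from being optimized away
}

// Time how long it takes to "record" the given number of draws on N threads.
static double record_ms(int draws, int threads) {
    std::atomic<long long> sink{0};
    const auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&, per = draws / threads] {
            for (int i = 0; i < per; ++i) record_draw(sink);
        });
    for (auto& w : workers) w.join();
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    const int draws = 100000;  // hypothetical draw count
    std::cout << "1 thread : " << record_ms(draws, 1) << " ms\n";
    std::cout << "8 threads: " << record_ms(draws, 8) << " ms\n";
    // DX11-style single-threaded submission scales with draws on one core;
    // DX12-style parallel command lists scale with draws / cores. The real-world
    // 10-20x figure also includes lower per-call driver overhead, which this
    // toy model does not capture.
}
```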
 
That's some nice speculation, feel like backing it up with data?

Back it up as in reports from the future? Nah, he probably can't.

At base, he is just saying that better 720p (or lower) test results don't necessarily mean better 1080p (or higher) performance in tomorrow's games. We saw this when comparing a 2500K to FX chips years later.
 
Sure:

Draw call performance is ~33% slower on Ryzen. It doesn't matter at higher resolutions, but it limits framerates at low resolutions. By dropping the resolution to where the GPU isn't doing much work, you're testing CPU draw call performance, not how well the CPU does the other computation the GPU needs to feed it at higher resolutions. The question is: does it matter in any current real-world situations, and will it matter in the future? In multi-threaded DX12 engines that split draw call submission among the cores instead of a single thread, it won't; even in single-threaded, draw-call-limited DX11 games it only shows up at low resolutions, where draw call performance is 10-20x lower than multi-threaded DX12 performance. Game engines are trending toward DX12 and more threading, so it's, IMO, a red herring.

Thanks!

Starting out: this is 3DMark, not an actual game, which makes the results academically useful but also questionable versus how real-world results would be affected.

Further, we should expect draw calls to increase over time: this means that AMD CPUs will be more affected going forward if nothing is done to balance any draw call throughput differences, and places a higher importance on 720p CPU benchmarks which are run specifically to reveal these issues.

[and while we would like to be totally on Vulkan/DX12, we're not, and DX12 is still showing a noticeable delta on AMD as well- so when games are released to take advantage of the higher draw call capabilities available with lower-level APIs, AMD will still be behind, again, if nothing is done]
 
Thanks!

Starting out: this is 3DMark, not an actual game, which makes the results academically useful but also questionable versus how real-world results would be affected.

Further, we should expect draw calls to increase over time: this means that AMD CPUs will be more affected going forward if nothing is done to balance any draw call throughput differences, and places a higher importance on 720p CPU benchmarks which are run specifically to reveal these issues.

[and while we would like to be totally on Vulkan/DX12, we're not, and DX12 is still showing a noticeable delta on AMD as well- so when games are released to take advantage of the higher draw call capabilities available with lower-level APIs, AMD will still be behind, again, if nothing is done]

Yeah, 3DMark is a synthetic benchmark, but it nicely splits out draw call performance, which nothing else does that I'm aware of, and lets you see where the CPUs are stronger/weaker.

We see it on some DX12 engines, likely because they're still not heavily threaded/optimized. That's not the trend.

You can make your bets how you like, but more cores/threads overall seems to be more important going forward than single-core draw call performance, which is essentially what you're testing with low-resolution tests on older/unoptimized game engines. Yeah, Intel is faster at that, but I honestly don't think that's going to be as important as high general multi-thread performance in the next 3-5 years at higher resolutions.
 
You can make your bets how you like, but more cores/threads overall seems to be more important going forward than single-core draw call performance, which is essentially what you're testing with low-resolution tests on older/unoptimized game engines. Yeah, Intel is faster at that, but I honestly don't think that's going to be as important as high general multi-thread performance in the next 3-5 years at higher resolutions.

Draw calls are one aspect of CPU performance. Physics and AI are another, along with netcode and the rest of the game. All of these should become more advanced over the years, on top of increased usage of draw calls for more immersive graphics on Vulkan and DX12.

And yes, the trend should be toward more multi-thread usage; however, we still see significant single-thread dependencies as not everything can be broken up, and games are an example of massive interdependent systems.
 
Draw calls are one aspect of CPU performance. Physics and AI are another, along with netcode and the rest of the game. All of these should become more advanced over the years, on top of increased usage of draw calls for more immersive graphics on Vulkan and DX12.

And yes, the trend should be toward more multi-thread usage; however, we still see significant single-thread dependencies as not everything can be broken up, and games are an example of massive interdependent systems.

Right, but draw calls are what you're testing at low res in older engines, not the other aspects. Which is why it's not very good.
 
Right, but draw calls are what you're testing at low res in older engines, not the other aspects. Which is why it's not very good.

This is the point that you're not supporting. Games are more than just draw calls per frame- so you're seeing everything that needs to be done on the CPU. You'd have to prove that draw calls are such a disproportionate fraction of game CPU usage that the rest of the work is immaterial for your point to be logically consistent.
 
This is the point that you're not supporting. Games are more than just draw calls per frame- so you're seeing everything that needs to be done on the CPU. You'd have to prove that draw calls are such a disproportionate fraction of game CPU usage that the rest of the work is immaterial for your point to be logically consistent.
It's not so much that there are more of them, but that they're more expensive than other operations. The number of draw calls will depend on scene complexity and varies from game to game.
 
It's not so much that there are more of them, but that they're more expensive than other operations. The number of draw calls will depend on scene complexity and varies from game to game.

Which is why many games are tested and analyzed?

[and your supporting point still lacks proper reference]
 
The sad part is that no matter what you tell him, he will just try to FUD his way to how Intel is better. That was an excellent description of the issue, Bobzdar. A synthetic benchmark is perfect for showing the issue, as it will showcase the worst possible case since it's an artificial overload. In the real world, draw calls are not an issue, and even less of an issue the higher up you go in resolution. Also, plenty of newer games are showing much heavier use of that multi-core power and less dependency on just single-threaded performance. Intel isn't getting any faster on single-threaded IPC; tiny bumps in clock speed are all they have these days, and we don't know if AMD will have the same issue or not down the road. Since video cards are taking multiple years to advance now, it will matter even less. That is why you see Ryzen actually beat Intel from time to time at 4K; it's just a matter of whether the program can leverage the CPU properly. The biggest thing Intel actually has going for it is that everyone codes for them first and AMD second.
 
The sad part is that no matter what you tell him, he will just try to FUD his way to how Intel is better

If it's FUD, then it should be easy to soundly refute with references. And if that really is the case, I'd love to see you do it- honestly!- but just screaming and flailing emotionally because his perspective doesn't align with your chosen religious tech preferences is not a refutation.
 
Added link for support.

Thanks, but while I see the impact of low-overhead APIs discussed, I do not see substantiation for the conclusion being claimed here, that draw calls are as significant a factor versus the other CPU tasks performed by games.
 
If it's FUD, then it should be easy to soundly refute with references. And if that really is the case, I'd love to see you do it- honestly!- but just screaming and flailing emotionally because his perspective doesn't align with your chosen religious tech preferences is not a refutation.

What have you proven, other than saying "nope, can't be"? Bobzdar showed you the issue; you have proven nothing, and there are plenty of posted FPS charts to prove Ryzen and Intel trade the performance crown. Heck, Civilization 6 shows quite a bit better results on Ryzen despite all that clock speed on Intel.
 
Thanks, but while I see the impact of low-overhead APIs discussed, I do not see substantiation for the conclusion being claimed here, that draw calls are as significant a factor versus the other CPU tasks performed by games.

CPU optimization - draw call count
In order to render any object on the screen, the CPU has some work to do - things like figuring out which lights affect that object, setting up the shader & shader parameters, sending drawing commands to the graphics driver, which then prepares the commands to be sent off to the graphics card. All this "per object" CPU cost is not very cheap, so if you have lots of visible objects, it can add up.

So for example, if you have a thousand triangles, it will be much, much cheaper if they are all in one mesh, instead of having a thousand individual meshes one triangle each. The cost of both scenarios on the GPU will be very similar, but the work done by the CPU to render a thousand objects (instead of one) will be significant.

Do I need more sources? There are other things that can cause a bottleneck (vertex calculation, for example) but draw calls are a common cause.

Edit: and Unreal Engine, too...
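To make the quoted batching point concrete, here's a small illustrative sketch. The Mesh type, the batch() helper, and the 30µs per-draw overhead are all hypothetical, not from any engine; it just shows that CPU submission cost tracks the number of draw calls while the triangle count stays the same.

```cpp
// Minimal sketch of the batching idea quoted above: CPU submission cost scales
// with the number of draw calls, not the number of triangles, so merging many
// small meshes into one buffer trades a thousand draws for one.
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };
struct Mesh { std::vector<Vertex> vertices; };

// Merge all meshes into a single vertex buffer so the renderer can issue one draw.
static Mesh batch(const std::vector<Mesh>& meshes) {
    Mesh combined;
    for (const auto& m : meshes)
        combined.vertices.insert(combined.vertices.end(), m.vertices.begin(), m.vertices.end());
    return combined;
}

int main() {
    const double per_draw_overhead_us = 30.0;  // hypothetical CPU cost per draw call
    std::vector<Mesh> scene(1000, Mesh{{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}});  // 1000 one-triangle meshes

    double unbatched_cpu_us = scene.size() * per_draw_overhead_us;  // 1000 draws
    Mesh combined = batch(scene);
    double batched_cpu_us = 1 * per_draw_overhead_us;               // 1 draw, same triangles

    std::printf("unbatched: %zu draws, ~%.0f us of CPU submission time\n",
                scene.size(), unbatched_cpu_us);
    std::printf("batched:   1 draw (%zu vertices), ~%.0f us of CPU submission time\n",
                combined.vertices.size(), batched_cpu_us);
    // The GPU still shades the same 1000 triangles either way; only the CPU-side
    // per-object cost changes, which is the point of the quoted passage.
}
```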
 
This is the point that you're not supporting. Games are more than just draw calls per frame- so you're seeing everything that needs to be done on the CPU. You'd have to prove that draw calls are such a disproportionate fraction of game CPU usage that the rest of the work is immaterial for your point to be logically consistent.

That's why I posted the 3DMark results; they isolate the different performance aspects of the CPU, and draw calls are the main thing Ryzen lags in. It's marginally faster at some things, marginally slower at others, but draw calls are the big differentiator. And they're just not that important for high-res gaming, even less so in multithread-optimized DX12 engines, as draw call performance jumps up 20x vs. single-threaded DX11 (and older) engines. I also remember reading a developer quote that stated exactly that: Ryzen is slower at lower resolutions due to draw call performance, but DX12 and multithread optimization drastically improve draw call performance across the board to where it's basically inconsequential. I could find the quote again, but I don't feel like doing a Google hunt.
 
Do I need more sources?

You probably should, as the UE reference simply says 'probably', which is something we already know, but is far from universally true. It seems that when you take draw calls mostly out of the equation and pump up AI processing in Civilization VI, a Ryzen 8C16T at 4.0GHz is still slower than a 4C4T Intel CPU at 5.0GHz.

That's why I posted the 3DMark results; they isolate the different performance aspects of the CPU, and draw calls are the main thing Ryzen lags in. It's marginally faster at some things, marginally slower at others, but draw calls are the big differentiator. And they're just not that important for high-res gaming, even less so in multithread-optimized DX12 engines, as draw call performance jumps up 20x vs. single-threaded DX11 (and older) engines. I also remember reading a developer quote that stated exactly that: Ryzen is slower at lower resolutions due to draw call performance, but DX12 and multithread optimization drastically improve draw call performance across the board to where it's basically inconsequential. I could find the quote again, but I don't feel like doing a Google hunt.

You haven't added anything here- the synthetic benchmark can isolate draw-call performance, and that's great, but as I showed above that's not real-world gaming. Academically useful, but it's not something that we can apply.
 
You probably should, as the UE reference simply says 'probably', which is something we already know, but is far from universally true. It seems that when you take draw calls mostly out of the equation and pump up AI processing in Civilization VI, a Ryzen 8C16T at 4.0GHz is still slower than a 4C4T Intel CPU at 5.0GHz.



You haven't added anything here- the synthetic benchmark can isolate draw-call performance, and that's great, but as I showed above that's not real-world gaming. Academically useful, but it's not something that we can apply.

Because there is no gaming benchmark out there that isolates draw call performance (well, I'd argue that's what low-res testing does, but it doesn't break out the CPU tasks), synthetics are what we have. We can see that's the one area where Ryzen lags behind Intel, and, as mentioned, at least one developer confirmed that. So I'd say it is something we can apply and use to inform our decision making.

Point is, just lowering the res doesn't magically make something a 'CPU benchmark', because it may (I'd argue, does) place importance on something that is not and will not be important at actual gaming resolutions going forward. It's useful if you want to run 720p at 150fps, sure, but it won't tell you anything about how that system will run 4K games 3 years from now, and may tell you the opposite.
 
Because there is no gaming benchmark out there that isolates draw call performance (well, I'd argue that's what low-res testing does, but it doesn't break out the CPU tasks), synthetics are what we have. We can see that's the one area where Ryzen lags behind Intel, and, as mentioned, at least one developer confirmed that. So I'd say it is something we can apply and use to inform our decision making.

Point is, just lowering the res doesn't magically make something a 'CPU benchmark', because it may (I'd argue, does) place importance on something that is not and will not be important at actual gaming resolutions going forward. It's useful if you want to run 720p at 150fps, sure, but it won't tell you anything about how that system will run 4K games 3 years from now, and may tell you the opposite.

Well, if the information is not out there, then the claim that draw calls are a significant issue lacks support. I see them as less significant, and the [H] Civ VI AI test confirms it: on DX12, where draw calls should be taken out of the equation, the Intel CPUs still run away with the AI test while Ryzen is comparatively bogged down. This supports the point that Ryzen in its current form is a poorer choice for gaming.
 
Well, if the information is not out there, then the claim that draw calls are a significant issue lacks support. I see them as less significant, and the [H] Civ VI AI test confirms it: on DX12, where draw calls should be taken out of the equation, the Intel CPUs still run away with the AI test while Ryzen is comparatively bogged down. This supports the point that Ryzen in its current form is a poorer choice for gaming.

And Ryzen beats up on Intel in the Ashes of the Singularity CPU-focused test, at 4GHz vs. 5GHz for the Intel CPUs.

Like I said, some things are faster on Intel and others faster on AMD, but the main difference between them is draw call performance, which heavily influences FPS at low resolutions. Which is why it's a bad CPU benchmark, at least in terms of predicting future performance.
 
but the main difference between them is draw call performance, which heavily influences FPS at low resolutions

This is still unsupported- everything that must be done per-frame at 4k, must also be done per-frame at 720p, and we've seen that aggressive AI implementations, as one example, can actually be a bottleneck.
 
This is still unsupported- everything that must be done per-frame at 4k, must also be done per-frame at 720p, and we've seen that aggressive AI implementations, as one example, can actually be a bottleneck.

Yes, but at 720p it generally has to be done 5x faster (along with everything else) to no longer be a limitation. Difference is we have hard data on the Ryzen draw call performance vs. Intel and it matches what we see at low resolutions, plus developers have weighed in on it. Sure there can be other bottlenecks as well, but low resolution tests are not a good measure of overall gaming performance.
 
Yes, but at 720p it generally has to be done 5x faster (along with everything else) to no longer be a limitation. Difference is we have hard data on the Ryzen draw call performance vs. Intel and it matches what we see at low resolutions, plus developers have weighed in on it. Sure there can be other bottlenecks as well, but low resolution tests are not a good measure of overall gaming performance.

They're absolutely a good measure of overall gaming performance in that they are the best way to minimize the effect that the GPU has on performance. No one is claiming any different than that.
 
They're absolutely a good measure of overall gaming performance in that they are the best way to minimize the effect that the GPU has on performance. No one is claiming any different than that.

That defeats the purpose. Nobody games without a GPU, so you need to see which processor works best in real gaming loads, not artificial ones. At low resolutions, where the GPU is not fill-rate or bandwidth limited, it needs lots of draw calls to maintain high FPS. At higher resolutions, it doesn't, so you're stressing a part of CPU performance that is not necessarily important in real-world gaming. That's the problem. You're not removing the GPU; it still needs to be fed. You're just changing what it needs to be fed to perform, which is not what it needs at normal resolutions and not what will limit performance in newer games. You're not minimizing the effect of the GPU, you're changing what the GPU needs from the CPU into something artificial, making it a meaningless measure.
 
[Civilization VI benchmark charts]


Hmm, how odd that the 8700K loses on FPS at 1080p; seems like my experience would be much better on an AMD system. I don't think the 8700K is going to get better with time in this game...
 
That defeats the purpose

It does no such thing.

It allows comparisons of CPUs.

Hmm, how odd that the 8700K loses on FPS at 1080p; seems like my experience would be much better on an AMD system. I don't think the 8700K is going to get better with time in this game...

Why would this be odd? You're looking at a particular game where the threaded resources actually help, for this particular game. However, the AI benchmark isolates the AI function within this game and shows potential performance (and weaknesses).
 
[Civilization VI benchmark charts]


Hmm, how odd that the 8700K loses on FPS at 1080p; seems like my experience would be much better on an AMD system. I don't think the 8700K is going to get better with time in this game...

It does no such thing.

It allows comparisons of CPUs.



Why would this be odd? You're looking at a particular game where the threaded resources actually help, for this particular game. However, the AI benchmark isolates the AI function within this game and shows potential performance (and weaknesses).

You can lead a horse to water....
 
So when are we gonna get some actual info on these new chips? I'm ready to get off my 3570K!!
 
This is turning into another Intel vs. AMD thread, ugh... Hopefully we can just get more info. Wanna know if Zen+ would be good enough for MMOs.
 
This is turning into another Intel vs. AMD thread, ugh... Hopefully we can just get more info. Wanna know if Zen+ would be good enough for MMOs.

Why, what is the issue with Zen and the current MMOs? Granted, the only MMO I have played lately is SWTOR.
 
This is turning into another Intel vs. AMD thread, ugh... Hopefully we can just get more info. Wanna know if Zen+ would be good enough for MMOs.

Here's your summary. Generally speaking, AMD performs at 90% of Intel at essentially 60% of the price in gaming.* Some will argue all day long that the extra 10% in performance is worth the extra 40% in cost. Others won't. The new Zen+ chips might mean 92% of Intel at 60% of the price. In non-gaming tasks, it is very application-specific, as Intel and AMD trade blows depending on whether core count or clock speed matters more.

* Depending on your specific needs (VR, high-refresh monitors), there might be other considerations for Intel, but for your average 1080p gaming (e.g. ~70% of Steam users), most won't be able to tell a big difference. [H] even did an article a few months back with Vega/Ryzen and Nvidia/Intel, asked users to tell them apart, and the average user couldn't.
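To put rough numbers behind that value framing, here's a quick back-of-the-envelope sketch; the prices and FPS figures are hypothetical placeholders, purely to make the ratios concrete.

```cpp
// Relative value math behind the "90% of the performance at 60% of the price" framing.
#include <cstdio>

int main() {
    double intel_price = 350.0, intel_fps = 100.0;  // hypothetical figures
    double amd_price   = 210.0, amd_fps   = 90.0;   // ~60% of the price, ~90% of the perf

    double perf_ratio  = amd_fps / intel_fps;        // 0.90
    double price_ratio = amd_price / intel_price;    // 0.60
    double value_ratio = perf_ratio / price_ratio;   // FPS per dollar, relative to Intel

    std::printf("performance ratio: %.2f\n", perf_ratio);
    std::printf("price ratio:       %.2f\n", price_ratio);
    std::printf("FPS per dollar vs Intel: %.2fx\n", value_ratio);  // ~1.5x
    std::printf("extra cost for the last ~10%% of FPS: $%.0f\n", intel_price - amd_price);
}
```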
 
They're absolutely a good measure of overall gaming performance in that they are the best way to minimize the effect that the GPU has on performance. No one is claiming any different than that.

720p is about as realistic as 4K; the only difference is that 4K is closer to becoming the standard than 720p is. I am happy with 1080p benches: they hit the middle ground where the IMC, draw calls, and the GPU render stack and bandwidth are all near their peak. 720p is like a drag race, with limited IMC overhead, where clock speed and wider pipelines tend to decide it. But the worst effect is that 720p is unplayable for modern shooters in particular; render time ends up something like 4x slower than the GPU's, so by the time it draws a player you are already dead.
 
[Civilization VI benchmark charts]


Hmm, how odd that the 8700K loses on FPS at 1080p; seems like my experience would be much better on an AMD system. I don't think the 8700K is going to get better with time in this game...

Civilization is clearly a title that uses Ryzen efficiently; if you look at frame time variances, it is silky smooth. Clearly a massive victory there for AMD, and in a very popular game.
 
Like I've posted before, Ryzen doesn't fare well with current MMOs like Blade and Soul and GW2. I'm following this thread because I prefer Ryzen: it generally runs cooler and suits my DAN A4 mITX case perfectly, but I might go with Intel if Zen+ doesn't provide any improvements. Open world and 12+ man raids/dungeons give less than 40 FPS and sometimes even drop into the 20-30s.
 