New Ryzen 2 (Pinnacle Ridge) gets only 200 MHz boost according to a leak

False. Games that rely on a powerful CPU will stu-stut-stutter and chug on an FX CPU at ANY RESOLUTION.

Honestly that is more dependent on how thread aware the program is. If it's just single thread then yeah it will hurt on the FX.
 
I remember when Civilization was a favoured Intel benchmark for 6-7 years

[Civ 6 benchmark chart, 1920x1080]


It is the 9th biggest game played globally
 
Raven Ridge is 14nm; I can find nothing that denotes it as 12nm (or 14nm++). Otherwise I agree it has all of the other Zen+ enhancements. Everything I can find says Pinnacle Ridge will be the first 12nm parts.

So you missed all my posts where I linked and quoted James Prior (AMD Product Manager) explaining that Raven Ridge does not use the original 14nm process and that the "main benefit is it offers a lower voltage for the frequency". And I guess you also missed my posts explaining that 12nm is a rebrand of what was formerly known as 14nm+.
 
Dangerous question to ask; you may get Carrizo + 55%.

Anyway, Ryzen as a product continues to evolve as the platform matures and developers get access to it. Take Tomb Raider as the best example: reviews were done in March, and Ryzen struggled with Tomb Raider by a substantial amount.

MARCH 2017

[Guru3D Rise of the Tomb Raider benchmark, March 2017]


FEB 2018

[Guru3D Rise of the Tomb Raider benchmark, February 2018]



That is a substantial gain given that no clock enhancements or hardware-level updates were made; that is the developers adapting the game engine to better utilize Ryzen, and the gains were substantial. Zen+ will feature the kind of advances you'd expect from a stepping upgrade: higher clocks, better efficiency, higher cache clocks, and a faster IMC, and all of this has some influence on a per-application basis.

Wrong. Guru3D is changing the hardware: they changed the RAM speed between reviews, using 3200MHz RAM in the later reviews but slower memory in the March launch review. As you would know, overclocking RAM on Ryzen chips automatically overclocks the chip, because the IF clocks are tied to memory clocks. So, contrary to your claim, there are clock enhancements, and you cannot attribute the performance differences to software patches.

Also, those hardware improvements you claim for the Zen+ cores ("higher clocks, better efficiency, cache clocks higher, faster IMC") are already present in the Zen cores in Raven Ridge.
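To put rough numbers on that clock coupling: on Zen/Zen+ the Infinity Fabric runs at the memory clock, which is half the DDR4 transfer rate. A minimal sketch, with illustrative DDR4 ratings:

```python
# Sketch: on Zen/Zen+ the Infinity Fabric (IF) clock equals the memory
# clock, which is half the DDR4 transfer rate. Raising the RAM rating
# therefore overclocks the fabric in lockstep.

def fabric_clock_mhz(ddr4_rate: int) -> float:
    """DDR4-xxxx rating -> memory/IF clock in MHz (DDR = 2 transfers/cycle)."""
    return ddr4_rate / 2

for rate in (2400, 2933, 3200):  # illustrative ratings, not from the review
    print(f"DDR4-{rate}: IF clock ~= {fabric_clock_mhz(rate):.0f} MHz")

# DDR4-2400 -> 1200 MHz fabric; DDR4-3200 -> 1600 MHz, a ~33% uplift.
```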
 
I remember when Civilization was a favoured Intel benchmark for 6-7 years

[Civ 6 benchmark chart, 1920x1080]


It is the 9th biggest game played globally

It is better to check the average over all the games, isn't it?

[Relative performance chart, 1280x720]


4GHz 4C/8T Zen (with Precision Boost 2, faster cache and memory) is just behind 3.3GHz KBL 4C/4T in CPU performance.
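As a rough sketch of the per-clock gap that claim implies (treating the two parts as roughly tied in that chart; real scaling is rarely perfectly linear with frequency):

```python
# Rough per-clock comparison implied by "4.0 GHz Zen ~ 3.3 GHz KBL":
# if overall scores are about equal, the per-clock gap is approximately
# the clock ratio. Illustrative only.

zen_clock, kbl_clock = 4.0, 3.3  # GHz, from the claim above
per_clock_gap = zen_clock / kbl_clock - 1
print(f"Implied per-clock advantage for KBL: ~{per_clock_gap:.0%}")  # ~21%
```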
 
It is better to check the average over all the games, isn't it?



4GHz 4C/8T Zen (with Precision Boost 2, faster cache and memory) is just behind 3.3GHz KBL 4C/4T in CPU performance.

You're looking at two different sets of data. He's posting 1080p and you're looking at 720p. In a real-world scenario (71% of Steam users run 1080p), Orange's benchmarks mean more than your usual drivel. A 720p benchmark is practically a synthetic benchmark at this point.
 
It is also not that far behind the 4.0GHz 7640X, so clearly MCE was used with the lower-clocked KBL. Furthermore, the 1500X will most likely run faster at 3200MHz, as you point out, and faster than the 2400G in some games due to the larger cache.
 
Even using the academic 720p settings, a 4.0GHz 6700K is only 16% faster than a 4.0GHz 2400G (it is hard to compare other Intel CPUs because of MCE).

16% by itself is not that much, and it would be lower if Hitman, which shows an unexplainable 70% advantage for the 6700K, were omitted.

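For a sense of how much a single outlier like that can swing a small average, here is a sketch with made-up per-game ratios, not numbers read from the chart:

```python
# How a single outlier moves a small average. Numbers are hypothetical.
from statistics import mean

# Per-game advantage of CPU A over CPU B, as ratios (1.10 = 10% faster).
ratios = [1.10, 1.06, 1.12, 1.08, 1.70]  # last entry is the outlier

print(f"with outlier:    {mean(ratios) - 1:.0%}")       # ~21%
print(f"without outlier: {mean(ratios[:-1]) - 1:.0%}")  # ~9%
```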
 
Wrong. Guru3D is changing the hardware: they changed the RAM speed between reviews, using 3200MHz RAM in the later reviews but slower memory in the March launch review. As you would know, overclocking RAM on Ryzen chips automatically overclocks the chip, because the IF clocks are tied to memory clocks. So, contrary to your claim, there are clock enhancements, and you cannot attribute the performance differences to software patches.

Also, those hardware improvements you claim for the Zen+ cores ("higher clocks, better efficiency, cache clocks higher, faster IMC") are already present in the Zen cores in Raven Ridge.

They tested up to 3600MHz in the March 2017 review; due to memory compatibility issues on early AM4 platforms it was not stable. Memory alone cannot account for substantial gains of that nature.
 
It is better to check the average over all the games, isn't it?

[Relative performance chart, 1280x720]


4GHz 4C/8T Zen (with Precision Boost 2, faster cache and memory) is just behind 3.3GHz KBL 4C/4T in CPU performance.

The issue was around Civilization, so no, it would not be pertinent to look at other games, given that the initial assertion was that Intel used Civilization to hammer home its dominance over the FX line. Now it is strange to see it discarded because the shoe is on the proverbial other foot.
 
They tested up to 3600MHz in the March 2017 review; due to memory compatibility issues on early AM4 platforms it was not stable. Memory alone cannot account for substantial gains of that nature.

Read the March 2017 review: memory wasn't 3600MHz for game testing. The performance improvement in that Guru3D review is 9% over the March review, and a good amount of that percentage is the result of using 7% higher clocks (memory, cache, and interconnect) in the recent reviews.
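Spelling out that arithmetic (a sketch, assuming as an upper bound that performance scales linearly with those clocks):

```python
# Decompose the review-to-review gain: if total uplift is ~9% and the
# memory/cache/interconnect clocks rose ~7%, the residual attributable
# to software patches is the ratio of the two factors. Assumes (as an
# upper bound) linear scaling with those clocks.

total_gain = 1.09   # Feb 2018 vs March 2017 result
clock_gain = 1.07   # higher memory/cache/IF clocks between reviews

software_residual = total_gain / clock_gain - 1
print(f"Gain left for software patches: ~{software_residual:.1%}")  # ~1.9%
```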
 
So the thing still can't exceed 4GHz as far as I can tell. Too bad; I was hoping for at least 4.2GHz for AMD's sake. Not much of a refresh for enthusiasts, IMO. However, it's nice to see them with integrated graphics for other markets. I expect Ryzen to be a very decent laptop chip.
 
So the thing still can't exceed 4GHz as far as I can tell. Too bad; I was hoping for at least 4.2GHz for AMD's sake. Not much of a refresh for enthusiasts, IMO. However, it's nice to see them with integrated graphics for other markets. I expect Ryzen to be a very decent laptop chip.

With the stock cooler it seems to top out at 4GHz, but with good cooling I've seen them at 4.2. It depends on whether they also clock the GPU or not.
 
But it's not a pure CPU? It's not like Intel's CPU+GPU parts clock as high as their normal CPUs, so wouldn't it be stupid to assume that the pure CPUs would clock this low too? Or am I being stupid again (possible) lol :p Enlighten me.
 
So the thing still can't exceed 4ghz as far as I can tell. Too bad. I was hoping for at least 4.2ghz for AMD's sake. Not much of a refresh for enthusiasts, IMO. However it's nice to see them with integrated graphics for other markets. I expect Ryzen to be a very decent laptop chip.

Which chip are you talking about? Do you have a new CPU in hand?
 
Read the March 2017 review: memory wasn't 3600MHz for game testing. The performance improvement in that Guru3D review is 9% over the March review, and a good amount of that percentage is the result of using 7% higher clocks (memory, cache, and interconnect) in the recent reviews.

Irrelevant; all reviews are done with the best hardware available. This has been a common theme since about 2003; nobody does dedicated reviews at entry-level IMC clocks. If the methodology is consistent, then it is fine. What I do know is that Ryzen-targeted updates were released for the game, so ROTR shows uplifts due to partner relationships. AM4 has also gotten more stable updates to accommodate 3000+ memory speeds, something early releases didn't have.
 
So the thing still can't exceed 4GHz as far as I can tell. Too bad; I was hoping for at least 4.2GHz for AMD's sake. Not much of a refresh for enthusiasts, IMO. However, it's nice to see them with integrated graphics for other markets. I expect Ryzen to be a very decent laptop chip.

Raven Ridge is based on the original Ryzen architecture on the same 14nm process, but with a single CCX instead of a split CCX, while having the updated version of their branded variable boost clock (I always forget what AMD calls it). From testing with the 1700/1800 we already knew that limiting Ryzen to only one of the two CCXs didn't change how it overclocked, since that was a limitation of the process/architecture, not of heat/power/Infinity Fabric. With Zen+ being on a different process with a tweaked architecture, the hope is that they've figured out the clock limitations, but we won't know for sure until it releases in April.
 
Raven Ridge is based on the original Ryzen architecture on the same 14nm process, but with a single CCX instead of a split CCX, while having the updated version of their branded variable boost clock (I always forget what AMD calls it). From testing with the 1700/1800 we already knew that limiting Ryzen to only one of the two CCXs didn't change how it overclocked, since that was a limitation of the process/architecture, not of heat/power/Infinity Fabric. With Zen+ being on a different process with a tweaked architecture, the hope is that they've figured out the clock limitations, but we won't know for sure until it releases in April.
Fair enough, thanks. I thought these new chips were Zen+. Hopefully we will see those soon enough, with higher clock speeds.
 
I'd like to see what IPC gains, if any, come from Zen 2.

This is going to be the kicker: if AMD continues their trend of releasing parts with more cores than Intel at lower prices, gains in IPC could easily push them into 'recommended' territory for many workloads.

And if they start bolting a Vega GPU to all of them...
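As a back-of-the-envelope for the IPC point above: per-core throughput is roughly IPC × clock, so an IPC gain directly shrinks a clock deficit. A sketch with hypothetical clocks:

```python
# Per-core throughput ~ IPC * clock (rough model, hypothetical numbers).
# Shows how much IPC gain closes a clock deficit at equal thread counts.

amd_clock, intel_clock = 4.0, 4.6  # GHz, illustrative
ipc_gain_needed = intel_clock / amd_clock - 1
print(f"IPC gain needed to match per-core: ~{ipc_gain_needed:.0%}")
# ~15% here; with more cores at a lower price, even a smaller IPC gain
# can tip the recommendation for well-threaded workloads.
```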
 
This is going to be the kicker: if AMD continues their trend of releasing parts with more cores than Intel at lower prices, gains in IPC could easily push them into 'recommended' territory for many workloads.

And if they start bolting a Vega GPU to all of them...

Personally, I would like the option to buy without the IGP, e.g. have all the X chips without the IGP and the non-X chips with it. I have no need for the IGP and would rather have that TDP headroom available for higher base clocks.

Seeing the 2400G being the same size as an 8-core Ryzen chip, an 8-core + Vega part would be a massive die.
 
Seeing the 2400G being the same size as an 8-core Ryzen chip, an 8-core + Vega part would be a massive die.

This is faulty logic: Vegazen has a significant number of CUs, which should be stripped from the higher-end versions. It just needs to drive video outputs.
 
Personally, I would like the option to buy without the IGP, e.g. have all the X chips without the IGP and the non-X chips with it. I have no need for the IGP and would rather have that TDP headroom available for higher base clocks.

Seeing the 2400G being the same size as an 8-core Ryzen chip, an 8-core + Vega part would be a massive die.

They'll continue to offer high-end CPUs without the IGP... they know that high-end users simply don't want it, and they won't waste very valuable silicon on something that's not used!
 
You're looking at two different sets of data. He's posting 1080p and you're looking at 720p. In a real-world scenario (71% of Steam users run 1080p), Orange's benchmarks mean more than your usual drivel. A 720p benchmark is practically a synthetic benchmark at this point.

How many times does the goal of "CPU benches" have to be explained? How many times does it have to be explained that the number of people playing at a given resolution means nothing for a CPU test?

It is also not that far behind the 4.0GHz 7640X, so clearly MCE was used with the lower-clocked KBL. Furthermore, the 1500X will most likely run faster at 3200MHz, as you point out, and faster than the 2400G in some games due to the larger cache.

I don't know if MCE was enabled or not, but I know both APUs are overclocked, not stock. The one drawn in green in the graphs has the interconnect and cache overclocked. The APU drawn in blue has the interconnect, the cache, and the core overclocked. So what is your point? That we can only compare overclocked AMD chips to stock Intel chips?

16% by itself is not that much, and it would be lower if Hitman, which shows an unexplainable 70% advantage for the 6700K, were omitted.

Civ 6 also shows unusual performance for AMD. So you suggest eliminating outliers for Intel, but I don't see you suggesting the same for AMD. Why would I eliminate Hitman from the average but leave Civ?

Irrelevant; all reviews are done with the best hardware available. This has been a common theme since about 2003; nobody does dedicated reviews at entry-level IMC clocks. If the methodology is consistent, then it is fine.

On the contrary, it is very relevant. You claimed that the performance increase in the game was due to patches and that the hardware was the same in the March 2017 review and in the February 2018 review. You specifically wrote that "no clock enhancements or hardware-level updates were made", but I have demonstrated that clocks were increased, because the later review uses a 7% higher overclock.

Now, if you want to spin and/or ignore what you wrote, that is another story...
 
Raven Ridge is based on the original Ryzen architecture on the same 14nm process, but with a single CCX instead of a split CCX, while having the updated version of their branded variable boost clock (I always forget what AMD calls it). From testing with the 1700/1800 we already knew that limiting Ryzen to only one of the two CCXs didn't change how it overclocked, since that was a limitation of the process/architecture, not of heat/power/Infinity Fabric. With Zen+ being on a different process with a tweaked architecture, the hope is that they've figured out the clock limitations, but we won't know for sure until it releases in April.

First, Raven Ridge doesn't use the same 14nm process, but an improved process that gives higher clocks.


This improved process node is the reason why Raven Ridge can get up to 4.2GHz.


Second, there is no "tweaked architecture" in the Zen+ cores that is not present in the Zen cores in Raven Ridge. All those tweaked elements (Precision Boost 2, lower-latency cache and IMC, ...) that people repeat to justify the branding change to "Zen+" are already present in the Zen cores in Raven Ridge.
 
Civ 6 also shows unusual performance for AMD. So you suggest eliminating outliers for Intel, but I don't see you suggesting the same for AMD. Why would I eliminate Hitman from the average but leave Civ?


Simply because I doubt Hitman has had the same optimization for Ryzen since release.
 
I don't see why it would hurt to have a minimal non-3D IGP just to navigate the BIOS and perform basic server-like functions.

Ultimately it's down to cost on their end and cost on the consumer end. If they start adding IGPs to everything, then they have to start selling at Intel prices, and they can't really do that at this early stage of trying to win back market share unless they outperform Intel outright on both CPU and IGP.
 
It is also not that far behind the 4.0GHz 7640X, so clearly MCE was used with the lower-clocked KBL.

It is not so obvious to me. The 7640X was 19% faster in the non-gaming tests, so I think the smaller gap in gaming wasn't due to MCE.
 
Yup, I don't even care about game performance. I just want display outputs powered and the fixed-function media processing bolted on to every part.

Absolutely. More video outputs are always nice, and not needing a GPU add-in card for use cases that don't require one will create sales too. It also makes the AMD platform much more idiot-proof, since I guarantee there are people out there who assume the video outputs on an AMD motherboard are functional regardless of which CPU is plugged in. And easier troubleshooting, by being able to eliminate the GPU entirely as a factor, is always nice.

Best of all this would create a whole new class of notebooks. Eight or more cores with switchable graphics? Hell yes. That alone would be worth the transistor cost.
 
Ultimately it's down to cost on their end and cost on the consumer end. If they start adding IGPs to everything, then they have to start selling at Intel prices, and they can't really do that at this early stage of trying to win back market share unless they outperform Intel outright on both CPU and IGP.
Also, nearly everything is rendered using the 3D accelerator now; on Linux the 2D path has been deprecated and removed from most graphics drivers, replaced by 2D rendering libraries that draw 2D objects with either the 3D accelerator or the CPU. I'm not sure any modern GPU even has a 2D pipeline anymore; if they did, it'd be about useless except for the CLI and maybe the BIOS.
 
Best of all this would create a whole new class of notebooks. Eight or more cores with switchable graphics? Hell yes. That alone would be worth the transistor cost.

Yup. With Intel shoving quads into 15W (I have one; it's faster than my aging 100W-class DTR!), I'm betting both companies could put eight cores into something still moderately performant.
 
Yup. With Intel shoving quads into 15W (I have one; it's faster than my aging 100W-class DTR!), I'm betting both companies could put eight cores into something still moderately performant.

It's a matter of tuning. More cores means that, for well-threaded scenarios, you can lower the clocks and voltage further and be more power-efficient than you otherwise could be, at least in theory. That said, while I wouldn't mind seeing Intel do the same thing with their HEDT chips, Skylake-X doesn't seem all that power-efficient for mobile use.
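A rough sketch of the physics behind that tuning argument, using the standard dynamic-power approximation P ≈ C·V²·f with purely illustrative numbers:

```python
# Dynamic power ~ C * V^2 * f. Doubling cores doubles C, but if each
# core runs at a lower clock and voltage, total power can stay flat
# while well-threaded throughput rises. Numbers are illustrative.

def rel_power(cores: int, volts: float, ghz: float) -> float:
    return cores * volts**2 * ghz  # relative units

four_fast  = rel_power(4, 1.20, 4.0)   # 4 cores, high V/f
eight_slow = rel_power(8, 1.00, 3.0)   # 8 cores, lower V/f

print(f"4C @ 4.0 GHz: {four_fast:.1f}")   # 23.0
print(f"8C @ 3.0 GHz: {eight_slow:.1f}")  # 24.0
# Similar power budget, but ~50% more aggregate cycles (8*3 vs 4*4 GHz).
```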
 