Why is AMD's Zen more efficient than Intel's Core?

It's simply incredible how AMD managed to pull this off. Not only demolishing the 8-core competitor, but laying a whupping on the 4-core competitor too! I can only laugh and smile at those who claimed AMD was dead a few months ago. :D

And why do the Intel CPUs run so HOT?!

Because they do not have Nvidia engineers :)
More seriously, it's because Intel are tight £$%"£$$ who indirectly 'punish' (too strong a word, but I'm not sure what else to call it) consumers just by looking to save 10c here and there. The clearest example is the IHS solution when you compare Ryzen with Intel's 4C/HEDT consumer CPUs.

Cheers
 
Wait, it was mentioned before that THG's numbers for the 1800X review don't agree with the rest of the reviews. The same is happening now with their new review. So if half a dozen reviews point in one direction and another points in the opposite, we have to take that one and discard the rest?

Edit: I was reading temperatures, sorry for the stupid mistake.
 
Wait, it was mentioned before that THG's numbers for the 1800X review don't agree with the rest of the reviews. The same is happening now with their new review. So if half a dozen reviews point in one direction and another points in the opposite, we have to take that one and discard the rest?

And the numbers they are giving for the 9590 look like a bad joke.

FX @3.8GHz = 81.7W

FX @4.7GHz = 90.8W

I.e., increasing frequency by 24% only increased power consumption by 11%!!! That violates every known physical law.
It is the load, hence the nearly identical power usage. Read the charts; it's there. Besides, the 9590 WILL pull 200+ W under full load. It's there in the small-FFT torture loop.

Reading is fundamental.
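For anyone wanting to sanity-check that objection: CMOS dynamic power scales roughly as P ≈ C·V²·f, so a 24% clock increase should raise dynamic power by at least ~24% even at fixed voltage, and more once voltage rises too. A minimal sketch (the voltages are made-up illustrations, not measured values):

```python
# Rough sanity check: CMOS dynamic power scales as P ~ C * V^2 * f.
# Voltages below are hypothetical illustrations, not measured values.

def dynamic_power_ratio(f1, f2, v1, v2):
    """Ratio of dynamic power at (f2, v2) relative to (f1, v1)."""
    return (f2 / f1) * (v2 / v1) ** 2

# Frequency bump alone (constant voltage): 3.8 -> 4.7 GHz is about +24%.
freq_only = dynamic_power_ratio(3.8, 4.7, 1.20, 1.20)
print(f"frequency alone: +{(freq_only - 1) * 100:.0f}% power")

# In practice higher clocks usually need more voltage, which makes it worse.
with_vbump = dynamic_power_ratio(3.8, 4.7, 1.20, 1.35)
print(f"with a voltage bump: +{(with_vbump - 1) * 100:.0f}% power")
```

Which is why an 11% power rise for a 24% overclock only makes sense if the two runs weren't loading the chip equally.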
 
I.e., increasing frequency by 24% only increased power consumption by 11%!!! That violates every known physical law.
That's a GPU load, bruh; look at P95, and bear in mind that Ryzen IIRC uses the K10 code path in it at the moment.
 
Wait, it was mentioned before that THG's numbers for the 1800X review don't agree with the rest of the reviews. The same is happening now with their new review. So if half a dozen reviews point in one direction and another points in the opposite, we have to take that one and discard the rest?

And the numbers they are giving for the 9590 look like a bad joke.

FX @3.8GHz = 81.7W

FX @4.7GHz = 90.8W

I.e., increasing frequency by 24% only increased power consumption by 11%!!! That violates every known physical law.

Tom's Hardware's results are accurate for Ryzen and the Intel CPUs (with some big caveats); what's missing is the power lost in the power stage (VRM) before it reaches the CPU.
More importantly, I looked at the standalone 6900K review separately, and for some reason its power/TDP figures are lower than when they tested it against Ryzen, so something is off between the comparison results and the standalone 6900K review results.
Anyway, as hardware.fr mentions, once you take that into account, the sensors are accurate for the actual processor die.

However, the 6900K, at roughly 145W, boosts all cores to a maximum of 4GHz (in the standalone 6900K review rather than the Ryzen comparison); do the same with Ryzen and its power demand/TDP is quite a lot worse than the 6900K's, especially when comparing both with all cores at 4GHz.
So part of this also needs to consider boost-core behaviour.

I looked at other sites that have engineers, such as PCPer, and their figures align with Tom's Hardware for Ryzen, and also with Hardware.fr once they take the power-stage loss into consideration.
But the important context between Ryzen and the 6900K is the all-core behaviour and its influence on TDP/power, which can be missed when discussing results and how well that works on Intel.
Also relevant is why the standalone 6900K review shows a lower TDP than the comparison in the Ryzen review.
From their specific Broadwell-E review:

42-6900K-Power-Consumption-Torture.png


They mention the FPU, so it's a fair assumption that this is still Prime95 focused on small FFTs, like the Ryzen review, which has higher results for some reason.

Even the gaming-loop TDP is higher in the Ryzen review than in the standalone 6900K review, so something is off there.
Personally, I would compare the specific results of each review rather than the comparison table in their Ryzen review.
It also doesn't help that TDP can vary so much between applications/benchmarks; maybe Prime95 is more favourable for Ryzen while x264 (per hardware.fr) is better for Intel from a TDP perspective.
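To make the power-stage point concrete: if power is measured at the EPS input, the die receives less than the meter shows, because the VRM is typically only around 85-90% efficient. A rough sketch with assumed numbers (the efficiency value is illustrative, not taken from either review):

```python
# Back out approximate die power from an input-side measurement,
# assuming a VRM (power-stage) efficiency -- illustrative value only.

def die_power(input_power_w, vrm_efficiency=0.88):
    """Power actually delivered to the CPU die after VRM losses."""
    return input_power_w * vrm_efficiency

measured = 145.0  # W at the 12V EPS input (example figure)
print(f"~{die_power(measured):.0f} W at the die")  # input minus VRM losses
```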

Cheers
 
Tom's Hardware's results are accurate for Ryzen and the Intel CPUs (with some big caveats); what's missing is the power lost in the power stage (VRM) before it reaches the CPU.
More importantly, I looked at the standalone 6900K review separately, and for some reason its power/TDP figures are lower than when they tested it against Ryzen, so something is off between the comparison results and the standalone 6900K review results.
Anyway, as hardware.fr mentions, once you take that into account, the sensors are accurate for the actual processor die.

However, the 6900K, at roughly 145W, boosts all cores to a maximum of 4GHz (in the standalone 6900K review rather than the Ryzen comparison); do the same with Ryzen and its power demand/TDP is quite a lot worse than the 6900K's, especially when comparing both with all cores at 4GHz.
So part of this also needs to consider boost-core behaviour.

I looked at other sites that have engineers, such as PCPer, and their figures align with Tom's Hardware for Ryzen, and also with Hardware.fr once they take the power-stage loss into consideration.
But the important context between Ryzen and the 6900K is the all-core behaviour and its influence on TDP/power, which can be missed when discussing results and how well that works on Intel.
Also relevant is why the standalone 6900K review shows a lower TDP than the comparison in the Ryzen review.
From their specific Broadwell-E review:

42-6900K-Power-Consumption-Torture.png


They mention the FPU, so it's a fair assumption that this is still Prime95 focused on small FFTs, like the Ryzen review, which has higher results for some reason.

Even the gaming-loop TDP is higher in the Ryzen review than in the standalone 6900K review, so something is off there.
Personally, I would compare the specific results of each review rather than the comparison table in their Ryzen review.
It also doesn't help that TDP can vary so much between applications/benchmarks; maybe Prime95 is more favourable for Ryzen while x264 (per hardware.fr) is better for Intel from a TDP perspective.

Cheers

The numbers aren't accurate. They got unusually high power numbers in their review of the 7700K, considerably higher even than their German sister site, and they attributed those numbers to a 'defective' sample. Later, during their 1800X review, they again got power numbers that disagree with virtually every other review of the same chip.

Moreover, Prime95 doesn't stress Ryzen.
 
The numbers aren't accurate. They got unusually high power numbers in their review of the 7700K, considerably higher even than their German sister site, and they attributed those numbers to a 'defective' sample. Later, during their 1800X review, they again got power numbers that disagree with virtually every other review of the same chip.

Moreover, Prime95 doesn't stress Ryzen.
As I said, big caveat: use the individual reviews for the 7700K and the 6900K, as something is not right in the comparison table in the Ryzen review.
And as I mentioned, Prime95 may not fully stress Ryzen as much as Intel (though IMO it is pretty close), but x264 does not fully stress Intel as much as Ryzen; it goes both ways.
Anyway, what also has to be considered is core-boost behaviour and all-core TDP.

Gaming tests using more powerful GPUs such as the GTX 1080 could make Intel CPUs look worse from a TDP perspective, but that is because, for whatever reason (some fixable, some maybe not), Ryzen bottlenecks performance to a certain extent when paired with a GPU more powerful than a GTX 1070.
Cheers
 
Someone made this table on another forum.

https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209

dsOsQ9v.png


That's a complete smackdown. Intel desperately needs 10nm to get close again. With all the problems Intel is having with its process manufacturing lately, it does not look good for them! What a huge turnaround.

Really looking forward to adding Vega to my AM4 platform. :) Vega should be the first architecture designed after those 100,000 confidential documents were stolen from AMD and walked through Nvidia's doors, so bye bye NV, you thievin' bastards! :p
 
Someone made this table on another forum.

https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209

dsOsQ9v.png


That's a complete smackdown. Intel desperately needs 10nm to get close again. With all the problems Intel is having with its process manufacturing lately, it does not look good for them! What a huge turnaround.

Really looking forward to adding Vega to my AM4 platform. :) Vega should be the first architecture designed after those 100,000 confidential documents were stolen from AMD and walked through Nvidia's doors, so bye bye NV, you thievin' bastards! :p

It needs a broad set of benchmark tools, because the TDP stress varies with each, as does fixing all cores at one clock.
Case in point: Tom's Hardware shows large differences for the 1800X; Blender drew 95.5W, Luxrender 111.8W, Prime95 112W, and Luxrender @ 3.8GHz 141.4W.
Each workload may stress one CPU a bit less than the other, as seen with hardware.fr, whose own benchmark shows Intel as more efficient, along with core-boost and all-core behaviour when measured.

That's not taking anything away from Ryzen, as it has IMO pretty good TDP/power demand in general and seems very competitive until fully OC'd with all cores at 4GHz.
Cheers
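Putting a number on that spread for the 1800X figures quoted above:

```python
# Per-workload package power for the same 1800X (figures quoted above).
watts = {
    "Blender": 95.5,
    "Luxrender": 111.8,
    "Prime95": 112.0,
    "Luxrender @ 3.8GHz": 141.4,
}

lo, hi = min(watts.values()), max(watts.values())
spread = (hi - lo) / lo * 100
print(f"spread between workloads: {spread:.0f}%")  # -> 48%
```

Nearly a 50% gap on the same chip, which is why a single benchmark can't settle the efficiency question.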
 
It needs a broad set of benchmark tools, because the TDP stress varies with each, as does fixing all cores at one clock.
Case in point: Tom's Hardware shows large differences for the 1800X; Blender drew 95.5W, Luxrender 111.8W, Prime95 112W, and Luxrender @ 3.8GHz 141.4W.
Each workload may stress one CPU a bit less than the other, as seen with hardware.fr, whose own benchmark shows Intel as more efficient, along with core-boost and all-core behaviour when measured.

That's not taking anything away from Ryzen, as it has IMO pretty good TDP/power demand in general and seems very competitive until fully OC'd with all cores at 4GHz.
Cheers

Of course; it depends on workload. Even if there were only one workload where Zen exceeds Core in perf/watt, that would be an incredible victory. But as we all know, this isn't an isolated case, so the point is moot. The very fact that AMD has gone from so far behind to leading is miraculous, really. As for overclocking, yes, power demand goes up, as it does on any platform. And I think the charts show that Intel's power usage actually increases at a higher rate than AMD's when overclocking. Intel must have been caught completely off guard by Zen's capabilities.
 
That's a complete smackdown. Intel desperately needs 10nm to get close again. With all the problems Intel is having with its process manufacturing lately, it does not look good for them! What a huge turnaround.

Really looking forward to adding Vega to my AM4 platform. :) Vega should be the first architecture designed after those 100,000 confidential documents were stolen from AMD and walked through Nvidia's doors, so bye bye NV, you thievin' bastards! :p

AMD definitely made a huge turnaround with Ryzen, and I think this gets lost in some of the nitpicking against the platform. People say "oh, well, gaming performance is weak" or "on this specific benchmark, Intel is better" without understanding that, compared to Crapdozer, AMD has made a CPU architecture that is so far ahead of their previous offerings it's utterly insane. They said +52% IPC over Bulldozer. They weren't bullsh*tting. If anything, they may have understated the improvement. I mean, we are talking about competitiveness with Intel now, in these threads. That ALONE is a tremendous leap forward for AMD. We haven't had a serious conversation like that since at least '09 or so.

That being said, it's not a clear win over Intel, either. It is competitive. Which means in some things, AMD will win, and in some things, Intel will win. In efficiency, IMHO, AMD shows a slight lead, because idle power draw is very, very good. And let's face it... the proportion of 100% CPU utilization vs. less than 20% utilization... the less than 20% wins for most users. Power draw under load is competitive with the Intel parts, but it's hard to call it a complete win. More of a draw. In some scenarios, Intel pulls ahead in the load efficiency, and in some scenarios AMD does. So a win for light loads, and a draw for heavy loads. Minor advantage to AMD.

So Ryzen isn't the greatest thing since sliced bread, or a bona fide Intel killer either. It's good, and it's a great option for some of us, myself included. But let's not minimize how awesome it is to have competition in the market again, or to have AMD go from utter garbage to being right smack in the mix of things again.

As for Vega, I'm a little more skeptical on that. Nvidia has quite the lead. I would be pleased if AMD proved me wrong on this, but I'm not holding my breath, either.
 
Of course; it depends on workload. Even if there were only one workload where Zen exceeds Core in perf/watt, that would be an incredible victory. But as we all know, this isn't an isolated case, so the point is moot. The very fact that AMD has gone from so far behind to leading is miraculous, really. As for overclocking, yes, power demand goes up, as it does on any platform. And I think the charts show that Intel's power usage actually increases at a higher rate than AMD's when overclocking. Intel must have been caught completely off guard by Zen's capabilities.
Just to clarify: Intel's 6900K at 4GHz on all cores has lower power demand/TDP than the 1800X at 4GHz on all cores; plenty of reviews show this if you look at the specific 6900K/6950X reviews.
With all cores at 3.8GHz it's closer, but again you need an application/benchmark that isn't unusually efficient on, or unusually stressful for, one CPU over the other.
Prime95 can be good, but it supports an AVX option, and that can skew the TDP figures on Intel: it creates much more stress because Intel's design can exploit AVX very well.
Anyway, something is off in Tom's Ryzen review where they compared it to the 6900K, as the Broadwell-E review, done by a different tech journalist there, had lower figures.
Cheers
 
AMD definitely made a huge turnaround with Ryzen, and I think this gets lost in some of the nitpicking against the platform. People say "oh, well, gaming performance is weak" or "on this specific benchmark, Intel is better" without understanding that, compared to Crapdozer, AMD has made a CPU architecture that is so far ahead of their previous offerings it's utterly insane. They said +52% IPC over Bulldozer. They weren't bullsh*tting. If anything, they may have understated the improvement. I mean, we are talking about competitiveness with Intel now, in these threads. That ALONE is a tremendous leap forward for AMD. We haven't had a serious conversation like that since at least '09 or so.

That being said, it's not a clear win over Intel, either. It is competitive. Which means in some things, AMD will win, and in some things, Intel will win. In efficiency, IMHO, AMD shows a slight lead, because idle power draw is very, very good. And let's face it... the proportion of 100% CPU utilization vs. less than 20% utilization... the less than 20% wins for most users. Power draw under load is competitive with the Intel parts, but it's hard to call it a complete win. More of a draw. In some scenarios, Intel pulls ahead in the load efficiency, and in some scenarios AMD does. So a win for light loads, and a draw for heavy loads. Minor advantage to AMD.

So Ryzen isn't the greatest thing since sliced bread, or a bona fide Intel killer either. It's good, and it's a great option for some of us, myself included. But let's not minimize how awesome it is to have competition in the market again, or to have AMD go from utter garbage to being right smack in the mix of things again.

As for Vega, I'm a little more skeptical on that. Nvidia has quite the lead. I would be pleased if AMD proved me wrong on this, but I'm not holding my breath, either.


From the vibes, leaks, and little hints floating around, I'm thinking Vega is going to really put the hurt on Nvidia. Just a gut feeling, though.

As for Ryzen, and especially Naples, and their impact on Intel: I think the impact goes beyond what the headlines are revealing at this moment. Keep in mind, though, that AMD has stated they expect to be a close second in CPUs and number 1 in graphics, so not a win across the entire spectrum, no. But put another way, Intel will be the second choice in some workloads too. I don't see how Intel can keep its prices intact. I think they are going to have to seriously cut MSRPs and margins or lose significant market share. Even with lower MSRPs, I think AMD is going to gain substantial market share, particularly in servers and data centers. The clock sweet spot for 14nm LPP is right where AMD needs it to scale up core count: higher core count with higher clock speed at a similar TDP, at a lower cost and lower operating cost to customers.

Anyway, i think we mostly agree. :p
 
From the vibes, leaks, and little hints floating around, I'm thinking Vega is going to really put the hurt on Nvidia. Just a gut feeling, though.

As for Ryzen, and especially Naples, and their impact on Intel: I think the impact goes beyond what the headlines are revealing at this moment. Keep in mind, though, that AMD has stated they expect to be a close second in CPUs and number 1 in graphics, so not a win across the entire spectrum, no. But put another way, Intel will be the second choice in some workloads too. I don't see how Intel can keep its prices intact. I think they are going to have to seriously cut MSRPs and margins or lose significant market share. Even with lower MSRPs, I think AMD is going to gain substantial market share, particularly in servers and data centers. The clock sweet spot for 14nm LPP is right where AMD needs it to scale up core count: higher core count with higher clock speed at a similar TDP, at a lower cost and lower operating cost to customers.

Anyway, i think we mostly agree. :p

If Vega surprises me, all to the good. I'm mostly an Nvidia buyer when it comes to GPUs, but let me tell you, back when cryptocurrency mining was a thing, I definitely bought myself a nice stable of Radeons (a 7970 and a pair of 7950s - I burned up one of the 7950s eventually; the 7970 and the other 7950 are still lurking in my other two comps). That made me a nice tidy profit. So if AMD manages to impress here, they could win me over. In the meantime, I'm more than happy with my 1080 Ti.

They've done very well with Ryzen, though. It's great for budget workstation/mixed-use builds like mine: 8 cores on the cheap, and none of those crappy cores with shared FPUs like Bulldozer had. Very happy with my Ryzen build so far.
 
That being said, it's not a clear win over Intel, either. It is competitive. Which means in some things, AMD will win, and in some things, Intel will win. In efficiency, IMHO, AMD shows a slight lead, because idle power draw is very, very good. And let's face it... the proportion of 100% CPU utilization vs. less than 20% utilization... the less than 20% wins for most users. Power draw under load is competitive with the Intel parts, but it's hard to call it a complete win. More of a draw. In some scenarios, Intel pulls ahead in the load efficiency, and in some scenarios AMD does. So a win for light loads, and a draw for heavy loads. Minor advantage to AMD.

This is what I have an issue with in Peppercorn's statements. It is not a clear win nor a complete smackdown, as Peppercorn keeps claiming. It is competitive, and that is it. The 6900K, 7700K, and the rest of Intel's lineup are on a high-performance manufacturing process, which already means they sacrifice efficiency for performance. Ryzen is on a low-power manufacturing process, which sacrifices performance for efficiency. The fact that AMD has better efficiency than Intel most of the time does not mean much, and that is clearly seen with the low-clocked 10-core 6950X vs the highly clocked 7700K. Even the 1700 loses a great deal of efficiency when locked to 4GHz on all cores.

The efficiency argument for AMD "smacking" Intel is entirely bogus given the characteristics and nature of the chips. Focus on what AMD is winning at, and that is the incredible value of the 1700, and soon the 6-core and 4-core variants.
 
This is what I have an issue with in Peppercorn's statements. It is not a clear win nor a complete smackdown, as Peppercorn keeps claiming. It is competitive, and that is it. The 6900K, 7700K, and the rest of Intel's lineup are on a high-performance manufacturing process, which already means they sacrifice efficiency for performance. Ryzen is on a low-power manufacturing process, which sacrifices performance for efficiency. The fact that AMD has better efficiency than Intel most of the time does not mean much, and that is clearly seen with the low-clocked 10-core 6950X vs the highly clocked 7700K. Even the 1700 loses a great deal of efficiency when locked to 4GHz on all cores.

The efficiency argument for AMD "smacking" Intel is entirely bogus given the characteristics and nature of the chips. Focus on what AMD is winning at, and that is the incredible value of the 1700, and soon the 6-core and 4-core variants.

I just have to applaud you. You are one of the few people I've seen have a level head and not ride any hype train coming with the release of Ryzen. Everything you posted right there is spot on, thank you.
 
I just have to applaud you. You are one of the few people I've seen have a level head and not ride any hype train coming with the release of Ryzen. Everything you posted right there is spot on, thank you.

I've seen a lot of Hype Train for sure... and also a lot of "zOMG Ryzen is teh sux, Intel foreva." The truth lies somewhere between those extremes.
 
I've seen a lot of Hype Train for sure... and also a lot of "zOMG Ryzen is teh sux, Intel foreva." The truth lies somewhere between those extremes.

I've yet to see a single post in these forums that says Ryzen sucks. I do concede that I might miss some posts that get deleted, however; but the majority of what some members deem negative is simply discussing Ryzen's flaws. And yes, Ryzen has some flaws, but some members here would rather those discussions not take place, for some odd reason. They somehow feel the need to defend Ryzen with every breath.
 
I've yet to see a single post in these forums that says Ryzen sucks. I do concede that I might miss some posts that get deleted, however; but the majority of what some members deem negative is simply discussing Ryzen's flaws. And yes, Ryzen has some flaws, but some members here would rather those discussions not take place, for some odd reason. They somehow feel the need to defend Ryzen with every breath.

Not necessarily here. This isn't the only enthusiast forum I frequent. But... I delurked here because the discussion seemed more balanced here. Ryzen has some flaws for sure. Namely, the weird, schizoid gaming performance. It's hard to nail that one down. Many ideas and theories have been tried, and found wanting. Also, the IPC is still behind Intel, though that's partly mitigated by what looks to be superior SMT implementation (see Cinebench single core vs. multicore comparisons -- AMD chips gain more in MT than Intel). The clock ceiling of 4.1-4.2 GHz is another problem, though what AMD can be expected to do about that when it's being manufactured on a low power process... I don't know.

AMD is *definitely* back in the game (f*cking FINALLY), but the game is merciless. Zen+ needs an IPC bump, and some attention to whatever weirdness is plaguing gaming performance. If that's optimization, as AMD claims, then they need to provide the industry with the support they need to do the optimizing. If it's memory or BIOS related, AMD needs to get on that. And if it's a design flaw, they need to correct it in the next iteration.
 
Not necessarily here. This isn't the only enthusiast forum I frequent. But... I delurked here because the discussion seemed more balanced here. Ryzen has some flaws for sure. Namely, the weird, schizoid gaming performance. It's hard to nail that one down. Many ideas and theories have been tried, and found wanting. Also, the IPC is still behind Intel, though that's partly mitigated by what looks to be superior SMT implementation (see Cinebench single core vs. multicore comparisons -- AMD chips gain more in MT than Intel). The clock ceiling of 4.1-4.2 GHz is another problem, though what AMD can be expected to do about that when it's being manufactured on a low power process... I don't know.

AMD is *definitely* back in the game (f*cking FINALLY), but the game is merciless. Zen+ needs an IPC bump, and some attention to whatever weirdness is plaguing gaming performance. If that's optimization, as AMD claims, then they need to provide the industry with the support they need to do the optimizing. If it's memory or BIOS related, AMD needs to get on that. And if it's a design flaw, they need to correct it in the next iteration.

I don't think we will see any higher clocks from Ryzen; in fact, I think AMD should focus on lower clocks, as its efficiency between 3.3GHz and 3.6GHz is great. This would make an amazing mobile CPU.

As for fixing the flaws, I don't think AMD will be able to fix everything as it is right now. I have a feeling they'll need to change Ryzen further down the line to get everything right, but that's just my hunch. Some optimizations should certainly help alleviate most of the issues, however.

AMD screwed the pooch with their launch; they were just lucky Ryzen was actually competitive this time around. They really need to relearn how to release new CPUs next time. They shouldn't expect partners to just sit idly by and be thrown under the bus because AMD decided not to clue them in with enough time.
 
I am happy to see you are all still fighting along fanboy lines and arguing over a few watts. Carry on; can't believe this thread is on top again.
 
I am happy to see you are all still fighting along fanboy lines and arguing over a few watts. Carry on; can't believe this thread is on top again.

Lulz. Either way, we all know which CPU I ultimately went with. I'm just happy to see competition again. I was so tired of the slow cadence and high prices Intel was pushing. I held on to that 2600K forever, because everything was just a slow, modest bump every few years. Only after 6 years of slow bumps in performance was it even starting to become worth it to look at a new CPU. Ryzen changed that up, though. Sure, single-core performance still isn't tremendously better... but having more cores meant *major*, non-trivial improvements in rendering, encoding, and pretty much all multi-threaded tasks. That was worth it for me.

Intel folks should be happy too. You can dispute Ryzen's benchmarks all day, but what is beyond dispute is that Intel got lazy and expensive due to lack of competition. They *needed* a kick in the ass. And they finally got one.
 
Lulz. Either way, we all know which CPU I ultimately went with. I'm just happy to see competition again. I was so tired of the slow cadence and high prices Intel was pushing. I held on to that 2600K forever, because everything was just a slow, modest bump every few years. Only after 6 years of slow bumps in performance was it even starting to become worth it to look at a new CPU. Ryzen changed that up, though. Sure, single-core performance still isn't tremendously better... but having more cores meant *major*, non-trivial improvements in rendering, encoding, and pretty much all multi-threaded tasks. That was worth it for me.

Intel folks should be happy too. You can dispute Ryzen's benchmarks all day, but what is beyond dispute is that Intel got lazy and expensive due to lack of competition. They *needed* a kick in the ass. And they finally got one.

The thing is, I don't believe the story/situation is as simple as that. I believe the IPC we are seeing is getting close to the limit of x86 silicon designs. The fact that AMD landed at a very similar IPC is another indicator of that, IMO.

In the past, IPC was well below one, due to not having enough transistors for dedicated execution resources. That meant each type of operation had to share resources. Throwing more transistors at the problem removed that sharing, allowing IPC to progress from something like 0.1 to 1 in a few short years. Branch prediction and the micro-op queue boosted it well above 1, culminating at somewhere around 9 on average. Now think about that for a second: on average, CPUs are able to predict the results of the next 9 calculations. That is quite a feat in and of itself. In order to see a 20% increase, a CPU would have to accurately predict 11. Anyone who knows statistics will know that it gets exponentially more difficult the further ahead you predict. Going from 9 to 10 is probably at least 5 times as difficult as going from 8 to 9.
 
The thing is, I don't believe the story/situation is as simple as that. I believe the IPC we are seeing is getting close to the limit of x86 silicon designs. The fact that AMD landed at a very similar IPC is another indicator of that, IMO.

In the past, IPC was well below one, due to not having enough transistors for dedicated execution resources. That meant each type of operation had to share resources. Throwing more transistors at the problem removed that sharing, allowing IPC to progress from something like 0.1 to 1 in a few short years. Branch prediction and the micro-op queue boosted it well above 1, culminating at somewhere around 9 on average. Now think about that for a second: on average, CPUs are able to predict the results of the next 9 calculations. That is quite a feat in and of itself. In order to see a 20% increase, a CPU would have to accurately predict 11. Anyone who knows statistics will know that it gets exponentially more difficult the further ahead you predict. Going from 9 to 10 is probably at least 5 times as difficult as going from 8 to 9.

It's not just IPC. There are several ways to increase performance, as you know: IPC, clocks, cores... So if IPC is starting to max out, go for clocks. If clocks are maxing out, go for cores. Intel could easily have given us a 6-core mainstream Kaby Lake if they had wanted to, and then Ryzen would have fallen flat. I'd have bought a 6-core Kaby Lake all day.
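That decomposition can be written as throughput ≈ IPC × clock × cores (times a multi-threaded scaling factor). A toy comparison with made-up numbers, just to illustrate the trade-off:

```python
# Toy throughput model: perf ~ IPC * clock_GHz * cores * mt_scaling.
# All numbers are illustrative, not benchmark results.

def throughput(ipc, clock_ghz, cores, mt_scaling=0.9):
    """Rough relative throughput for a fully parallel workload."""
    return ipc * clock_ghz * cores * mt_scaling

quad = throughput(ipc=1.0, clock_ghz=4.5, cores=4)    # high-clock quad core
octo = throughput(ipc=0.95, clock_ghz=3.8, cores=8)   # lower-clock eight core
print(f"8-core advantage in a parallel load: {octo / quad:.2f}x")
```

The point being that a core-count deficit is hard to claw back with clocks alone once clocks are near their ceiling.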
 
It's not just IPC. There are several ways to increase performance, as you know: IPC, clocks, cores... So if IPC is starting to max out, go for clocks. If clocks are maxing out, go for cores. Intel could easily have given us a 6-core mainstream Kaby Lake if they had wanted to, and then Ryzen would have fallen flat. I'd have bought a 6-core Kaby Lake all day.

Intel could also have given the bloody i7 the eDRAM that worked very well as an L4 cache on the 5775C, but nooooo, because they would probably have to accept lower margins, as the cost would be higher even after adjusting prices.
Still miffed at that, tbh. They go on about being 'for gamers' and carefully ignore how nicely the eDRAM works as an L4 cache on a 'dog shit' CPU called the 5775C (freaking 65W); even if it had limitations at the time, they could have overcome them by now.
Cheers
 
https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209



The same person on AT is doing a lot of work that reviewers don't seem to have the ability to do.

Handbrake:

R7 1700------- .675 FPS/W
i7 6900-------- .584 FPS/W
i7 7700-------- .539 FPS/W

The 8 core 16 thread Ryzen is more efficient than even the 4 core 8 thread Intel 7700! For sure the power-saving techniques used in Ryzen are far more advanced than what Intel's brute-force approach is able to achieve, but it is starting to look like GloFo's 14nm LPP is also just a better process at around 3.8GHz and lower. With a more efficient design and process, AMD should be in a great position as they scale up the core count.
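Those Handbrake numbers are just throughput divided by power, so the relative standings are easy to check. A quick sketch (the dictionary and variable names are mine; the FPS/W figures are from the quoted post):

```python
# Handbrake efficiency figures (FPS per watt) from the quoted post.
fps_per_watt = {"R7 1700": 0.675, "i7 6900K": 0.584, "i7 7700K": 0.539}

baseline = fps_per_watt["i7 7700K"]
for cpu, eff in sorted(fps_per_watt.items(), key=lambda kv: kv[1], reverse=True):
    # Relative efficiency vs. the quad-core 7700K.
    print(f"{cpu}: {eff:.3f} FPS/W ({eff / baseline:.0%} of the 7700K)")
```

On these figures the 1700 comes out roughly 25% ahead of the 7700K and about 16% ahead of the 6900K per watt.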
 
https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209



The same person on AT is doing a lot of work that reviewers don't seem to have the ability to do.

Handbrake:

R7 1700------- .675 FPS/W
i7 6900-------- .584 FPS/W
i7 7700-------- .539 FPS/W

The 8 core 16 thread Ryzen is more efficient than even the 4 core 8 thread Intel 7700! For sure the power-saving techniques used in Ryzen are far more advanced than what Intel's brute-force approach is able to achieve, but it is starting to look like GloFo's 14nm LPP is also just a better process at around 3.8GHz and lower. With a more efficient design and process, AMD should be in a great position as they scale up the core count.

Yes, keep making yourself look stupid with stupid comparisons. If you want to truly compare efficiency, compare them at the same clocks before making your claims: a mid-4 GHz CPU against a low-3 GHz CPU that can't get above 4 GHz. Your fanboyism just has no limits, does it?

It's not just IPC. There are several ways to increase performance, as you know. IPC, clocks, cores... So if IPC is starting to max out, go for clocks. If clocks are maxing out, go for cores. Intel could have easily given us a 6 core mainstream Kaby Lake if they wanted to, and then Ryzen would have fallen flat. I'd have bought a 6 core Kaby Lake all day.

Clocks are process and architecture dependent, and often silicon limited. We have been at a max of ~5 GHz on ambient cooling for years.

More cores is irrelevant to performance progress when people talk about architecture improvements. You don't complain about Ferraris not selling at Corvette prices. Not selling the product you want at the price you want does not denote a lack of progress.

Now, I'm not arguing that Ryzen doesn't provide Intel with some much needed competition. However, Intel and AMD are for profit companies in an industry where having money for cutting edge research means getting people to upgrade from their older stuff. High volume does not necessarily guarantee greater profits, especially at low margins. Also keep in mind Intel's mainstream and mobile processors are typically on the same manufacturing line.
 
Not too concerned about fanboys with blinders and car analogies not liking what they see. ;)
One thing that hasn't been mentioned as often is Ryzen's superior SMT scaling. Although that wouldn't account for the total efficiency lead, it would obviously play a part.
 
Someone made this table on another forum.

https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209

[image: efficiency comparison table]


That's a complete smackdown. Intel desperately needs 10nm to get close again, and with all the problems Intel has been having with its process manufacturing lately, it does not look good for them! What a huge turnaround.

Really looking forward to adding Vega to my AM4 platform. :) Vega should be the first architecture designed after those 100,000 confidential documents were stolen from AMD and walked through Nvidia's doors, so bye bye NV, you thievin' bastards! :p

That table is misleading. Its creator is using total platform power, and BDW-E uses a more complex platform (quad-channel memory, more I/O, ...). Moreover, it uses Cinebench, a favorable case for RyZen due to the bigger L2, which places RyZen's performance above its average. A bigger L2 on SKL would paint a different picture; 10nm isn't needed for this.

To get the efficiency of the CPU, one has to use the power consumed by the CPU alone. PcPer doesn't give this data, but we can estimate it from the full-load and idle platform figures.

I am not going to redo the whole table. Consider only the 1800X and the 6900k. Performance is the same, but power has to be replaced by

1800X: (155.1 - 37.6) W= 117.5 W
6900k: (160.5 - 70.2) W = 90.3 W

Therefore efficiencies for the CPUs are

1800X: 1620cb / 117.5 W = 13.8 cb/W
6900k: 1486cb / 90.3 W = 16.5 cb/W

The 1800X is less efficient, contrary to what the author of that table claimed. But we already knew that the 1800X is less efficient than the 6900K:

[image: efficiency chart]
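The load-minus-idle estimate is simple to reproduce. A small sketch using only the platform figures quoted above (the function name is mine):

```python
# Estimate CPU-only power as (full-load platform W - idle platform W),
# then compute Cinebench points per watt.
def cb_efficiency(score_cb, load_w, idle_w):
    cpu_w = load_w - idle_w
    return cpu_w, score_cb / cpu_w

w_1800x, eff_1800x = cb_efficiency(1620, 155.1, 37.6)
w_6900k, eff_6900k = cb_efficiency(1486, 160.5, 70.2)
print(f"1800X: {w_1800x:.1f} W -> {eff_1800x:.1f} cb/W")  # 117.5 W -> 13.8 cb/W
print(f"6900K: {w_6900k:.1f} W -> {eff_6900k:.1f} cb/W")  # 90.3 W -> 16.5 cb/W
```

The caveat is that the idle figure also hides chipset and VRM differences between platforms, so this is an estimate, not a measurement.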
 
https://forums.anandtech.com/thread...eviews-prices-and-discussion.2499879/page-209



The same person on AT is doing a lot of work that reviewers don't seem to have the ability to do.

Handbrake:

R7 1700------- .675 FPS/W
i7 6900-------- .584 FPS/W
i7 7700-------- .539 FPS/W

The 8 core 16 thread Ryzen is more efficient than even the 4 core 8 thread Intel 7700! For sure the power-saving techniques used in Ryzen are far more advanced than what Intel's brute-force approach is able to achieve

Nope. A lower-clocked octo-core beating a high-clocked quad-core on efficiency is just a consequence of the laws of physics. Power depends linearly on area, but non-linearly on frequency.

Moreover, Handbrake is a GPU-like throughput workload, and moar cores with low clocks is the more suitable chip for it; a high-clocked quad is optimized for latency-sensitive workloads.
 
Nope. A lower-clocked octo-core beating a high-clocked quad-core on efficiency is just a consequence of the laws of physics. Power depends linearly on area, but non-linearly on frequency.

Moreover, Handbrake is a GPU-like throughput workload, and moar cores with low clocks is the more suitable chip for it; a high-clocked quad is optimized for latency-sensitive workloads.
Power increases linearly with frequency and quadratically with voltage. Check your facts.
 
Power increases linearly with frequency and quadratically with voltage. Check your facts.

There is a relationship between voltage and frequency. When this is taken into account, we obtain the well-known nonlinear dependence of Power with frequency

[image: power vs. frequency curve (Ivy Bridge)]
 
There is a relationship between voltage and frequency. When this is taken into account, we obtain the well-known nonlinear dependence of Power with frequency
That only applies if you vary the voltage by frequency, which isn't a requirement. If you fix the voltage at say 1V, the relationship will be linear for lower clock frequencies. Power optimizations aside, it is a linear relationship.
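Both sides of this exchange are describing the same first-order dynamic-power model, P ≈ C·V²·f; the disagreement is only over whether V is held fixed or scaled with f. A toy sketch (the capacitance and the voltage/frequency pairs are made-up illustrative values, not real chip data):

```python
# First-order dynamic power: P = C * V**2 * f (C: switched capacitance).
def dyn_power(cap, volt, freq):
    return cap * volt**2 * freq

CAP = 1e-9  # illustrative value, not a real chip's capacitance

# Fixed voltage: doubling frequency doubles power (linear in f).
p_3ghz = dyn_power(CAP, 1.0, 3.0e9)
p_6ghz_same_v = dyn_power(CAP, 1.0, 6.0e9)
print(round(p_6ghz_same_v / p_3ghz, 2))  # 2.0

# DVFS: the higher clock also needs a higher voltage, so power
# grows faster than linearly once V(f) is folded in.
p_6ghz_dvfs = dyn_power(CAP, 1.2, 6.0e9)
print(round(p_6ghz_dvfs / p_3ghz, 2))  # 2.88
```

At fixed voltage the f term alone gives the linear relationship; under DVFS the V(f) dependence folds in roughly another factor of f², which is where the familiar cubic-ish power curve comes from.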
 
That only applies if you vary the voltage by frequency, which isn't a requirement. If you fix the voltage at say 1V, the relationship will be linear for lower clock frequencies. Power optimizations aside, it is a linear relationship.
Usage of corner cases does little good to your point though, considering that automatic voltage scaling is a thing.
 
AND WHY DO THE INTEL CPUs run so HOT!

Crappy TIM under the IHS. As a 7700K owner, I can say AMD did it right by using solder. People are getting 15C or better drops in temps by delidding their 7X00K chips. I have an NH-D14 on my 7700K at 4.8GHz @ 1.24V, and without a delid it hits the high 70s under full load; AVX takes it to the low/mid 80s.
 
Usage of corner cases does little good to your point though, considering that automatic voltage scaling is a thing.
What corner case? That's literally the direct relationship, which is linear, according to the laws of physics.

Voltage will affect the max obtainable clocks, but automatic voltage scaling is a separate relationship. Varying a bunch of other variables to make that power curve is the corner case.
 
You should also ask around with Dell, Lenovo etc why they dont pick Naples or Ryzen for workstations and servers.

They would if it weren't for contract issues. Dell Technologies mostly uses Intel on high-end products because it's top of the line and proves they offer the "best of the best".
I do see AMD being used for low-end/mid-range systems soon.
 
What corner case?
Different frequencies at the same voltage on the same CPU. That is indeed a corner case nowadays.
Varying a bunch of other variables to make that power curve is the corner case.
Fair, but which one is the more relevant corner case? I dare claim that the only relevant corner cases are automatic scaling and manual Vmin-Fmax tweaking. Neither leads to a linear power/frequency relationship.
 
Fair, but which one is more relevant corner case? I dare claim that the only relevant corner cases are automatic scaling and manual Vmin-Fmax tweaking. Neither leads to linear power/frequency relationship.
Still the tweaking of voltages. Even with that curve, the voltage follows a limited number of power states or requires a circuit to calculate and adjust it. Some of the recent AMD power savings came from adjusting voltage dynamically based on detected errors, to my understanding. Disable that feature and you're left with the linear relationship again.
 