Why is AMD's Zen more efficient than Intel's Core?

If you do not find anything weird, then you are not looking at all.

Not only is it not weird, but I knew before launch that RyZen would perform like that in those benches. On 27 February I wrote a PM to a certain someone:

It seems XXXXXXXX is not totally solved and affects performance in benches such as 7-Zip
 
Not only is it not weird, but I knew before launch that RyZen would perform like that in those benches. On 27 February I wrote a PM to a certain someone:
Is English your native tongue? I am going to guess either it isn't or you are just being obtuse. Weird as in out of line with all other tests: in your suite, those 3 skewed heavily against all the others, therefore bringing down the total average, which was their point to begin with. You keep deflecting and sidestepping the questions, not once answering directly, so it must be your comprehension of English, right? I mean, that would make sense.
 
You can add any benchmark with a small, unrealistic working set that benefits a 512KB L2, or outdated software without AVX support. Or one with a bottleneck somewhere other than the CPU.


So any benchmark where Ryzen matches or dominates BW-E is a 'small unrealistic working set'?

Weren't you hanging out for Cinebench results to surface for this very reason back when CPU-Z benches were floating about? (Not sure if quoting users from another forum is against the rules here, so I won't paste it.)

As for AVX2, well, apart from the fact you simply can't use it everywhere... isn't it interesting that even your own example illustrates the real-world uplift is, what, 19%? All that extra die area, power consumption, and higher TDP requirements (to account for the worst case), plus negative AVX multiplier offsets, for that. Seems totally worth it so far.
 
And as mentioned above, RyZen is better or worse than expected depending on the personal expectations of each one of us.
Since you're tracking people's "personal expectations".... mine was that it wouldn't beat Intel, but would be pretty much on par with it. That means I was amazed that it did so well where it has, and admittedly a bit let down that the gaming isn't as good; however, at the same time I can't deny that in the majority of the instances that I'VE seen in reviews, the performance has been at a point where being slower isn't going to make a difference. By the time the framerate had dropped to a meaningful degree, it was because of the GPU, and as such the Intel "suffered" the same.

If at 4K the Ryzen gets 35FPS while the Intel gets 75FPS, then things become a bit more relevant. However, given I'm still a 1080p gamer due to not needing a 4K TV (no personal access to content), Ryzen is the more future-proof choice, as I plan to keep it for many years.
 
Well, so far RyZen gaming performance exceeds my monitor refresh rate by a large margin at lower resolutions in games which I do not play or will never use. At native resolution, the two 1070s in SLI are the limitation. So for gaming performance it would make zero difference whether I had a 7700K at 5.1GHz or ran my RyZen at 3GHz. Now, for the other stuff I do, RyZen creams the 7700K. Folks just need to buy what is best for them.
 
So any benchmark where Ryzen matches or dominates BW-E is a 'small unrealistic working set'?

Weren't you hanging out for Cinebench results to surface for this very reason back when CPU-Z benches were floating about? (Not sure if quoting users from another forum is against the rules here, so I won't paste it.)

As for AVX2, well, apart from the fact you simply can't use it everywhere... isn't it interesting that even your own example illustrates the real-world uplift is, what, 19%? All that extra die area, power consumption, and higher TDP requirements (to account for the worst case), plus negative AVX multiplier offsets, for that. Seems totally worth it so far.

I want to see your excuse when SKL-X launches and you have to explain why these benches are suddenly so much faster on the same Skylake uarch ;)

And I think you confuse 100% AVX/FMA loads with variable loads.
 
Well, so far RyZen gaming performance exceeds my monitor refresh rate by a large margin at lower resolutions in games which I do not play or will never use. At native resolution, the two 1070s in SLI are the limitation. So for gaming performance it would make zero difference whether I had a 7700K at 5.1GHz or ran my RyZen at 3GHz. Now, for the other stuff I do, RyZen creams the 7700K. Folks just need to buy what is best for them.

Unless your gaming experience is very limited or based on prescripted benchmarks, I doubt it's true.
 
Since you're tracking people's "personal expectations"

I am not. I was only replying to someone who posted his personal expectations as if they were the expectations of everyone else.

Back to the topic: many people were expecting the R7-1800X to be more efficient than the i7-6900K. Some people, such as 'chip-architect', even wrote that the 1800X was 60% more efficient than the 6900K. The title of this thread is "Why is AMD's Zen more efficient than Intel's Core?", but data, hard data, proves otherwise: Zen is less efficient, about 15% less efficient.

We know that the R7-1800X has a real TDP of 125W or 130W. And CanardPC just confirmed that the real TDP of the R7-1700 model is 90W.


The French "AMD bullshit son TDP" doesn't even need translation.
 
So any benchmark where Ryzen matches or dominates BW-E is a 'small unrealistic working set'?

Weren't you hanging out for Cinebench results to surface for this very reason back when CPU-Z benches were floating about? (Not sure if quoting users from another forum is against the rules here, so I won't paste it.)

As for AVX2, well, apart from the fact you simply can't use it everywhere... isn't it interesting that even your own example illustrates the real-world uplift is, what, 19%? All that extra die area, power consumption, and higher TDP requirements (to account for the worst case), plus negative AVX multiplier offsets, for that. Seems totally worth it so far.

That is not what he said.

What he said is that toy benches like CPU-Z or the integer subtest of PassMark are small enough to fit in RyZen's 512KB of L2 cache, and the extra performance that gives RyZen doesn't translate to realistic workloads. Moreover, CPU-Z has a bug that affects cores with only 256KB of cache.
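To put rough numbers on that "fits in L2" point, here is a back-of-the-envelope sketch in Python (the benchmark buffer size is hypothetical, purely for illustration):

# Working-set arithmetic behind the "toy bench" argument.
L2_ZEN = 512 * 1024    # bytes of L2 per Zen core
L2_BDW = 256 * 1024    # bytes of L2 per Broadwell-E core

def fits_in_l2(working_set_bytes, l2_bytes):
    return working_set_bytes <= l2_bytes

ws = 48 * 1024 * 8     # hypothetical buffer: 48K 8-byte ints = 384KB
print(fits_in_l2(ws, L2_ZEN))  # True  -> served mostly from L2 on Zen
print(fits_in_l2(ws, L2_BDW))  # False -> spills to L3 on a 256KB-L2 core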

About Cinebench: Cinebench is a workload where AMD now performs above average. Still, the Cinebench results show that RyZen is not what AMD promised us.

PCWorld: "My own tests don’t quite match AMD’s results."

PCPER: "Still, the 8% gap between the 6900K and the Ryzen 7 1800X at 3.5 GHz tells me that AMD’s claims of equal IPC appear to have been overstated."

About AVX2: Haswell/Broadwell have 2x the maximum throughput of Sandy/Ivy/RyZen (32 FLOPs per cycle per core vs 16). The exact performance gap will depend on the workload; it can be as low as 19% or as large as 60%, depending on the algorithm and on how much of the computation can be vectorized.
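A quick Amdahl-style estimate shows why the gap varies that much. A sketch in Python; the vectorizable fractions are hypothetical, chosen only to bracket the 19%-60% range:

# If the vectorized part runs 2x faster (2x FMA throughput), the overall
# speedup is 1 / ((1 - p) + p / 2), where p is the fraction of runtime
# that actually vectorizes.
def avx2_speedup(p, gain=2.0):
    return 1.0 / ((1.0 - p) + p / gain)

print(round(avx2_speedup(0.32), 2))  # 1.19 -> the ~19% case
print(round(avx2_speedup(0.75), 2))  # 1.6  -> the ~60% case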

Back to the topic: RyZen is not more efficient. It is less efficient. As demonstrated above, the real TDP is not the advertised TDP.
 
I am not. I was only replying to someone who posted his personal expectations as if they were the expectations of everyone else.

Back to the topic: many people were expecting the R7-1800X to be more efficient than the i7-6900K. Some people, such as 'chip-architect', even wrote that the 1800X was 60% more efficient than the 6900K. The title of this thread is "Why is AMD's Zen more efficient than Intel's Core?", but data, hard data, proves otherwise: Zen is less efficient, about 15% less efficient.

We know that the R7-1800X has a real TDP of 125W or 130W. And CanardPC just confirmed that the real TDP of the R7-1700 model is 90W.

The French "AMD bullshit son TDP" doesn't even need translation.
Change AMD to juanrga and TDP to AMD, and yes, you have a point. How many times, and I say this expecting some number hopefully less than 10 but likely more than 1000, do we have to tell you TDP does not equal watts? I have told you numerous times: my 8350 is rated at 120W TDP (later changed to 140W on MoBo listings) and pulls in excess of 200W under full load, stock. At 4.7GHz I can pull in excess of 400W from the wall, so likely at or just over 300W at the CPU.

The TDP number refers to the minimum cooler needed to maintain normal workloads, not what's necessary for full 100% loads. And in any case, with downclocking and TDP restraints built into MoBos and CPUs, that TDP number is less of an issue as far as maximums go.
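For what it's worth, a rough sanity check of that wall-power estimate, assuming (hypothetically) ~90% PSU efficiency and ~60W for the rest of the system:

wall_watts = 400          # measured at the wall
psu_efficiency = 0.90     # assumed PSU efficiency at this load
rest_of_system = 60       # assumed draw of board, drives, fans, etc., in watts

dc_watts = wall_watts * psu_efficiency   # 360W delivered by the PSU
cpu_watts = dc_watts - rest_of_system    # ~300W left for the CPU
print(cpu_watts)                         # 300.0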

You are just being obtuse to further spread ignorance and hopefully quell the widely apparent enthusiasm for AMD's new lineup. Actually, this applies to Shintai as well.

As far as workloads, again you are playing ignorant of the context, making excuses when AMD wins a benchmark. If you look at previous AMD vs Intel power usage numbers, then in that context AMD beat Intel. Look at it like this: Intel has kept increasing efficiency and IPC generation after generation over the past few years (AVX2 skews results in this regard, as it pertains to specific coding, not architecture efficiency). AMD releases a new arch and leaps right up to Intel. All on a new node and a new arch, thus leading to a reasonable conclusion that AMD has in fact got a winner on their hands, and with further iterations may be able to beat Intel in that regard.
 
I don't know how many reviews show the AMD RyZen 8-core rig using less power than the 6800K rig while outperforming the 6800K. Some folks just have rocks for a thought process. Then again, I don't really care about the small difference, but I do care very much about the price difference. My RyZen rig ROCKS! I have not had this much fun building a system in over 6 years.
 
Change AMD to juanrga and TDP to AMD, and yes, you have a point. How many times, and I say this expecting some number hopefully less than 10 but likely more than 1000, do we have to tell you TDP does not equal watts? I have told you numerous times: my 8350 is rated at 120W TDP (later changed to 140W on MoBo listings) and pulls in excess of 200W under full load, stock. At 4.7GHz I can pull in excess of 400W from the wall, so likely at or just over 300W at the CPU.

The TDP number refers to the minimum cooler needed to maintain normal workloads, not what's necessary for full 100% loads. And in any case, with downclocking and TDP restraints built into MoBos and CPUs, that TDP number is less of an issue as far as maximums go.

You are just being obtuse to further spread ignorance and hopefully quell the widely apparent enthusiasm for AMD's new lineup. Actually, this applies to Shintai as well.

As far as workloads, again you are playing ignorant of the context, making excuses when AMD wins a benchmark. If you look at previous AMD vs Intel power usage numbers, then in that context AMD beat Intel. Look at it like this: Intel has kept increasing efficiency and IPC generation after generation over the past few years (AVX2 skews results in this regard, as it pertains to specific coding, not architecture efficiency). AMD releases a new arch and leaps right up to Intel. All on a new node and a new arch, thus leading to a reasonable conclusion that AMD has in fact got a winner on their hands, and with further iterations may be able to beat Intel in that regard.

Believe it or not (I don't care), but reviewers, CanardPC, and I know what TDP is and is not. And all of us agree that the marketing numbers AMD gives us are not the real TDPs of the chips.

Reviewers, CanardPC, I, and other people know that the official TDP numbers AMD gives for its chips are only marketing labels. We demonstrated it with hard data. We have also demonstrated with hard data that the i7-6900K is more efficient than the R7-1800X. As CanardPC put it, "AMD bullshit son TDP". We also know that the Earth is not flat, even though some people will continue to argue that it is, and we know that people continuously insult anyone who doesn't agree that the Earth is flat.
 
If you're going to try to compare TDP numbers between AMD and Intel (or anyone else), you're gonna have a bad time. AMD and Intel define TDP differently.

You need to measure actual power draw and compare that to work done, period.
 
If you're going to try to compare TDP numbers between AMD and Intel (or anyone else), you're gonna have a bad time. AMD and Intel define TDP differently.

You need to measure actual power draw and compare that to work done, period.
...divided by time taken to get it done.
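In other words, something like this (the render-job numbers are hypothetical, just to show the metric):

# Efficiency = work done per unit of energy; energy = average power x time.
def jobs_per_kilojoule(jobs_done, avg_watts, seconds):
    energy_kj = avg_watts * seconds / 1000.0
    return jobs_done / energy_kj

# Hypothetical: chip A renders a scene in 220s at 140W,
# chip B renders the same scene in 200s at 165W.
print(jobs_per_kilojoule(1, 140, 220))  # ~0.0325 jobs/kJ
print(jobs_per_kilojoule(1, 165, 200))  # ~0.0303 jobs/kJ -> B is faster but less efficient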
 
Believe it or not (I don't care), but reviewers, CanardPC, and I know what TDP is and is not. And all of us agree that the marketing numbers AMD gives us are not the real TDPs of the chips.

Reviewers, CanardPC, I, and other people know that the official TDP numbers AMD gives for its chips are only marketing labels. We demonstrated it with hard data. We have also demonstrated with hard data that the i7-6900K is more efficient than the R7-1800X.

TDP numbers have always been bullsh*t. Manufacturers don't use the same standards when discussing them. It's kind of like how car engines were rated, once upon a time, where total horsepower was sometimes advertised without any "accessories" like an alternator, A/C compressor, power steering pump, etc... while hooked directly up to an engine dyno. Hell, crank horsepower is still kind of misleading.

However, to the original topic: it is my understanding, from the benchmarks I've seen, that for power draw at idle and under light load, Ryzen is more efficient. When downclocked, it is ridiculously efficient (as much due to process as to design). But around 3.6GHz, Ryzen starts to become less efficient under load, and by the time you hit 4GHz, both power draw and thermals are going up like a rocket. So under full load, as the CPU tries to extract maximum available performance, it's drawing considerably more power. No longer a roughly linear increase, but a super-linear one.
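That steep climb falls out of the usual dynamic-power approximation P ~ C * V^2 * f: pushing frequency up usually means pushing voltage up with it, so power grows much faster than the clocks do. A toy illustration with invented voltage/frequency points:

# Dynamic power scales roughly with C * V^2 * f (C held constant here).
def rel_power(volts, ghz, base_volts=1.0, base_ghz=3.0):
    return (volts / base_volts) ** 2 * (ghz / base_ghz)

# Hypothetical operating points:
print(round(rel_power(1.00, 3.0), 2))  # 1.0  -> baseline
print(round(rel_power(1.20, 3.6), 2))  # 1.73 -> +20% clocks, +73% power
print(round(rel_power(1.40, 4.0), 2))  # 2.61 -> +33% clocks, ~2.6x power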

Point being, we rarely keep a machine constantly under full load. Even for me, with my workload, it's not at 100% load for all that much of the day. So net-net, Ryzen may still be more efficient in real-world usage regardless of what load efficiency is like, because idle and light-load efficiency is better.
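The duty-cycle argument in numbers (the load mix and the power figures below are entirely hypothetical):

# Average power over a day = sum(share_of_day * power_in_that_state).
def daily_avg_watts(mix):
    return sum(share * watts for share, watts in mix)

# Hypothetical day: 70% idle, 25% light load, 5% full load.
chip_a = daily_avg_watts([(0.70, 35), (0.25, 70), (0.05, 180)])  # better idle/light
chip_b = daily_avg_watts([(0.70, 50), (0.25, 80), (0.05, 160)])  # better full load
print(chip_a, chip_b)  # 51.0 63.0 -> the better idle/light chip wins the day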

Of course, if you DO load your machine near maximum all day long, that's a very different story. But the kind of user that does THAT should really be buying dual socket Xeons, or something... or maybe waiting for Naples.
 
Unless your gaming experience is very limited or based on prescripted benchmarks, I doubt it's true.

Man, really? You hate AMD so much that you are going to tell someone that? WTF. Lol. Here is a suggestion: why not show some love to the dude who is enjoying his rig, and get off the "buy Intel only" bandwagon.
 
From what I've been reading around the web, here is what I've picked up:

The Samsung/GloFo process is a mobile-focused process, and as such is meant for low power and high efficiency. That also explains the clock ceiling, though they did get a fair bit out of it...
Based on TheStilt's test results, it seems to me Ryzen was aimed at being more of a 3GHz part, but maybe that's in part due to the process?
Ryzen's stock voltage isn't necessarily what it needs to run at its speed, due to binning being based on voltage, not clocks: a wafer gets a blanket programmed voltage rating based on the worst-performing die (see the sketch at the end of this post).
It's when you clock Ryzen down that things start to get really interesting and telling in regards to efficiency and power draw.

The bulk of all that can be discerned by reading this: Ryzen: Strictly technical
Note that I hate myself for having slacked off after around page 10, since there are 27 more pages now! lol
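On the binning point above, a toy sketch of that "blanket voltage" idea (the per-die figures are invented for illustration):

# If a wafer's programmed voltage is set by its worst die, every other die
# on that wafer ships with more voltage than it individually needs.
def wafer_programmed_voltage(die_vmin):
    return max(die_vmin)

die_vmin = [1.18, 1.22, 1.25, 1.31, 1.35]  # hypothetical per-die Vmin at stock clocks
print(wafer_programmed_voltage(die_vmin))  # 1.35 -> why undervolting often works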

It will be interesting to see the clock frequency of IBM's Power9 processors, as they are also made at GlobalFoundries.

The same was said about GPUs for AMD Polaris, and yet Nvidia's 1050/1050 Ti still hits just over 1900MHz OC'd (it cannot go higher because a 75W hard limit is applied even if the AIB provides an auxiliary 6-pin and split power distribution) using the Samsung fab; however, the voltage/performance envelope is subtly different between Samsung/GF and TSMC.

Cheers
 
...divided by time taken to get it done.

That'd be implicit, as we're not talking about FPS, but a particular job like a render pass. But I guess it needs to be said for those that don't yet have their mind wrapped around the issue.
 
It will be interesting to see the clock frequency of IBM's Power9 processors, as they are also made at GlobalFoundries.

The same was said about GPUs for AMD Polaris, and yet Nvidia's 1050/1050 Ti still hits just over 1900MHz OC'd (it cannot go higher because a 75W hard limit is applied even if the AIB provides an auxiliary 6-pin and split power distribution) using the Samsung fab; however, the voltage/performance envelope is subtly different between Samsung/GF and TSMC.

Cheers
Yeah, but IBM is using the 14HP process whereas Ryzen uses the 14LPP process (as is Polaris; and sorry for the WCCFTech link, but I wasn't going to dig for a better English source lol)

Pascal is on 16nm though, and TSMC on top of that. So it's not really as comparable, for those two reasons. It'd be cool to be able to compare Power9 to Ryzen, but even then they are completely different architectures, which changes things further. Look at how Pentium 4s clocked compared to the other chips at the time; everyone was like :O lol
 
Yeah, but IBM is using the 14HP process whereas Ryzen uses the 14LPP process (as is Polaris; and sorry for the WCCFTech link, but I wasn't going to dig for a better English source lol)

Pascal is on 16nm though, and TSMC on top of that. So it's not really as comparable, for those two reasons. It'd be cool to be able to compare Power9 to Ryzen, but even then they are completely different architectures, which changes things further. Look at how Pentium 4s clocked compared to the other chips at the time; everyone was like :O lol
The 1050/1050 Ti is on the Samsung fab with LPP; the rest are TSMC. Hence we can definitely say the voltage/performance envelope is subtly different.

Why did AMD go with 14LPP rather than 14HP?
The first CPUs are being deployed by IBM around summer 2017, unless it was just 1-2 quarters too late for Ryzen.
So is Naples 14LPP or 14HP?
If it were timescales, you would expect it to be 14HP.

Edit:
Actually, that IBM article does not expressly say 14HP, just that it will "be a 14nm high-performance FinFET".
So it is down to interpretation, as even GF calls its LPP "high performance": https://www.globalfoundries.com/technology-solutions/cmos/performance/14lpp

I think they have their own process with GlobalFoundries.

Cheers
 
The 1050/1050 Ti is on the Samsung fab with LPP; the rest are TSMC. Hence we can definitely say the voltage/performance envelope is subtly different.

Why did AMD go with 14LPP rather than 14HP?
The first CPUs are being deployed by IBM around summer 2017, unless it was just 1-2 quarters too late for Ryzen.
So is Naples 14LPP or 14HP?
If it were timescales, you would expect it to be 14HP.

Edit:
Actually, that IBM article does not expressly say 14HP, just that it will "be a 14nm high-performance FinFET".
So it is down to interpretation, as even GF calls its LPP "high performance": https://www.globalfoundries.com/technology-solutions/cmos/performance/14lpp

Cheers
Ah, I didn't know the 1050 was made on a different process. Interesting :) Speculation was that actual Samsung 14LPP can't be 100% compared to GloFo's 14LPP, despite GloFo having licensed the process from Samsung. The only 'reason' given for why that is was the different machinery used, and possibly even different chemicals (or at least different suppliers of the same chemicals, thus different quality). Whether or not that's true, we may never really know.

As for Naples, what I picked up in Stilt's Ryzen thread over at the AnandTech forums was that those would be manufactured on GloFo 14LPE, which is supposed to be even more efficient than LPP, but will clock even lower. What that translates to is that Naples will be high-core, higher-thread, low-clock (~2GHz or perhaps even less), but also extremely low power and heat output. It's those last two which are really of interest to the datacenters, but when you add in the high core count you hit the trifecta... so long as it can still perform (which, from what has been shown with Ryzen downclocked, it definitely does).

Also in that same thread they've discussed why AMD likely didn't use 14HP. The power draw becomes greater, and as such the thermal output climbs, which apparently would spiral a bit out of control if Ryzen used it. That would have made the clocks even worse, not better, due to hitting the TDP window too soon.

In regards to the link, if it doesn't say it, I know this PDF does:
https://openpowerfoundation.org/wp-content/uploads/2016/04/5_Brad-McCredie.IBM_.pdf
"Global Foundries 14HP finFET technology with eDRAM"
 
Also in that same thread they've discussed why AMD likely didn't use 14HP. The power draw becomes greater, and as such the thermal output climbs, which apparently would spiral a bit out of control if Ryzen used it. That would have made the clocks even worse, not better, due to hitting the TDP window too soon.

We have to realize that AMD was extremely TDP-sensitive for this release; even missing some of their pre-release performance hype can't compare to the volume buyers shunning Ryzen if TDPs were in the 150W+ range, even though most enthusiasts wouldn't break a sweat if that meant stable 4.5GHz+ operation.

Maybe as AMD fills the slots that actually get their revenue streams moving again, they can play around with different processes and get some hot-clocked ~200W halo parts out.
 
Ah, I didn't know the 1050 was made on a different process. Interesting :) Speculation was that actual Samsung 14LPP can't be 100% compared to GloFo's 14LPP, despite GloFo having licensed the process from Samsung. The only 'reason' given for why that is was the different machinery used, and possibly even different chemicals (or at least different suppliers of the same chemicals, thus different quality). Whether or not that's true, we may never really know.

As for Naples, what I picked up in Stilt's Ryzen thread over at the AnandTech forums was that those would be manufactured on GloFo 14LPE, which is supposed to be even more efficient than LPP, but will clock even lower. What that translates to is that Naples will be high-core, higher-thread, low-clock (~2GHz or perhaps even less), but also extremely low power and heat output. It's those last two which are really of interest to the datacenters, but when you add in the high core count you hit the trifecta... so long as it can still perform (which, from what has been shown with Ryzen downclocked, it definitely does).

Also in that same thread they've discussed why AMD likely didn't use 14HP. The power draw becomes greater, and as such the thermal output climbs, which apparently would spiral a bit out of control if Ryzen used it. That would have made the clocks even worse, not better, due to hitting the TDP window too soon.

In regards to the link, if it doesn't say it, I know this PDF does:
https://openpowerfoundation.org/wp-content/uploads/2016/04/5_Brad-McCredie.IBM_.pdf
"Global Foundries 14HP finFET technology with eDRAM"
Samsung LPP and GF LPP are close enough, tbh, with regards to comparing the behaviour of Pascal between the 1050/1050 Ti and the TSMC models.

Thinking about it a bit more, I think IBM may be using their own process at GlobalFoundries.
Well, that is disappointing if Naples has to go with 14LPE, IMO. Yeah, I appreciate that more cores means greater efficiency, but IBM is also using a high core count with Power9 (it is a really nice design), and the same can be said about Skylake Xeons.
Cheers
 
We have to realize that AMD was extremely TDP-sensitive for this release; even missing some of their pre-release performance hype can't compare to the volume buyers shunning Ryzen if TDPs were in the 150w+ range, even though most enthusiasts wouldn't break a sweat if that meant stable 4.5GHz+ operation.

Maybe as AMD fills the slots that actually get their revenue streams moving again they can play around with different processes and get some hot clocked ~200w halo parts out.
Yeah, I don't think most are realizing that fact, particularly the ones bitching about the overclocking headroom as it is. The chips were no doubt originally intended to have a max of 3.4GHz on the high-end models, but due to a few architecture things needing to be ironed out, the performance wasn't entirely where they had hoped. To offset that, they "overclocked", and here we are. However, their labeling has caused the less informed --which seem to, oddly, be mostly the Intel lovers who are slamming Ryzen-- to claim poor overclocks of only ~200MHz on the 1800X, because they think that the Boost Clock is relevant. Unfortunately it isn't, since it, like Intel's, is conditional and only applies to one core. The only reason it's partially relevant in our current technosphere is that software devs haven't jumped all over the multithreaded bandwagon, and so a single core running at 4.1GHz WILL ultimately help on certain occasions. However, since it is a single core's speed, you have to base the overclock amount on the chip's base clock, and in the 1800X's case that's 3.6GHz, meaning 400-500MHz since most get 4GHz to 4.1GHz on ALL cores. :D

Which that brings up a legitimate question... If it boosts a single core, that means both of its threads are operating at 4.1GHz, yes? (Please excuse my ignorance lol I've honestly ignored SMT up until now, since I prefer to use AMD and have lacked that previously)
 
The 1050/1050 Ti is on the Samsung fab with LPP; the rest are TSMC. Hence we can definitely say the voltage/performance envelope is subtly different.

Why did AMD go with 14LPP rather than 14HP?

Because 14HP is not suitable for mobile, GPUs,... and AMD doesn't have the money to port the design to different processes.

So is Naples 14LPP or 14HP?

14LPP.

Edit:
Actually, that IBM article does not expressly say 14HP, just that it will "be a 14nm high-performance FinFET".
So it is down to interpretation, as even GF calls its LPP "high performance": https://www.globalfoundries.com/technology-solutions/cmos/performance/14lpp

No. IBM uses 14HP.

power9.jpg
 
Because 14HP is not suitable for mobile, GPUs,... and AMD doesn't have the money to port the design to different processes.



14LPP.



No. IBM uses 14HP.

power9.jpg

My question about 14HP (answered anyway by another poster) was why AMD is not using it for Naples, considering IBM and Intel are not going low-power/higher-efficiency with their larger-core server CPU dies; it was not about mobile or GPUs :)

IBM calls it 14HP, but I have other reports suggesting it is their own process (IBM's) at GlobalFoundries.
Can you find, or do you have, any info specifically from GF saying that 14HP exists or will be generally available?
There should be information from GF on this by now if they were providing it generally, especially as IBM is using it from this summer onwards.
Thanks
 
The 14HP process is in reality a wafer-burning process made specifically for Power chips. It's only good if you sell $2000+ chips. And it's the last process of its kind.
 
My question about 14HP (answered anyway by another poster) was why AMD is not using it for Naples, considering IBM and Intel are not going low-power/higher-efficiency with their larger-core server CPU dies; it was not about mobile or GPUs :)

And I just answered above why AMD is not using 14HP for Naples: "AMD doesn't have the money to port the design to different processes."

IBM calls it 14HP, but I have other reports suggesting it is their own process (IBM's) at GlobalFoundries.
Can you find, or do you have, any info specifically from GF saying that 14HP exists or will be generally available?
There should be information from GF on this by now if they were providing it generally, especially as IBM is using it from this summer onwards.
Thanks

14HP was initially developed by IBM's foundries. Then it was acquired by GlobalFoundries when they took over IBM's foundry business. It is now "GlobalFoundries 14HP" and is developed at GlobalFoundries.

I have given you a slide from an IBM talk clearly stating that Power9 uses the GlobalFoundries 14HP process node. You have the full talk here:

https://openpowerfoundation.org/wp-content/uploads/2016/04/5_Brad-McCredie.IBM_.pdf

What more is needed?
 
And I just answered above why AMD is not using 14HP for Naples: "AMD doesn't have the money to port the design to different processes."

14HP was initially developed by IBM's foundries. Then it was acquired by GlobalFoundries when they took over IBM's foundry business. It is now "GlobalFoundries 14HP" and is developed at GlobalFoundries.

I have given you a slide from an IBM talk clearly stating that Power9 uses the GlobalFoundries 14HP process node. You have the full talk here:

https://openpowerfoundation.org/wp-content/uploads/2016/04/5_Brad-McCredie.IBM_.pdf

What more is needed?
You have told me everything I already knew, plus an assumption about why AMD is not using 14HP for Naples, which has a much higher margin for AMD than Ryzen does.

Please show me any information NOT from IBM but from GF regarding 14HP.
BTW, I have all the IBM presentations for Power9 going back to 2014, and have linked a couple in the past, or some important info from them.

If it is a general GF process, you should have no problem finding a GF source for 14HP.
I am not necessarily disagreeing, but a source outside of IBM and not related to IBM information is needed, rather than just assumptions; ideally it needs to be a GF source, especially as I keep saying I have other reports suggesting it is specifically an IBM process at GF.
Until then it is all speculation and assumptions.
Thanks
 
I want to see your excuse when SKL-X launches and you have to explain why these benches are suddenly so much faster on the same Skylake uarch ;)

And I think you confuse 100% AVX/FMA loads with variable loads.


What benches? Cinebench?
 
And I just answered above why AMD is not using 14HP for Naples: "AMD doesn't have the money to port the design to different processes."

14HP was initially developed by IBM's foundries. Then it was acquired by GlobalFoundries when they took over IBM's foundry business. It is now "GlobalFoundries 14HP" and is developed at GlobalFoundries.

I have given you a slide from an IBM talk clearly stating that Power9 uses the GlobalFoundries 14HP process node. You have the full talk here:

https://openpowerfoundation.org/wp-content/uploads/2016/04/5_Brad-McCredie.IBM_.pdf

What more is needed?

Here is the 2017 fab schedule for GlobalFoundries on offer from one of the companies that works with GF; it only lists 14LPP to the end of 2017 (calendar-year schedule).
https://www.mosis.com/db/pubf/fsched?ORG=GF

And we know IBM is in manufacturing by summer 2017 for the supercomputer contracts.
Hence why it would be ideal to have a source on the 14nm process from outside IBM, and more directly from GF.
Cheers
 
As part of the agreement between GF and Samsung, would GF have access to Samsung's 14nm LPU?
From memory, I think this would be a separate negotiation.
Although Samsung is being rather vague and coy about its performance position relative to LPP, while happy to mention 10LPE and 10LPP performance figures, so LPU is probably a minor performance differentiation from LPP, with LPU being more cost-effective.

Anyway, I know that GF has expanded 14nm production, with new IP to assist 14LPP performance coming into effect sometime between now and 2018.
Cheers
 
I don't know how many reviews show the AMD RyZen 8-core rig using less power than the 6800K rig while outperforming the 6800K. Some folks just have rocks for a thought process. Then again, I don't really care about the small difference, but I do care very much about the price difference. My RyZen rig ROCKS! I have not had this much fun building a system in over 6 years.

I've seen several, offhand, that show Ryzen idle and light-load power consumption being much lower than the 6900K's, and load consumption being similar to the 6900K's. But I can't remember seeing anything comparing the 6800K, offhand. Not saying it doesn't exist -- I'm sure it does. I'll have to dig for them later, unless you've got a link handy.
 
I want to see your excuse when SKL-X launches and you have to explain why these benches are suddenly so much faster on the same Skylake uarch ;)

And I think you confuse 100% AVX/FMA loads with variable loads.

Skylake-X and Kaby Lake-X will certainly dominate those particular benchmarks. But they will do so at an absurd cost; folks were estimating a $1700+ price point for these. OTOH, if a version of the regular old Kaby Lake were offered in a 6-core flavor, for around $500-ish, I'd have bought that instead of the Ryzen.

Ryzen's appeal is not in being the best gaming chip (obviously it isn't), nor the best content-creation chip (right now, it's CLOSE to the 6900K, but it doesn't beat it -- and of course it loses to the 10-core 6950X). Its appeal is in providing a middle ground between the two options, where it's good enough for both at a much more reasonable price than the 6900K, with much better content-creation capability than the 7700K.

A 6-core Kaby Lake at around $500, say 3.8-4.0GHz base, would have changed that calculus immensely. I'd have gone that route all day. But Intel doesn't seem to want to serve that niche. They want you to either buy a top-of-the-line gaming CPU at what is admittedly a very fair price, or get bent over by Bubba for anything vaguely suitable for content creation.

Intel still has the superior uarch, though AMD is now finally starting to play catch-up. But Intel's market positioning sucks, and they got too greedy. Ryzen is giving them a much-needed kick in the ass.
 
Believe it or not (I don't care), but reviewers, CanardPC, and I know what TDP is and is not. And all of us agree that the marketing numbers AMD gives us are not the real TDPs of the chips.

Reviewers, CanardPC, I, and other people know that the official TDP numbers AMD gives for its chips are only marketing labels. We demonstrated it with hard data. We have also demonstrated with hard data that the i7-6900K is more efficient than the R7-1800X. As CanardPC put it, "AMD bullshit son TDP". We also know that the Earth is not flat, even though some people will continue to argue that it is, and we know that people continuously insult anyone who doesn't agree that the Earth is flat.

LOL! You certainly add entertainment value to this thread, I will give you that. :D :)
 
AMD is back in the race, and I'm very happy as a long-time AMD fan. I had a Barton 2500+, and before that a 1GHz Athlon. I remember back when the Athlon was far better than the higher-clocked Intel. I used a Duron as well. Good times!

But you know what troubles me? They had to bring the same chip designer back in to remake them. The same guy who put AMD at the top then put AMD back at the top again. What does that mean? It means AMD must not have the talent needed to do this on their own. AMD needs its own high-end chip engineers. I know AMD has to have some great people to have completed this, but what concerns me is that they had to bring in a specialist to get this done. What will happen when this runs its course like the Athlon did? I hope AMD can have their own homebrew chip engineers who can do this all over again without having to bring someone in. I doubt the same guy will be around when this is needed again. That's what worries me about AMD long-term.
 
I've seen several, offhand, that show Ryzen idle and light-load power consumption being much lower than the 6900K's, and load consumption being similar to the 6900K's. But I can't remember seeing anything comparing the 6800K, offhand. Not saying it doesn't exist -- I'm sure it does. I'll have to dig for them later, unless you've got a link handy.
It does rather well in the end; here are some:

 
I am not. I was only replying to someone who posted his personal expectations as if they were the expectations of everyone else.

Back to the topic: many people were expecting the R7-1800X to be more efficient than the i7-6900K. Some people, such as 'chip-architect', even wrote that the 1800X was 60% more efficient than the 6900K. The title of this thread is "Why is AMD's Zen more efficient than Intel's Core?", but data, hard data, proves otherwise: Zen is less efficient, about 15% less efficient.

We know that the R7-1800X has a real TDP of 125W or 130W. And CanardPC just confirmed that the real TDP of the R7-1700 model is 90W.

The French "AMD bullshit son TDP" doesn't even need translation.

You know what AMD means: it's for a typical workload. If 90W were across the board, then yeah, you'd have a point and AMD had bullshitted you. There are more than enough websites that show that during a normal workload Ryzen chips stay within spec; that is what they refer to with their TDP. Would it ever go into the 80s or 90s? Yes, you bet, under certain conditions. But it does just fine under normal workloads. If you understood AMD's meaning of their TDP, how they mean it, you wouldn't be calling bullshit on it, because you failed to understand their terminology in the first fucking place. It has been well known for ages what they mean.

I feel like you keep hammering the same damn thing over and over and over. You should know people here understand your math, but in a real-world scenario they also know it will probably use less power. You get me? So stop beating a dead horse and move on. Ryzen does just fine efficiency-wise; it's efficient enough, and that's what matters. No one really cares who wins by a damn watt or two, or 5% or so. No one expected AMD to match the 6900K in anything, but it does. It just makes it seem like you are mad about it.
 
LOL! You certainly add entertainment value to this thread, I will give you that. :D :)
That reminds me.....

juanrga, I forgot to post this yesterday... You know that in post #88 the two sources you linked to are actually linking back to my post that you were quoting? But worse yet, your link in #93 (that ManofGod just quoted) is set to REPORT poor JustReason for his post that, once again, you were quoting? :\ So anyone that clicked it ends up reporting JustReason for... well... no just reason (lol). Seriously though, I hope it's some sort of user error and wasn't intentional, cuz that'd be hitting below the belt.
 