about IPC

I already commented on that graph measuring cluster scaling, showed that GROMACS has a 0.9 scaling factor above 16 cores, and compared a 22-core Intel against a 16-core AMD, which makes your core scale-up argument moot, but you can keep insisting.

About frequencies, 2.1GHz is the base clock for ordinary x86 code. For AVX-512 workloads such as GROMACS, the base clock is 1.4GHz. I'll stop here.

Regardless, saying that all cores run at 1.4GHz when the chip turbos to 2.0GHz depending on the workload is disingenuous at best.

Either way, you are using an outlier benchmark to make dubious claims. It scales poorly, and that alone should disqualify it from such comparisons.
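To make the arithmetic behind this scaling argument concrete, here is a minimal Python sketch. Every number in it is an illustrative placeholder (core counts, clocks, and per-clock work factors), not a measured GROMACS result:

```python
# Back-of-envelope sketch of the scaling argument above. All numbers are
# illustrative assumptions, not measured GROMACS results.

def effective_throughput(cores, clock_ghz, per_clock_work, scaling=0.9):
    """Estimate relative throughput as per_clock_work * clock * cores^scaling.

    A scaling exponent of 0.9 models the ~0.9 parallel-scaling factor
    claimed above for GROMACS past 16 cores.
    """
    return per_clock_work * clock_ghz * cores ** scaling

# Hypothetical 22-core Intel part at its AVX-512 all-core base clock,
# versus a hypothetical 16-core AMD part at its base clock. The
# per_clock_work figures (how much an AVX-512 core does per cycle versus
# a narrower Zen core) are placeholders to be replaced by measured values.
intel = effective_throughput(cores=22, clock_ghz=1.4, per_clock_work=4.0)
amd   = effective_throughput(cores=16, clock_ghz=2.4, per_clock_work=1.0)

print(f"Intel/AMD throughput ratio: {intel / amd:.2f}")
```

The point of the sketch is only that the comparison depends on all three inputs at once (core count, AVX base clock, per-clock throughput), so quoting any one of them in isolation is misleading.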
 
The main discussion is about games and IPC. AMD's IPC is about 15% behind in games. This, plus the clock difference, is why Intel is better for gaming. The sub-discussion about AVX512 was started by people who still keep denying that Intel has a 2x or 3x IPC lead in heavy AVX workloads.

Intel loses on performance per dollar because price is a nonlinear function of performance, and IPC is a nonlinear function of transistor count. So getting 20% higher IPC costs about 40% more, not 20%. Similarly, a node capable of 5GHz costs much more than a node that only does 4GHz.

So if you compare a fast core and a slow core, the slow core will win on performance per dollar. That is why an R7 1800X loses to an R7 1700X on performance per dollar.
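To put rough numbers on that nonlinearity: one common rule of thumb, Pollack's rule, says single-thread performance grows roughly with the square root of transistor count. A minimal sketch of what that implies, assuming die cost tracks transistor count (both are rough rules of thumb, not exact economics):

```python
# A quick sketch of the nonlinearity argument, assuming Pollack's rule
# (single-thread performance ~ sqrt(transistor count)) and assuming die
# cost tracks transistor count. Both assumptions are rules of thumb.

target_ipc_gain = 0.20  # we want a core that is 20% faster per clock

# Invert perf = sqrt(transistors): transistors = perf^2
extra_transistors = (1 + target_ipc_gain) ** 2 - 1

print(f"+{target_ipc_gain:.0%} IPC -> ~+{extra_transistors:.0%} transistors/cost")
# Prints: +20% IPC -> ~+44% transistors/cost, in line with the "about 40%" above
```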

Moreover, for servers, AMD is using an MCM approach that further reduces price, but at the expense of additional performance and power penalties. The multi-die approach in EPYC is the reason it has internal latencies similar to a four-socket Broadwell system.

Cascade Lake is plan B, born from the 10nm fiasco. The original plan was 10nm Cannon Lake for 2016.

Intel charges more not because it costs more to make, but because they can. I promise you they are making plenty of profit off what they are selling. I would guess that AMD's margins are much smaller overall than Intel's. AMD is looking at the long game: they want to show everybody that they can still compete, so they are selling at a much lower margin to gain a foothold. Intel could really give AMD a spanking if they would just lower prices. They could still make money, and hurt AMD at the same time, if they would just cut the price of their consumer line.

The server side of things is another beast completely, and it comes down to the number of cores you need to complete your tasks. Yes, core for core Intel is faster in both IPC and clocks, but as AMD closes that gap with each revision, Intel is scrambling to keep up. I really think the Core architecture doesn't have much left in it. Intel isn't innovating anymore; they are just reacting to what is happening. Unplanned launches to keep up with core counts are how we can see that Intel is scared.

Intel is why the mainstream was stuck on 4c/8t for nearly seven years. They could easily have upped the core count on the mainstream, but they told everybody we didn't need more, so software developers didn't write code for more than that. AMD comes along with Zen and hands us mainstream parts with 8c/16t that were very near the performance of Intel's parts, and all of a sudden Intel has 6c/12t parts on hand. Now Intel has 8c/16t parts to compete too, but wants you to pay nearly double to get at best a twenty percent performance increase over AMD. It doesn't make sense, but people keep arguing that Intel is better. Yes, but not for what you pay. I have used both AMD and Intel over the years and have no real love for either beyond who is going to give me the most bang for my dollar, and right now AMD is the winner.
 
The sub-discussion about AVX512 was started by people who still keep denying that Intel has a 2x or 3x IPC lead in heavy AVX workloads.

You started this subdiscussion. I provided The Stilt's non 256b results. You insisted I shouldn't use that value, and that I must include AVX workloads. I have repeatedly said this is not very appropriate given the use case. You insist otherwise... then claim other people did that. Check yoself mang.

Nobody has denied that Intel has a commanding lead in AVX workloads. In fact, I said you'd have to be an idiot not to buy Intel if that is your intended use case. I did not dispute your 2x to 3x figure there. If GROMACS is your thing, go buy Intel. The dispute is whether this has anything to do with general-use IPC. These are edge cases, and Intel tends to be much better in edge cases; they have the R&D budget to attack them effectively.

I consider the 9% figure to be more accurate for a general user. Combined with better clocks, this makes Skylake/Kaby/CFL superior on a per-core basis. This is especially clear in games, where AMD's superior core/dollar value doesn't help enough because of latency and the general inability of games to use more than 4 to 8 threads.
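For anyone wanting to see how the IPC and clock deltas combine, here is the arithmetic as a tiny sketch. The boost clocks are approximate single-core figures and the 9% is the application IPC gap used in this thread; treat all of it as illustrative, not a benchmark:

```python
# Illustrative per-core comparison combining the IPC and clock deltas
# discussed above. Clocks are ballpark single-core boost figures and the
# 9% IPC gap is the application figure used in this thread.

ipc_gap   = 0.09   # Intel ahead by ~9% per clock (general applications)
clk_intel = 4.7    # e.g. 8700K single-core boost, GHz (approximate)
clk_amd   = 4.3    # e.g. 2700X single-core boost, GHz (approximate)

per_core_ratio = (1 + ipc_gap) * (clk_intel / clk_amd)
print(f"Per-core performance ratio (Intel/AMD): {per_core_ratio:.2f}")
# ~1.19: roughly a 19% per-core lead from IPC and clocks combined
```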

Future value: I expect games to use more threads going forward, so the difference is likely to be mitigated somewhat. Only somewhat, because latency issues will continue to be a problem for the multi-CCX/multi-die setup. Intel's own mesh design for Skylake-X suffers a bit here too. Core scaling is a challenge.

As before, AMD tends to be a performance/dollar winner in many cases, for reasons already discussed. It tends to be good for mixed-use buyers as well. Gamers with money to spend should go Intel. Gamers on a budget have a number of good Intel and AMD options, depending on the specific price bracket. With the 9900k addressing the last weakness of the 8700k versus the 2700X (streaming while gaming), it becomes the undisputed mainstream king, whereas the 2700X vs 8700k battle was less clear cut and more usage dependent. AVX users should go Intel no matter what, obviously.

Zen 2 will need to address the IPC deficit. Whether you accept 9% or 14%, it's just too big a gap with core counts leveling off. It also needs to address clockspeed, since the 9900k has leveled the mainstream core-count playing field.

As I understand it, 256b performance will be doubled with Zen 2, though it doesn't seem like it has AVX512 support... not sure yet. So that problem should be partly addressed, at a minimum. Maybe more. We will see soon enough.
 
Intel charges more not because it costs more to make, but because they can. [...] I would guess that AMD's margins are much smaller overall than Intel's. [...] Unplanned launches to keep up with core counts are how we can see that Intel is scared. Intel is why the mainstream was stuck on 4c/8t for nearly seven years. They could easily have upped the core count on the mainstream, but they told everybody we didn't need more [...]

(i) You cannot compare margins directly. One is a foundry, the other is not.

(ii) I already explained that price grows nonlinearly with performance. Making a chip that is 20% faster doesn't cost 20% more; it costs much more. So the faster chip will have a worse performance/price ratio.

So this is not an Intel vs AMD thing. The same happens when you compare AMD to AMD. I gave you the example of the R7 1800X and the R7 1700: at launch the 1800X was about 50% more expensive, but wasn't 50% faster.
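Here is that example worked through. The launch MSRPs are the real ones ($329 for the 1700, $499 for the 1800X); the ~10% stock performance advantage for the 1800X is an assumption for illustration:

```python
# Sketch of the performance-per-dollar point, using launch MSRPs and an
# assumed ~10% stock performance advantage for the 1800X (illustrative).

price_1700  = 329   # R7 1700 launch MSRP, USD
price_1800x = 499   # R7 1800X launch MSRP, USD
perf_1700   = 1.00
perf_1800x  = 1.10  # assumption: ~10% faster at stock

print(f"1800X price premium: {price_1800x / price_1700 - 1:.0%}")     # ~52%
print(f"1700  perf/$: {perf_1700 / price_1700 * 1000:.2f} per $1000")  # ~3.04
print(f"1800X perf/$: {perf_1800x / price_1800x * 1000:.2f} per $1000")  # ~2.20
# The 1700 wins perf/$ decisively even though the 1800X is faster.
```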

(iii) Both Intel and AMD have made unplanned launches. Threadripper and the Zen+ refresh weren't in the original roadmap.

(iv) In 2015 Intel was already planning an 8-core Cannon Lake for the mainstream: 4-core i3, 6-core i5, and 8-core i7. But the problems with the 10nm process forced Intel to cancel its roadmaps, abandon tick-tock execution, go back to the drawing board, and introduce the Kaby Lake and Coffee Lake refreshes.

I am sure that 6C/8C Coffee Lake is a response to 6C/8C Ryzen. I am also sure that 6C/8C Ryzen was planned as a preemptive response to 6C/8C Cannon Lake.
 
You started this subdiscussion. I provided The Stilt's non 256b results. You insisted I shouldn't use that value, and that I must include AVX workloads. I have repeatedly said this is not very appropriate given the use case. You insist otherwise... then claim other people did that. Check yoself mang.

Nobody has denied that Intel has a commanding lead in AVX workloads. In fact, I said you'd have to be an idiot not to buy Intel if that is your intended use case. I did not dispute your 2x to 3x figure there. If GROMACS is your thing, go buy Intel. The dispute is whether this has anything to do with general-use IPC. These are edge cases, and Intel tends to be much better in edge cases; they have the R&D budget to attack them effectively.

I consider the 9% figure to be more accurate for a general user. Combined with better clocks, this makes Skylake/Kaby/CFL superior on a per-core basis. This is especially clear in games, where AMD's superior core/dollar value doesn't help enough because of latency and the general inability of games to use more than 4 to 8 threads.

I started by saying that the IPC gap is "10-15% behind Skylake/CoffeeLake", then mentioned as a side note that it is "14.4%" on average according to The Stilt. You and others ignored my 10-15% (10% for applications, 15% for games) and started debating whether the 14.4% average claimed by The Stilt for applications was relevant or not, because he used AVX256/512 workloads in the mix. It then degenerated further, with some other people here pretending that Intel isn't ~3x ahead in 512-bit workloads.

I will repeat, once again: I am using a 10% IPC gap for applications, and my 10% corresponds to your 9%. I am using 15% for games. I told you this in #29, but you once again bring AVX into the discussion.
 
Cool video here by Gamers Nexus showing some benchmarks with all the latest CPUs, using settings chosen to test CPUs. You can compare the i5 8400 to the R5 2600 because they boost to similar clocks. I am really digging the videos they have been putting out lately.

 
Cool video here by Gamers Nexus showing some benchmarks with all the latest CPUs, using settings chosen to test CPUs. You can compare the i5 8400 to the R5 2600 because they boost to similar clocks. I am really digging the videos they have been putting out lately.


Totally lost all interest when I noticed the tests use Medium settings at 1080p with high-end cards... In other words, nothing I would ever run on purpose o_O
 
Totally lost all interest when I noticed the tests use Medium settings at 1080p with high-end cards... In other words, nothing I would ever run on purpose o_O
That's the point. The test will show the differences between the CPUs; it completely takes the GPU bottleneck out of the equation. They were testing CPUs, not GPUs.
 
Well, on the gaming front, gamers with any set budget can grab a "slower" Ryzen processor that is significantly less expensive than the Intel counterpart (R5 vs i5, etc.) and use the savings for a better video card, ending up with a system that has better gaming performance. So which one is ACTUALLY better for gaming?

Honestly, try speccing out a system with an 8400 versus a 2600 with a set-in-stone budget, and you'll find a huge difference with the video card you can afford.
 
Our line for acceptable performance varies greatly. At one time it was 30 fps, then it was 60 fps, then back to 30 fps; now it's 60+ fps on monitors with 144+ Hz refresh rates.

The refresh rate thing is funny, because CRTs are STILL faster for refresh rates. ;)
 
It then degenerated further, with some other people here pretending that Intel isn't ~3x ahead in 512-bit workloads.

That's horseshit and you know it. Everybody knows AVX512 gives Intel an enormous advantage for that. I haven't seen anyone claim otherwise. Retract your false claim, or cite where somebody has said otherwise in this thread.

The claim is that such workloads are edge cases. OP's use case has zilch to do with AVX, which is why I used the 9% figure, not the 14% one. If we are discussing all workloads, including AVX, then I will use the 14% figure. If we are discussing AVX exclusively, then I will use much higher figures (depending on the specific case).

...and started debating whether the 14.4% average claimed by The Stilt for applications was relevant or not, because he used AVX256/512 workloads in the mix.

Spin. I cited The Stilt's excl. 256b results to answer OP's question, and provided a reason for why I chose this figure over the other figures in the deep dive. You were the one who took issue with my reasons, and tried to justify why AVX workloads should be included in this calculation.
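To show why the 9% vs 14% choice matters, here is a small sketch of how a handful of 2-3x AVX outliers can drag an averaged IPC figure well above the typical case. The per-benchmark ratios below are invented for illustration only; they are not The Stilt's actual data:

```python
# Invented per-test Intel/AMD IPC ratios, for illustration only.
from math import prod

general = [1.08, 1.10, 1.07, 1.09, 1.11, 1.08]  # typical non-AVX tests
avx     = [2.2, 3.0]                             # hypothetical AVX-heavy outliers

def geomean(xs):
    """Geometric mean, the usual way to average performance ratios."""
    return prod(xs) ** (1 / len(xs))

print(f"excluding AVX: {geomean(general) - 1:.1%}")        # ~9%
print(f"including AVX: {geomean(general + avx) - 1:.1%}")  # far higher
```

Which average is "right" depends entirely on whether the user's workload looks like the general set or the outliers, which is the whole dispute here.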
 
That's horseshit and you know it. Everybody knows AVX512 gives Intel an enormous advantage for that. I haven't seen anyone claim otherwise. Retract your false claim, or cite where somebody has said otherwise in this thread.

The claim is that such workloads are edge cases. OP's use case has zilch to do with AVX, which is why I used the 9% figure, not the 14% one. If we are discussing all workloads, including AVX, then I will use the 14% figure. If we are discussing AVX exclusively, then I will use much higher figures (depending on the specific case).



Spin. I cited The Stilt's excl. 256b results to answer OP's question, and provided a reason for why I chose this figure over the other figures in the deep dive. You were the one who took issue with my reasons, and tried to justify why AVX workloads should be included in this calculation.

Juanrga just wants to start an AMD vs Intel war, so let's just ignore him. We all know that Intel is faster right now in both IPC and clocks, but it doesn't matter: it costs so much more that it isn't really worth it, even if you have the money to pay for it.
 
All I have to say about this dispute is: if you're such a fanboi of a company, why come to the competing fanboi page and post up junk that has little meaning to the OP? Are you trying to convert people? We're not lemmings, and we do not blindly follow dogma. Yes, we know the facts, but we like saving money and still playing our games/rendering/whatever. Please stop trying, as it just looks really 'special', if ya catch my drift.

I would love to have an Inhell, but it is not within my budget, and their business practice is really not well thought out. AMD was that way with the FX series, but they listen to their base, improve their products, or just start from scratch. What did Inhell do? They got scared, released old tech with a new label (like AMD has done before), and made their products premium priced. Why spend good hard-earned money on tech that is old and will need a full upgrade later, when I can (and have) spent money on a little Ryzen 5 1600 with a board and RAM that will not need upgrading for a while? [Yes, the 1600 is old tech, but it can be swapped out cheaply and the rest of the hardware will last longer.] BOOM!
 
Juanrga just wants to start an AMD vs Intel war.

Summarizing the flaws in the Guru3D review mentioned in #2 isn't starting a war, and neither is pointing out that Cinebench measurements of IPC are useless for understanding gaming IPC.
 
Lol at this thread and certain folks that have taken this to new levels..... ha
 
Summarizing the flaws in the Guru3D review mentioned in #2 isn't starting a war, and neither is pointing out that Cinebench measurements of IPC are useless for understanding gaming IPC.
Understanding that IPC stopped being so important, because gaming no longer relies on single-core, CPU-bound APIs, is another thing...
 
What about AMD's upcoming 'new' processors, the 3000 line? Supposedly some are going to be clocked at or close to 5GHz? Are these going to be neck and neck with the 8700k/9700k/9900k, or is it all hype again?
 
If it weren't for my move and the fact that my current gaming needs are still fully covered by the rig in sig I would have made a new box with a 2xxx-series AMD processor. I'm not a competitive gamer by any means and the small percentage difference between AMD and Intel (not to mention that I'd be GPU-bound anyway) is not an impediment for me adopting an AMD processor again. The 3xxx-series promises to be better even if it's only by a few percentage points.
 
What about AMD's upcoming 'new' processors, the 3000 line? Supposedly some are going to be clocked at or close to 5GHz? Are these going to be neck and neck with the 8700k/9700k/9900k, or is it all hype again?

Actually, they improved the IPC, which should close the gap. And for the money you'd spend on an Intel CPU, you would be able to get more cores :).
Gaming is still largely single-core and few-core bound.
And there are more DX9 games out there than DX10/11/12 ones, but what else is new. If you want to be stuck in the past, go for that i3, buddy :).
 
What about AMD's upcoming 'new' processors, the 3000 line? Supposedly some are going to be clocked at or close to 5GHz? Are these going to be neck and neck with the 8700k/9700k/9900k, or is it all hype again?

We'll see when they get here?

Intel isn't sitting on their laurels either; they've had a new arch ready to replace Skylake for some time, waiting on their next process to mature. It's likely (going by history) that they'll at least maintain their IPC advantage.
 
Understanding that IPC stopped being so important, because gaming no longer relies on single-core, CPU-bound APIs, is another thing...

Nothing could be further from reality. IPC continues to be very relevant for gaming. In fact, both reviewers and end users run the latest AGESA/BIOS and overclocked RAM precisely to increase the effective IPC of Ryzen.
 
Actually, they improved the IPC, which should close the gap. And for the money you'd spend on an Intel CPU, you would be able to get more cores :).

The IPC gap will shrink unless Ice Lake brings higher IPC gains than Zen 2. And we don't know pricing yet either...
 
We'll see when they get here?

Intel isn't sitting on their laurels either; they've had a new arch ready to replace Skylake for some time, waiting on their next process to mature. It's likely (going by history) that they'll at least maintain their IPC advantage.

Yeah, renaming the same crap and pushing it to its max temperature and frequency is hardly innovating. According to their oft-revised chart they are a little late; in reality they are years late, and no amount of cash is going to change that quickly.

[image: Intel process roadmap chart]
 
Yeah, renaming the same crap and pushing it to its max temperature and frequency is hardly innovating. According to their oft-revised chart they are a little late; in reality they are years late, and no amount of cash is going to change that quickly.

You literally just quoted me talking about their next process and arch.
 
One thing: if it wasn't for AMD, we would still have 4c/8t processors from Intel and another year of "go F yourself"... so the price is high; we deserve it.
 
One thing: if it wasn't for AMD, we would still have 4c/8t processors from Intel and another year of "go F yourself"... so the price is high; we deserve it.

This statement really confuses me as I'm typing it on my 2010 workstation with 12 cores and 24 threads. :confused:
 
One thing: if it wasn't for AMD, we would still have 4c/8t processors from Intel and another year of "go F yourself"... so the price is high; we deserve it.

The inverse argument is valid as well: if it wasn't for Intel, we would still have 4-module/8-thread processors at 220W and $990 from AMD...
 
We all know how those Intel IPC gains work: you need to recompile and, very often, use their new instruction-set extensions. That comment amounts to as much as this one does:
https://www.overclock3d.net/news/cp...es_reports_of_zen_2_s_29_ipc_boost_over_zen/1

yay IPC gains totally meaningless ;) Hurrah !!

I am not talking about new instructions, but about generic IPC improvements that accelerate existing programs: improvements such as moar execution ports, bigger caches, deeper buffers...
 
Nothing could be further from reality. IPC continues to be very relevant for gaming. In fact, both reviewers and end users run the latest AGESA/BIOS and overclocked RAM precisely to increase the effective IPC of Ryzen.

Ryzen is memory-sensitive. More so than Intel's offerings, generally speaking. So it makes sense for an enthusiast interested in a Zen product to pay closer attention to this than his Intel counterpart. I certainly spent way more time researching memory options with this build than I did back in the day for my 2600k build. Mine was even more difficult, because I required 32GB of RAM for some of my work, and this meant either 4 sticks, or 2x dual rank sticks - which at my price point meant Hynix M-die RAM, not the better Samsung B-die. Getting 2933 out of it on tight timings was hard and took a lot of research and trial-and-error. But it eventually worked. Zen users are almost certainly going to do a lot more of this.

That being said, I prefer to see reviews where the RAM clocks/settings are as close as possible between competitors, on general principle, so we can better isolate CPU performance.

IPC and clockspeed will always remain relevant to gaming. However, I do expect to see core count also become more relevant in the future than it is currently. Which is why Zen's inferior high-end gaming performance doesn't bother me as much as it does some. For budget and mid-range gamers, Zen continues to be a viable option, depending on your price bracket and needs.
 
The inverse argument is valid as well: if it wasn't for Intel, we would still have 4-module/8-thread processors at 220W and $990 from AMD...

You could also argue that Intel invested more in people defending their marketing propaganda than in improving their processors beyond lip service. Good luck reversing that one :)

I am not talking about new instructions, but about generic IPC improvements that accelerate existing programs: improvements such as moar execution ports, bigger caches, deeper buffers...

Yeah, bigger caches... it's not like benchmarks have ever abused those before :).
 