Intel 7nm ambitions are...lofty

I get what you're saying- and one way that you could better highlight your position would be to note that you're looking at IPC academically. That's fine, if you're attempting to show what a CPU should be capable of.

Where we got on separate tracks is the article you shared (now deleted and not replaced with an equivalent, but still quoted) that uses publicly accessible tests. That set the context for your argument as, let's call it, 'real-world IPC', while you were arguing 'academic IPC'.


Now, your first paragraph I agree with, academically. I'd make the same argument myself if speaking academically.

Next, when talking about limiting the system to one core- this isn't really feasible. It's certainly testable, but you still have a significant OS / driver / other hardware / software stack that the results cannot be isolated from. Perhaps the results could be shown to be repeatable, which would be something, but with all the extra 'cruft' in the way I don't really see how the results would be wholly applicable to either the academic perspective or the real-world perspective.

Last, again looking in the context of 'real-world IPC', any test with repeatable results should be valid. Obviously said tests cannot represent the full impact of changes in 'academic IPC', but they can absolutely reveal the utility of said changes to the end-user.


And that's really the point. 'Academic IPC' is just that- academic. It can show that there is potential but that potential must be utilized by end-user applications to be of any use.
 

Not sure what happened to the post with the article in it; I never touched or deleted it. But here it is again: https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/


OS/driver/other hardware/software stacks are all a part of every PC. Those limiting variables are a part of all results, be it a single core or multiple cores, as they can't be removed.

Games don't really test real-world IPC; they're more a test of the complete system's hardware, not just the CPU. It is the same as testing a car's engine, brakes, acceleration, fuel economy, shifting speed, fuel delivery system, etc., all at once. Games are doing the same thing with a computer, testing the CPU, GPU, memory, storage, motherboard, and other hardware all at once. That is why you can buy different brands of computers with different hardware, other than having identical CPUs and GPUs in them, and get completely different results and experiences.
 
Games don't really test real-world IPC; they're more a test of the complete system's hardware, not just the CPU.

The point is, that's every test that claims to test IPC. They all test the whole system to some degree. Testing 'instructions per cycle' between different processors means equalizing clockspeed and benchmarking how much work is done. So yes, games do test IPC, for games.
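As a rough sketch of what "equalizing clockspeed and benchmarking how much work is done" amounts to (the function name and all the numbers here are hypothetical placeholders, not real benchmark figures):

```python
# Per-clock comparison sketch: lock both CPUs to the same frequency, run the
# same workload on each, and compare work done per GHz. Numbers are placeholders.
def per_clock_score(work_done: float, clock_ghz: float) -> float:
    """Work completed per GHz; with equal clocks this ranks per-clock throughput."""
    return work_done / clock_ghz

cpu_a = per_clock_score(work_done=150.0, clock_ghz=4.0)  # hypothetical fps at a locked 4 GHz
cpu_b = per_clock_score(work_done=140.0, clock_ghz=4.0)
print(cpu_a > cpu_b)  # CPU A does more work per clock on this workload
```

With the clocks locked equal, whichever CPU completes more of the workload per GHz has the higher real-world per-clock performance for that workload.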
 

Not until you can isolate the memory and Infinity Fabric latency influences that are outside of the processor, which includes ALL hardware that has DMA, such as the GPU. That can't be done (the same principle you are using to imply that single-core IPC can't be tested), which means we are back to my original statement: Intel isn't winning because of IPC in games, but because of memory and Infinity Fabric latency.
 
I can't tell... is your disagreement with IdiotInCharge about different definitions of IPC or something else?
 
Not until you can isolate the memory and Infinity Fabric latency influences that are outside of the processor, which includes ALL hardware that has DMA, such as the GPU. That can't be done (the same principle you are using to imply that single-core IPC can't be tested), which means we are back to my original statement: Intel isn't winning because of IPC in games, but because of memory and Infinity Fabric latency.

But that's the thing: you're claiming a disparity that cannot be tested.

What can be tested, what has been tested, shows a different disparity than the academic results you're claiming. Memory and Infinity Fabric are part of IPC. Yes, results will vary. They must. But optimal configurations can be tested, and if the results are repeatable and can be recreated, then they become authoritative.

And that's where we are. Intel is 'winning' in games in current testing because their cores are currently faster for games. Period.
 

You yourself have said you can't test just the cores because you can't isolate all the various influences, so how can you sit there and say that Intel has faster cores for gaming? Or are you ignoring all those influences that can't be isolated? (Clock-wise, they are faster for gaming; IPC-wise, they are not.)

If gaming were an actual test of a CPU's IPC, then why would the other hardware that makes up the PC have such an influence on the results? If memory and Infinity Fabric were tied to IPC to the point that they affect the IPC of a chip, then why is AMD beating Intel in everything but gaming, since the latency between the CPU, memory, and Infinity Fabric doesn't change based on application or workload?

Obviously AMD can chew through the instructions faster than Intel, as demonstrated in every other workload besides gaming. So what is influencing gaming that isn't allowing that to happen? If AMD's IPC were the problem, then why were the devs of World War Z able to release a patch that substantially increased AMD's performance and put them within a few frames of Intel (well within the margin of error)?

In fact, when testing clock for clock, all results are so close they are all within the margin of error. So when it comes to gaming, neither one can claim victory when being tested clock for clock.
 
You yourself have said you can't test just the cores because you can't isolate all the various influences, so how can you sit there and say that Intel has faster cores for gaming? Or are you ignoring all those influences that can't be isolated?

They cannot be isolated, but they can be controlled for. This depends on the test, but again, so long as those variables are controlled to the point of presenting equitable results, the results are repeatable, and the results can be verified, they stand. As above, you control for those variables by using as similar hardware, firmware, and software surrounding the CPU as possible, with the selection being focused on isolating the CPU as much as possible. At most, different memory might be run on different platforms in order to maximize what the platform is capable of and thus isolate the CPU as much as possible.

(Clock-wise, they are faster for gaming; IPC-wise, they are not.)

Again, since IPC cannot be tested 'at the core' alone, these are the same thing and Intel has tested faster.

If gaming were an actual test of a CPU's IPC, then why would the other hardware that makes up the PC have such an influence on the results?

Why wouldn't it? And if the other hardware has such an influence, why would changing CPUs matter at all? We're looking at what the CPU is able to do with the work it is given, and comparing between CPUs while controlling for as many other variables as possible.

Further, let's say that the CPU were hypothetically even more isolated with a representative gaming workload- if Intel CPUs are faster now, then we would expect them to be even faster with more of the gaming workload shifted to them.

If memory and Infinity Fabric were tied to IPC to the point that they affect the IPC of a chip, then why is AMD beating Intel in everything but gaming, since the latency between the CPU, memory, and Infinity Fabric doesn't change based on application or workload?

the latency between the CPU, memory, and Infinity Fabric doesn't change based on application or workload

This isn't true except for access latency. For many workloads, mainly throughput-bound applications, access latency is all but irrelevant. Data must stream well and much of the execution is SIMD.

For others, it is critical, especially in branching code, and most especially in branching code that has co-dependencies that spread out among threads and thus CPU resources, and extremely so when the results of the code are being used real-time, like games. But not only games!
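The contrast between those two kinds of workloads can be sketched in access-pattern terms. This is an illustrative toy, not a benchmark (Python won't show the hardware effect directly); the point is the shape of the dependencies:

```python
import random

def streaming_sum(data):
    # Throughput-bound pattern: independent sequential reads that prefetchers
    # (and SIMD on real hardware) handle well, so memory latency stays hidden.
    return sum(data)

def pointer_chase(next_index, start, steps):
    # Latency-bound pattern: each load depends on the previous result, so
    # every cache or memory miss stalls the whole dependency chain.
    i = start
    for _ in range(steps):
        i = next_index[i]
    return i

random.seed(0)
n = 1024
chain = list(range(n))
random.shuffle(chain)            # randomized links defeat prefetching on real hardware
total = streaming_sum(range(n))  # independent accesses
end = pointer_chase(chain, 0, n) # dependent accesses
```

On real silicon the second pattern exposes memory and interconnect latency on every miss, which is exactly the kind of code branching, dependency-heavy game logic resembles.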

Obviously AMD can chew through the instructions faster than Intel, as demonstrated in every other workload besides gaming. So what is influencing gaming that isn't allowing that to happen?

See above- for gaming workloads, AMD obviously isn't chewing through the instructions faster than Intel. Games represent a different type of workload, and actually, one that used to be more common.

If AMD's IPC were the problem, then why were the devs of World War Z able to release a patch that substantially increased AMD's performance and put them within a few frames of Intel (well within the margin of error)?

The why part you'll have to ask the developer. We can only speculate- at best, AMD's deviation in hardware design from what developers expect is large enough in this case that some hand tuning was needed.

In fact, when testing clock for clock, all results are so close they are all within the margin of error. So when it comes to gaming, neither one can claim victory when being tested clock for clock.

Well, they're really not that close (per your source), so for gaming, if we're crowning a victor for gaming IPC, it'd be Intel.
 


Branch code... hello? Branch code can't be used to test IPC. IPC is tested with one set piece of code, NOT branch code, because branch code consists of more than one set of code and cannot accurately demonstrate a CPU's IPC, and branch code is what gaming consists of. You just confirmed that games can't be used to determine IPC.

AMD isn't chewing through game code, and not through any fault of the CPU: the CPU is waiting, not having instructions to chew through, caused by the memory/Infinity Fabric latency when dealing with branch code. That is NOT due to IPC.

As for the game results (9900K vs 3900X with 4 cores disabled: Intel's top CPU vs AMD's top CPU, limited to an 8-core comparison):

Battlefield 1 - 160 vs 156, 4 fps difference
Far Cry New Dawn - 118 vs 112, 6 fps difference
Total War: Three Kingdoms - 128 vs 125, 3 fps difference
World War Z - 211 vs 208, 3 fps difference
World of Tanks - 281 vs 280, 1 fps difference (the game that shows that game code can be the deciding factor in how well a game performs)
Tom Clancy's Rainbow Six - 251 vs 238, 13 fps difference

Not sure what math you use, but that is damn close, and all within the margin of error. Now, I know, you are trying to look at the 3700X results, AMD's slowest and lowest-binned 8-core vs Intel's fastest, highest-binned part, which is not a fair comparison. That is why TechSpot included the 3900X with 4 cores disabled. You keep wanting to use every excuse and omit variables only when it supports Intel winning, but won't acknowledge and include such variables when they go against AMD.
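For reference, computed from the fps figures quoted above, the gaps work out to roughly 0.4% to 5.5%:

```python
# Percentage gaps computed from the fps results quoted above
# (9900K vs 3900X with four cores disabled).
results = {
    "Battlefield 1": (160, 156),
    "Far Cry New Dawn": (118, 112),
    "Total War: Three Kingdoms": (128, 125),
    "World War Z": (211, 208),
    "World of Tanks": (281, 280),
    "Rainbow Six": (251, 238),
}

gaps = {game: (intel - amd) / amd * 100 for game, (intel, amd) in results.items()}
for game, gap in gaps.items():
    print(f"{game}: {gap:.1f}%")
```

Whether a few percent counts as "within the margin of error" depends on the run-to-run variance of each benchmark, which the article's figures alone don't establish.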
 
Branch code... hello? Branch code can't be used to test IPC. IPC is tested with one set piece of code, NOT branch code, because branch code consists of more than one set of code and cannot accurately demonstrate a CPU's IPC, and branch code is what gaming consists of. You just confirmed that games can't be used to determine IPC.

So you don't want to test branch predictors when considering IPC?
 

You have no way of testing whether it is branch prediction that is causing the slowdown (an incorrect prediction, a wrong guess) or whether the slowdown was caused by other variables such as memory latency when running different branches of code. That is why IPC is tested with one set piece of code, not branch code: you add a variable that you have no way of isolating, per your own statements. However, it is more than likely the memory latency that is causing the slowdown, and not branch prediction, which supports what I said: it is memory/Infinity Fabric latency causing the slowdown, not IPC. That has been my argument all along.
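As an aside, hardware performance counters (the kind a profiler such as Linux `perf stat` reports) do count branch misses and cache misses separately, though attributing stall cycles to one or the other is still genuinely hard. A sketch of deriving the usual per-kilo-instruction rates from such readings; the counter values below are made-up placeholders, not measurements:

```python
# Hypothetical counter readings of the kind a hardware profiler reports;
# all values below are placeholders for illustration only.
counters = {
    "instructions": 1_000_000_000,
    "cycles": 800_000_000,
    "branch_misses": 12_000_000,
    "cache_misses": 4_000_000,
}

ipc = counters["instructions"] / counters["cycles"]
branch_mpki = counters["branch_misses"] / counters["instructions"] * 1000  # misses per 1k instr
cache_mpki = counters["cache_misses"] / counters["instructions"] * 1000
print(ipc, branch_mpki, cache_mpki)
```

Separate miss counts narrow down where cycles are going, but they still don't fully untangle overlapping stall causes, which is consistent with the isolation problem being argued here.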
 

My argument is that you cannot divorce the CPU from memory (or the rest of the system). You seem to be arguing that you can cut the CPU core out and test it alone, which is academic.

If you insist on your definition of IPC, you cannot make an authoritative statement because that definition cannot be tested.
 
Considering the gate pitch on Intel's 14nm is still tighter than TSMC's 7nm I think it's safe to say that Intel's process is more advanced.

You know, I was inclined to believe you, but after review, the CPP of Samsung's and TSMC's 7nm is 57nm (they claim 54nm) and Intel's 14nm++ is 84nm, relaxed from the former 70nm. So how is this true?

Also, to you guys arguing, I ask: if IPC cannot be tested, stop replying; you have nothing to lose or gain here.

Every benchmark shows, for all intents and purposes, that in almost every single use case where both a 9900K and a 3700X are at the same clockspeed, the 3700X is faster in each task.

Gaming, of course, is the use case that matters to us, and Intel certainly wins there, IPC in every single task be damned. So yes, everything has to work together to be faster, and the divorced chiplets just aren't as fast as the ring-bus architecture at talking to memory and core to core.
 
My argument is that you cannot divorce the CPU from memory (or the rest of the system). You seem to be arguing that you can cut the CPU core out and test it alone, which is academic.

If you insist on your definition of IPC, you cannot make an authoritative statement because that definition cannot be tested.

I am not divorcing memory from the CPU, but the link between memory and the CPU does not change regardless of the workload, and therefore is not a variable (it's nonexistent) in testing IPC going from one application to another. The latency between the CPU and memory remains constant no matter the application. I know you tried to claim otherwise above, but what you described takes place with all workloads in modern CPUs, not just branch code or games, and therefore does not change the latency between the CPU and memory based on workload. AMD would have the same issues in all applications, not just games, if that were the case. The relationship, or link, between all other aspects of a computer is a different story, however.

Everything I have read about testing the IPC of a processor excludes games as a proper testing platform, as they do not give accurate IPC results. Just because reviewers claim otherwise, and use games to do so, does not change that fact... sorry!
 
I am not divorcing memory from the CPU, but the link between memory and the CPU does not change regardless of the workload, and therefore is not a variable (it's nonexistent) in testing IPC going from one application to another.

Sure it's a variable. Yes, you're keeping it set to a specific setting as much as possible between tested CPUs and platforms, but applications have different behavior and thus will use buses, memory, and caches differently, so different workloads will have different results.

The latency between the CPU and memory remains constant no matter the application.

This is not true at all: differences in cache setups make a tremendous difference in latency with varying workloads.

AMD would have the same issues in all applications, not just games, if that were the case.

AMD has massive latency out to main memory. This is their design choice. They balance that with caches, which means that different workloads run differently. You cannot take one workload and apply its results to another without accounting for cache.

Everything I have read about testing the IPC of a processor excludes games as a proper testing platform, as they do not give accurate IPC results.

...and yet the sole support you have for your argument does the opposite?

Games can be repeatably tested like any other application. Once other variables are considered, CPUs can be tested for how quickly they chew through game code. This is no different than testing any other application for real-world IPC.
 
Everything I have read about testing the IPC of a processor excludes games as a proper testing platform, as they do not give accurate IPC results. Just because reviewers claim otherwise, and use games to do so, does not change that fact... sorry!

Then you're reading the wrong things. IPC is a metric to determine how a processor's design (which includes the memory and I/O subsystems) will perform its intended workloads.

CPI(i) should be measured and not just calculated from a table in the back of a reference manual since it must include pipeline effects, cache misses, and any other memory system inefficiencies.
--pg. 43 Computer Architecture: A Quantitative Approach [4th edition] (John L. Hennessy and David A. Patterson)

CPI(i) is the number of clock cycles it takes to execute a specific instruction in a program
CPI is the number of clock cycles it takes to execute the entire program
IPC then is just the inverse of CPI
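Spelling out those definitions, CPI is an execution-weighted average over instruction classes and IPC is its inverse. The instruction mix and per-class CPI values below are illustrative placeholders, not figures from the book:

```python
# CPI as a weighted sum over instruction classes, per the quoted definition;
# the fractions and per-class CPI values are illustrative, not measured.
classes = [
    (0.50, 1.0),   # ALU ops: (fraction of executed instructions, measured CPI)
    (0.30, 2.0),   # loads/stores, including average cache-miss penalty
    (0.20, 1.5),   # branches, including average misprediction penalty
]

cpi = sum(frac * class_cpi for frac, class_cpi in classes)
ipc = 1 / cpi  # IPC is just the inverse of CPI
print(cpi, ipc)
```

Because the per-class CPIs fold in cache-miss and misprediction penalties, a different workload mix yields a different overall CPI on the same chip, which is the crux of this whole thread.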
 
...and yet the sole support you have for your argument does the opposite?

Games can be repeatably tested like any other application. Once other variables are considered, CPUs can be tested for how quickly they chew through game code. This is no different than testing any other application for real-world IPC.

False; you seem to be ignoring the other benchmarks on that page that are more accurate at demonstrating IPC than games ever will be. (See below.)


Then you're reading the wrong things. IPC is a metric to determine how a processor's design (which includes the memory and I/O subsystems) will perform its intended workloads.



CPI(i) is the number of clock cycles it takes to execute a specific instruction in a program
CPI is the number of clock cycles it takes to execute the entire program
IPC then is just the inverse of CPI

Way to take my comment out of context. Not to mention that IPC is not the number of clock cycles it takes to execute the entire program, as it takes multiple cycles to do so, unless somehow that entire program and all of its instructions are executed in one single cycle. IPC means instructions PER cycle, and a program consists of many of those cycles, so please explain how the number of cycles it took the CPU to complete the program tells us how many instructions were in each cycle... Never mind, I will do it for you:

The calculation of IPC is done through running a set piece of code, calculating the number of machine-level instructions required to complete it, then using high-performance timers to calculate the number of clock cycles required to complete it on the actual hardware. The final result comes from dividing the number of instructions by the number of CPU clock cycles.

The number of instructions per second and floating point operations per second for a processor can be derived by multiplying the number of instructions per cycle with the clock rate (cycles per second given in Hertz) of the processor in question. The number of instructions per second is an approximate indicator of the likely performance of the processor.
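That calculation in code form; the instruction and cycle counts below are placeholders:

```python
def measured_ipc(instructions: int, cycles: int) -> float:
    # IPC from a set piece of code: machine-level instructions executed,
    # divided by the clock cycles measured while running it.
    return instructions / cycles

def instructions_per_second(ipc: float, clock_hz: float) -> float:
    # IPS = IPC multiplied by the clock rate (cycles per second).
    return ipc * clock_hz

ipc = measured_ipc(instructions=2_000_000, cycles=1_000_000)  # placeholder counts
ips = instructions_per_second(ipc, clock_hz=4.0e9)            # at a 4 GHz clock
```

This also makes the "relative performance = IPC x clock speed" point later in this post concrete: double either factor and IPS doubles.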

Now, when you test IPC, everything you listed is irrelevant between applications. Why do I say that? Because regardless of the application, the included pipeline effects, cache misses, and any other memory system inefficiencies will end up being nearly the same where IPC is concerned.

The reason IPC changes between applications is the instruction sets used in the application; there are many that each CPU supports, and the efficiency of each of those instruction sets is different, which is not tied to the deficiencies of the memory. (Too little cache could cause a bottleneck on the rare occasion an extremely fast-executing instruction set is being used, but that is pretty much nonexistent on Zen 2, and it would show up on the Intel side first, as Intel has 4 times less cache, which cannot be made up in memory latency, since Intel does not have 4 times lower memory latency.) So those deficiencies are nearly the same across all workloads (some instruction sets are more efficient than others within the calculations, i.e. the execution speed varies between instruction sets). The only thing that changes between applications is the instruction sets.

Memory latency, the I/O and such that is tied to IPC, does not change. However, memory and Infinity Fabric latency affect other parts of the system outside of the IPC, which is tied to total system performance, NOT IPC. That is why I say they are separate. I know IdiotInCharge wants people to believe that the memory latency changes; it does not. He is confusing memory latency with bandwidth, as well as SIMD with SIMT. He brought up cache, which is there to help memory latency deficiencies, but again, the memory latency does NOT change between applications/workloads. And this whole argument is about memory and Infinity Fabric latency.

My statement was "Everything I have read about testing the IPC of a processor excludes games as a proper testing platform." Why? Because games are not testing IPC; they are testing total system performance (not just processor performance), which IS NOT IPC. You can have a system with low IPC and still have high system performance, just as you can have a system with high IPC and still have low system performance. (That is where clock speed comes into play: IPC times clock speed gives relative performance.)

In all reality, NO benchmark gives the true IPC of a CPU, but there are applications and benchmarks that are much closer to demonstrating real-world IPC, and games are not one of them. Games are more accurate at demonstrating total system performance and don't even come close to showing a CPU's IPC. I.e., game benchmark results are no indication of IPC (see above about low/high IPC and system performance). The problem with this argument is that many of you are confusing IPC with system performance, which are two completely different things.
 
In all reality, NO benchmark gives the true IPC of a CPU, but there are applications and benchmarks that are much closer to demonstrating real-world IPC, and games are not one of them.

So your complaint is that because AMD loses, games being 'less accurate' than other, also inaccurate, benchmarks in terms of measuring academic IPC makes games inadmissible.
 
Go read my original comment that started this argument. You are the one who is fixated on gaming as a metric for IPC, which has nothing to do with AMD being behind by a few fps. AMD has higher IPC than Intel; it has been proven. They are not behind in gaming because of IPC.

You are trying to use an invalid reference (gaming) to define IPC, which gaming does not do. That is the gist of it.
 
Go read my original comment that started this argument. You are the one who is fixated on gaming as a metric for IPC, which has nothing to do with AMD being behind by a few fps. AMD has higher IPC than Intel; it has been proven. They are not behind in gaming because of IPC.

Well, they are, if you're actually measuring per-clock performance.

You are trying to use an invalid reference (gaming) to define IPC, which gaming does not do. That is the gist of it.

I'm using your reference.
 
No, games are not a measure of per-clock performance. Games are a measure of complete system performance.

As for my reference, you are looking at the games only, which do not demonstrate IPC; try taking a look at the benchmarks they show before the game results, as those are closer to showing actual IPC than the games.
 
No, games are not a measure of per-clock performance. Games are a measure of complete system performance.

Every piece of software measures complete system performance in some mix.

As for my reference, you are looking at the games only, which do not demonstrate IPC; try taking a look at the benchmarks they show before the game results, as those are closer to showing actual IPC than the games.

And don't show IPC for games.


They all show measurable IPC. 'Actual IPC', which is really your academic IPC, can't be measured, and you're trying to pick and choose which benchmarks you take in order to make a claim from your own reference that uses the very benchmarks that you are trying to exclude.
 
You want to redefine IPC. Well, actually, it isn't you: reviewers have misled and manipulated the general public into believing that game benchmarks indicate IPC, when in reality IPC is just one piece of the puzzle that makes up the end results. Games test every aspect of the complete system, where other more reliable benchmarks do not. Example: media encoding is not influenced by the GPU (unless you use the GPU to do the encoding), so the GPU does not affect the results when using the CPU for encoding. Even at low resolution, the GPU and its drivers affect the end result in gaming, and that is just one small example of the many influences, or pieces of the puzzle, that make up a game benchmark's results. (There are many more pieces in the puzzle.)

But you can keep believing games indicate IPC, and you can keep being 100% wrong. That's like saying a cherry pie represents sugar based on the taste of the pie, although sugar is only part of the recipe that makes up the pie.
 
You want to redefine IPC. Well, actually, it isn't you, as reviewers have mislead and manipulated the general public in believing that game benchmarks indicate IPC

They're measuring various workloads per clock. You can complain about that and move the goalposts in your own argument, but you're not proving your point.
 
I can't believe you guys are still going. We get it, you're both pretty; now can you get over each other? You're arguing semantics with neither of you being technically wrong. One is theoretical and one is actual. Actual changes depending on what is being tested; theoretical will always be higher than actual (well, unless you can develop a perfect benchmark, but it would be useless for anything except proving the theoretical).
 
IPC depends on the application being used. There is no one true IPC measurement for a CPU that is application agnostic. That is why we see several application measurements averaged together as an approximation of IPC. This theoretical-vs-actual argument is pointless. Just bench the apps you are using. The more apps you include in your performance average, the closer your IPC approximation will be to reality.

It is clear which CPU has greater IPC overall -

So get over it.
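The "average several applications" approach usually means a geometric mean of per-app ratios, so no single benchmark can dominate the aggregate. A minimal sketch, with invented per-clock scores normalized against a baseline CPU (1.00 = baseline):

```python
import math

def geomean(ratios: list[float]) -> float:
    """Geometric mean: the conventional way to average benchmark ratios."""
    return math.prod(ratios) ** (1 / len(ratios))

# Invented per-app, per-clock scores relative to a baseline chip.
ratios = [1.10, 0.95, 1.20, 1.05]

print(f"approximate overall IPC uplift: {geomean(ratios):.3f}x")
```

The geometric mean is used for ratios because it treats a 20% win and a 20% loss symmetrically, which an arithmetic mean does not.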
 
They're measuring various workloads per clock. You can complain about that and move the goalposts in your own argument, but you're not proving your point.
I haven't moved the goalposts. You are trying to place every aspect of a computer's performance under IPC. My original statement is that AMD has the IPC crown; that is fact, and you want to argue against that fact. You say "not in gaming," but that is false, as gaming does not test IPC; it is just part of the equation. Once again, to test IPC you must run ONE set piece of code. Games are not one set piece of code. If anyone has moved the goalposts, it is you. Basically, all you are doing at this point is trolling.
 
This one might cause you to snort your drink out of your nose:
https://nl.hardware.info/nieuws/66737/intel-ceo-bevestigt-plannen-voor-7nm-cpus-in-2021

Not really sure what 1272 or 1274 is. In any case, it seems Intel has not learned its lesson about keeping goals and expectations reasonable, unless 10nm has really turned out to be a sort of 7nm-- and 7nm-.
While many don't realize it, they've changed to perf/W rather than density (transistors per mm^2).
 
While many don't realize it, they've changed to perf/W rather than density (transistors per mm^2).

Please explain this perf/W metric (since W is a fantasy number nowadays and perf can be attached to just about any favorable benchmark).

And to the other two arguing: it's running in circles... yeah, people claimed Intel the IPC king for years, and now IPC isn't IPC, but it's IPC while not being IPC here and there... Intel is spinning it and AMD should continue doing what they're doing. In the end you buy a product for either the best performance in X scenario or the best overall performance, and that's where Intel and AMD differ...
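That ambiguity is easy to demonstrate: "perf/W" depends entirely on which performance number and which power number you divide. A toy example with invented chips and invented figures, where the efficiency ranking flips depending on whether you plug in the rated TDP or the measured package power:

```python
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

# Invented chips: similar benchmark scores, very different power figures.
chip_a = {"score": 1000, "tdp": 65, "measured": 90}  # blows past its TDP
chip_b = {"score": 950,  "tdp": 95, "measured": 80}  # stays under its TDP

by_tdp      = (perf_per_watt(chip_a["score"], chip_a["tdp"]),
               perf_per_watt(chip_b["score"], chip_b["tdp"]))
by_measured = (perf_per_watt(chip_a["score"], chip_a["measured"]),
               perf_per_watt(chip_b["score"], chip_b["measured"]))

# On paper (TDP) chip A looks far more efficient; measured, chip B wins.
print(by_tdp[0] > by_tdp[1], by_measured[0] > by_measured[1])  # True False
```

So a perf/W claim is only as honest as its two inputs, which is exactly the complaint above.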
 
Please explain this perf/W metric (since W is a fantasy number nowadays and perf can be attached to just about any favorable benchmark).

And to the other two arguing: it's running in circles... yeah, people claimed Intel the IPC king for years, and now IPC isn't IPC, but it's IPC while not being IPC here and there... Intel is spinning it and AMD should continue doing what they're doing. In the end you buy a product for either the best performance in X scenario or the best overall performance, and that's where Intel and AMD differ...
I believe this is what they've been doing for the last couple of gens since Haswell. Basically wattage over performance (sure, you'll get more performance like always, but it will be just an afterthought). I think they'll just push further and further into higher clocks.
 
Maybe Intel learned marketing from AT&T and they will call it 7nmE which turns out to really be 10nm but cleverly marketed to confuse consumers into thinking it is actually 7nm.
 

Oooo that clock speed! Overall it is an improvement over the 8550U, beating most of the benchmarks while not clocking as high as the 8550U; this will enable Intel to continue dominating the mobile space. In some ways I am not surprised by the clock speed, given how mature the 14nm process is. I am skeptical that Intel can get desktop Ice Lake to 5 GHz, but I will have to wait and see.
 
If Intel can keep the performance the same or higher and drop power draw, they have a knockout.

And those gaming benchmarks... man I hope they got VRR into this spin.
 
If Intel can keep the performance the same or higher and drop power draw, they have a knockout.

And those gaming benchmarks... man I hope they got VRR into this spin.

At 15w mode, it looked to trade blows with Whiskey Lake.

Everything about Ice Lake looks closer to Zen 2 than CFL, which is not a bad thing. Latency, IPC, max frequency, and power look like they will be similar.

The iGPU looked impressive. Still, by the time this releases, AMD should have their Navi based APUs.
 
At 15w mode, it looked to trade blows with Whiskey Lake.

It's trading blows at 15w with 25w Whiskey Lake- that's a massive improvement.

Everything about Ice Lake looks closer to Zen 2 than CFL, which is not a bad thing. Latency, IPC, max frequency, and power look like they will be similar.

Two standouts: the first is the branch predictor- this is massively improved and is really at the root of what makes a CPU good at being a CPU, that is, working through branching logic. The latency numbers appear to have been negated through other core improvements such that in aggregate the cores are improved. This is similar, though at a very different scale, to how AMD shot memory latency to shit with Zen 2, but mitigated most of the effects of that decision by bolting on a ton of cache.
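For a feel of why the branch predictor matters so much: the core guesses each branch before it resolves, and every mispredict flushes in-flight work. A toy 2-bit saturating-counter predictor (a real but very simplified scheme; all numbers invented) shows how accuracy differs between a regular loop branch and a data-dependent random branch:

```python
import random

def predict_accuracy(outcomes: list[bool]) -> float:
    """2-bit saturating counter: states 0-1 predict not-taken, 2-3 taken."""
    state, correct = 2, 0
    for taken in outcomes:
        if (state >= 2) == taken:
            correct += 1
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(outcomes)

loopy = [True] * 99 + [False]  # a 100-iteration loop's backward branch
random.seed(0)
chaotic = [random.random() < 0.5 for _ in range(100)]  # data-dependent branch

print(f"loop branch:   {predict_accuracy(loopy):.0%}")    # ~99%
print(f"random branch: {predict_accuracy(chaotic):.0%}")  # roughly coin-flip
```

Real predictors are vastly more sophisticated than this, but the principle is the same: the better the guesses on branching logic, the less work the pipeline throws away, which is why a predictor upgrade lifts nearly every workload.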

The second standout is power, which runs with the first point- Ice Lake is doing in a 15w envelope things that take Whiskey Lake 25w, and Whiskey Lake is already several steps ahead of Ryzen in terms of performance per watt.

The iGPU looked impressive.

Especially given the power usage. That's huge.

Still, by the time this releases, AMD should have their Navi based APUs.

We can expect AMD to bring more GPU performance and perhaps a competent upgrade to the CPU performance from their previous APUs, but I don't expect them to be competitive at all in the 15w space, nor to be competitive in terms of battery life. And I'm being honest here: I hope I'm wrong.
 