Intel 7nm ambitions are...lofty

Right after you explain to us what part of the CPU core is memory and uncore, since IPC has to do with the CPU core only.

Wrong. IPC is instructions per clock, and those instructions do not reach the core without memory and uncore. It cannot be calculated without them, and obviously the CPU cannot be used without them.

Further, the natural application of your argument, and perhaps the point of making such a fallacious argument in the first place, hinges on the potential for AMD to build a Ryzen CPU that is not affected by memory and uncore the way every iteration so far has been.

And that just falls hilariously flat when you're making arguments about products available and testable today.
 
Right after you explain to us what part of the CPU core is memory and uncore, since IPC has to do with the CPU core only.

No it doesn't. IPC is how many instructions a CPU can execute in one clock cycle. To execute an instruction, a CPU has to fetch it from memory and then store the results back. There are a lot of tricks both companies use to mask the latency involved in fetching and storing, but eventually the CPU will have to go out to main memory to keep working on anything useful.
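If you want to see that stall directly, here's a rough C sketch (the array size and loop count are just numbers picked for illustration): a chain of dependent loads through a huge shuffled array sends the core out to main memory on nearly every access, so it spends most of its time waiting rather than executing.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M entries, far larger than any cache */

int main(void) {
    /* Build one big cycle through the array (Sattolo's algorithm), so
       every load depends on the result of the previous one. */
    size_t *next = malloc(N * sizeof *next);
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % i;   /* j < i guarantees a single cycle */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* The core can't prefetch or reorder around this chain; it mostly
       sits idle waiting on DRAM, which is exactly the stall described. */
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    clock_t t1 = clock();

    printf("pointer chase: %.2f s (p=%zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, p);
    free(next);
    return 0;
}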
 
No it doesn't. IPC is how many instructions a CPU can execute in one clock cycle. To execute an instruction, a CPU has to fetch it from memory and then store the results back. There are a lot of tricks both companies use to mask the latency involved in fetching and storing, but eventually the CPU will have to go out to main memory to keep working on anything useful.
Not entirely true. You can operate entirely on data in cache/register memory, but that is a very high level of optimization that generally isn't used in PCs.
 
No it doesn't. IPC is how many instructions a CPU can execute in one clock cycle. To execute an instruction, a CPU has to fetch it from memory and then store the results back. There are a lot of tricks both companies use to mask the latency involved in fetching and storing, but eventually the CPU will have to go out to main memory to keep working on anything useful.

You are correct. I wasn't even thinking about the fetching and storing of results, because I have been talking about memory and Infinity Fabric latency, which is separate from IPC.
 
I think everyone can agree that Ryzen has higher IPC in everything EXCEPT games

I would not agree with that. If you removed the "everything" part and said that more often than not AMD has a slight or small IPC advantage, I would agree. But not everything, and not just games. There are regular applications that will behave like games.
 
I think everyone can agree that Ryzen has higher IPC in everything EXCEPT games

False, it is not IPC that is causing Ryzen to fall behind in games, it is the memory and Infinity Fabric latency that is causing it. Let's put it to you a different way. Let's say you can fold 100 shirts an hour and you can get 100 shirts brought to you every hour, so it would take you 9 hours to fold 900 shirts. Now let's say your mom can fold 110 shirts per hour, but she can only get 100 shirts brought to her every 1.5 hours because her delivery truck goes slower, so it would take your mom 13.5 hours to do the same 900 shirts. Does that mean you fold clothes faster than your mom? NO, your mom just isn't able to get the same 100 shirts delivered every hour, and the slower delivery truck keeps her from working to her potential and outperforming you. Same thing here: Ryzen isn't able to get an equal amount of instructions fast enough due to the latency, which means there are a lot of idle cycles, with the CPU doing nothing because it is waiting for work.
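Here's the same shirt arithmetic as a throwaway C sketch, using only the made-up numbers from the analogy: the folding rate only matters when the truck can keep up.

Code:
#include <stdio.h>

/* Toy model of the shirt analogy: folding rate vs. delivery rate. */
static double hours_for(double fold_rate, double shirts_per_delivery,
                        double hours_per_delivery, double total) {
    double batches = total / shirts_per_delivery;             /* deliveries needed */
    double fold_time_per_batch = shirts_per_delivery / fold_rate;
    /* You can't fold faster than the truck delivers: take the slower of the two. */
    double effective = fold_time_per_batch > hours_per_delivery
                     ? fold_time_per_batch : hours_per_delivery;
    return batches * effective;
}

int main(void) {
    printf("you: %.1f h\n", hours_for(100, 100, 1.0, 900));  /* 9.0 h  */
    printf("mom: %.1f h\n", hours_for(110, 100, 1.5, 900));  /* 13.5 h */
    return 0;
}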
 
I think everyone can agree that Ryzen has higher IPC in everything EXCEPT games

No, they have higher IPC >.< They have lower clock speed. In almost everything we do with a PC, AMD has enough of an IPC advantage that the clock speed doesn't keep Intel ahead. Games, however, are unique... they tend to not thread well. (It's not a game developer issue.) Games simply don't thread well because they are highly dependent on multiple moving points of data. If X drops, do Y... if AI XXX chooses path 1, then inform AI YYY; if not, inform AI ZZZ... if the player moves the mouse left, render this; if the player moves the mouse right, render that. Games have 1001 things that don't branch well. A CPU will have a hard time knowing what to split and what not to... what it can predict it will need and what it won't. And it all changes in a millisecond. Most software doesn't work that way. A CPU can easily predict the best way to fill its cache when it's being asked to render something over and over... or being asked to apply the same filter to a million pixels in a photo, etc.
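A classic toy demonstration of that predictability difference, as a rough C sketch (array size and repeat counts are arbitrary picks of mine): summing values behind a data-dependent branch gets dramatically faster once the data is sorted and the branch becomes predictable; the unsorted, effectively random branching is much closer to what game logic hands the CPU.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1 << 20)   /* 1M values */
#define REPS 100

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* The branch inside this loop is taken or not depending on the data. */
static long long sum_over_threshold(const int *v, int n) {
    long long s = 0;
    for (int i = 0; i < n; i++)
        if (v[i] >= 128) s += v[i];
    return s;
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    for (int i = 0; i < N; i++) v[i] = rand() % 256;

    /* Random order: the branch predictor guesses wrong about half the time. */
    clock_t t0 = clock();
    long long a = 0;
    for (int r = 0; r < REPS; r++) a += sum_over_threshold(v, N);
    clock_t t1 = clock();

    /* Same data sorted: the branch pattern becomes trivially predictable. */
    qsort(v, N, sizeof *v, cmp);
    clock_t t2 = clock();
    long long b = 0;
    for (int r = 0; r < REPS; r++) b += sum_over_threshold(v, N);
    clock_t t3 = clock();

    printf("random: %.2f s   sorted: %.2f s   (sums %lld %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, a, b);
    free(v);
    return 0;
}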

Intel still wins in games a bit because, yes, they have a clock speed advantage right now. That helps when your software runs on a couple of cores and latency matters more than raw compute power (games).

If AMD was able to squeeze another 200-300 MHz out of Zen 2, Intel wouldn't win anything at all. The next-gen Intel chips are also likely to be chiplets, and are likely to also give up a bit of clock speed. Intel's failure at 10nm, on the bright side for them, allowed them to produce their top-end chips on a very, very mature process that they have got running extremely well. It is possible that the current Intel chips may be the highest-clocked chips we see for years to come.
 
No, they have higher IPC >.< They have lower clock speed. In almost everything we do with a PC, AMD has enough of an IPC advantage that the clock speed doesn't keep Intel ahead. Games, however, are unique... they tend to not thread well. (It's not a game developer issue.) Games simply don't thread well because they are highly dependent on multiple moving points of data. If X drops, do Y... if AI XXX chooses path 1, then inform AI YYY; if not, inform AI ZZZ... if the player moves the mouse left, render this; if the player moves the mouse right, render that. Games have 1001 things that don't branch well. A CPU will have a hard time knowing what to split and what not to... what it can predict it will need and what it won't. And it all changes in a millisecond. Most software doesn't work that way. A CPU can easily predict the best way to fill its cache when it's being asked to render something over and over... or being asked to apply the same filter to a million pixels in a photo, etc.

Intel still wins in games a bit because, yes, they have a clock speed advantage right now. That helps when your software runs on a couple of cores and latency matters more than raw compute power (games).

If AMD was able to squeeze another 200-300 MHz out of Zen 2, Intel wouldn't win anything at all. The next-gen Intel chips are also likely to be chiplets, and are likely to also give up a bit of clock speed. Intel's failure at 10nm, on the bright side for them, allowed them to produce their top-end chips on a very, very mature process that they have got running extremely well. It is possible that the current Intel chips may be the highest-clocked chips we see for years to come.

Lol! Did you read my post?
Yes, Ryzen 3000 has higher IPC.
Ryzen doesn't clock as high as Intel, so it almost evens out.
 
You are correct. I wasn't even thinking about the fetching and storing of results, because I have been talking about memory and Infinity Fabric latency, which is separate from IPC.

It's not separate though. If the CPU is waiting on data or instructions from memory, then it's not executing instructions and IPC goes down. Caches, branch predictors, and prefetch algorithms are all means of masking how long it takes to access memory so the execution pipeline can stay full as much as possible. But if you don't have a good cache strategy, your branch predictor isn't accurate enough, or your prefetching algorithm isn't getting enough of the right data, then your IPC is going to be negatively impacted.
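A quick way to watch those latency-hiding tricks at work, as a rough C sketch (sizes are arbitrary, and exact ratios will vary by platform): the same number of loads runs far faster sequentially, where the hardware prefetcher can stream data ahead of the pipeline, than when each access jumps a full page, a pattern prefetchers generally won't follow.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      (1u << 26)   /* 64M ints, ~256 MB: dwarfs the caches      */
#define STRIDE 1024         /* 1024 ints = 4 KB, one page per access      */

int main(void) {
    int *v = malloc((size_t)N * sizeof *v);
    for (size_t i = 0; i < N; i++) v[i] = (int)i;
    long long s = 0;

    /* Sequential: the prefetcher sees the pattern and hides DRAM latency. */
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) s += v[i];
    clock_t t1 = clock();
    printf("sequential:  %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Page-stride: same load count, but prefetchers generally stop at page
       boundaries, so the pipeline drains while each load waits on memory. */
    t0 = clock();
    for (size_t off = 0; off < STRIDE; off++)
        for (size_t i = off; i < N; i += STRIDE) s += v[i];
    t1 = clock();
    printf("page-stride: %.2f s (s=%lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, s);

    free(v);
    return 0;
}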
 
No, they have higher IPC >.< They have lower clock speed. In almost everything we do with a PC, AMD has enough of an IPC advantage that the clock speed doesn't keep Intel ahead. Games, however, are unique... they tend to not thread well. (It's not a game developer issue.) Games simply don't thread well because they are highly dependent on multiple moving points of data. If X drops, do Y... if AI XXX chooses path 1, then inform AI YYY; if not, inform AI ZZZ... if the player moves the mouse left, render this; if the player moves the mouse right, render that. Games have 1001 things that don't branch well. A CPU will have a hard time knowing what to split and what not to... what it can predict it will need and what it won't. And it all changes in a millisecond. Most software doesn't work that way. A CPU can easily predict the best way to fill its cache when it's being asked to render something over and over... or being asked to apply the same filter to a million pixels in a photo, etc.

Intel still wins in games a bit because, yes, they have a clock speed advantage right now. That helps when your software runs on a couple of cores and latency matters more than raw compute power (games).

If AMD was able to squeeze another 200-300 MHz out of Zen 2, Intel wouldn't win anything at all. The next-gen Intel chips are also likely to be chiplets, and are likely to also give up a bit of clock speed. Intel's failure at 10nm, on the bright side for them, allowed them to produce their top-end chips on a very, very mature process that they have got running extremely well. It is possible that the current Intel chips may be the highest-clocked chips we see for years to come.


Dude, I understand, I read all of your previous posts......
You HAVE to look at it as a whole.. They all work together. There is zero way for Ryzen 3000 to get around that memory/fabric latency in the end.
You can't separate it like that. IPC is affected by a combination of lots of things, latency being one of them.
Ryzen DESTROYS Intel at everything except gaming workloads........ enough.
 
I recall reading an article where somebody from Intel said they were trying to more than double (I think it was 2.7x) the transistor density (or something like that) with 10nm, where they normally had only been doubling it. With 7nm they are going back to just 2x. They blamed the 10nm problems on trying to do too much.
 
Intel keeps insisting that its process node is more advanced than the competition...it's a way of saying that its 10nm node is roughly on par with AMD's 7nm node...
Considering the gate pitch on Intel's 14nm is still tighter than TSMC's 7nm, I think it's safe to say that Intel's process is more advanced.
 
I love the fact that this debate is so heated. As a consumer, just buy the best chip you can afford from either camp. No matter what, you will have good performance. If you need more than a dozen threads, you probably want AMD. If not, and you only really game, then you probably want Intel. But in the end, for user experience, either are solid choices, and the difference in what each is good at is less than 10%.

What I just stated is basically true: for end users it doesn't perceptibly matter which CPU you choose for real-world performance. Those that are compiling video and such know the video card today is more of an accelerator than the CPU. But yeah, a nice 16-core is good if you're gaming and streaming off the same box, with multiple threads per job. The average consumer doesn't do that, though. Hell, I'm not even average, but we knew that because we're on a tech forum for a defunct website. We, my friends, are the elite users and system builders of the world today, and even we as a community can acknowledge that frame for frame, clock for clock, we are at an amazing time in the tech industry.

In the past 3 years we have seen ray tracing hardware introduced to the masses, core counts triple on average, power use get lower and lower, NVMe PCIe interfaces for storage, and prices per gig drop through the floor on storage and memory. Memory and motherboards have become great hardware, with an ease of use and appearance that is better than ever. Not to mention frame-synced displays. Oh, and let's not leave out low-latency, high-speed network connections becoming more and more ubiquitous.

Let's shake hands and realize we are at an amazing time. Then let's go back to this debate here and give others the credit due. Our opinions on which device has a less-than-5% margin of performance advantage are entertaining, but not a reason to tear each other down. :)
 
How is this ambitious? This is the minimum to stay competitive.

Well, I think a node change every year is pretty ambitious. I don't think anyone has changed nodes every year for 3-4 years like that. Of course, they sort of got behind a bit with their 10nm foibles.
 
Let's shake hands and realize we are at an amazing time. Then let's go back to this debate here and give others the credit due. Our opinions on which device has a less-than-5% margin of performance advantage are entertaining, but not a reason to tear each other down. :)

Think we can all agree that it is great to have real competition again. I am looking forward to Intel's proper 10nm parts... and seeing them hopefully get more aggressive. I have my fingers crossed that they use their 3D stacking on their next 10nm desktop part. I know if it's not this one it will be the next... but a big swing-for-the-fences chip next will really shake things up. Hopefully it forces AMD to battle back with a Zen 2+ in the spring. The next year or two should be a lot of fun for all of us. CPU battles... Intel entering the GPU ring for real. NV and AMD very likely going to chiplet designs with their next parts. Nice to have new things to argue about. lol
 
I love the fact that this debate is so heated. As a consumer, just buy the best chip you can afford from either camp. No matter what, you will have good performance. If you need more than a dozen threads, you probably want AMD. If not, and you only really game, then you probably want Intel. But in the end, for user experience, either are solid choices, and the difference in what each is good at is less than 10%.

What I just stated is basically true: for end users it doesn't perceptibly matter which CPU you choose for real-world performance. Those that are compiling video and such know the video card today is more of an accelerator than the CPU. But yeah, a nice 16-core is good if you're gaming and streaming off the same box, with multiple threads per job. The average consumer doesn't do that, though. Hell, I'm not even average, but we knew that because we're on a tech forum for a defunct website. We, my friends, are the elite users and system builders of the world today, and even we as a community can acknowledge that frame for frame, clock for clock, we are at an amazing time in the tech industry.

In the past 3 years we have seen ray tracing hardware introduced to the masses, core counts triple on average, power use get lower and lower, NVMe PCIe interfaces for storage, and prices per gig drop through the floor on storage and memory. Memory and motherboards have become great hardware, with an ease of use and appearance that is better than ever. Not to mention frame-synced displays. Oh, and let's not leave out low-latency, high-speed network connections becoming more and more ubiquitous.

Let's shake hands and realize we are at an amazing time. Then let's go back to this debate here and give others the credit due. Our opinions on which device has a less-than-5% margin of performance advantage are entertaining, but not a reason to tear each other down. :)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 


Thoughts on thread ^^

I'm not up to snuff on the modern SIMD era, but in my old 80386 books you were given a clear list of how many cycles each given instruction took for every CPU. I would imagine even in modern designs IPC would be a variable, given the vast number of instructions. Seems like an argument nobody can win, though in most real applications it seems Coffee Lake and Matisse are... very close.
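You can still poke at this yourself on x86. A rough C sketch using the timestamp counter (GCC/Clang; note that on modern chips rdtsc ticks at a fixed reference rate rather than the real core clock, so treat the numbers as relative only): cheap adds and expensive divides give you very different effective cycles per instruction, which is exactly why a single IPC number doesn't exist.

Code:
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

#define ITERS 100000000UL

int main(void) {
    volatile long a = 1, b = 3;  /* volatile keeps the loops from being optimized away */
    unsigned long long t0, t1;

    /* Simple add: modern cores can retire several of these per cycle. */
    t0 = __rdtsc();
    for (unsigned long i = 0; i < ITERS; i++) a += 1;
    t1 = __rdtsc();
    printf("add: ~%.2f ref-cycles/iteration\n", (double)(t1 - t0) / ITERS);

    /* Integer divide: tens of cycles each, and poorly pipelined. */
    t0 = __rdtsc();
    for (unsigned long i = 0; i < ITERS / 100; i++) a = b / (a | 1);
    t1 = __rdtsc();
    printf("div: ~%.2f ref-cycles/iteration\n",
           (double)(t1 - t0) / (double)(ITERS / 100));
    return 0;
}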
 
Think we can all agree that it is great to have real competition again. I am looking forward to Intel's proper 10nm parts... and seeing them hopefully get more aggressive. I have my fingers crossed that they use their 3D stacking on their next 10nm desktop part. I know if it's not this one it will be the next... but a big swing-for-the-fences chip next will really shake things up. Hopefully it forces AMD to battle back with a Zen 2+ in the spring. The next year or two should be a lot of fun for all of us. CPU battles... Intel entering the GPU ring for real. NV and AMD very likely going to chiplet designs with their next parts. Nice to have new things to argue about. lol
Intel 10nm is primarily targeted at the laptop segment.
When Intel starts talking about a 7nm release in 2021, prior to any 10nm desktop parts being available, you can read between the lines that 10nm is complete trash and Intel is just buying time until 7nm is ready.
My prediction: Intel 7nm mobile and Intel GPU chips in 2021, with 7nm desktop/server CPU parts in 2022.
 
Do they really care, though? They still hold an IPC advantage, and one would think once Intel moves to 7nm they would have that plus a new architecture with more than just some 2-3% IPC increase.

AMD has dramatically closed the gap and will win back market share, but Intel will swing back before they need to seriously worry about losing major market share.

Unless there is a huge switch to EPYC, that is. AMD still has to get their laptop chips out and compete.

If Ryzen 2 can beat cheap Intel chips in laptops while remaining power efficient AND win over the server/datacenter market, then Intel needs to worry.

IPC is a wash with Zen 2, so it's not really an advantage Intel can brag about. Now, Intel is not expected to have anything until late 2020-2021, and 2019 is already almost done. AMD is not really going to be sitting on its ass for the time being; they will probably be close to launching Zen 4 by then. Your comment assumes AMD won't have any improvements for the next two years.
 
The 10nm laptop part was down 30% or more on clocks versus its 14nm brethren. Although it was more efficient in one or two scenarios, it was not as fast in most common, shorter peak-load tests due to clocks.

This chart is horseshit with an unlabeled Y-axis though..

Unlabeled means the axis is dual-unit. All future dates are measured in "hopes and dreams" and all past dates are measured in "unmitigated bullshit".
 
It's not separate though. If the CPU is waiting on data or instructions from memory, then it's not executing instructions and IPC goes down. Caches, branch predictors, and prefetch algorithms are all means of masking how long it takes to access memory so the execution pipeline can stay full as much as possible. But if you don't have a good cache strategy, your branch predictor isn't accurate enough, or your prefetching algorithm isn't getting enough of the right data, then your IPC is going to be negatively impacted.

Yes, and no. You are really making this more technical than I wanted to get; I was just trying to keep it simple. The I/O speed of memory, its latency, does not change based on workload. If the memory latency is X ns, it will always be consistently X ns, no matter the instruction stream. This is separate from the number of input and output calls an application requires (these calls can be from the application itself, the keyboard, the sound card, the GPU, or other pieces of hardware or software), and this is where the latency affects performance. But the latency between the CPU and memory is not changing, as it will always be a consistent X ns, and the raw IPC of the CPU does not change (hence why I say they are separate). If the raw IPC of a CPU is 1 million instructions per cycle, that CPU is still capable of doing 1 million instructions per cycle, and that capability does not change due to the latency of the memory. As you add more input and output calls from the application and devices, each one is affected by memory latency, as well as the OS scheduler, priority, CPU time, etc., because the more input/output calls are made, the more switching from one set of instructions to another is taking place (starting and stopping the instruction streams), and those handoffs are where the memory and Infinity Fabric latency affects performance. But the processor is still capable of performing 1 million instructions per cycle; it is just sitting idle due to the constant switching between instruction streams.

Example (simple math with a simple example): If Intel can do 100 instructions per cycle (per second, here) of the same instruction stream, and its memory latency is 1 second, it can chew through 5800 of those instructions in a 60-second test and is idle for 2 seconds due to latency. If AMD can do 110 instructions per cycle (per second) of the same instruction stream, and its memory latency is 1.5 seconds, it can chew through 6270 instructions in a 60-second test and is idle for 3 seconds due to latency. Now if you add a second instruction stream that switches every 10 seconds, Intel is only able to chew through 4800 instructions in 60 seconds and is idle for 12 seconds, not because its core capabilities have changed (it can still do 100 instructions per second), but because the throughput has changed, due to having to switch to a different instruction stream and sitting idle (not able to perform any work while it waits for the new instructions). It will affect AMD worse, due to the higher latency: it will only be able to chew through 4620 instructions in 60 seconds, sitting idle for 18 seconds, not because its capability to do 110 instructions per second (its IPC) has changed, but because the throughput has changed. (In most modern CPUs the instruction cycles are executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps, so my example is not including any delay within the steps.)
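(And to make that arithmetic checkable, here it is as a few lines of C with the same made-up numbers:)

Code:
#include <stdio.h>

/* Toy model from the post above: a chip that can retire 'rate' instructions
   per second when fed, but sits idle for 'idle' seconds of a 60-second test. */
static void report(const char *name, double rate, double idle) {
    printf("%s: %.0f instructions in 60 s (idle %.0f s)\n",
           name, rate * (60.0 - idle), idle);
}

int main(void) {
    report("Intel, one stream  ", 100, 2);    /* 5800 */
    report("AMD,   one stream  ", 110, 3);    /* 6270 */
    report("Intel, two streams ", 100, 12);   /* 4800 */
    report("AMD,   two streams ", 110, 18);   /* 4620 */
    return 0;
}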
 
Yes, and No.

Yes.

You cannot test IPC, and thus cannot feasibly discuss IPC, absent the components in and around the CPU.

You're trying to argue an untestable hypothetical and your argument gets more convoluted (and no less wrong) with every revision.
 
Yes.

You cannot test IPC, and thus cannot feasibly discuss IPC, absent the components in and around the CPU.

You're trying to argue an untestable hypothetical and your argument gets more convoluted (and no less wrong) with every revision.

Bullshit! You need to learn how experiments are run, baselines are set, and results are calculated.. Same principle here.
 
Bullshit! You need to learn how experiments and baselines are set.. Same principle here.

Explain how you're going to test this, lol.

And then explain how those results would ever be applicable to the real world.

Yes, I understand the scientific method. Good luck.
 
Really, you just throw out personal attacks with nothing else. Enlighten me.

Only in response to yours, and to your continued insistence on claiming something counter to your own sources and basic testing principles.

You can keep making your claim- but you'll only look more ridiculous, especially given that the topic is Intel's 7nm release and your argument is how AMD has in 2019 caught up with an Intel design from 2015 on an even older fabrication process.
 
I have educated myself. It appears you are not capable of understanding at such a high level. (See, I can throw out personal attacks too.) You have not once explained how I am wrong... You can do so at any time, or are you only capable of making personal attacks?

I don't have to explain how you're wrong. You posted the evidence disproving your own argument.

And now you are trying to argue against it :ROFLMAO:
 
I don't have to explain how you're wrong. You posted the evidence disproving your own argument.

And now you are trying to argue against it :ROFLMAO:

You are slow as shit; you are posting responses to comments 10 minutes after I delete them.. because I didn't like what I said, so I removed them to re-do them. (I re-added some of them, since you responded so slowly, hence why the post is after your response.) No wonder you seem to believe I don't know what I am talking about; you can't seem to think at that level, or at that speed.

You say that IPC can't be tested, then say the article I posted disproves what I am saying, because you are trying to use the games to determine IPC to argue with me. So which is it: can you test for IPC or can't you? If you can't, as you say, then how are you using the game results to argue? If IPC is the factor that gives Intel the win in games, why is Intel behind in multi-core and single-core performance? Doesn't IPC affect those? I mean, if it has nothing to do with what I explained above, or is due to the core's raw IPC and not memory and Infinity Fabric latency, then go ahead, explain how Intel loses at everything else but gaming.
 
You say that IPC can't be tested, then say the article I posted disproves what I am saying, because you are trying to use the games to determine IPC to argue my point. So which is it: can you test for IPC or can't you? If IPC is the factor that gives Intel the win in games, why is Intel behind in multi-core and single-core performance? Doesn't IPC affect this? I mean, if it has nothing to do with what I explained above in my example, please, enlighten me.

I'll leave your first line alone, in case you want to delete that too.

But one very clear point you seem to be missing: IPC isn't a single number. It's entirely relative to the workload fed to the CPU.

And again, you can test IPC, but you can't show IPC for a 'core'; the lowest level of testing is whatever can actually be isolated. That means the whole CPU, at very best, compared to other whole CPUs in the same board with the same settings and the same memory and so on.

And if you're comparing CPUs with different platforms, you're going to have to accept an increase in variables.
 
I mean if you look at the benchmarks that are out there it's pretty clear.
Ryzen 3000 has higher IPC in everything except some gaming workloads......
 
I'll leave your first line alone, in case you want to delete that too.

But one very clear point you seem to be missing: IPC isn't a single number. It's entirely relative to the workload fed to the CPU.

And again, you can test IPC, but you can't show IPC for a 'core'; the lowest level of testing is whatever can actually be isolated. That means the whole CPU, at very best, compared to other whole CPUs in the same board with the same settings and the same memory and so on.

And if you're comparing CPUs with different platforms, you're going to have to accept an increase in variables.

You are partially correct when you say IPC isn't a single number, as it has many influences that are through no fault of the CPU. However, there is a maximum IPC that a processor can do; that is a fixed number, and it will never change. IPC is affected by the instruction sets coded for. There are specific instruction sets that only AMD supports, specific instruction sets that only Intel supports, and instruction sets that both support. Depending on which instruction set is used, your IPC will change. (Kind of like gasoline for a car: there are different grades of fuel, and each grade affects the mileage of the car.) Now to test IPC, you have to use ONE set of code, which should use the same instruction set, but rarely does, as it is normally dependent on which processor it is being run on.

When it comes to games, you are not testing one set of code, but multiple sets and branches of code simultaneously, such as code for the sound engine, code for the graphics engine (the majority is done on the GPU, but not all), as well as code for the AI and other parts that make up the game. Most games are designed to use instruction sets that run on both CPUs (not necessarily ones that are optimal for either side, but it is pretty safe to say they most likely favor Intel). Which means games are not testing the actual IPC of a CPU; in reality, they are testing the caveats of IPC more so than IPC itself. The caveats are the things that influence the results through no fault of the characteristics of the CPU, in the same way mileage in a car is influenced by the route taken, or how many red/green lights it hits. Memory latency and Infinity Fabric ("uncore" is the term you like to use here) are part of those caveats, especially when it comes to games, due to system memory and GPU direct memory access. Lowering the resolution removes the GPU's processing power from the equation, because the CPU isn't waiting on the GPU to do its calculations, but it does not remove the memory interaction between system memory and the GPU, which has NOTHING to do with instruction cycles. The OS, drivers, and even the game itself are caveats of IPC, because a game's code can affect its performance on a CPU. World War Z showed how simple code changes affect the results with their latest patch. What instruction sets are the games using? Are they Intel-optimized, AMD-optimized, or both? Are any of the new games using the new instruction sets that come with Ryzen 2? There are so many variables when it comes to games that the results are misleading. There are also situations where IPC itself can be misleading, like an increase of IPC because a program suffers more spinlock contention, and those spin instructions happen to be very fast and may not affect one CPU vs the other, giving inaccurate results. (Yes, these things can favor either side.)
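To make the instruction-set point concrete, here's a rough C sketch (it assumes an AVX-capable CPU and compiling with -mavx; the function names are mine): the same arithmetic written as plain scalar code versus 256-bit AVX intrinsics does eight float additions per instruction instead of one, so the instructions-per-clock you measure depends heavily on which instruction set the code was built for.

Code:
#include <stdio.h>
#include <immintrin.h>   /* AVX intrinsics; compile with -mavx */

#define N 1024           /* kept divisible by 8 for the AVX loop */

/* One float add per instruction... */
static void add_scalar(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i++) out[i] = a[i] + b[i];
}

/* ...versus eight float adds per instruction with 256-bit AVX. */
static void add_avx(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}

int main(void) {
    static float a[N], b[N], o[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_scalar(a, b, o);
    add_avx(a, b, o);        /* same result, far fewer instructions */
    printf("o[10] = %.1f\n", o[10]);   /* 30.0 either way */
    return 0;
}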

You can test IPC for a 'core'. You can set which core a process will run on. In fact, in the link that I posted, they turn off 4 of the cores on the 3900X. This doesn't shut off random cores; it shuts off specific cores. The scheduler in Windows dictates which core gets the next instructions. If you turn off all but a single core in Task Manager, leaving ONLY one, the Windows scheduler won't send code to any of the cores that are disabled. The software can also influence which core it runs on. Not so much nowadays, but in the earlier days of multi-core processors there were many games that would only utilize core 1.
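(For reference, the programmatic version of that Task Manager trick, as a minimal Win32 C sketch; the single-core mask value is just an example:)

Code:
#include <stdio.h>
#include <windows.h>

int main(void) {
    /* Pin this process to logical core 0 only; the Windows scheduler will
       then never run its threads anywhere else. This is the programmatic
       equivalent of setting affinity in Task Manager. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        puts("pinned to core 0");
    return 0;
}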

When comparing CPUs across different platforms, you are correct, there are variables, and we don't know if those variables are causing inflated or inaccurate results (either side can suffer from this). However, games, versus other workloads, substantially increase how much those variables influence the results, because they interact with nearly every aspect of the full system, not just the processor.
 