From Intel Earnings: Intel says accelerating 10nm product transition, 7nm product transition delayed

Not really. It will take a long time of serious decline before there are massive issues.
Almost all OEM sales and server sales are dominated by Intel. If Intel suddenly ceased to exist then 'the world' would be in serious trouble, as quite frankly AMD and other niche manufacturers wouldn't be able to fill the void that Intel would leave behind.
If AMD sold every chip they make, or could possibly make, Intel would still have the lion's share of the market.

However, piece by piece the money will start to go other directions. Fujitsu built one of the most powerful supercomputers entirely on ARM. Apple is moving to ARM (they're relatively small, but roughly 10-15% of the consumer space). And AMD is continuing to show their server and workstation chips are worthy of real consideration.
Much like with IBM, Blackberry, Xerox, etc., it will still likely take 10 years for a full decline like that to happen. Intel should be concerned. They should fix their problems now. Otherwise they have nothing left other than a long slow death. But make no mistake: we're just at the beginning of said long slow death.

I think I completely agree with you accept for Intel being at the start of the slide. I think the real beginning of the end is when they abandoned work on, or any hope of creating, mobile products (phones, not laptops). They botched the Atom stuff up terribly... granted, in hindsight and ironically enough, Qualcomm may have used some Intel-style tactics to ensure Intel never got a foothold. Still, Intel screwed that up themselves. They also sold off XScale... they probably should have leaned into that and had their own ARM chips to sell. Qualcomm may have been playing dirty... still, if Intel had been putting out a product that was better they would have wiggled their way in. Even then they were not willing to push through the pain of a few quarters of bad numbers... and just sold XScale off instead.

If we think about it, the #1 personal computing market today is mobile. Android dominates, it all runs on ARM chips, and not one of them is produced by Intel. Of course AMD isn't there either... but AMD is a much smaller company that wisely decided to focus on their core products (they at one time had smaller mobile market aspirations as well). Intel could have owned that market... they were developing high-performance mobile before the market really settled into the ARM/Android standard of today. If Intel's leadership at that point hadn't been thinking about short-term write-offs, they could easily be producing the fastest ARM designs today (instead of Apple, of all companies, being the ARM core kings). They may have also been able to further develop Atom and at least keep the market split up a bit.

As it is now ARM is a monster... not related, but I have a feeling if SoftBank does sell ARM the price tag is going to be something insane, like 60-80 billion.

Intel tapping out of that market, in my opinion, was the beginning of their slide. Had they dug in with mini x86... or at least positioned themselves as the ultimate ARM design company, what a different place they would be in today. Right now they are getting destroyed in HPC by an ARM design (it's not just one of the fastest supercomputers right now, Fugaku IS #1... the planned new #1 Intel-based supercomputers coming up don't have working chips yet). I posit that would not have been possible if ARM hadn't grown off mobile. The next few years we are going to see a lot more ARM servers... I know it's been hyped for ages, but it's coming; the code issues that have hampered every ARM server so far are being solved (as Fugaku helps demonstrate). Apple leaving x86 is going to have some serious consumer ripples... Intel may well lose a lot of the laptop market that has been its consumer bread and butter. For sure they lose the 15% or so that was Apple sales... but all indications are Apple has the right chip design team in place to really shake things up. I wouldn't be shocked if Apple grows their laptop market share to 30-40% by 2024 if they price their ARM MacBooks to grow share.

Anyway, I don't mean to write a book... I simply believe Intel has been on a slide for almost a decade now... I remember when IBM fell. It was gradual, as you say. However, there was also a solid decade where people of the day would have said IBM will never fall, they are IBM. In retrospect, everyone knows the day they made their MS OS deal it was all over. IMO Intel signed their sunset papers the day they told investors XScale was a loser, and doubled down when they ended Atom development.
 
Didn't they suggest that they were going to consider using other fabs while they worked out their 7nm difficulties, which at this point would mean either Samsung or TSMC?

Fab space is actually pretty tight, was my only point. It's hard to contract TSMC... Apple has most of their production contracted already, for years out. If TSMC has to build out more production capability, Intel will foot the bill. (So it is very possible, imo, that Intel has reached out and, when they got the price tag, said... well, we simply aren't saving enough for that to make sense.)
Samsung is another option... but they're also pretty well full up. There is a reason Nvidia, for instance, has been using both companies for fabrication. Trying to meet your demand with one is hard. They may only be willing to spin your silicon for a few months before they have to switch over to another customer's chips.
 

It's not just that TSMC is so heavily booked up. It's that even if they were willing to screw all their long term customers they could only pick up a relatively small portion of Intel's needs.

Anyone thinking Intel could easily switch a large chunk of their output to TSMC is either unaware of how many wafers Intel's CPUs use vs anyone else in the industry, or is only looking at total wafer starts company-wide. By the latter metric TSMC and Intel are about the same size (I think TSMC does 10 or 20% more). But what that fails to recognize is that almost all of Intel's fabs are producing either current or immediately previous generation process parts, or are in the process of being upgraded to the next generation. The last numbers I saw were one fab at 10nm, a bunch at 14nm, and one still at 22nm. Only a small fraction of TSMC's output is on leading-edge processes of the sort that AMD/Apple/NVidia/Qualcomm/etc are using. The remainder goes back over something like 5 or 10 generations of processes to serve smaller customers with simple chip designs that are much easier to target than leading-edge processes; especially since, for simple chips, the notional die size savings from smaller processes can't happen, because the chip stops shrinking once it hits the minimum size needed to cram all its IO and power connections on.
 
True, but the main point is that there's a bigger technology jump going on. TSMC successfully made that jump going to their "7nm" process, while Intel has struggled to make it going to their 10nm process.

AMD is simply lucky that TSMC chose correctly, and that's not something that AMD had any hand in. Similarly, Intel has stacks of processor architectures waiting for their manufacturing processes to catch up.

AMD partnered with TSMC...nothing to do with luck...you seem like a die hard Intel fan who doesn't seem to want to give AMD any credit...Intel was on top forever but now there's a new king (yes Intel theoretically leads on paper with gaming benchmarks but in practical use it means nothing and Zen 3 should erase that)
 
AMD partnered with TSMC...nothing to do with luck
I'd recommend taking a look at TSMC's past node-shrink performance, and how that's held back many of their customers in the past, while keeping in mind the technological issue at hand. Everyone has had to wrangle with the 10nm / 7nm transition. TSMC made the right research bet, but it absolutely was a bet. Samsung did okay, Intel is getting there; GlobalFoundries straight up decided to not even try.

So yes, AMD did get lucky, and so did TSMC.
you seem like a die hard Intel fan who doesn't seem to want to give AMD any credit
Do I?

That must be why I recommend AMD CPUs by default...

Again, this isn't r/AMD.
Intel was on top forever but now there's a new king (yes Intel theoretically leads on paper with gaming benchmarks but in practical use it means nothing and Zen 3 should erase that)
AMD is two or three nodes ahead, able to deploy architectures that integrate the lessons everyone learned from the Intel architecture currently in use, and yet Intel is still 'on top'. See, I remember when AMD actually outpaced Intel in performance. No questions, no veiled slights, and Kyle Bennett writing "I don't know why anyone would buy an Intel product" in a review.

And that's just a performance perspective with respect to consumer-oriented benchmarks. In many, many other ways, Intel is still rather decidedly on top.
 
Most of the time, my kid gets my old stuff and I get new stuff.
That doesn't really answer the question though. The only reason to say 'Intel is only good for 1080p low settings' etc. is to link the CPU to the GPU, meaning that one goes out with the other.

That could absolutely be the case for you, but at the same time, it's not likely the case for most. And for those that keep their CPUs longer than their GPUs, those lower-resolution tests are indicative of how well the CPU will scale with a faster GPU in the future.
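A rough way to picture it, with invented frame-time numbers (a toy max-of-CPU-and-GPU model, not a claim about how any particular engine behaves):

```go
// Toy model: frame time is roughly whichever of the CPU or GPU is slower
// that frame. The millisecond figures are invented purely for illustration.
package main

import "fmt"

func fps(cpuMs, gpuMs float64) float64 {
	frameMs := cpuMs
	if gpuMs > frameMs {
		frameMs = gpuMs
	}
	return 1000.0 / frameMs
}

func main() {
	const cpuMs = 7.0 // assumed per-frame CPU cost; resolution barely changes it

	fmt.Printf("4K on today's GPU:  %3.0f fps (GPU-bound)\n", fps(cpuMs, 16.7))
	fmt.Printf("1080p low settings: %3.0f fps (CPU-bound ceiling exposed)\n", fps(cpuMs, 4.0))
	fmt.Printf("4K on a future GPU: %3.0f fps (same CPU ceiling reappears)\n", fps(cpuMs, 6.0))
}
```

The CPU-bound number you measure at 1080p low is roughly the ceiling you'll hit at 4K once a faster GPU arrives.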
 
AMD is two or three nodes ahead, able to deploy architectures that integrate the lessons everyone learned from the Intel architecture currently in use, and yet Intel is still 'on top'. See, I remember when AMD actually outpaced Intel in performance. No questions, no veiled slights, and Kyle Bennett writing "I don't know why anyone would buy an Intel product" in a review.

And that's just a performance perspective with respect to consumer-oriented benchmarks. In many, many other ways, Intel is still rather decidedly on top.

Well the product stack has also gotten more diverse, back then you only had 1 or 2 cores. Now it goes up to 64.
If you want the biggest baddest fastest HEDT CPU, you have no reason to buy Intel.
Also, AMD's 3950X sits at the top of everything except single thread, where it's almost within the margin of error
 
That doesn't really answer the question though. The only reason to say 'Intel is only good for 1080p low settings' etc. is to link the CPU to the GPU, meaning that one goes out with the other.

That could absolutely be the case for you, but at the same time, it's not likely the case for most. And for those that keep their CPUs longer than their GPUs, those lower-resolution tests are indicative of how well the CPU will scale with a faster GPU in the future.
Well it does answer your question.

It's just not the answer you were looking for.
 
I think I completely agree with you accept
"Except". Sorry I couldn't help myself.

I was going to multi-quote this, but really you only have one point, which in short is: mobile.

Mobile only matters in the consumer market. Intel's core competencies are OEM and server, which is where the money really is. So while Intel may not be pushing as well into new markets, their billions are still getting fed into them through their long-held infrastructure chain in those two key markets.

So while we can make arguments about moves that they "should" make or new markets they "should" be a part of, those aren't really the things that are going to continue being their core. Intel has had, and has spun off, multiple things that they've tried and tested. Intel is still (as an example) one of the key players in NICs. They expanded into other forms of communications like 5G and ended up spinning that off (ultimately also because of Qualcomm meddling). And as you've noted, they've spun up and then sold off what they were doing with ARM.

Still, if Intel was at 7nm now and producing a chip 30% faster and more efficient than the ones they have now (can you imagine a 10xxx series processor with twice as many cores and 15-20% more clock speed? Even without an IPC gain, they'd be ahead of AMD), we wouldn't be having any discussion about Intel being in decline. Because at that point, they would still be clearly dominant in their core businesses of server and OEM. They still have most of that business, and like I stated before, they will continue to have most of that business because there is literally no other option to supply the entire market.

In the future, other things might become bigger cash cows, and we're currently in a bit of a battle about whether x86 is the future or whether specialized ARM can be more successful. However, that decline is like the decline of the things you've mentioned, like XScale. It will be like the decline of SGI and SPARC. It's going to take a while for large-scale adoption to happen.

It doesn't really matter where you (the proverbial "you" as in all people, not "you" in particular) think we are in the decline. Debating about when it started or how many years "Intel has left" is a waste of time. We might as well argue about what numbers are going to come up in Powerball. I personally think their processor development stalled at Sandy Bridge and we've been stuck where we are ever since (how many years of quad cores on desktop? How many years of 14nm?). But even with that, I see we have a long time to go before Intel is no longer a major player. 10 years from now, I would say, is a minimum. Unless all of a sudden trillions more in fab space opens up and ARM is "decisively" the winner over x86. If only one of those two happens, then Intel will still "need" to be around because, like I've said before, there won't be enough processors to serve the world.
 
If their 14nm was so inferior to TSMC's 7nm, AMD CPUs would be way way faster than Intel CPUs. And that isn't the case.

Die size doesn't instantly mean performance. Die shrinks pack more into a smaller area. That can mean more performance if a designer chooses to jam more in. That can mean better thermals if they choose to use the same number of transistors and enjoy the shrink. It doesn't matter if you make a chip on 22nm, 14nm, 11nm, 7nm, or 5nm... if it's rocking 5 million transistors the speeds will not be massively different. Yes, you shrunk the distance between transistors, which gives you some speed boost... but it's not like going from 22nm to 7nm would instantly double the performance of the exact same design.

The main advantage for AMD of being on 7nm vs Intel's 14nm is longer-term cost savings (the advantage of turning out many more chips per wafer). Of course at first a new fab process costs more... so this advantage is softened. However, by the time that fab process hits a second generation those costs start to come down. No doubt AMD is making a lot more money now with the XT parts... as the cost per wafer is reduced. AMD also went to a chiplet design... which made their 7nm cores even smaller. Reducing the transistor count per die massively increases yields.
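To put rough numbers on the chips-per-wafer and yield point, here's a sketch using the standard first-order Poisson defect model; the die sizes and defect density below are invented for illustration, not anyone's actual figures:

```go
// Back-of-the-envelope only: a first-order Poisson defect model with made-up
// die sizes and defect density, not anyone's actual numbers.
package main

import (
	"fmt"
	"math"
)

// grossDies approximates how many dies of area a (mm^2) fit on a wafer of
// diameter d (mm), with the usual correction for partial dies at the edge.
func grossDies(d, a float64) float64 {
	return math.Pi*d*d/(4*a) - math.Pi*d/math.Sqrt(2*a)
}

// poissonYield is the fraction of defect-free dies for a defect density
// d0 given in defects per cm^2 (area a is in mm^2, hence the /100).
func poissonYield(a, d0 float64) float64 {
	return math.Exp(-a / 100 * d0)
}

func main() {
	const wafer = 300.0 // mm, standard wafer
	const d0 = 0.5      // defects/cm^2, plausible for an immature process

	for _, die := range []float64{75, 125, 250} { // small chiplet vs big monolithic die
		gross := grossDies(wafer, die)
		y := poissonYield(die, d0)
		fmt.Printf("%3.0f mm^2 die: %4.0f candidates, ~%2.0f%% yield, ~%3.0f good dies per wafer\n",
			die, gross, 100*y, gross*y)
	}
}
```

Smaller dies don't just fit more candidates on the wafer; a given defect also kills a smaller fraction of them, which is the chiplet argument in miniature.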

Anyway, the thing with die shrinks... it adjusts chip designers' envelopes. There is always a choice to make in chip design... go for max transistor count and performance, or go with a smaller count and a more forgiving design that can be easily fabbed. This is where Intel fucked up at 10nm... they swung for the fences and tried to cram very dense, complicated designs and super small gate sizes into their first designs. The result was 3/4 of the chips on a wafer had major issues. There is a sweet spot you want to find as a designer where yields are high (90+% working chips on a wafer), but where you're not leaving tons of performance on the table either.

This last generation... AMD got it right: chiplet design, and not trying to build a GPU into every chip, let them cram a lot into a very small die size. They got probably darn close to the max they could out of first-gen 7nm. They ended up with reportedly much better than average yields for a new process, and got a ton of chips per wafer. That gave them very competitive pricing considering they were on a new, expensive process. Intel with 10nm got it very wrong... they tried to cram everything plus the kitchen sink into one chunk of silicon... went for the minimum possible gate size and screwed up. Proof is in the first-gen mobile 10nm parts with disabled GPUs... those were the chips on the wafer where the errors were found in the GPUs; all the chips with CPU errors were trash. Those chips cost Intel a ton of money. Their upcoming 10nm is a complete redesign. Their head of chip design is on record saying they swung for the fences and missed terribly... and they had to throw out all their previous 10nm work and start from scratch (with a far less ambitious design).

Yes, if Intel gets a proper 10nm design to the point where they can fab it with decent yields, they may well be able to retain/regain their claim of top performance. We'll have to see how much of a lead Zen 3 brings for AMD (and they will no doubt be in the lead with Zen 3)... and then see what Intel's 10nm desktop parts look like at the end of 2021. Perhaps they regain a lead over Zen 3... of course it sounds like by the time they launch, AMD will be ready with a 5nm Zen 3+. So good luck, Intel.
 
"Except". Sorry I couldn't help myself.


I was going to multi-quote this, but really you only have one point, which in short is: mobile.

Mobile only matters in the consumer market. Intel's core competence is OEM and Server which is where the money really is. So while Intel may not be pushing as well into new markets, their billions are still getting fed into them through their long held infrastructure chain in those two key markets.

So while we can make arguments about moves that they "should" make or new markets they "should" be a part of it's not really the things that are going to continue being their core. Intel has had and has spun off multiple things that they've tried and tested. Intel is still (as an example) one of the key players in NICs. They expanded into other forms of communications like 5G and ended up spinning that off (also because of Qualcomm meddling ultimately). And as you've noted they've spun up and then sold off what they were doing with ARM.

Still if Intel was at 7nm now and were producing a chip 30% faster and more efficient than the ones they have now (can you imagine a 10xxx series processor with twice as many cores and 15-20% more clock speed? Even without an IPC gain, they'd be ahead of AMD), we wouldn't be having any discussion about Intel being in decline. Because at that percent, they would still be clearly dominant in their core businesses of server and OEM. They still have most of that business, and like I stated before, they will continue to still have most of that business because there is literally no other option to supply the entire market.

In the future, other things might become bigger cash cows and we're currently in a bit of a battle about whether or not x86 is the future or whether specialized ARM can be more successful. However that decline is like the decline of things you've mentioned like xscale. It will be like the decline of SGI and spark. It's going to take a while for large scale adoption to happen.

It doesn't really matter where you (the proverbial "you" as in all people, not "you" in particular) think we are in the decline. Debating about when it started or how many years "Intel has left" is a waste of time. We might as well argue about what numbers are going to come up in Powerball. I personally think their processor development stalled at Sandy Bridge and we've been stuck where we are ever since (how many years of quad cores on desktop? How many years of 14nm?). But even with that I see we have a long time to go before Intel is no longer a major player. 10 years from now I would say is a minimum. Unless all of a sudden trillions more in fab space opens up and ARM is "decisively" the winner over x86. If only one of those two happen, then Intel will still "need" to be around as like I've said before, there won't be enough processors to serve the world.

My point about mobile... is they allowed ARM into the software sphere. Ya, it wasn't a big deal 10 years ago for Intel to give up on the space and let ARM have it. It wasn't even a big deal 5 years ago, as the two spaces were so very different... but more and more they are starting to overlap, and at some point they will merge completely.

However, now it matters... cause the software world has done the work of making their code architecture-agnostic. The vast majority of software is now built on common frameworks that can in many cases be recompiled with absolutely zero additional work. Intel has retained their hold on the server space... mainly because of entrenched software stacks. Well, those stacks are more and more becoming cross-compilable. Early experiments by companies like Amazon with ARM-based servers for their cloud service have been hit and miss... mostly due to software issues.

The majority of those software issues have at this point been solved or are well on the way to being solved. It's what has allowed Apple, for instance, to say... ya, we can switch over in 2 years, and also say yes, we can recompile x86 software for ARM at install time (as long as it was built on Apple frameworks). Most popular PC frameworks are in the same place now... software can be easily recompiled with little to no work by developers.
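As one illustration of how cheap retargeting has become when the toolchain is architecture-agnostic (Go is just a stand-in example here, not a claim about what Apple's frameworks or Amazon's ARM servers actually use):

```go
// The exact same source builds for either architecture; only the target changes:
//
//	GOOS=linux GOARCH=amd64 go build -o hello-x86   .
//	GOOS=linux GOARCH=arm64 go build -o hello-arm64 .
//
// Nothing below mentions the CPU architecture at all.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS / runtime.GOARCH report what this binary was built for,
	// e.g. "linux/amd64" or "linux/arm64".
	fmt.Printf("hello from a %s/%s build\n", runtime.GOOS, runtime.GOARCH)
}
```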

That, I think, is where Intel is going to run into trouble. If (and it's a big if) Apple starts showing off high-end ARM chips that, in some cases with the help of accelerator bits on the silicon or what have you, start really performing, the push will be on Microsoft and the PC world to roll out stuff to match. I don't know, perhaps Intel is thinking ahead a bit with Alder Lake. Perhaps their big.LITTLE chip will have the ability to start rolling in the same types of accelerators. But ya, Intel is at a point where AMD or some other ARM manufacturer may just pull something out for MS that changes the game very, very quickly.

You're right though... ya, when it started isn't relevant. Hopefully Intel isn't completely out of it and understands somewhat that they need to be ready to deliver over the next few years if they're going to continue being Intel the way we have known them.
 
Isn't Jim Keller the mastermind behind Ryzen? Isn't he at Intel right now? I know he doesn't design the chips himself, but isn't he the project manager that makes it happen? Where is he in this mix?
 
Anandtech:

Intel 7nm Delayed By 6 Months; Company to Take “Pragmatic” Approach in Using Third-Party Fabs

But even more important than that, the delay has spurred some soul searching within Intel, driving the company to pivot on its manufacturing plans and open the door to using third-party fabs for a much broader segment of its products. Going forward, the company will be taking what CEO Bob Swan and other leadership are calling a “pragmatic” approach, looking at both in-house and third-party fabs and using those fabs that make sense for the company and the product in question.
 
Isn't Jim Keller the mastermind behind Ryzen? Isn't he at Intel right now? I know he doesn't design the chips himself, but isn't he the project manager that makes it happen? Where is he in this mix?
He had nothing to do with Zen 2. He left in 2015.
Lisa Su became CEO... and a few months later Jim left.
As far as I know AMD hasn't singled out any one specific engineer as being the mind behind Zen 2... chiplet design, etc.
As I see it, when she came in... she spent the first year cleaning house and stripping AMD down to its core. Then, if anyone led the design of Zen 2, it was her. When you look at her history with IBM and further back to Toshiba, Zen 2 has her fingerprints all over it. She was the lead of the design team that built the Cell chip... and is very well versed in interconnect technology (she was involved in developing the process that allowed copper interconnect fabrication). Zen 2 is clearly the current-generation extension of all the work she has done before. No, of course she didn't sit at a drafting table drawing designs. However, she clearly steered the design... and she has the chops to understand what can and can't be done process- and fab-wise.

My guess is Jim knew he was out the minute Lisa was named CEO. It just took him a few months to get his Tesla gig lined up. Which is why I joked earlier... Intel needs someone in charge who can figure out who the dead weight is and make their life uncomfortable until they land that dream gig at Tesla. lmao, Jim has been there at least once before... his value is overrated.

Now, having said that:
https://9to5mac.com/2020/06/11/form...-keller-resigns-from-executive-role-at-intel/
Ya, he just left... for personal reasons. At AMD he headed the Zen 1 design... great, but it wasn't groundbreaking. All they really did was walk back from the actually groundbreaking design of the previous gen (yes, I know those chips sucked... but the design was much more forward-thinking; fabrication was the major issue there). They got out a good product, but it was just a solid workmanlike design. He perhaps did something at Apple... who knows, they have so many design rock stars there I feel Jim was just there. (He came to Apple when they purchased PA Semi... they probably bought them with the understanding they would keep execs on for X amount of time... when that time limit hit, he was gone.) After AMD he went to Tesla and produced more workmanlike chips so Tesla could save some money. Then he went to Intel and did a whole lot of nothing.

Perhaps him leaving is a sign that Intel does understand it needs to buckle down and cut dead weight.
 
Well it does answer your question.

It's just not the answer you were looking for.
It was a yes / no / [some period of time] question, not where the hardware goes when you're done with it, so no, not an answer.

Regardless, the point is pretty clear, and apparently has to be repeated here and elsewhere as to why anyone benchmarks at 1080p in the first place.
Well the product stack has also gotten more diverse, back then you only had 1 or 2 cores. Now it goes up to 64.
That's rather much beside the point. I can get 'enough' cores for any common consumer (or office) application from either vendor. The point was that AMD was unequivocally faster. We weren't splitting hairs.

If you want the biggest baddest fastest HEDT CPU, you have no reason to buy Intel.
If 'biggest baddest fastest' is what I solely put on my list of HEDT requirements...
Also, AMD's 3950X sits at the top of everything except single thread, where it's almost within the margin of error
Do you know what most consumers don't need more of?
More cores. More than four, really, with six being the right kind of overkill.
Do you know what they can actually use?
Faster cores.

I could go get a 3950X system right now. I don't because it provably wouldn't be any faster at anything I do where time matters.
That, I think, is where Intel is going to run into trouble. If (and it's a big if) Apple starts showing off high-end ARM chips that, in some cases with the help of accelerator bits on the silicon or what have you, start really performing, the push will be on Microsoft and the PC world to roll out stuff to match.
To me, the biggest advantage to ARM cores is that they're good enough to tack other fixed- (or more fixed)-function logic on to while also being optimized for power usage. This isn't really any different than SSE / AVX and so on, when talking in terms of compute like Fujitsu is doing, but it also hides the weakness of ARM: general branching compute performance.

Now, Apple seems to think that their ARM CPUs will provide enough branching performance to run their UIs, and I'd even bet that they're right, but at the same time, those workloads that Apple is ignoring aren't going away, and there's also nothing stopping Intel and AMD from doing much the same: well, they actually both already do!
 
Intel is still rather decidedly on top.

Intel is not 'decidedly on top'...you're looking at the one spot where Intel leads and that's gaming benchmarks...AMD leads in all other aspects and even the gaming numbers are purely on-paper numbers that no one would notice in a real-world gaming scenario...for a guy who claims to recommend AMD 'by default' it sure looks like you're making excuses for Intel and like to tout their gaming benchmark numbers but not overall productivity

if the Zen 3 rumors are correct then Intel's lead in gaming is pretty much going to be erased
 
Do you know what they can actually use?
Faster cores.

To be fair, everyone is hitting a wall as far as clock speed is concerned, so everything is going wider and more IPC.
Not until that wall is broken will we have giant leaps in single-threaded performance.
Anything released in the last year from AMD and Intel is within a couple % of each other in single thread; you can measure it, but you won't really notice it day to day.
Intel has got nothing after 10 cores on mainstream and 18 cores on HEDT.
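A crude back-of-the-envelope of why the single-thread numbers land so close: treat single-thread performance as roughly IPC x sustained clock. The IPC ratios and clocks below are invented for illustration, not measurements of any real part:

```go
// Crude estimate: single-thread performance ~ IPC x sustained clock.
// Both the relative IPC figures and the clocks are invented for illustration.
package main

import "fmt"

type cpu struct {
	name  string
	ipc   float64 // relative instructions per clock (arbitrary units)
	clock float64 // sustained single-core boost, GHz
}

func main() {
	parts := []cpu{
		{"higher-IPC, lower-clock part", 1.10, 4.6},
		{"lower-IPC, higher-clock part", 1.00, 5.1},
	}
	base := parts[0].ipc * parts[0].clock
	for _, p := range parts {
		score := p.ipc * p.clock
		fmt.Printf("%-30s score %.2f (%+.1f%% vs first)\n", p.name, score, (score/base-1)*100)
	}
}
```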
 
It was a yes / no / [some period of time] question, not where the hardware goes when you're done with it, so no, not an answer.
The answer was: most of the time.

Maybe you missed it due to the preconceived arguments you were trying to bait me into.

Sorry I fucked with your script.
 
Intel is not 'decidedly on top'...you're looking at the one spot where Intel leads and that's gaming benchmarks...AMD leads in all other aspects and even the gaming numbers are purely on-paper numbers that no one would notice in a real-world gaming scenario...for a guy who claims to recommend AMD 'by default' it sure looks like you're making excuses for Intel and like to tout their gaming benchmark numbers but not overall productivity

You are almost correct. There are places where Intel's lead matters in gaming. In any CPU-limited scenario, Intel is faster. However, there are only a few specific scenarios where that will matter: specifically, high-refresh-rate displays at 1920x1080, and games where CPU limitations come into play even at 4K. Ghost Recon Breakpoint is an example where at 4K it's all GPU-limited and all of the tested CPUs are the same. Other games like Destiny 2 can drop to very low framerates at 4K. CPU matters there.

Generally, I'd agree and for most people it won't matter. However, there are cases where it does and going Intel might make a lot of sense. Of course, on the application front, AMD dominates in many if not most of those.
 
Other games like Destiny 2 can drop to very low framerates at 4K. CPU matters there.

If that is the case, wouldn't the AMD chips only be single digit percentages slower anyways? So they both get crap framerates? Both would be unplayable.
 
Intel is not 'decidedly on top'...you're looking at the one spot where Intel leads
I'm looking at marketshare and install base.
it sure looks like you're making excuses for Intel and like to tout their gaming benchmark numbers but not overall productivity
I can be 'productive' with two cores. Or four. Or four hundred.

Yes, that includes content creation and so on.

The basic reality discussed just about everywhere is that additional cores provide diminishing returns. For consumers eight is about it. What starts to matter a whole lot more for consumers is all the other stuff in the computer and outside of it.

To be fair, everyone is hitting a wall as far as clock speed is concerned, so everything is going wider and more IPC.
Sure, but more IPC --> more single-thread performance. There's probably a limit on that but we're not anywhere near close enough for it to matter.

As for 'wider'... the limits on that are pretty rough. Some workloads are just insufferably serial. Some stuff just has to be done 'in order'. Luckily that nut is getting cracked more and more in a broad swath of workloads.
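That's basically Amdahl's law. A quick sketch with a made-up 25% serial share shows how fast extra cores run out of steam on that kind of workload:

```go
// Amdahl's law: speedup = 1 / ((1-p) + p/n), where p is the parallel fraction.
// The 75%/25% split below is a made-up example, not a measured workload.
package main

import "fmt"

func speedup(p float64, cores int) float64 {
	return 1 / ((1 - p) + p/float64(cores))
}

func main() {
	const p = 0.75 // 75% of the work parallelizes, 25% is stubbornly serial
	for _, n := range []int{2, 4, 8, 16, 64} {
		fmt.Printf("%2d cores: %.2fx\n", n, speedup(p, n))
	}
	// Even with infinite cores the ceiling is 1/(1-p) = 4x, which is why
	// faster individual cores still matter for this kind of workload.
}
```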
Anything released in the last year from AMD and Intel is within a couple % of each other in single thread; you can measure it, but you won't really notice it day to day.
And that's pretty much why I recommend AMD by default: unless the hairs that you're splitting are actually important to you, the price of entry is simply cheaper.
 
He had nothing to do with Zen 2. He left in 2015.

Jim Keller set the groundwork for Zen in general, so anything afterwards will use the knowledge they gained from Zen 1.
Without the team that Jim Keller put together, we wouldn't have Zen 2 in its current form.
So yes, he did influence Zen 2; he just wasn't there when they were designing Zen 2, I guess you could say.
Michael Clark is another guy that was present.
 
So, was Intel decidedly on top during Athlon?
Intel was decidedly on top in terms of marketshare and install base. AMD had the clear performance lead.
Usually we talk about performance when talking about CPUs...
Let's say Intel makes their 10nm and 7nm transitions, and TSMC botches 5nm for a few years. Not a bet I would make, but also not any less likely than Intel failing to get their act together.

What happens then?

Well, had Intel delivered on the potential of their 10nm architecture, Zen would look a whole lot like Bulldozer did. So what happens when Intel catches up?

Zen is interesting because Intel's fab technology stalled and TSMC's didn't, which is somewhat the opposite of what one would expect when looking at both companies' histories.
 
Who's in charge at Intel, Rip Van Winkle? Somebody needs to start inserting some feet into the appropriate opening. Being nice here.
 
Jim Keller set the groundwork for Zen in general, so anything afterwards will use the knowledge they gained from Zen 1.
Without the team that Jim Keller put together, we wouldn't have Zen 2 in its current form.
So yes, he did influence Zen 2; he just wasn't there when they were designing Zen 2, I guess you could say.
Michael Clark is another guy that was present.

There is no way if Jim was in charge that Zen 2 would be what it is. It would be a monolithic Zen 1++. That is what he does.

He hasn't developed one revolutionary chip in his career. Nothing in his history would suggest he would even consider a chaplet (a typo that is too funny to change) design with controller chips being fabbed on a different process, etc. AMD, Apple, Tesla... at all 3 jobs he produced workmanlike designs that are nothing special. He is known for taking existing work and producing a solid, well-refined design. No doubt he is a genius and very good at what he does... but he is not the chip visionary people seem to think he is. Zen 1 was in many ways far less revolutionary than the AMD chips that came before. The problem AMD had was they needed their stuff refined. Bulldozer, with CMT and its interesting predictor and caches, etc... was a revolutionary design, but it wasn't a tight design and was harder to fab than it needed to be for a #2 company that had to compete on price. Jim solved that issue for them... and I'm not taking anything away from him. He designed a solid, easy-to-fabricate performer... nothing outlandish, nothing revolutionary. He just took what AMD had already done that worked and cut out what didn't... he was the right guy for that job, no doubt. For Zen 2 he would have been very much the wrong guy.

He did the same thing at Apple... people make wild claims that without him there would be no A4 or A5, but that is BS. He came to Apple as part of a purchase deal... as Apple bought a head start. He helped them get a basic, working, solid foundation of a design off the ground. When it came time to innovate, he was gone. At Tesla all he did was come in and design something Tesla could make in-house to cut some costs. Nothing about his work there couldn't have been achieved by Tesla buying chips from someone else. But they saved some $.

He has his place... but imo he doesn't last anywhere cause people understand what he is. He is the guy you bring in to build something that works... take what exists and tweak it, to max out a design for the current fab process, etc. If you want someone to work on a new process node... and new tech, he is not the guy you want around. If you need a chip that will perform better than the last gen on the same node... Jim is your guy. Him being at Intel when they needed a 14nm+++++ chip makes tons of sense. He isn't responsible for the 10nm debacle... he isn't an innovator. He may have been some help in the 10nm fix... I don't know, and that would seem possible. But he is best at taking something and tweaking the crap out of it. Not making it do anything new and cool, just getting the most out of a node.
 
If that is the case, wouldn't the AMD chips only be single digit percentages slower anyways? So they both get crap framerates? Both would be unplayable.

[Attached chart: Destiny 2 benchmark at 4K, minimum and average framerates by CPU]


As you can see, the Core i9-10900K is almost twice as fast as the 3900X. Neither is ideal, but 40FPS is a hell of a lot better than 23.8FPS. In fairness, none of these spend all that much time at their minimum FPS, and the averages are much closer. With a 3950X and a 2080 Ti (not what was used in this review), I get better results. Essentially, this is an issue of the RTX 2080 Super not being good enough to max out the game at 4K. Even though we are primarily GPU-limited here, the CPU still matters to a degree.
 
Let's say Intel makes their 10nm and 7nm transitions, and TSMC botches 5nm for a few years.

everything you're saying is based on What If scenarios...how about instead of talking about AMD being lucky, copying Intel, 10nm= 7nm and all other hopes and dreams you stick with the facts about AMD's performance today compared to Intel...will Intel stay down forever?...of course not...but it's going to take a major screw up by AMD and everything going perfectly for Intel in order for them to surpass AMD in the next few years...I originally said 2023 at the earliest but now after the latest 7nm delay it looks like 2024

Zen is not Bulldozer...Lisa Su has brought new leadership to AMD
 
There is no way if Jim was in charge that Zen 2 would be what it is. It would be a monolithic Zen 1++. That is what he does.

Wow....
So you openly admit that the projects he "touched" turned into successes (Apple A4, Zen 1), and then say he had nothing to do with how they turned out and how future iterations turned out?

Just wow.....

Companies hire Jim Keller for top level positions for lots of money for a short time, that's not a bad job to have. Clearly not very many people have his expertise and experience.
 
As you can see, the Core i9-10900K is almost twice as fast as the 3900X. Neither is ideal, but 40FPS is a hell of a lot better than 23.8FPS.
Wow that 9900k sucks bad in that chart. That almost seems more like a game issue that needs to be fixed lol
 
Wow that 9900k sucks bad in that chart. That almost seems more like a game issue that needs to be fixed lol

This is where the charts are somewhat deceiving. In truth, this is something that happens in lost sectors (personal instances) and cases where a ton of things happen on screen at once. It manifests as a hitch for a second. The benchmark makes it look worse than it really is. What's funny is that the 9900K used to score better, but performance has gotten worse as Windows gets additional updates. I suspect security mitigations may be coming into play here.
 
Well, had Intel delivered on the potential of their 10nm architecture, Zen would look a whole lot like Bulldozer did. So what happens when Intel catches up?

Zen is interesting because Intel's fab technology stalled and TSMC's didn't, which is somewhat the opposite of what one would expect when looking at both companies' histories.
It's not quite the same, but I do see what you are saying.
When AMD's Bulldozer CPUs debuted in 2011, they had around 55% of the core-for-core, clock-for-clock IPC and general performance of Intel's Sandy Bridge CPUs.

Even if Intel had been on track over the last 3 years, I don't think we would be seeing that massive of a difference in performance, though Intel most likely would have remained on top performance-wise; no one is going to argue their market dominance, and you are right on that - that will take years to break down.
However, this time around, unlike when AMD was leading in the early to mid 2000s with Athlon 64 against Netburst, Intel can't resort to anti-consumer and anti-competitive practices.

Also, the mobile device market and ARM processors now starting to threaten Intel's market share, plus the upcoming loss from Apple moving to their own in-house ARM processors, and all of the security exploits Intel has dealt with over the last two years, are all starting to show the cracks in their proverbial foundation.
Intel is not too big to fail, but they are too big to fail quickly, much thanks to their current market dominance, vendor lock-in, and vast x86/x86-64 software library.

However, AMD, Apple, ARM, the mobile market, and their recent fumbles are all starting to add up, and even a mighty elephant can die by a thousand cuts.
Unless Intel gets back on track, their market dominance, and perhaps even the company itself, will be no more than a shadow of what it is today by 2030.

If ARM continues to proceed to where we are thinking it is going, AMD won't be many years behind...


Do you know what most consumers don't need more of?
More cores. More than four, really, with six being the right kind of overkill.
From this post of yours, I agree with everything else except this. :p
If this were 2017 or 2018, I would agree with you, but even powerful fast-clocked quad-cores are starting to show their age, at least in anything at or above an office environment.

6-core CPUs and above are definitely the sweet spot right now, and don't seem to hit the bottlenecks in even basic productivity software that current quad-core CPUs do.

Do you know what they can actually use?
Faster cores.
I do agree with you on the single-thread performance being important, though - this is one area most individuals here think is a dead technology, but I assure everyone that it is not, and faster cores will always remain important.

OS updates (Windows and *NIX), certain file decompressions, network TCP/IP data transfers, etc. are all single-threaded, and no amount of additional cores is going to help these tasks or functions perform any better.
These are improvements that Intel, AMD, and ARM manufacturers need to always continuously improve upon.
 
everything you're saying is based on What If scenarios
I did make that pretty clear.
how about instead of talking about AMD being lucky
They have been lucky. Perhaps you're unfamiliar with GlobalFoundries, which they spun off, and how their process technology is going...
copying Intel
Going to need a quote for that one.
10nm= 7nm
That's pretty well accepted. Intel's measurement is closer to the actual feature level, while TSMC's is more of a 'marketing' measurement.
you stick with the facts about AMD's performance today compared to Intel
Like how Intel is shipping parts that are using a five-year-old architecture, and despite AMD having released three new products in that time, they still haven't really eclipsed Intel's aging tech?
but it's going to take a major screw up by AMD and everything going perfectly for Intel
You seem to be under the impression that this is unlikely, when historically this is the norm. Further, it's not just an own-goal that could cause problems for AMD: a screwup at TSMC would have the same effect.
Zen is not Bulldozer
Relative to Ice Lake, Zen looks very similar to Bulldozer.
Lisa Su has brought new leadership to AMD
It's weird that you feel the need to make a statement like this...
 
Zen 1 was in many ways far less revolutionary than the AMD chips that came before. The problem AMD had was they needed their stuff refined. Bulldozer, with CMT and its interesting predictor and caches, etc... was a revolutionary design, but it wasn't a tight design and was harder to fab than it needed to be for a #2 company that had to compete on price. Jim solved that issue for them... and I'm not taking anything away from him.
Not quite - Bulldozer and CMT were not "revolutionary", they were a cost-savings architecture employed by AMD at the time to bank on a cheap and effective way to utilize SMP-based multi-threaded software.
However, in 2011, multi-threaded consumer software was not nearly as mature or developed as it is now, and the single-threaded performance of AMD's Bulldozer processors was only about 55% of Intel's Sandy Bridge processors clock-for-clock, so much of the market's consumer software was still single-threaded or lightly threaded at the time, which gave AMD zero advantage; in other words, they were too early to the party.

Not to mention, the CMT architecture with its shared FPU was horribly inefficient; even heavily multi-threaded workloads would at most only reach about 80% of the CPU's full capabilities, and many OS kernels had to have additional drivers and code written to properly utilize the CMT architecture, which took some time to get going.
I have used quite a few of their FX processors from 2011-2016 across a vast amount of workloads, and Intel (pre-exploit era) crushed them in every single workload, be it gaming, professional, audio rendering, video rendering, VMs, single-threaded, or otherwise.

From what I have seen, Zen 2's chiplet design, while it may not be "revolutionary", was a massively innovative design that has helped to propel AMD forward.
It seems like the only bottleneck at this point is the CCX interconnect design, depending on the workloads.
 