AMD still clawing share from Intel

So, with these gains, significant for AMD but still very minor overall, do we think they represent the start of a cascade effect where AMD becomes a serious competitor (in market share), or will AMD simply be a stronger minority player?
 
I think it depends highly on whether Intel can get 10nm working in 2019, or gives up on it and gets 7nm working in 2019 instead. If it's 2021, Intel is in a world of pain.
 
My best guess is the latter. 10nm is not working right and likely never will. They'll get some limited parts out on it, but nothing serious. I believe 7nm is where they make their comeback, and everything I have heard - unless they pull a rabbit out of a hat - suggests no sooner than 2021.

AMD's Rome is really going to kick them where it hurts.

That said, when they do come back with 7nm, it will be AMD's turn to need a rabbit in a hat, because right now the only reason they are competitive is the smaller process node. Equalize the process node and Intel is way way ahead.
 
AMD is not 'clawing' away market share - Intel is literally giving it away with their 10nm fumble, and AMD is simultaneously quite lucky that TSMC's most recent processes have executed well and that AMD's IP is manufacturable on them.

Had Intel shipped according to their roadmaps, Ryzen would have put AMD back into the 'modestly competitive' space that Dozer pushed them out of, rather than being a legitimately better option for a not insignificant number of tasks.

This is not to demean AMD's work with Ryzen; it's just the market truth. AMD having an available process advantage and a modestly competitive product to build with it is a bit of a trend departure. The best bet is that Intel will recover and continue executing, and whether AMD is able to sustain both competitive IP advancements and a process advantage (or at least approximate parity) is a bigger question.

I absolutely hope they do. I just see the challenges ahead for them, including competing with a resurgent Intel, as looming pretty large. They need to be able to keep competing on raw per-core performance and efficiency while tuning their designs for fabrication processes that they do not control, all while securing enough production capacity from the fabs that do control said processes.

If TSMC makes adjustments to future processes for whatever reason that put AMD at a disadvantage- say, due to catering to higher-volume customers- AMD will start falling behind.
If TSMC has trouble manufacturing AMD IP at needed volumes, AMD will start falling behind.
If AMD cannot simultaneously address per-core performance and efficiency, whether due to their IP or deficiencies in manufacturing, AMD will start falling behind.

Right now AMD seems to be a bit of a gem at TSMC, at least from an outsider's perspective, but consider: let's say Intel decides to do more production at TSMC. Who has the bigger pull? Let's say that TSMC decides that they need to limit production of AMD IP. Where does AMD go? UMC? Samsung?

Their current reliance on independent fabs seems to be a pretty tenuous spot for AMD.
 
because right now the only reason they are competitive is the smaller process node.

I noted it above, but it is very important to highlight (I think) the link between Intel's 10nm struggles and demand for AMD parts. In some markets this does mean that AMD has a more desirable product, but in others, AMD is being purchased because Intel simply cannot supply the volume and AMD has a part that works.

A big part of this is the issue of moving nodes: Intel has to take at least one fab down in order to stand up a new process. If the new process encounters issues, they're in trouble. If it is nearly half a decade behind... well.
 
Yeah.

There was a rumor a while back that Intel was starting to re-layout some of their designs to be compatible with TSMC's process.

I have no idea if it was true or made up, an actual plan or just a proof of concept, but it might suggest that they are interested in going third party production short term until they get their process in order...

Here's the link.

Maybe it's a short term thing to allow them to take down a fab and retool it for the next gen process?
 
That was for their 10nm chipsets, afaiu.
 
Maybe it's a short term thing to allow them to take down a fab and retool it for the next gen process?

I assume that the reduction in net profit is pushing them in this direction, and that the loss of market share, which drives future profit, is pushing even harder. If we assume that 10nm is going to be more of an 'on paper' process than a true phase that all of their product lines pass through before the process is retired, bringing in TSMC (or anyone) starts to make sense. Today it's CPUs hurting, but Intel makes a lot of products that would benefit from a 'step' between 14nm and 7nm. The 'bleeding edge' stuff will likely stay in their own fabs, but stuff like chipsets? Easily outsourced.
 
Sounds plausible.

I'm not in the industry, but I'd imagine the life of a fab goes something like this:

1.) Fab has the brand new latest process node, used for the highest-end and lowest-power CPUs.

2.) Fab is one gen back in process node. Used for less bleeding-edge CPUs and some memory products.

3.) Fab is in its last phase, being used for stuff that doesn't require a modern node (network controllers, chipsets, etc.).

Then at EOL a fab is retooled for the latest process and the cycle repeats?

If so, since 10nm is struggling to come online, 14nm fabs are thus still occupied to the max producing current CPUs and unable to move to chipset duty.

Thus, in order to take current chipset fabs offline and retool them for 7nm (or whatever the next size is), they need to move chipset manufacturing somewhere...

This is complete guesswork, but it sounds plausible to me.
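Purely to make that guess concrete, here's a toy sketch of the rotation; the stage labels, fab names, and the "a fab only retools if the new node actually works" rule are my assumptions, nothing more:

```python
# Toy model of the guessed fab lifecycle above. Stages, fab names, and
# the blocking rule are assumptions for illustration, not Intel's plans.
STAGES = ["1. newest node: high-end CPUs",
          "2. one node back: mainstream CPUs, memory",
          "3. old node: chipsets, NICs, etc."]

def step(fabs, new_node_works):
    """One generation: every fab moves down a stage and the end-of-life
    fab retools to the newest node -- unless the new node is broken."""
    if not new_node_works:
        return fabs  # 10nm-style stall: nothing rotates, nothing frees up
    return [fabs[-1] + "*"] + fabs[:-1]  # '*' marks the freshly retooled fab

fabs = ["fab A", "fab B", "fab C"]  # fabs[i] currently serves STAGES[i]
print(step(fabs, True))   # ['fab C*', 'fab A', 'fab B']
print(step(fabs, False))  # ['fab A', 'fab B', 'fab C'] -- everything stuck
```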
 
Looking at those numbers I'm amazed the company has survived. I thought they'd be north of 20% for each category, but some of those percentages were not even in double digits.
 
Survival is... almost guaranteed. At least until there's a competitor aside from AMD that can actually present credible competition to Intel in the markets where AMD competes.

Intel would rather pay them to stay in business than face regulatory measures.
 
And then there's this news article :D

Who knows.

Yeah, that was quick.

Shows their "aggressive ramp up" hits 7nm by 2021.

I may not be in the fab industry, but my experience with anything R&D- and manufacturing-related is that whenever you set aggressive timelines, things inevitably miss and get delayed.

Intel 7nm by 2021 seems like a best case now, not a worst case.
 
Not saying desktops aren't important, but the value you "want to see" is the server one. When Opteron gave us multi-core, AMD was able to capture about 17% of the server market at their peak; that hasn't happened with this new round (not even close).

Intel lays back and yawns over these numbers.

Generally speaking "war" breaks out at the 10% mark. If AMD hits 10% of the server market, look for "tick/tock" to make a comeback.
 
Generally speaking "war" breaks out at the 10% mark.

"War" has essentially already broken out. It isn't AMD; Intel has been competing with themselves for the better part of a decade, and in the last year or so, they started losing. Same goes for Nvidia really, except they haven't lost much ground since Fermi, but could the same way again.

Expect Intel to push to at least get back to executing their product progression. Maybe they won't be as technically ahead of AMD as they have been up until Ryzen, but they'll at least be pushing the products out the door.
 
Generally speaking "war" breaks out at the 10% mark. If AMD hits 10% of the server market, look for "tick/tock" to make a comeback.

My take is that the reason tick-tock ended was not because Intel got too comfortable and decided to rest on their laurels, but rather because when they announced the end of tick-tock in 2016, 10nm was already supposed to be launching, and while we had no clue at the time, it was internally apparent they had issues and wouldn't be able to launch it on time.

So instead of publicly announcing that 10nm was in trouble and taking a huge hit on Wall Street, they announced a "change of strategy".

Essentially, it was an attempt to hide bad news.
 
I think it depends highly on whether Intel can get 10nm working in 2019, or gives up on it and gets 7nm working in 2019 instead. If it's 2021, Intel is in a world of pain.

I think people are too quick to write Intel doom stories. They flubbed the process transition, but that probably won't impair end products that much.

Intel's mature 14nm process is VERY good. They will almost certainly still have a clock-speed advantage, so they can have competitive parts for desktop on 14nm. Also, Intel has been producing on the 14nm process so long that it will be quite low cost (the equipment is depreciated/amortized). So two of the biggest expected advantages of a new process for AMD, performance and cost, are mitigated. They will probably only lag on perf/watt, but that isn't as critical for desktop.

10nm will be used for laptop/mobile devices and it will run at lower power, lower clock speeds, giving them competitive perf/watt where it is really needed.
 
Wake me up when Intel develops a process that runs in secured and encrypted memory space to remove the freaking vulnerabilities. Give me that and I will be a happy camper.
 
We can go back to in-order processors, but till then, out-of-order execution is going to be vulnerable. Also, I hope you're okay with Atom-level performance. Welcome to the Information Security arms race.
 
I think people are too quick to write Intel doom stories. They flubbed the process transition, but that probably won't impair end products that much.

Intel's mature 14nm process is VERY good. They will almost certainly still have a clock-speed advantage, so they can have competitive parts for desktop on 14nm. Also, Intel has been producing on the 14nm process so long that it will be quite low cost (the equipment is depreciated/amortized). So two of the biggest expected advantages of a new process for AMD, performance and cost, are mitigated. They will probably only lag on perf/watt, but that isn't as critical for desktop.

10nm will be used for laptop/mobile devices and it will run at lower power, lower clock speeds, giving them competitive perf/watt where it is really needed.

You're forgetting a few things. First of all, the plan would have been to put most new processors on the new node. This does two things. The new node would allow for more dies per wafer, which makes the dies cheaper to produce. This also frees up manufacturing space on the old node. This allows for more volume of other products already on that node, as well as the ability to move products on older nodes to the newer one. That would likely mean higher production numbers as well as lower power usage for those products. All of this would have been originally figured into all product plans. Since 10nm failed spectacularly, it has upset all of those plans.

Additionally, just because 14nm is mature doesn't mean it's a good thing. Your suggestion that going 14nm++++++ is a good thing isn't necessarily so. It requires time, money, and effort for each one of those +. There is no guarantee Intel would recoup all the money from having to constantly reinvent the node. Continually refining a node is not a magical process with no downsides or costs attached. If it were, we wouldn't be anywhere near where we are now with regards to process nodes. It makes more sense to push on to a new process node than it does to keep refining the same process node repeatedly, simply due to diminishing returns. The only reason Intel has constantly refined 14nm is because it has no choice in the matter.

By the way, end products are already affected. Intel cannot produce the correct volume of products at this time. That alone is a huge impairment to end products. If Intel is able to sell every single product they make, it means they aren't making enough in the first place. This is a direct effect from the 10nm debacle. If 10nm had worked out even half as well as it was planned to, Intel wouldn't be seeing product shortages and would have real products out on 10nm and in volume.
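The dies-per-wafer point is easy to put rough numbers on. A back-of-envelope sketch (all figures are invented round numbers, not Intel's costs; the rebuttal below makes the point that the new node's wafer cost is higher, which eats into the per-die saving):

```python
import math

# Back-of-envelope dies-per-wafer arithmetic. Wafer cost, die sizes, and
# the shrink factor are invented round numbers, not Intel's figures.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: gross dies minus edge loss."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

wafer_cost = 7000        # hypothetical processed 300mm wafer cost, USD
old_die    = 150         # hypothetical die area on the mature node, mm^2
new_die    = 150 * 0.6   # same design shrunk ~40% on the new node (assumed)

for label, area in [("old node", old_die), ("new node", new_die)]:
    n = dies_per_wafer(300, area)
    print(f"{label}: {n} dies/wafer, ${wafer_cost / n:.2f} per die")
# old node: 416 dies/wafer, ~$16.83 per die
# new node: 715 dies/wafer, ~$9.79 per die (at the SAME wafer cost)
```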
 
You're forgetting a few things.

First of all, the plan would have been to put most new processors on the new node. This does two things. The new node would allow for more dies per wafer, which makes the dies cheaper to produce. This also frees up manufacturing space on the old node. This allows for more volume of other products already on that node, as well as the ability to move products on older nodes to the newer one. That would likely mean higher production numbers as well as lower power usage for those products. All of this would have been originally figured into all product plans. Since 10nm failed spectacularly, it has upset all of those plans.

I didn't forget, you just disagreed with my point. Old, very mature process, cost much less/wafer to produce, which mitigates the cost difference vs a new node. Especially compared to this node transition which requires Quad patterning or EUV.

Additionally, just because 14nm is mature doesn't mean it's a good thing. Your suggestion that going 14nm++++++ is a good thing isn't necessarily so. It requires time, money, and effort for each one of those +. There is no guarantee Intel would recoup all the money from having to constantly reinvent the node.

Disagree completely. These are process refinements; it's nothing like the investment in a new node, so it's a small investment, and it isn't constant, as there is only 14nm++, not 14nm++++++. Big exaggerations don't benefit your argument. Also, what do you mean no guarantee of recouping? It's a relatively small investment, and as you say below, Intel is selling everything they can build.

By the way, end products are already affected. Intel cannot produce the correct volume of products at this time. That alone is a huge impairment to end products. If Intel is able to sell every single product they make, it means they aren't making enough in the first place. This is a direct effect from the 10nm debacle. If 10nm had worked out even half as well as it was planned to, Intel wouldn't be seeing product shortages and would have real products out on 10nm and in volume.

Some extra sales opportunity lost, but that doesn't make for worse product, which is what I (and most people) mean when talking about the end product.

You also seem to be forgetting that Intel looks to have 10nm coming out for laptop parts about the same time AMD has 7nm CPUs coming out.
 
I sorta agree, but remember that not everything they make after releasing a reworked product on the same node...is that reworked product.
 
We can go back to in-order processors, but till then, out-of-order execution is going to be vulnerable. Also, I hope you're okay with Atom-level performance. Welcome to the Information Security arms race.

You didn't read my post, did you? I don't want to remove out-of-order execution. I want it encrypted, so even if Joe Schmoe gets in, the data he gets is of no use. Get it?
 
But if you really really really want to slow the CPU down, do encryption. Just saying.
 
I didn't forget, you just disagreed with my point. Old, very mature process, cost much less/wafer to produce, which mitigates the cost difference vs a new node. Especially compared to this node transition which requires Quad patterning or EUV.



Disagree completely. These are process refinements; it's nothing like the investment in a new node, so it's a small investment, and it isn't constant, as there is only 14nm++, not 14nm++++++. Big exaggerations don't benefit your argument. Also, what do you mean no guarantee of recouping? It's a relatively small investment, and as you say below, Intel is selling everything they can build.



Some extra sales opportunity lost, but that doesn't make for worse product, which is what I (and most people) mean when talking about the end product.

You also seem to be forgetting that Intel looks to have 10nm coming out for laptop parts about the same time AMD has 7nm CPUs coming out.

An old larger node is not cheaper to produce things on. You get fewer items per wafer compared to a newer node. That's a simple fact. You can argue yields but that always improves over time and is an expected and normal cost early on. And it doesn't even remotely mitigate the cost of a new node. If refining a node was as wonderful as you're trying to convince people it is, there wouldn't be a race to a new node. History and facts refute your argument.

Node refinements are extra costs which aren't needed unless there's something wrong with the node or you can't move to a new node. The fact that Intel's 10nm node is a failure has already had consequences, and obvious ones. Intel is only refining the 14nm node because there is no other choice right now. Intel would not waste time, money and resources refining a node unless there was no other choice. There is no exaggeration in my argument. The continued node refinements are an additional cost, and to think the refinements don't cost a considerable amount to do is laughable. It's not on the scale of a new node, but that's irrelevant. It's an additional cost, wasted because the 10nm node failed.

You obviously don't have a clue about business at all. If you're selling all your product, you're losing money, because additional volume would net you more money. The 10nm node failure has had a major impact on Intel's revenue simply because they can't produce enough volume. That's about as basic as you can get in economics.

If the 10nm node hadn't been a complete failure, Intel would be making most of the newer CPUs and such on that node. They'd get more dies per wafer, increasing available volume, and they'd have additional volume on the old 14nm lines to expand products already on them or add new products. The fact that Intel can't do this means Intel is losing money.

How can you say the products aren't worse? Do you think that if Intel's 10nm node had actually worked that the products would be no better than what they are now? I find that highly unlikely. Based on that assumption by you, why should Intel bother with any new nodes at all?

Attempt to play up 14nm+++++ all you want. It's a losing proposition for Intel and Intel knows this. The failure of 10nm has completely disrupted years of plans and strategies meaning plenty of lost money.
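For what it's worth, the yield argument is where the per-die arithmetic from earlier can flip either way. A toy sketch using the standard Poisson yield model (all defect densities, die areas, and wafer costs here are invented; only the shape of the trade-off matters):

```python
import math

# Yield-adjusted cost per *good* die, via the classic Poisson yield
# model. All inputs are invented numbers for illustration.
def cost_per_good_die(wafer_cost, dies_per_wafer, die_area_cm2, d0):
    """d0 = defects per cm^2; yield = exp(-area * d0)."""
    good = dies_per_wafer * math.exp(-die_area_cm2 * d0)
    return wafer_cost / good

# Mature node: cheap, depreciated wafers and a low defect density.
print(cost_per_good_die(5000, 416, die_area_cm2=1.5, d0=0.1))  # ~$13.96
# Struggling new node: pricier wafers, more (smaller) dies, high defects.
print(cost_per_good_die(8000, 715, die_area_cm2=0.9, d0=1.0))  # ~$27.52
# A broken new node can cost MORE per good die despite more dies/wafer,
# which is why both sides of this argument can point at the same math.
```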
 
Intel's mature 14nm process is VERY good. They will almost certainly still have a clock-speed advantage, so they can have competitive parts for desktop on 14nm.
Clock speed advantage? We have no evidence that we are moving beyond 5GHz. Unless AMD isn't showing its cards with its lower-wattage part and is hiding some juice, it appears 5GHz is it. For everyone! Just more cores.
Intel hasn't increased clocks by a significant amount in, well, forever!
 
Clock speed advantage? We have no evidence that we are moving beyond 5GHz. Unless AMD isn't showing its cards with its lower-wattage part and is hiding some juice, it appears 5GHz is it. For everyone! Just more cores.
Intel hasn't increased clocks by a significant amount in, well, forever!

I never said Intel was increasing clock speed. Do we have evidence that AMD is reaching 5GHz?
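Either way, a clock edge only matters modulo IPC. Toy numbers to see the shape of it (invented for illustration, not benchmarks):

```python
# Single-thread performance is roughly clock x IPC. These numbers are
# invented to show the shape of the argument, not benchmark results.
def relative_perf(clock_ghz, ipc_vs_baseline):
    return clock_ghz * ipc_vs_baseline

high_clock = relative_perf(5.0, 1.00)  # hypothetical part: clock king
high_ipc   = relative_perf(4.4, 1.10)  # hypothetical rival: slower, wider
print(f"{high_ipc / high_clock:.1%}")  # 96.8% -- a 600MHz gap nearly erased
```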
 
An old larger node is not cheaper to produce things on. You get fewer items per wafer compared to a newer node. That's a simple fact. You can argue yields but that always improves over time and is an expected and normal cost early on. And it doesn't even remotely mitigate the cost of a new node. If refining a node was as wonderful as you're trying to convince people it is, there wouldn't be a race to a new node. History and facts refute your argument.

I never said it was "cheaper to produce things on", that is your reading comprehension problem.

I said "cost much less/wafer to produce which mitigates the cost difference vs a new node". Maybe you need to visit dictionary.com?
 
That said, when they do come back with 7nm, it will be AMD's turn to need a rabbit in a hat, because right now the only reason they are competitive is the smaller process node. Equalize the process node and Intel is way way ahead.

Is 12nm really that much better than 14nm??? I'd say that the whole Ryzen/Threadripper/Epyc lineup from top to bottom is currently on par with Intel on everything except 5GHz single-core speed.
They are just as fast or faster in everything except gaming in each price bracket.

7nm will not be just "competitive"; I'm betting it'll be outright faster in basically everything (except for the people who can clock their 9900Ks to 5.3GHz and run single-threaded tasks all day), especially Epyc, since they are already ahead in power efficiency there.

I do hope that AMD keeps this up though, I don't want Intel monopoly stagnation again...
 
I do hope that AMD keeps this up though, I don't want Intel monopoly stagnation again..

There is a big question on what happens after nodes can no longer be reduced in size. Unless some new materials are found we may end up with a day where performance can no longer be improved by anyone.
 
You didn't read my post, did you? I don't want to remove out-of-order execution. I want it encrypted, so even if Joe Schmoe gets in, the data he gets is of no use. Get it?

You have to decrypt it to process it. Since the exploits you're talking about use the operation of the CPU itself to reveal the data it's working on, having it encrypted in the memory chips will only serve to slow everything down.

There is a big question on what happens after nodes can no longer be reduced in size. Unless some new materials are found we may end up with a day where performance can no longer be improved by anyone.

Performance can always be improved; it'll just come at higher costs (both initial and operating).
 
You have to decrypt it to process it. Since the exploits you're talking about use the operation of the CPU itself to reveal the data it's working on, having it encrypted in the memory chips will only serve to slow everything down.

I thought this was common knowledge for people here... lol!
It's like trying to open a jpeg inside an encrypted zip folder... It don't work... the OS doesn't know what's inside it.
Can't process encrypted information without decrypting it...
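That round trip is easy to demonstrate with a toy cipher. A minimal sketch, using a stand-in XOR stream rather than real memory encryption like AES-XTS, which has the same decrypt-before-use property (and setting aside homomorphic encryption, which is orders of magnitude too slow for general-purpose RAM):

```python
import secrets

# Toy illustration of "you can't compute on ciphertext". The XOR cipher
# here is a stand-in for real memory encryption, for demonstration only.
key = secrets.token_bytes(8)

def xor_crypt(data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

a_ct = xor_crypt((5).to_bytes(8, "little"))  # value 5, "encrypted in RAM"
b_ct = xor_crypt((7).to_bytes(8, "little"))  # value 7, "encrypted in RAM"

# Operating on the ciphertext directly yields garbage:
wrong = int.from_bytes(a_ct, "little") + int.from_bytes(b_ct, "little")

# The only correct path: decrypt -> compute -> (re-encrypt), paying the
# crypto cost on every single access.
right = int.from_bytes(xor_crypt(a_ct), "little") \
      + int.from_bytes(xor_crypt(b_ct), "little")
print(wrong, right)  # garbage (with overwhelming probability) vs 12
```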
 
Performance can always be improved; it'll just come at higher costs (both initial and operating).

Yes, but it will be hard to justify for PC consumers. Really I bet in a couple of years we will be back to stagnation on the PC CPU market, though we are probably already past overkill for most people.
 
I would expect encryption to be extremely slow and power hungry. Think of how fast L1 and L2 caches are.
 
Performance can always be improved; it'll just come at higher costs (both initial and operating).

What happens if it takes 10 years to get a 3% total improvement and that 3% costs billions of dollars per year?
 
I would expect encryption to be extremely slow and power hungry. Think of how fast L1 and L2 caches are.

The caches would become pretty pointless. The crypto processes would take longer to perform than reading from or writing to main memory.
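Rough numbers behind that, with the caveat that every cycle count here is a ballpark assumption rather than a measurement of any specific CPU:

```python
# Ballpark latency arithmetic for the claim above. Cycle counts are
# rough, chip-dependent assumptions, not measurements.
L1_HIT   = 4        # cycles: typical L1 load-to-use
L2_HIT   = 12       # cycles
DRAM     = 200      # cycles: main-memory round trip
AES_LINE = 4 * 40   # cycles: ~4 x 16-byte AES blocks per 64-byte line,
                    # at tens of cycles per block without deep pipelining

for name, cycles in [("plain L1 hit", L1_HIT),
                     ("encrypted L1 hit", L1_HIT + AES_LINE),
                     ("encrypted L2 hit", L2_HIT + AES_LINE),
                     ("plain DRAM access", DRAM)]:
    print(f"{name:>18}: {cycles} cycles")
# An 'encrypted' L1 hit (~164 cycles) lands in the same ballpark as a
# trip to DRAM, which is exactly the point about caches becoming moot.
```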

What happens if it takes 10 years to get a 3% total improvement and that 3% costs billions of dollars per year?

It'll get amortized. Things will become more centralized. We'll go back to renting computer time and accessing our data with dumb terminals.
 
It'll get amortized. Things will become more centralized. We'll go back to renting computer time and accessing our data with dumb terminals.

That's assuming people will need vast amounts of computer time beyond future desktop CPUs, which I doubt.

Right now, what are most people taxing CPUs with?

I am still running an ancient C2Q, and outside of playing games and doing an occasional video encode, it is usually idling below 10% usage while surfing the web, watching videos, doing taxes, or doing personal productivity (LibreOffice).

I really only need a new CPU to play more modern games.
 
It'll get amortized. Things will become more centralized. We'll go back to renting computer time and accessing our data with dumb terminals.

No way, they are never taking my PC and local processing power. From my cold, dead hands!
 