Intel CPU Shortages Are Expected to Worsen in Q2

AlphaAtlas
Intel is suffering from a 14nm silicon shortage right now, and some industry figures think that boxed DIY processor sales are getting hit particularly hard. Last year, analysts expected the shortage to persist well into the first half of 2019, but now, Digitimes Research believes that the supply issues are only going to get worse in Q2. While industry supply gaps are expected to drop from over 10% in Q4 2018 to 2-3% in Q1 2019, that gap is expected to grow by 1-2 percentage points in Q2 2019 (i.e., to roughly 3-5%) without a significant increase in total shipments. Quad-core Kaby Lake-R silicon bound for Core i5 models was particularly hard to get in Q4 2018, and some white-box manufacturers in China have reportedly been "denied any supply of Intel's entry-level processors since September 2018," but Coffee Lake-based i5s and Chromebook processors are supposedly seeing the worst supply shortfalls right now. Intel is working on bringing more 14nm capacity online to alleviate the supply issues by the third quarter of this year, but until then, AMD's market share in worldwide notebook shipments is expected to increase as laptop makers search for alternatives to mainstream Intel CPUs. AMD reportedly has a 15.8% share of notebook shipments in Q1 2019, and that share is expected to peak at 18% in Q2 before dropping in subsequent quarters.

Of course, given the age of Intel's 14nm process, one of the biggest questions hanging in the air is how, and when, Intel will move production to smaller nodes. At CES, Intel committed to shipping 10nm mobile processors in 2019. However, Digitimes Research's supply chain sources claim that "there are still many issues with the CPU giant's mass production schedule for 10nm process," and the researchers themselves think that Intel "could shift its investments directly to 7nm process development, skipping 10nm."

Apollo Lake- and Gemini Lake-based processors for the entry-level segment were second worst in terms of shortages, as Intel had shifted most of its capacity to making high-end processors that offered better profit. Lenovo, which primarily focuses on mid-range and entry-level models, had a supply gap of hundreds of thousands of CPUs in the second half of the year. White-box players in China have even been denied any supply of Intel's entry-level processors since September 2018. Apple's latest MacBook Air, released at the end of October 2018 and exclusively using Intel's 14nm Amber Lake processor, was reportedly also a victim of the CPU shortages. With the notebook market entering the slow season in the first quarter of 2019 and many vendors having increased their adoption of AMD's solutions, the overall CPU supply gap in the notebook market is expected to shrink to around 3%. Taiwan vendors are still seeing gaps above 5%, but HP, Dell and Lenovo's percentages will drop dramatically. Dell has even freed itself from the shortage issue... Intel is expected to have new 14nm capacity join production in the second half of 2019. Intel's existing 14nm fabs are mainly located in the US and Ireland, and the newly expanded capacity in Arizona is expected to begin volume production in July or August, boosting Intel's overall 14nm capacity by 25% and completely resolving the shortage problem.
 
Good. It's about fricken' time. XD

 
Intel is selling every last one of their overpriced 14nm CPUs, mark my words. They'll be ready with 10nm goodies in Q3, or they may even stretch it to Q4 with something mildly new to counter AMD. I wonder if they will keep X299. Dare I ask if Z390 will get another go around? Knowing Intel, if they have something halfway decent on 10nm, then they will have mildly new sockets, so we'll get mildly new chipsets that do about the same thing. Consequently, we'll get new motherboards. If you guys think I'm wrong, then I'd love to hear from you :D
 
No kidding, it took like a month to get my brother his i7-9800X, which seemed like a paper launch. OEMs had this processor, and the few places that supposedly sold it were way overpriced and still didn't have it in stock. It's still not available in most places where older-gen or much more expensive higher-end models are. What a bunch of bullshit with Intel and this recent gen. Don't even get me started on the mess that VROC was. So I guess at this rate they won't be getting my money any time soon.
 
Is the 9800X at the very least a decent overclocker?
 
Intel 10nm will be just a low-power mobile chip node? All this time and money spent on 10nm, just to further expand 14nm wafer production by 25% by Q3 2019?
Intel previously said that its new 14nm Comet Lake would have performance no worse than Ice Lake. Digitimes Research believes Intel could shift its investments directly to 7nm process development, skipping 10nm.
No new performance parts until 7nm?
I guess Intel thought that, without competition from AMD, 14nm would live on forever.
14nm for life yo!
 
I am a fanboi in turmoil: :ROFLMAO:

The Intel fan in me is going "damn that bites!"

The AMD fan in me is going "HA HA!"
Same, but the corporate IT guy in me just wants it to end. We had a 6-week lead time on replacement PC orders before the shortages hit. Because of the shortages, HP added another 8-10+ weeks to our schedule. It actually got to the point where we stopped replacing PCs unless they absolutely had to be done, instead of following our normal refresh cycle. It's just a nightmare for my group in the company.
 
I'm glad that in our business we decided to go with HP's AMD Ryzen-based laptops starting maybe 6-8 months ago, since they were offered at a cheaper price. These shortages would have killed us, so I'm happy that decision was made back then; we didn't know about the Intel shortages yet, so it was a lucky move. :) Then again, we did have issues with drivers and Windows 10 support (the new AMD laptops don't like LTSB 2016 very much, with issues like a pixelated random-color fullscreen mode and sleep/hibernate wake problems, I suppose because AMD doesn't test against old Windows 10 builds and seems to need a 170x or newer build to be fully compatible), but they work fine with the latest CB or LTSC 2019.
 
Is the 9800X at the very least a decent overclocker?

I honestly didn't care too much; it's from the latest crop of HEDT processors and the least expensive one with more than 16 PCIe lanes (if you don't count Xeons, but then if you want 8 cores and high clock speeds it will cost way more than this; according to the MSRPs on Intel ARK, the i9-9900K is supposed to cost just slightly less than the i7-9800X). It also seems to have the highest clock speed behind the i9-9900K if you don't need a lot of cores. It went into a workstation build, so stability is key. From what I've heard it does OC well, but I didn't try it myself or do any research in that department. It did run cold as a rock. I put it under a Corsair H150i Pro, which is a triple-120 rad AIO, and the processor was sitting in the low 20s C. A very quiet and powerful setup. With a big rad you might even be able to run with no rad fans at all.
 
Intel is selling every last one of their overpriced 14nm CPUs, mark my words.

They are certainly capacity-constrained at 14nm. They are selling everything they can make. Die sizes have increased from the 6th-generation to the 9th-generation parts, and that is playing into the capacity issues. Binning on clock speed and power draw has also eaten into yields: good chips end up either too slow or too power-hungry to bin into a SKU.
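As a rough illustration of that binning idea, here is a toy sketch; the SKU names, frequency cutoffs and power limits below are invented for the example, not Intel's actual bins:

```python
# Toy model of frequency/power binning: each die gets a measured max stable
# clock and package power draw, and is sorted into the best SKU it qualifies
# for. Dies that are too slow or too power-hungry for every bin fall through.
# All SKU names and limits here are made up for illustration.

SKUS = [
    # (name, min_stable_ghz, max_power_w) -- checked best bin first
    ("i9-class", 5.0, 95),
    ("i7-class", 4.7, 95),
    ("i5-class", 4.3, 65),
]

def bin_die(max_stable_ghz, power_w):
    """Return the first (best) SKU the die qualifies for, or None."""
    for name, min_ghz, max_w in SKUS:
        if max_stable_ghz >= min_ghz and power_w <= max_w:
            return name
    return None  # good silicon, but too slow or too hot for any bin

print(bin_die(5.1, 90))  # 'i9-class'
print(bin_die(4.5, 70))  # None -- fast enough for i5-class but over its power limit
```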

The question I ask is why AMD isn't grabbing more market share in this environment.
 
Do we even know WHAT Intel is launching in the future? Same shit, different node doesn't count as 'new'.
I am wondering what and when their next 'tock' is; after 10 years of 'ticks', where is the big 'tock'?
 
Then stop supporting a company that can't deliver what you need and buy from AMD.
It's not rocket science. Yeah, sure, your lackeys have to do more qualification work, but at least you'll have a choice.
 
The rumors are alleging 10 cores and still 14nm sometime in 2019 for Comet Lake (which might make sense given the shortage, if they're ramping up production of newly released chips).
The next big jump would be the speculated skip of 10nm straight to 7nm.
It's my belief they underestimated AMD bringing their A-game to the desktop market and absolutely destroying Intel in performance-to-price (especially if the Ryzen 3XXX parts end up being 8c/16t like the speculation shows).
Intel is now fighting to play catch-up and struggling to do so.

It's not always that simple with corporate bureaucracy, and if you have to do extensive testing prior to new equipment rollouts, a major platform change like that can eat into time and resources quickly.
 
10 cores of the same warmed-over architecture?
Well, I am just saying, 'cause there are these other threads about ARM, and x86 being, you know, a dead architecture walking...
I mean, is AMD the one carrying the torch? Really?
AMD is moving to chiplets, which seems ho-hum since we've talked about it a lot, but step back and it is a big deal, and it couldn't have been a simple thing either.
Plus, as they dissect their CPUs into pieces they are basically making them modular... I don't know, but I can see AMD mixing and matching whatever they want with their architecture: perhaps even K12 ARM CPU chiplets, but for sure HBM2, GPUs, x86 CPUs of course, perhaps GDDR lanes for the GPU chiplets... a lot of flex there.
Intel? Nothing? Another tick? No rumors at all? No super-powerful x86 ARM-killer? Bueller?
 
This is absolutely a nightmare for purchasing departments.

I have several orders sitting in limbo right now because the backorder with large vendors is 2-4 weeks for the products we need. I just waited a month to receive some Dell mini desktops that were ordered, and the vendor wasn't expecting them for another 2 weeks, so I guess we were lucky.

Some of these product lines have no competing AMD options that the vendors have integrated, so AMD's share increase is really limited by how quickly the larger vendors adopt the new hardware.
 
I mean, is AMD the one carrying the torch? Really?
Pretty much, as much as I love Intel, they're making my upcoming refresh decision to move to AMD easier and easier.

It's my belief they underestimated AMD bringing their A-game to the desktop market and absolutely destroying Intel in performance-to-price (especially if the Ryzen 3XXX parts end up being 8c/16t like the speculation shows).
I actually made a snafu about the 3XXX series: the 8c/16t for AMD is the lower end, the mid-range is 12c/24t, and the high end is 16c/32t (so Intel's suspected 10-core flagship would have 37.5% fewer cores, at likely a similar price point, with clock speeds TBD).
https://www.techradar.com/news/amd-ryzen-3rd-generation

I think that capability is due to the dual-die-in-one-package design they went with, IIRC.
 
It's not always that simple with corporate bureaucracy, and if you have to do extensive testing prior to new equipment rollouts, a major platform change like that can eat into time and resources quickly.

This guy gets it... nothing moves remotely quickly in a corporate bureaucracy, and I'm sure as hell not going to throw new models out into the environment without putting them through their paces first.

We're going through the same shortage everyone else is; we've been waiting over a month for our standard mobiles and higher-end workstation laptops to arrive from Dell. Luckily we order in bulk, so we do still have stock of some models, but the shops that order as-needed aren't doing too well with these shortages.

We're still trying to get everyone moved to Win10 by January 2020 as well, and this shortage sure isn't helping with that either.
 
I actually made a snafu about the 3XXX series: the 8c/16t for AMD is the lower end, the mid-range is 12c/24t, and the high end is 16c/32t (so Intel's suspected 10-core flagship would have 37.5% fewer cores, at likely a similar price point, with clock speeds TBD).

I struggle with the notion that the distinction between "low end" and "high end" is necessarily a core cunt thing.

I'd argue that there are very many applications in which a higher clocked 8C/16T would VASTLY outperform a lower clocked 16C/32T chip, inside the same thermal envelope.

I'd certainly pay more for a top-bin, very-high-clock 8C/16T part than I would for a low-clocked 16C/32T part. At least on the desktop. (Servers are a completely different world.)


I love AMD and all, but we really don't need this:

[image: Intel vs. AMD]
 
LOL! COUNT. Core COUNT...
 
Is the 9800X at the very least a decent overclocker?
I have one with a Corsair H110i Pro and it's at 4.7GHz. This is a temporary setup; once I go custom loop I'm going to try to get as close to 5.0GHz as I can.
 
I love AMD and all, but we really don't need this:

[image: Intel vs. AMD]

Come to think of it, if AMD hadn't stepped up in 2017 and put the hurt on Intel, we would be debating the merits of delidding a $2000 Core i7 7950XE 10-core CPU while running it on the latest ASUS motherboard with an 8-phase VRM. I don't know if you know this, but Intel is a very arrogant corporation. You would think that with all the industrial espionage they knew Zen and Threadripper were coming. And they probably did know; they just didn't believe that Zen would be any good. Intel never intended for Socket 2066 / X299 to host more than 10-core CPUs. That is why we now have two generations of X299 motherboards: those with wimpy VRMs and those with upgraded VRMs.
 
I have one with a Corsair H110i Pro and it's at 4.7GHz. This is a temporary setup; once I go custom loop I'm going to try to get as close to 5.0GHz as I can.

Thank God you have a motherboard with a decent VRM. Leakage at 5.0GHz is going to be terrible. If it were my CPU, I'd aim for 4.8GHz on water and call it a day, though 4.7GHz on an AIO is good. Can you sustain all-core loads at that speed, other than AVX2 and AVX512 torture tests? I know that you have to keep AVX512 under 4.0GHz if you are even remotely concerned about your hardware.
 
Considering AMD was even hiding the final specs from their own motherboard AIBs up until the last minute, my guess is Intel didn't have a clue what Ryzen performance was going to be till it was too late.
 
Thank God you have a motherboard with a decent VRM. Leakage at 5.0GHz is going to be terrible. If it were my CPU, I'd aim for 4.8GHz on water and call it a day, though 4.7GHz on an AIO is good. Can you sustain all-core loads at that speed, other than AVX2 and AVX512 torture tests? I know that you have to keep AVX512 under 4.0GHz if you are even remotely concerned about your hardware.
4.7 on this cooler stays under 200F running Cinebench. The AVX offset is 3 on the multiplier. I'm sure with Intel BIT it will hit 205-210, but I never see those kinds of loads, so I clocked it at 4.7. Tried 4.8 and it just got too hot for comfort. I think I'm severely limited by the AIO's waterblock heat dissipation; 4.7 is the ragged edge on a dual-120 rad AIO. I think a larger AIO would have the same waterblock issues. If this were for someone else I'd back it down to 4.6 and leave it be.
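For anyone unfamiliar with how that AVX offset plays out, here's the arithmetic as a quick sketch; the 100MHz BCLK is an assumption (typical, but not stated above):

```python
# Quick arithmetic for an AVX multiplier offset: the offset is subtracted from
# the core multiplier whenever AVX code runs. Assumes a standard 100 MHz BCLK,
# which is typical but not something the post above actually states.
BCLK_MHZ = 100

def effective_clock_ghz(multiplier, avx_offset=0):
    """Core clock in GHz after applying an AVX multiplier offset."""
    return (multiplier - avx_offset) * BCLK_MHZ / 1000

print(effective_clock_ghz(47))     # 4.7 -- normal loads at a 47x multiplier
print(effective_clock_ghz(47, 3))  # 4.4 -- AVX loads with an offset of 3
```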
 
Isn't all this kerfuffle over "supply issues of 14nm silicon" kind of... odd? I mean, the oft-cited reasoning for the shortage seems to be that modern automobiles are using lots more silicon these days... But how many Motorola/Freescale 9S12x microcontrollers (the dominant uC used in nearly every automotive device) are needed to create such a shortage? Why does this "shortage" really only seem to affect Intel and their fabs/foundries, while all the others that allege to be affected by it (Samsung/TSMC) really only seem to be using it as a reason to increase their prices?

Perhaps it really IS the supply of wafers that is the problem. Perhaps the wafer suppliers are the ones looking to gouge for profits... they are at the lowest end of the microchip totem pole, and their product is significantly cheaper than what the fabs make from it; it'd be totally understandable if they wanted a bigger piece of the multibillion-dollar pie. But I think there's a bigger underlying problem...

This whole shortage thing just seems to REEK of "artificial scarcity." I mean, what would be the best way to stretch outdated technology as long as possible while making more profit off of fewer sales? The "shortage" offloads the problem from the company itself, places it exclusively on the supply chain, keeps investors quiet, greatly increases profit margins on existing fab processes, and gives the entire "Intel industry" a reason to increase prices, release mediocre product iterations, and generally engage in "corporate douchebaggery." There's a good reason the CEO (Krzanich) was fired, and there was an equally good reason there was a power vacuum for so long... someone's got a "hot potato" and nobody wants to deal with it.

I'd theorize that the "hot potato" is some sort of "glass ceiling" on x86/x64 performance and clock frequency. I'd wager that the ONLY real way to dramatically increase performance is through SIMD vectors (all those fancy MMX/SSE/AVX/TSA/whatever), which require specially optimized software and programming techniques to use properly. Per-core, the performance of the 2019 chips is (and I'm 100% fabricating this number) only around 15% faster than it was in 2009 (WITHOUT THE USE OF SIMD VECTORS; obviously utilizing the vectors will dramatically increase performance, as do better/faster memory interfaces and clock frequencies; you guys know this stuff...). Intel knows this, and they know that the market is going to be saturated with chips that all perform nearly identically other than clocks/cores/SIMD, thus negating the "demand" for their products. They've effectively worked themselves out of a job, in this scenario.

But I digress...
 
Yes, new instructions, why not? Any leaks on that, though?
I mean, if the x86 software environment gets off its ass, it shouldn't be an issue to reap the gains.
 
I'd argue that there are very many applications in which a higher clocked 8C/16T would VASTLY outperform a lower clocked 16C/32T chip, inside the same thermal envelope.

I just use the distinction for clarification; there are many applications that benefit from higher individual clock speeds.
At the end of the day, if the application supports it, the additional cores can/will benefit too; it all depends on your use case and purpose.

The 16c/32t is supposedly just two of the 8c/16t dies in one chip, and it's expected to be higher binned/clocked to boot (the current rumor is 3600X: 8 cores, 16 threads, clocked at 4.0GHz to 4.8GHz; 3850X: 16 cores, 32 threads, clocked at 4.3GHz to 5.1GHz).
So yes, "a higher clocked 8C/16T would VASTLY outperform a lower clocked 16C/32T chip," but the 16c is expected to be clocked higher (at stock anyway); no one knows the OC potential yet.
 
4.7 on this cooler stays under 200F running Cinebench. The AVX offset is 3 on the multiplier. I'm sure with Intel BIT it will hit 205-210, but I never see those kinds of loads, so I clocked it at 4.7. Tried 4.8 and it just got too hot for comfort. I think I'm severely limited by the AIO's waterblock heat dissipation; 4.7 is the ragged edge on a dual-120 rad AIO. I think a larger AIO would have the same waterblock issues. If this were for someone else I'd back it down to 4.6 and leave it be.

Then 4.7 is where you hit the minimum leakage point (I don't know if it's an official term; it's what I call it). From here on, voltage will scale worse and worse with CPU speed. The hotter the CPU gets, the more power it needs, and the more power it needs, the hotter it gets. It's kind of like an Ouroboros effect. That being said, leakage for Skylake-X CPUs starts where the all-core turbo speed ends. I'm not talking about that "All Core Enhancement" implemented by motherboard manufacturers, which doesn't work well at all on Skylake-X as it tries to hit the Turbo 3.0 speed on all cores; I'm talking about Intel's factory implementation for each CPU. For a 7900X it's 4.0GHz on all 10 cores. As soon as you take the CPU past that, it gets worse and worse with every 100MHz you add to all the cores. I have a 7900X and I run it at 4.7GHz on 4 cores, 4.5GHz on 6 cores, 4.3GHz on 8 cores and 4.0GHz on 10 cores. That way I can hit those 193 single-threaded scores in Cinebench R15 that some reviewers hit with their Intel Skylake-X CPUs running stock :D
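As a toy sketch of that kind of per-active-core clock table (the frequencies are the ones from the post above; the lookup logic is just for illustration):

```python
# Sketch of a per-active-core clock table like the 7900X setup described above:
# the more cores are loaded, the lower the target clock. The GHz values come
# from the post; everything else is illustrative.

PER_CORE_TURBO_GHZ = {4: 4.7, 6: 4.5, 8: 4.3, 10: 4.0}

def target_clock_ghz(active_cores):
    """Pick the clock for the smallest bracket that covers the active core count."""
    for cores in sorted(PER_CORE_TURBO_GHZ):
        if active_cores <= cores:
            return PER_CORE_TURBO_GHZ[cores]
    return min(PER_CORE_TURBO_GHZ.values())  # fully loaded: lowest bin

for n in (1, 4, 6, 8, 10):
    print(f"{n:>2} active cores -> {target_clock_ghz(n)} GHz")
```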
 
I'd theorize that the "hot potato" is some sort of "glass ceiling" on x86/x64 performance and clock frequency. I'd wager that the ONLY real way to dramatically increase performance is through SIMD vectors (all those fancy MMX/SSE/AVX/TSA/whatever), which require specially optimized software and programming techniques to use properly. Per-core, the performance of the 2019 chips is (and I'm 100% fabricating this number) only around 15% faster than it was in 2009 (WITHOUT THE USE OF SIMD VECTORS; obviously utilizing the vectors will dramatically increase performance, as do better/faster memory interfaces and clock frequencies; you guys know this stuff...). Intel knows this, and they know that the market is going to be saturated with chips that all perform nearly identically other than clocks/cores/SIMD, thus negating the "demand" for their products. They've effectively worked themselves out of a job, in this scenario.

Well, not many common applications need the heavy-duty floating-point math that AVX512 can deliver. In fact, it's hard to even come up with examples. Hell, if you really, really, and I mean really, really need AVX512 for your application, you might as well do your math on the GPU and actually get some real performance out of it. AVX512 doesn't have much use on most chips, consumer or enterprise, but it sure as hell takes up a lot of silicon real estate, eats power like it's nothing, and makes chips more expensive.
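As a rough illustration of the "software has to be written for the vector units" point raised above, here's a small sketch; NumPy and the array size are my choices, not something from the thread, and NumPy's compiled kernels are what get to use the CPU's SIMD instructions:

```python
# The same arithmetic done element-by-element in plain Python vs. handed to
# NumPy, whose compiled kernels can use the CPU's SIMD units (SSE/AVX).
# The point is only that the bulk-operation code path has to exist in the
# software before the vector hardware buys you anything.
import time
import numpy as np

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

t0 = time.perf_counter()
slow = [x * y for x, y in zip(a, b)]  # one element at a time, no SIMD benefit
t1 = time.perf_counter()
fast = a * b                          # one vectorized call over the whole array
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.2f}s, numpy: {t2 - t1:.3f}s")
```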
 
My wife's rebuild will be an AMD Ryzen 5 2600X (coming from a 4650K). She won't be missing Intel in the slightest.

Then my kids will get rebuilds with the same 2600Xs (replacing 2500Ks). Again, they won't know the difference. Meanwhile, I'll be saving a bundle.
 
To all the corporate guys and gals who think that "no one ever got fired for recommending Intel," please don't forget to tell your bosses about the security train wreck that Intel CPUs are, along with the performance-loss nightmare that these CPUs will unleash upon you once you've installed all the patches on the state-of-the-art Intel-powered virtualization servers that you waited 3+ months to have delivered. Huh, that was a mouthful...

[image: Meltdown]
 
Indeed, who would have thought that anyone accused AMD of that?

As they say, "put your money where your mouth is," and give us some valid examples.

Eric Raymond wrote something that converts software repositories in older systems like RCS to Git. Obviously that's pretty specialized, but he says that for very large repos you can't meaningfully throw more threads at the problem because the in-memory structures are too large (IIRC; I'm probably oversimplifying), and that higher single-core speeds are more important. He was in the process of uplifting something huge like GCC or Bash last year; whatever it was, it had something like a quarter of a million commits over a couple of decades, and one run would take 8+ hours.

Admittedly that's really specialized, but it's still a thing.

I just checked--it was GCC. 259K commits. The process uses 60+GB of RAM. At http://esr.ibiblio.org/?p=8161, he discusses why he had to abandon Python.
 
When I think about my Threadripper 1950X, or a similar CPU, I look at it along the lines of using six cores for one task, two cores for something else, and eight cores running some heavy-duty app. It's a crude example, but having a maximum of 8 cores to run an OS plus your heavy-duty app is not the same as having eight cores free for the app. The GHz gap between the eight-core and 16-core CPUs has to be big enough for the eight-core CPU to make up the performance lost to running those other tasks. I hope what I'm saying here makes sense.
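To put some back-of-the-envelope numbers on that tradeoff (the clocks below are made-up examples, not claims about any particular SKU):

```python
# Crude "core-GHz" math for the scenario above: background tasks pin some
# cores, and whatever is left runs the heavy app. Clock speeds are invented
# just to show the shape of the tradeoff.

def app_core_ghz(total_cores, all_core_ghz, background_cores):
    """Rough core-GHz left for the heavy app after background tasks take theirs."""
    return max(total_cores - background_cores, 0) * all_core_ghz

# Eight background cores (the six-plus-two example above):
print(round(app_core_ghz(8, 4.8, 8), 1))    # 0.0  -- the 8-core has nothing left for the app
print(round(app_core_ghz(16, 4.0, 8), 1))   # 32.0 -- the 16-core still has eight 4.0GHz cores

# With only two background cores the gap narrows but doesn't close:
print(round(app_core_ghz(8, 4.8, 2), 1))    # 28.8
print(round(app_core_ghz(16, 4.0, 2), 1))   # 56.0
```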

If you want to talk about software that's bad at using multiple threads, I don't need to go any further than PHP. They never got multithreading working right for PHP, so it's still a resource hog; so bad, in fact, that Facebook made their own PHP interpreter to run Facebook on.
 
Threading wasn't really his problem, supposedly; it was running out of RAM on a 64GB system. According to him, the reason he couldn't really multithread was that the nature of the data structure meant it wasn't easily divisible between multiple threads.
 
This whole shortage thing just seems to REEK of "artificial scarcity."
I think this SemiAccurate article is worth a read. Less conspiracy and more real-world ups and downs of the chip biz.
https://semiaccurate.com/2019/01/25/why-semiaccurate-called-10nm-wrong/
 