An Analysis of the Z390 Socket Proves the Extra Pins Aren't Necessary

...
The best way to think about it: you could probably get away with replacing all the 15/20 amp breakers in your house with 30/40 amp breakers without going from 14/12 gauge wire to 10/8 gauge wire, but I doubt you'll get your homeowner's insurance to cover you with it.

A better analogy, but then why didn't Intel make an 1151v3 with even more power pins for the 9900K if these margins are so critical?

Except that doesn't actually prove anything. When he steps up to cover people's warranties when they do it, maybe he'll have an argument.

He proved what he set out to do. The pins can handle WAY more power than they need to.
 
Brilliant. Another spot-on analogy from the IDF. I suppose increasing the current per pin by around 10%, even though each pin can handle something like 500%, would have the same long-term effect as running an extra 200 hp in some aftermarket-tuned engine.

10% is rather significant in terms of di/dt, thermal heating, EM fields, etc. You seem to think that just because 20 gauge doesn't melt at 40 A, it's perfectly fine to use it for 15 A wall wiring, completely ignoring that there are fairly legitimate reasons why any house built like that won't meet code or ever get insurance.
 
He proved what he set out to do. The pins can handle WAY more power than they need to.

No, they can't, not within spec. And that's not a spec that Intel sets; that's a spec the socket designers and manufacturers set. Once again, lots of things can do more than spec in specific environments, for specific timetables, at specific probabilities. But those aren't the environments, timetables, or probabilities they are designed for.
 
You do realize that being an overclocking "expert" really doesn't mean all that much, right? It's not like you need an EE background to overclock. Overclocking is primarily about cash at the competitive levels. Christ, there are a multitude of reasons that nobody puts 5 A through an LGA pad in any design in existence as spec. Like, you can put 40 A through 18 gauge if you really want to, but I'd never recommend anyone actually do it.

To be at Der8auer's level of overclocking, you need to know A LOT about the hardware; he's not just some schmuck messing with the BIOS. If you go to his channel, you will see this.

Your house-wiring analogies aren't impressive either. You are comparing the potential burning down of a house with sub-par wiring to a possibly minuscule reduction in the hardware life of your PC.
 
To be at Der8auer's level of overclocking, you need to know A LOT about the hardware; he's not just some schmuck messing with the BIOS. If you go to his channel, you will see this.

Compared to an EE designing chips and sockets? LOL.

Your house-wiring analogies aren't impressive either. You are comparing the potential burning down of a house with sub-par wiring to a possibly minuscule reduction in the hardware life of your PC.

They are literally the same issue and can have the same result.
 
No, they can't, not within spec. And that's not a spec that Intel sets; that's a spec the socket designers and manufacturers set. Once again, lots of things can do more than spec in specific environments, for specific timetables, at specific probabilities. But those aren't the environments, timetables, or probabilities they are designed for.

What is your point exactly? Most POS B350 boards were not spec'd for a 2700X or the upcoming R9, but that isn't stopping those people from upgrading their CPUs to something better.


Compared to an EE designing chips and sockets? LOL.

They are literally the same issue and can have the same result.

Yes, wiring your house with undersized wire that could destroy everything you hold dear is exactly the same as 10% more current through a Z170. (Even though a 9900K pretty much does that to a Z370.)
What a joke.
 
He first showed what the overclocked current would be on a Z-170:
View attachment 132623

He then ran 5 amps through a SINGLE PIN with no signs of damage.

View attachment 132625
Finally, he kept blocking pins until 69 power pins were blocked and ran Prime95 overnight while overclocked. Going from Z-370 to Z-170 only reduces the power pins by 18.
But muh Intel
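
For a rough sense of scale, here's a back-of-the-envelope sketch of the average current per power pin in each case. It leans on figures quoted elsewhere in the thread (~138 A stock and ~150 A overclocked package current, 146 vs. 128 power pins) plus two assumptions of mine: that the blocked-pin test started from the 146-pin socket, and that current splits evenly across pins, which a real socket won't quite do.

```python
# Back-of-the-envelope: average current per power pin under three scenarios.
# Assumed inputs: ~138 A stock / ~150 A overclocked package current and the
# 146/128 power-pin counts quoted elsewhere in the thread; an even current
# split across pins is assumed, which real sockets only approximate.
SCENARIOS = {
    "LGA 1151 v2 (146 power pins)": 146,
    "LGA 1151 v1 (128 power pins)": 128,
    "Test with 69 pins blocked (77 left, assumed v2 start)": 146 - 69,
}

for label, pins in SCENARIOS.items():
    stock_per_pin = 138.0 / pins       # A per pin at stock load
    oc_per_pin = 150.0 / pins          # A per pin while overclocked
    print(f"{label}: {stock_per_pin:.2f} A/pin stock, {oc_per_pin:.2f} A/pin OC")
# Roughly 0.9-1.2 A/pin normally and about 2 A/pin in the taped-off test,
# still well below the 5 A single-pin demo, which is the point being argued.
```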
 
He's not an EE. The course overlap between an ME and an EE is basically non-existent.

This. I don’t know who Roman is, but I’m actually an EE by education and I can confirm there is very little overlap between ME and EE courses. You take the same prerequisite math and science classes, a couple of the same engineering classes, and that’s it.

I have to just shake my head at this thread and especially one comment I read, which was something along the lines of “Well, he ran 5A through a pin and there was no sign of damage.” LOL. Is that the new basis for proof? Some guy runs 5A through a pin for a few minutes, the thing doesn’t shoot sparks, so it must be valid? LOL. As an engineer, I’ll trust Intel’s engineers on this one.

Now, that doesn’t mean Intel doesn’t deserve some criticism. They had to know the direction CPUs were headed so I’m a bit surprised they didn’t anticipate the need for additional power and ground pins and include them in the initial spec.
 
Color me just a tad skeptical on the "cash grab" argument for the 300 series requirement. What percentage of revenue does Intel expect and actually get from people upgrading their systems, vs. purchasing altogether new systems? If you are purchasing (or assembling) a brand new system, it doesn't matter if there is a new chipset requirement; you'll just purchase what is required and move on. While I do love the upgrade path that AMD affords us, I doubt Intel would lose much profit at all if they allowed more generations of CPUs on older generations of chipsets.
 
Color me just a tad skeptical on the "cash grab" argument for the 300 series requirement. What percentage of revenue does Intel expect and actually get from people upgrading their systems, vs. purchasing altogether new systems? If you are purchasing (or assembling) a brand new system, it doesn't matter if there is a new chipset requirement; you'll just purchase what is required and move on. While I do love the upgrade path that AMD affords us, I doubt Intel would lose much profit at all if they allowed more generations of CPUs on older generations of chipsets.

Exactly. The "cash grab" by Intel, if that's what it is, would result in very marginal gains. The vast majority of their CPUs are sold in prebuilt boxes to businesses or consumers through channels like Best Buy and Dell. People in enthusiast communities often forget how small of a market they represent. And let's face reality here - if I'm a hardcore PC enthusiast and a new CPU comes out, I'm probably going to upgrade my board anyway.
 
"The conclusion is that the LGA-1151v2 is absolutely unnecessary."

Anyone shocked by this?

Just like it was unnecessary to lock 7700K users out of going to a future motherboard. Or future motherboard users being able to use a Kaby Lake in an emergency... All marketing and $$$ decisions. The sad thing is it backfires by making it more likely for upgraders to cross-shop and jump to AMD, since they lock you into doing the motherboard and CPU at the same time.
 
Glad to see there are some actual engineers here who "get it" and don't just jump on the "Intel is trying to bilk us" bandwagon. And no, EE and ME coursework does not overlap, despite the old joke that ME is just EE without the imaginary numbers.

A car analogy that might work: the manufacturer says you need to use a 250 gph fuel pump, but in testing you find that out of the box, the car never needs more than 100 gph. But there's this handy knob on the dash that can turn up the turbo boost (this IS a K-series CPU, so unlocked for anything from running totally stock to a mild overclock to extreme stuff). Even turning the knob all the way up, it only needs 200 gph. Rip-off, you cry, the 250 gph pump is $100 more than the 200 gph pump. But the 200 gph pump would be working at 100% capacity, while the 250 gph pump would still have some margin. Mechanical wear vs. electrical, but what's going to last longer: the smaller one running flat out, or a slightly oversized one running more conservatively, assuming manufacturing quality was otherwise identical?

Or a real-world example. Fairbanks-Morse opposed-piston diesel engines powered many USN surface ships and submarines during WWII, and were generally praised for reliability and performance. So after the war, FM got into the railroad locomotive business, a great opportunity to maintain production of their diesel engines. And they more or less failed miserably, mostly over reliability issues. Say what? Well, in a ship at sea, the engines rarely ran flat out; absolute top speed was reserved for emergencies, and maximum sustained cruise was always something less than outright maximum speed. But in locomotive use, especially when pulling a hill, the engines had to constantly run flat out under maximum load. There was less cooling capacity, limited to whatever radiators and water tanks were on the locomotive, while the seagoing versions had a virtually limitless supply of cold seawater for cooling, even if indirectly through water-to-water heat exchangers. And because of the design of the opposed-piston engine, when a failure occurred in the lower piston, which was usually the case because this is where the exhaust ports were, the whole thing had to be disassembled to get to the bad cylinder. That was much more difficult than repairing a failed cylinder in any of the other competing designs. In the shipboard environment, they just didn't have these types of failures with any frequency, so ease of repair wasn't a prime consideration.

Bottom line: does it work with a bunch of power pins taped off? Obviously. Does it meet specs, or is it going to be just as reliable long term? The odds are against it. Is it going to blow up within a week? I doubt it. But by operating right at spec with no headroom, or even over spec, it WILL shorten the life expectancy. It's the same cringe factor I get when I see a hobbyist justify dropping resistor values by saying "well, the LED can handle 25 mA, so..." NO! You don't design to run at maximum 100% of the time. The NEC specifies less than the absolute maximum of the fuse rating for household branch circuits, too. Same reason: running at 100% all the time leaves absolutely no safety margin if something should ever change.
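
To put a number on the LED example, here's a minimal sketch of designing with headroom instead of at the absolute maximum rating. The 25 mA absolute maximum comes from the post above; the 5 V supply, 2.0 V forward voltage, and 70% derating target are illustrative assumptions, not values from any datasheet.

```python
# Minimal sketch: size an LED current-limiting resistor with headroom rather
# than designing at the absolute maximum rating. Supply voltage, forward
# voltage, and derating factor are assumed illustrative values.
V_SUPPLY = 5.0      # supply voltage (V), assumed
V_FORWARD = 2.0     # LED forward voltage drop (V), assumed
I_ABS_MAX = 0.025   # absolute maximum LED current (A), from the post above
DERATING = 0.7      # design target of ~70% of the absolute maximum, assumed

i_design = I_ABS_MAX * DERATING                        # derated operating current
r_at_abs_max = (V_SUPPLY - V_FORWARD) / I_ABS_MAX      # resistor if run at the limit
r_with_margin = (V_SUPPLY - V_FORWARD) / i_design      # resistor with headroom

print(f"Designed at the absolute maximum: {r_at_abs_max:.0f} ohms")
print(f"Designed with ~30% headroom:      {r_with_margin:.0f} ohms")
# 120 ohms vs. about 171 ohms: the larger resistor costs nothing extra but
# leaves margin for supply tolerance, temperature, and part-to-part variation.
```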
 
Not sure that a few extra pins are cause for any uproar when we do not have any visibility into the actual design documents.
As for the "we should all use AMD" argument, until AMD can hit the single-core performance numbers of comparable (price-wise) Intel processors, I just cannot justify going AMD. None of the games I play seem to leverage the multiple cores, and when I am playing something, I am usually not doing much else on that computer that would take advantage of the multiple cores.
I really do want to try AMD, but every time I start looking at a build, single-core performance seems to be the snag.
 
No one should take this amateur analysis seriously. Did he analyze the socket and CPU under elevated temperature conditions? Did he artificially age the socket and CPU to introduce oxide, sulfide, or nitride contaminants at the socket/CPU interface? Did he run the test for years? Did he factor in the potential impact of socket or CPU warp over time or from manufacturing variations, which might cause some pins to lose contact or have reduced contact? Did he look at how much extra power was wasted by the increased heat generated by reducing the number of pins? Did he consider the effect of accelerated reaction rates due to pin heating on long-term reliability?

And let's crank the numbers: the revised LGA 1151 adds 18 more power pins (going from 128 to 146).
The video demonstrates that each pin has a little more than 0.045 ohms of resistance (a 0.23 V voltage drop at 5 A).
That makes the aggregate power-pin resistance of the 1151v1 14% higher (0.35 milliohms) than that of the 1151v2 (0.31 milliohms).
Doesn't sound like much, BUT those Coffee Lake processors are pulling 138 A through those pins.
The extra power pins reduce the resulting voltage drop across the pins (not to mention the drop across the wiring connected to those pins) from 48 mV to 42 mV, with a corresponding decrease in the power dissipated just by the pins of almost 1 watt. And that's without overclocking. Sure, it's only 1 W, but it's 1 W in a place that's hard to get extra cooling to (what's the thermal conductance of the material they make sockets from, anyway?).
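
A quick sanity check of that arithmetic, treating the pins as identical parallel resistors at the ~0.046 Ω per pin implied by a 0.23 V drop at 5 A (both simplifications; small differences from the figures above are just rounding):

```python
# Sanity-check the aggregate power-pin figures above.
# Assumptions: ~0.046 ohm per pin (0.23 V at 5 A) and an even current split,
# i.e. the power pins are treated as identical parallel resistors.
R_PIN = 0.23 / 5.0          # ~0.046 ohm per pin
I_TOTAL = 138.0             # package current (A), from the post above

for pins in (128, 146):     # LGA 1151 v1 vs. v2 power-pin counts
    r_agg = R_PIN / pins                 # parallel combination of the pins
    v_drop = I_TOTAL * r_agg             # voltage drop across the pin field
    p_pins = I_TOTAL ** 2 * r_agg        # power dissipated in the pins themselves
    print(f"{pins} pins: {r_agg * 1e3:.3f} mohm, "
          f"{v_drop * 1e3:.1f} mV drop, {p_pins:.2f} W in the pins")
# Output: ~0.359 vs ~0.315 mohm, ~50 vs ~43 mV, ~6.8 vs ~6.0 W.
# The extra 18 pins remove a bit under 1 W of heat from the socket itself.
```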

So it's one thing to say "there's no need for these extra pins if you're just going to run your system for a week."
But it's an entirely different matter when your customers expect the socket-CPU interface to not be an issue for three years.
Or when your customers expect to be able to feed 150 A to an overclocked CPU without the socket melting.
It is disappointing when a well-respected voice in the community resorts to clickbait for views. Is revenue from YouTube really drying up that badly?
I always want the features from a new motherboard every time I get a new CPU anyway, so this isn't an important revelation to me. When Intel was talking about going BGA, I was fine with that too...

Maybe I don't buy often enough to see this as a problem.
My thoughts exactly. A lot of people on this very forum brag about how their Nehalem or Sandy Bridge rigs are still going strong. Guess what? That means you haven't upgraded your CPU in 8-11 years. Are you seriously wanting to put the latest and greatest CPU into an 8-11 year old motherboard?
 
Same reason: running at 100% all the time leaves absolutely no safety margin if something should ever change.
And something always does change: change is the only constant.
For example, passing electrical current through a metal-to-metal contact can cause reactions between the two metals and between those metals and the gases around them. The more current you run, the higher the temp, and the higher the contact pressure, the faster these reactions occur. These reactions can increase the contact resistance, which increases the heat generated at the contact, which increases the reaction rate, starting a "death spiral." Eventually, the metal-to-metal contact can become cold-welded together.

I've actually seen this happen, back in the 1990's on PCBs with gold-plated edge connectors going into gold-plated sockets. It's far worse when dissimilar metals or even alloys of the same metal are used for the socket and pin -- remember the tin-plated-DRAM-in-gold-plated-sockets problem back in the day?

A good Mechanical Engineer really ought to be aware of this effect, BTW.
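
A toy model of that death spiral, purely for illustration: contact dissipation raises temperature, temperature raises contact resistance, and past some current the loop stops settling. The thermal resistance, temperature coefficient, and ambient below are made-up numbers, not measurements of any real socket; only the per-pin resistance comes from earlier in the thread.

```python
# Toy electrothermal feedback model for a single contact, illustration only.
# All parameters except R0 are invented for demonstration purposes.
T_AMBIENT = 45.0   # deg C around the socket, assumed
THETA = 40.0       # deg C per watt from contact to ambient, assumed
ALPHA = 0.004      # fractional resistance increase per deg C, assumed
R0 = 0.046         # ohm per pin at ambient, the figure discussed earlier

def settle(current_a, max_steps=200):
    """Iterate P = I^2 * R(T), T = T_ambient + THETA * P until it converges or runs away."""
    temp = T_AMBIENT
    for _ in range(max_steps):
        resistance = R0 * (1 + ALPHA * (temp - T_AMBIENT))
        power = current_a ** 2 * resistance
        new_temp = T_AMBIENT + THETA * power
        if new_temp > 500:                 # treat this as thermal runaway
            return None
        if abs(new_temp - temp) < 1e-6:    # settled at a stable operating point
            return new_temp
        temp = new_temp
    return temp

for amps in (1.0, 2.0, 5.0, 15.0):
    result = settle(amps)
    verdict = f"settles near {result:.0f} C" if result else "runs away, no stable point"
    print(f"{amps:>4.0f} A through one contact: {verdict}")
# With these made-up parameters, 1 A and 2 A barely warm the contact, 5 A
# settles around 100 C, and 15 A never stabilizes. The point is the shape of
# the feedback loop, not the specific numbers.
```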
 
Not sure that a few extra pins are cause for any uproar when we do not have any visibility into the actual design documents.
As for the "we should all use AMD" argument, until AMD can hit the single-core performance numbers of comparable (price-wise) Intel processors, I just cannot justify going AMD. None of the games I play seem to leverage the multiple cores, and when I am playing something, I am usually not doing much else on that computer that would take advantage of the multiple cores.
I really do want to try AMD, but every time I start looking at a build, single-core performance seems to be the snag.

This is a good point. Intel may have those pins there for future CPUs or any number of reasons we aren't privy to. Intel has had extra pins in sockets before which would later be used for newer CPUs that didn't exist when the motherboards/sockets came out; Socket 7 is the best example of this. Sometimes upgrade paths are built into hardware, even if they ultimately go unused.

When it comes to chipsets, I don't think Intel thinks about us quite as much as they think of motherboard manufacturers as their customers. We are further down the line. We don't buy chipsets in their raw form. Motherboard manufacturers do. Intel no doubt marketed Z390 as a deal compared to Z370+Titan Ridge or Z370+ASM1143 or something like that. It probably costs the manufacturers more than Z370, but less than Z370 and an ASMedia 1143 USB 3.1 controller. It's all about cheaper integration.

Unfortunately, it isn't as if motherboard manufacturers really have much of a choice as to whether or not they'll design and build motherboards with the newer chipsets. However, Intel can entice them to go all-in on more models. The motherboard manufacturers are well aware of which chips will work with a given chipset and whether or not Z390 was or is absolutely needed for a given CPU. So, Intel can't snow them on that the way they can with the general populace.

So, cheaper integration was probably the key selling point. Of course, if motherboard manufacturers want to stay on good terms with Intel, then they do what Intel tells them to anyway. However, manufacturers often have a choice from a technical perspective so long as SKUs of specific chipsets are available. Again, Intel will do things to entice manufacturers into buying up the old stock or switching to newer parts which make Intel more profit.

Intel is far more manipulative than people are generally aware. But when it comes down to it, Intel is just trying to sell parts to its customers (motherboard manufacturers), whether they need them or not. It's not unlike dealing with a car dealer who sees you come in for an oil change with a two-year-old car and tries to sell you on a newer vehicle that does the exact same things yours does. You know you don't need it, but they make you want it or make you feel good about buying it even though you knew you didn't need it.
 
For as long as I can remember, Intel has been requiring a new socket with each new generation or two. AMD tends to use the same socket across 3+ generations. I'm sure from Intel's point of view it's less validation, verification, etc., and then you have no existing constraints that you're forced to live with, and you sell more motherboards.

I've upgraded my CPU a couple of times in the same computer. Socket 7: an AMD K6 200 to a K6-2 300, and I sold the old K6 on a local classified site. A Phenom X3 to a Phenom X4, and I don't recall what I did with the X3. I think I also upgraded an Athlon 64 CPU once. If the Ryzen 3700X is what the rumors say and it's compatible with older Ryzen boards, I'll consider upgrading a couple of Ryzen machines in my house.

With my Intel boxes I've just upgraded the motherboard and CPU together and it's been fine. It's a bit more expensive than upgrading just the CPU, but a leftover CPU is useless without a motherboard, so you end up either selling it on its own, putting it in a box somewhere, or buying a new board anyway.
 
Just like it was unnecessary to lock 7700K users out of going to a future motherboard. Or future motherboard users being able to use a Kaby Lake in an emergency... All marketing and $$$ decisions. The sad thing is it backfires by making it more likely for upgraders to cross-shop and jump to AMD, since they lock you into doing the motherboard and CPU at the same time.

So much this! This locking-out policy only seems to hurt those who can't afford to spend a lot. Like many others, I like to upgrade my motherboard when I get a new CPU as well.

Apparently, no one can make the pin argument other than Electrical Engineers from Intel. Everyone who tries is called a hack, followed by some car or wiring analogy.

That's fine, but why in the hell can't someone with a great working 7700K (possibly scored for cheap on eBay) use a decent Z370 board? It would have also been great if 6th-gen Celeron/i3 users could have kept their CPUs as a backup/troubleshooting option for their new systems. Can any of you engineers explain that one?

Like I said, it only hurts non-enthusiasts (which is probably why most here brush it off).
 
So why does Intel then need new boards if it's not for money? AMD can run a motherboard through many years and tons of upgrades, but the giant company Intel can barely do 2 upgrades within a year or so? Intel needed a new motherboard just to add 2 more cores?
 
For as long as I can remember, Intel has been requiring a new socket with each new generation or two. AMD tends to use the same socket across 3+ generations. I'm sure from Intel's point of view it's less validation, verification, etc., and then you have no existing constraints that you're forced to live with, and you sell more motherboards.

I've upgraded my CPU a couple of times in the same computer. Socket 7: an AMD K6 200 to a K6-2 300, and I sold the old K6 on a local classified site. A Phenom X3 to a Phenom X4, and I don't recall what I did with the X3. I think I also upgraded an Athlon 64 CPU once. If the Ryzen 3700X is what the rumors say and it's compatible with older Ryzen boards, I'll consider upgrading a couple of Ryzen machines in my house.

With my Intel boxes I've just upgraded the motherboard and CPU together and it's been fine. It's a bit more expensive than upgrading just the CPU, but a leftover CPU is useless without a motherboard, so you end up either selling it on its own, putting it in a box somewhere, or buying a new board anyway.

What people don't seem to understand are the inherent difficulties in maintaining an existing socket design over a prolonged period of time. It does increase QVL testing time. It creates issues with UEFI BIOS and firmware, as they have to support a larger range of processors. You also end up having to design the successor CPUs to work with older VRM designs. You also end up capped on what features you can implement because you're using a dated platform. This is what the AMD crowd often doesn't understand. I agree that there is a sweet spot for consumers and manufacturers which is somewhere in between AMD's excessively long socket compatibility and Intel's changing of sockets or platforms every single generation, or two in the case of HEDT.

Back in the day when processor performance doubled every 18 months and there were performance benefits to changing motherboards I wouldn't have minded as much. Unfortunately, motherboard technology has been somewhat stagnant as far as the broad feature set. Motherboard manufacturers try to innovate as much as they can to entice users to buy newer motherboards more often than they already have to, but that's a tougher sell when going back 4 or 5 years nets you pretty much the same features we have today.
 
So much this! This locking-out policy only seems to hurt those who can't afford to spend a lot. Like many others, I like to upgrade my motherboard when I get a new CPU as well.

Apparently, no one can make the pin argument other than Electrical Engineers from Intel. Everyone who tries is called a hack, followed by some car or wiring analogy.

Aren't you the guy who said "He ran 5 A on a pin and there was no sign of damage, so it was OK!" If you were the guy who said it, you clearly don't understand how engineering works.

This has been explained multiple times by engineers in this thread, who are not hacks. Go back and reread the posts from Megaslug and pcgeekesq. You don't engineer products to ride along the very edge of a specification. You engineer margins into your products to account for transient conditions or situations as well. This is exactly what Intel's engineers did and exactly what I did when I engineered products as well. Unlike der8auer (or whatever his name is), Intel has to warranty these products for years and is financially responsible for replacing them, so of course they're going to engineer them to last.

Like I said, it only hurts non-enthusiasts (which is probably why most here brush it off).

How does it hurt non-enthusiasts? How many non-enthusiasts are trying to upgrade just their CPU? Non-enthusiasts are your Best Buy and Dell users and only a tiny fraction of them would try swapping their CPUs. I've been an enthusiast for nearly 40 years and have swapped CPUs on motherboards maybe 3 times in all of that time. It is amazing the number of people in this thread who not only don't have a grasp of engineering practices, but also don't understand the PC market.
 
So why does Intel then need new boards if it's not for money? AMD can run a motherboard through many years and tons of upgrades, but the giant company Intel can barely do 2 upgrades within a year or so? Intel needed a new motherboard just to add 2 more cores?

Except they didn't need a new motherboard when they added 2 more cores for the 9900K. Nor did they need a new motherboard when they decided to increase LGA 2066 to 18 cores after they got wind of the 1950X.

But go ahead and believe the condescending engineers here who will make some analogy about water pumps or something and say that Intel did nothing wrong and it was absolutely necessary.
 
Everything is designed for way-overkill power delivery because it is a critical link that feeds forward into all the margins everywhere in the chip design. It affects di/dt curves, etc. All of that leads to margin assumptions in power management, decap, transistor performance ratios, etc. All of which feeds into FIT rates for the product. And those FIT rates feed into overall reliability for the system. Sure, you probably can run the pins at 5 A, but then your pin FIT rates jump through the roof, not to mention all your di/dt margining goes into the toilet, also increasing FIT.

And all of this is based on probabilities: the probability of a pad failing to make contact in the socket, the probability of an arc, metal fatigue, etc.

The best way to think about it: you could probably get away with replacing all the 15/20 amp breakers in your house with 30/40 amp breakers without going from 14/12 gauge wire to 10/8 gauge wire, but I doubt you'll get your homeowner's insurance to cover you with it.

It's actually more like an argument that 11 gauge wire would work when the spec "requires" 10 gauge (a 14% difference), after someone demonstrated that even 16 gauge didn't cause problems (a 50% difference).
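
For anyone unfamiliar with the FIT rates mentioned a couple of posts up: one FIT is one failure per billion device-hours. Here's a rough sketch of how a FIT figure translates into field failures; the FIT values and fleet size are invented purely for illustration.

```python
# Rough illustration of what a FIT number means in the field.
# 1 FIT = 1 failure per 1e9 device-hours. The FIT values and fleet size
# below are invented for illustration, not actual socket or CPU figures.
import math

HOURS_PER_YEAR = 24 * 365
FLEET = 1_000_000                 # hypothetical number of systems in the field

for fit in (10, 100, 1000):       # hypothetical failure rates
    lam = fit * 1e-9                                        # failures per device-hour
    p_fail_3yr = 1 - math.exp(-lam * 3 * HOURS_PER_YEAR)    # per-unit probability, 3 years
    expected = FLEET * p_fail_3yr                           # expected failures in the fleet
    print(f"{fit:5d} FIT: {p_fail_3yr * 100:.3f}% per unit over 3 years, "
          f"about {expected:,.0f} failures per million units")
# Scaling FIT by 10x scales expected field failures by roughly 10x, which is
# why "pin FIT rates jump through the roof" is not a trivial complaint.
```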
 
Notice that none of the engineers here have a lick of common sense. They have just perfected the art of armchair quarterbacking other people's work.

I am sure all of those people running a 2600K at over 4.6 GHz for 5+ years are all within Intel spec as well.

It's actually more like an argument that 11 gauge wire would work when the spec "requires" 10 gauge (a 14% difference), after someone demonstrated that even 16 gauge didn't cause problems (a 50% difference).

Also, without the scare tactic of burning your house down with no insurance. Of course, even us non-engineers know that length is a factor in determining wire gauge. "Look everyone, I know what gauge wire you need for a 30 amp breaker!"
 
Notice that none of the engineers here have a lick of common sense. They have just perfected the art of armchair quarterbacking other people's work.

I am sure all of those people running a 2600K at over 4.6 GHz for 5+ years are all within Intel spec as well.

Your lack of self-awareness is truly breathtaking. You're armchair quarterbacking Intel's decisions and don't even see it! LOL!!

Regarding overclocking, people are accepting that risk on their own accord. You're free to try installing an 8700K on a Z170 board - don't let us stop you! Just don't complain if something goes wrong and expect Intel to fix it.


Also, without the scare tactic of burning your house down with no insurance. Of course, even us non-engineers know that length is a factor in determining wire gauge. "Look everyone, I know what gauge wire you need for a 30 amp breaker!"

So, where is your EE degree from?
 
People are accepting that risk on their own accord. You're free to try installing an 8700K on a Z170 board - don't let us stop you! Just don't complain if something goes wrong and expect Intel to fix it.

What would you expect to happen? They are both 90-95W TDP processors running on a 14nm process when running stock speeds. Overclocking is always YMMV. The answer has been demonstrated repeatedly....NOTHING WOULD HAPPEN except people on older boards could have a 6 core processor. Instead these people (I am one of them) went to a Ryzen system rather than play Intel's fuck fuck games.
 
What would you expect to happen? They are both 90-95W TDP processors running on a 14nm process when running stock speeds. Overclocking is always YMMV. The answer has been demonstrated repeatedly....NOTHING WOULD HAPPEN except people on older boards could have a 6 core processor. Instead these people (I am one of them) went to a Ryzen system rather than play Intel's fuck fuck games.

You're confusing "something working" with "something working long term with no ill effects," which is what every single engineer in this thread has been saying. It is entirely possible an average user could pop an 8700k/9700k into a Z170 and Z270 and not see issues. It's also possible that after a couple of years or less, weird issues could arise. You must engineer products with safety margins in mind in order to withstand transient conditions or other potential complications which may arise. Intel and AMD both have multimillion dollar simulation systems which can simulate, in incredible detail, all sorts of complications, transient conditions, etc. That's why engineers in this thread have been trying to illustrate this using multiple analogies, most of which are 100% correct. All of us have, on occasion, cheaped out on something and cut corners to make something "work" which may be out of spec. Many times, it works well enough for as long as we need it but oftentimes, it blows up in our faces and we have to spend the money to do it right.

As I said, where Intel does deserve some scorn is because they really should have better anticipated the direction processors would go and engineered the socket correctly in the first place for multiple generations. I can assure you, THAT wasn't a decision an engineer made - that was probably a cost decision and maybe a marketing decision as well. EDIT: And let's be honest - when the 1151 socket was introduced, Intel had no clue it would last through 4 chipsets. They thought Ice Lake would've been released well before now, so they didn't worry about making a multi-generational socket. That blew up in their face.

At any rate, the contention that this is some sort of incredible, secret money grab scheme on Intel's part is LAUGHABLE given the number of people who would actually swap CPUs. Enthusiasts would buy new boards to begin with in all likelihood. Non-enthusiasts get their stuff from Dell and Best Buy and only a tiny percentage would ever contemplate swapping CPUs. There is plenty of material with which to jab Intel and this "cash grab" is not one of them. But, like I told the guy a few posts earlier - no one is stopping you from sticking an 8700k+ into your Z170 board. Be my guest. The guy's overclocking example was off-base as well, because every single person who overclocks knows that they're running out of spec and accepts that risk. You're free to take that same risk with a 9700k and a Z170 board as well.
 
Dang, the IDF is steaming now. The last guy asked where my EE degree is from even though he just quoted me saying I am not an engineer. Too hot-headed to even read. Seems unsafe.

Now they are deflecting the debate with the wall of academia. We all know only those with economics degrees can talk about money. Only those with business degrees can talk about running a business...
 
You're confusing "something working" with "something working long term with no ill effects," which is what every single engineer in this thread has been saying. It is entirely possible an average user could pop an 8700k/9700k into a Z170 and Z270 and not see issues. It's also possible that after a couple of years or less, weird issues could arise. You must engineer products with safety margins in mind in order to withstand transient conditions or other potential complications which may arise. Intel and AMD both have multimillion dollar simulation systems which can simulate, in incredible detail, all sorts of complications, transient conditions, etc. That's why engineers in this thread have been trying to illustrate this using multiple analogies, most of which are 100% correct. All of us have, on occasion, cheaped out on something and cut corners to make something "work" which may be out of spec. Many times, it works well enough for as long as we need it but oftentimes, it blows up in our faces and we have to spend the money to do it right.

As I said, where Intel does deserve some scorn is because they really should have better anticipated the direction processors would go and engineered the socket correctly in the first place for multiple generations. I can assure you, THAT wasn't a decision an engineer made - that was probably a cost decision and maybe a marketing decision as well.

At any rate, the contention that this is some sort of incredible, secret money grab scheme on Intel's part is LAUGHABLE given the number of people who would actually swap CPUs. Enthusiasts would buy new boards to begin with in all likelihood. Non-enthusiasts get their stuff from Dell and Best Buy and only a tiny percentage would ever contemplate swapping CPUs. There is plenty of material with which to jab Intel and this "cash grab" is not one of them. But, like I told the guy a few posts earlier - no one is stopping you from sticking an 8700k+ into your Z170 board. Be my guest. The guy's overclocking example was off-base as well, because every single person who overclocks knows that they're running out of spec and accept that risk. You're free to take that same risk with a 9700k and a Z170 board as well.

Respectfully, I disagree. I see the engineers' posts that just say it might not work, without any real data. Then I see a video where a guy makes it work and describes why it works and why it would continue to work, and I don't buy the engineers' assessments. Intel can go from 65nm to 45nm, 32 to 22, etc., in the same socket. Update the ME via BIOS updates from generation to generation. 14nm chips run from 2C/2T Celerons to 4C/8T i7s pulling 150 W when OC'd. Now, magically, they can't do that within their prescribed power envelope, right at the same time they would be looking at a new chipset revenue stream, when they didn't change the power envelope or the process they make the chips on? I don't buy it. There is absolutely zero reason to think that this wouldn't work long term other than Intel telling you so while they pocket money from chipset sales.

Intel historically allows 2 generations of CPUs on a chipset. Suddenly their 10nm parts aren't ready for the mainstream and they are extending their 14nm product line... BUT they still want that chipset revenue even though there is no reason to upgrade the chipset. Tada... a new 1151v2 socket! There's no such thing as a coincidence.
 
So why does Intel then need new boards if it's not for money? AMD can run a motherboard through many years and tons of upgrades, but the giant company Intel can barely do 2 upgrades within a year or so? Intel needed a new motherboard just to add 2 more cores?

Unfortunately, I think many people view the situation as you do. I'm not trying to be mean, but this thought process comes from being woefully uninformed about how these things work. There are several technical reasons why never updating your platform is bad and several reasons why it's good. Yes, selling additional motherboards and chipsets is part of the strategy, but it's not the only reason for this behavior. Let me be clear on one thing that many of you aren't going to like: AMD doesn't stick with a given platform because they like you, or because they are the people's champion, or for any other ideological nonsense people attribute to the company. AMD does what it does because of its position in the market and for budgetary reasons.

Let's go back to when Bullsh....or Bulldozer launched. It was launched with the 990FX chipset. This was a slight tweak on the older 890FX chipset which launched a year or so earlier. AMD knew it had a loser and sales sucked. It suffered for years with a design that failed to meet expectations and future iterations were an exercise in turd polishing. AMD didn't keep using the 990FX chipset and AM3+ socket for all those years because they were the good guys. AMD didn't do it because newer chipsets weren't necessary. AMD might have wanted to update the platform but chose not to spend the money on doing so. It did this because spending the money in R&D to make a new platform for a turd of a CPU that didn't sell worth shit made no sense. Keep in mind, Bulldozer generally only sold for one or more of three reasons:
  • It was bought because of its absolute bottom dollar pricing. That is, the cost of motherboard+CPU was absolutely lower than that of a comparable Intel setup. AMD doesn't like to position its products this way, but often does because it has to.
  • It was sold to existing AM3 users, who didn't want to upgrade motherboards due to the cost of doing so.
  • It was sold to rabid evangelical AMD fanbois who would buy literally anything so long as it wasn't an Intel product.
When you have a lackluster processor, spending money on updated platforms to keep pace with your competition makes no sense. All that would do is increase losses and cut profits even further. AMD knew that no one was going to buy motherboards with newer chipsets for a second-rate processor. It didn't make any sense. Motherboard manufacturers would also have had no incentive to produce such motherboards. If they were lucky, AMD would have sold chipsets for one or two models with each vendor, but it was unlikely to recoup development costs for newer chipsets. Motherboard manufacturers might not have bought the chipsets at all, given that board prices would have had to rise to recoup their own development costs. 990FX was a rebadged 890FX, with many of the motherboards of that era being copies of the previous generation, so nothing new was needed.

For those who do not remember, 990FX launched before Bulldozer did. Even though it was largely a refresh of 890FX with only a new C-state added, the VRMs had to be altered, as did much of the firmware, so that it could support an even wider range of CPUs. 990FX's launch was an absolute shit show, the likes of which wouldn't be repeated until socket AM4 launched with its X370 chipset. What seemed like minor changes resulted in a mess of boards that were so bad we gave up on 990FX testing entirely for quite some time. I had one PR person from one of the major manufacturers call me crying because the review was so negative she was afraid she'd lose her job. As I said, even the minor changes required to support a new generation of processors increase the complexity of things, and that creates a ton of problems. It seems nice for the consumer, but it just isn't that simple.

One other key factor many of you don't seem to understand is that performance per watt is king now. While largely irrelevant in the desktop arena so many of us live in, it's not irrelevant for mobile and server applications. Like it or not, all of our CPUs are modified versions of processors designed for those two markets. Part of how performance per watt is achieved is by revamping the overall platform to be more efficient. You can't always do that if newer CPUs have to utilize older, less efficient VRMs. You can't always support newer power-saving technology on older electrical designs. You can't necessarily build a processor to meet certain TDP requirements if you're using a VRM implementation that's 5 years old. CPU compatibility on Intel's side is simple as hell: buy the latest chipset and match the socket to the processor you are buying. On the AMD side it's relatively simple now, but back in the AM2/AM3+ days it wasn't. You had motherboards which supported certain TDPs, and some CPUs didn't work on some motherboards as a result. It was also problematic upgrading processors on certain motherboards because you would often have to keep or even borrow an older processor to flash a BIOS to support that newer CPU. That's a pain in the ass when oftentimes people sell their older hardware to reduce the costs of their upgrades. Problems like these are what you get when you stick with a given platform for longer than you should.

It's easy to vilify Intel. Believe me, I know more than most just how bad they can be. I know just how much influence the company has over its motherboard makers, and it's awful. But the reality is that Intel often does what it does for technical reasons, which aren't always apparent. It usually (although not always) makes for a better product. As others have said, just because it may be possible to make a 9900K work on a Z270 board or whatever doesn't mean it's a good idea over a long period of time. And again, those extra pins may be necessary for a newer product we haven't seen yet. AMD has forced Intel out of its apathetic state, and these newer designs may have some future-proofing built in to allow it to counter its reinvigorated rival should the need arise. Business isn't just about the best products; it's also about strategy. Intel kicked the shit out of AMD even when the latter offered a better processor than anything Intel had in its lineup. Its business practices were certainly questionable, but its strategy ultimately won in terms of sales figures.

That's a harder game to play now, and I don't think that's the route it will go in the near future. If you notice, Intel sort of had this "shotgun" approach to its processor lineup after Ryzen 7 and Threadripper launched. It made a bunch of shit and threw it all out there to see what stuck. The Intel Core i7 7740X was such a product. It was literally the same chip we had on Z170/Z270, placed into an LGA 2066 package with its iGPU disabled. Intel may be positioning its hardware to facilitate the launch of more "knee jerk" reaction products in the future. It's quite possible that extra ground pins or whatever are there in case it needs to drop a 10-core CPU or whatever into Z390. Intel has also used the same physical sockets with minor or even major electrical changes in the past. Those of you who remember the Socket 370 Celeron-only motherboards know what I'm talking about. It may have designed LGA 1151 to work that way, to reduce the time it takes for its future motherboards using that socket to reach the market. So a theoretical LGA 1151 v3 might use the same hardware as v2 or v1, but not necessarily be electrically compatible with those CPUs. This means that suppliers like Lotes and Foxconn only need to adjust the plastic "keys" on the socket and nothing else (if that) to give motherboard makers what they need to produce future motherboards.

The truth is, we don't know why Intel does all the things that it does. Saying that the socket update wasn't necessary based on one 24-hour test isn't 100% conclusive. It's interesting, but I wouldn't say it's concrete proof that the change wasn't necessary. Having said that, I have been doing reviews for more than a decade, and over the years I've asked about things like this and received answers off the record on such matters. All I will say is that the changes were not always necessary and were sometimes made simply because Intel said so. I won't divulge when this was the case, but I can tell you that I haven't any idea whether or not this is true of Z390. I haven't asked that question of anyone at the time of this writing, and I probably won't get an answer on this point until after the fact, when the product is no longer relevant.
 
Dang, the IDF is steaming now. The last guy asked where my EE degree is from even though he just quoted me saying I am not an engineer. Too hot-headed to even read. Seems unsafe.

You're not an engineer - you just play one on the forums, and badly at that. Go ahead and call me an IDF member - just don't tell my multiple Ryzen systems.

Now they are deflecting the debate with the wall of academia. We all know only those with economics degrees can talk about money. Only those with business degrees can talk about running a business...

If you're going to badmouth engineers (as you have several times in this thread), you'd better have some credentials.
 
You're not an engineer - you just play one on the forums, and badly at that. Go ahead and call me an IDF member - just don't tell my multiple Ryzen systems.



If you're going to badmouth engineers (as you have several times in this thread), you'd better have some credentials.

I'm not watching the stream, so I can't speak to what the engineers are saying. However, as I've done so many times in the past, I've questioned engineers on firearm designs and motherboards or CPUs and I know when someone is evading my questions. You do not have to be an engineer to know when this is happening.
 
Respectfully, I disagree. I see the engineers' posts that just say it might not work, without any real data. Then I see a video where a guy makes it work and describes why it works and why it would continue to work, and I don't buy the engineers' assessments. Intel can go from 65nm to 45nm, 32 to 22, etc., in the same socket. Update the ME via BIOS updates from generation to generation. 14nm chips run from 2C/2T Celerons to 4C/8T i7s pulling 150 W when OC'd. Now, magically, they can't do that within their prescribed power envelope, right at the same time they would be looking at a new chipset revenue stream, when they didn't change the power envelope or the process they make the chips on? I don't buy it. There is absolutely zero reason to think that this wouldn't work long term other than Intel telling you so while they pocket money from chipset sales.

I don't know how much better I can explain it. I thought my comment, "You're confusing 'something working' with 'something working long term with no ill effects,'" explained it pretty well. der8auer is not an EE. He does not have hundreds of millions of dollars in test equipment and simulators at his disposal. He has not tested the system with crappy boards and power supplies in a variety of conditions. Intel's engineers HAVE TO take all of that into account.

Intel historically allows 2 generations of CPUs on a chipset. Suddenly their 10nm parts aren't ready for the mainstream and they are extending their 14nm product line... BUT they still want that chipset revenue even though there is no reason to upgrade the chipset. Tada... a new 1151v2 socket! There's no such thing as a coincidence.

I'll repeat it again: chipset revenue from people who would otherwise just upgrade their CPU is MINUSCULE. It's a rounding error for Intel, and Intel doesn't care. Intel cares about the Dells, HPs, etc. of the world. This point is so blatantly obvious that it is completely shocking to me that it is debatable.
 
I'm not watching the stream, so I can't speak to what the engineers are saying. However, as I've done so many times in the past, I've questioned engineers on firearm designs and motherboards or CPUs and I know when someone is evading my questions. You do not have to be an engineer to know when this is happening.

So, talk to Intel's engineers then. Let's revisit what the premise to this entire argument is - we have people in here claiming that Intel is apparently reaping huge revenue from making people upgrade chipsets in order to upgrade their CPUs and because of that, they're lying about needing extra power and ground pins in the socket to force people to upgrade past Z270. They're claiming that there are apparently millions and millions of people who would simply drop in a new CPU if only Intel would "let" them. None of this is true and shows a huge lack of awareness of the PC market. And if these folks don't want to believe Intel's engineers, they can go ahead and try what der8auer showed - no skin off my back.

Intel's engineers would likely say what every single engineer in this thread has said: it might work, but there are certain conditions which would make the system less stable than if you just used a 6700K in a Z170 or a 7700K in a Z270 (for example).
 
You know, having two testicles isn't necessary, but I'm sure glad I have a spare.


Think about it.
 