An Analysis of the Z390 Socket Proves the Extra Pins Aren't Necessary

This thread


And why should they? How dare they question Intel if they don't have an EE degree? Why should anyone question what Ford does if they don't have an ME degree??

Dan_D makes a great point. Perhaps there is some other reason Intel did what they did, relating to power states or whatever.

And that's fine, good for Intel.

But man, did this thread expose who the new Shintels are. Using academic accolades as the only grounds for a debate... your professors would be proud.

Hilarious that actual engineering knowledge is so triggering....

And if you are going to raise questions about engineering issues, you should probably have an actual understanding of the engineering issues.
 
So the people who run Intel and probably a lot of the motherboard manufacturers are all a bunch of greedy dickheads. Why is this not a surprise to any of us? Now I feel good about keeping my Sandy Bridge procs & Z68 & Z77 motherboards so long.
 
So the people who run Intel and probably a lot of the motherboard manufacturers are all a bunch of greedy dickheads. Why is this not a surprise to any of us? Now I feel good about keeping my Sandy Bridge procs & Z68 & Z77 motherboards so long.

Why? You bought the best product on the market at the time it was available. It served you well for many years. This saved you money in the long run vs. the old days of needing to upgrade more frequently. I think people get caught up in their perceptions of a company's "personality." The reality is that AMD isn't more benevolent than Intel. They are just in a different market position and lack the influence and leverage of their rival. Shrewd business tactics are one of the main reasons why companies like Intel and NVIDIA can afford the R&D needed to stay ahead of their respective competitors. As I've said, business isn't just about having the best products. It's about making the right strategic moves at the right time.

Intel is a publicly traded company. This means that they are going to be driven to please their shareholders and do things to keep stock prices high. However, you should also be mad at AMD for the mismanagement that led to their dire financial situation and nearly a decade of sub-par products and an inability to compete with Intel on any serious level.
 
Just found some Asus Z390 boards have a hidden vcore offset.... so your golden 9900K might be just average XD

 
I think it also needs to be said that while I'm sure it's quite likely there are pins on AMD sockets over the years that have also been completely useless and unneeded, on the flip side, their sockets (and motherboards) have spanned many generations of processors. So in those instances it's more likely a case of future-proofing that ended up not being used than ass-hattery. Granted, who's to say that an Athlon 64 on S939 couldn't have functioned just as well in an AM3+ 980FX, but the fact remains that if they had added DDR2 slots to a 980FX board, well then AM2 chips could've easily worked still :p
And they did!
ASRock A785GM-LE AM3/AM2+/AM2 AMD 785G + SB710 Micro ATX AMD Motherboard: https://www.newegg.com/Product/Product.aspx?Item=N82E16813157274
 
He's still not an EE and still doesn't have access to hundreds of millions of dollars in simulators and test equipment.

irrelevant to the topic at hand. besides, he himself says that he expected the setup to go up in flames.

oh and also: your "millions of dollars in simulations and test equipment" mean jack shit when it's not the engineers who make marketing and product stack decisions, but the marketers and the bean counters.

Yes, no overlap. Either an actual EE or not. His degree is basically an ME with electives. He might have taken some 101 classes, but those are basically useless in this area, which relies on 300-400 level electromagnetics coursework.

ah, i see you have no idea what coursework his degree consists of and how much EE (and incidentally, semiconductor design and integrated circuit production) he took. i mean, the coursework is publicly available as a *.pdf. but again, google is too hard to use, i guess.

i can't believe i have to continue defending some youtuber.
 
I would cut Intel some slack on this one... if it weren't pretty much what they have been doing since Slot 1 replaced Socket 7.

I think we all know that move was mostly to keep AMD and Cyrix chips out of their boards.

Intel has kept doing the same thing basically every single generation since. Of course there are reasons to move sockets when DDR generations get a bump, etc. Still, Intel has been a one-and-done CPU-to-motherboard company since the PII.

As others have suggested, there is really no upside for Intel in allowing people to easily upgrade their CPU 2-3 years down the line. In general, most people who are in for a penny are in for a pound, so few upgraders saw a new board as a deal breaker. Intel's big OEM customers, the HPs and Dells etc., are quite fine with people looking at complete new MB/CPU/RAM packages if they wish to upgrade; all the more likely people just buy a new machine. So with both of those truths being truths, why exactly would Intel purposely over-engineer a socket with the intention of making the same chipset last a few generations to save a handful of upgraders a few bucks? Would it generate more sales than the loss of sales of the new chipsets? I doubt it... and the Slot 1 forward one-and-done status keeps OEMs happy.

On the other hand AMD... honestly has very few OEM clients. Even though Ryzen has been kicking backside... most mass-market OEMs that ship AMD are still shipping budget systems. So OEMs are unlikely to be too bent up about the potential loss of sales from someone dropping a new chip into a two-year-old machine. On the other hand, AMD's biggest source of chip sales is likely tinker [H] types who are attracted by an over-engineered socket designed to be used for a few years, or at least long enough to potentially allow an inexpensive chip upgrade down the road. So there is actual upside for AMD sticking to one platform.

Anyway, at the end of the day, the Intel fanatics are still going to buy their Intel MB and CPUs together... and if you mention the socket stuff their response is almost always "Cry some more you poor" lol. I joke; not every Intel fanatic sounds like an annoying spoiled teen... I guess. :) I built my first AMD machine in a long time not long ago... nothing fancy, just a Ryzen 2200G for my teen daughter. I have been a little shocked at how well it handles 1080p gaming... I really wasn't expecting to see solid 50-60fps at 1080p with medium and high settings, but there it is. It's made me excited to see what AMD has to talk about at CES. I can't believe I'm saying it... it sounds dirty, but I may just end up building some form of Ryzen 3000G chip system for myself. If the GPU performance is even just a bit better than my daughter's 2200G, it might do me for a good while before I bump the chip (and potentially GPU) up.
 
Only six hours of testing on a single sample is both statistically irrelevant and weak testing; it only proves that some people don't understand even the basics.
 
irrelevant to the topic at hand. besides, he himself says that he expected the setup to go up in flames.

oh and also: your "millions of dollars in simulations and test equipment" mean jack shit when it's not the engineers who make marketing and product stack decisions, but the marketers and the bean counters.

You’re wrong, and hilariously so. It makes all the difference. Intel’s engineers had their reasons for changing the socket and various engineers have explained why in this thread. You can continue operating under the notion that a marketing suit ran down to engineering and told them to add more ground and power pins, but that isn’t how it works.

You can maintain your position that Intel did this as a cash grab, because you're entitled to your opinion, no matter how wrong it is. As has been pointed out repeatedly, Intel gains very little increased revenue from this change, given how few buyers simply want to upgrade their CPUs. And the fact that this point has been brought up several times by many posters and none of the "Intel cash grab" advocates have addressed it is very telling.
 
Hilarious that actual engineering knowledge is so triggering....

And if you are going to raise questions about engineering issues, you should probably have an actual understanding of the engineering issues.

Careful now, employing that sort of fancy engineering logic may push some of these folks over the edge. How DARE we obtain EE degrees and an education and give feedback.
 
Careful now, employing that sort of fancy engineering logic may push some of these folks over the edge. How DARE we obtain EE degrees and an education and give feedback.
But you're like.... not an Intel engineer, so I'm going to have to disqualify you from having any real input, because if Intel's engineers didn't say it then it isn't real.

Lol at the number of people dismissing der8auer because he isn't an Intel engineer, who then explain why Intel isn't wrong and tell us to believe those pro-Intel posters who are also not Intel engineers.
 
Hilarious that actual engineering knowledge is so triggering....

And if you are going to raise questions about engineering issues, you should probably have an actual understanding of the engineering issues.
Unless you worked on Intel CPUs and have first-hand knowledge, any engineering knowledge is worthless. Only they know why.
Your guess is as good as any person's off the street.
 
Honestly, in some ways I don't see the point of a CPU-only upgrade, in the sense that I usually build the computer and use it as-is for longer than it would make sense to do a CPU-only upgrade... does that make any sense?
 
But you're like.... not an Intel engineer, so I'm going to have to disqualify you from having any real input, because if Intel's engineers didn't say it then it isn't real.

Lol at the number of people dismissing der8auer because he isn't an Intel engineer, who then explain why Intel isn't wrong and tell us to believe those pro-Intel posters who are also not Intel engineers.

Actually, none of us were the ones who brought up der8auer's engineering background - a "cash grab theorist" (my new name for you guys) brought it up in an attempt to shut down the argument and it backfired. At any rate, we're not dismissing der8auer's input. What we're saying is that just because he got it to work doesn't mean Intel is on a cash grab, as people here are taking his word as gospel. Big difference, and it is obviously lost on many folks in this thread. People here are pretending that because der8auer has it working it MUST mean Intel is cash grabbing, which is a logical fallacy. If der8auer overclocked a 9900K to 6 GHz, are you going to claim that all 9900Ks can clock to 6 GHz and, when you get one that doesn't, blame Intel for a "defective" product? Of course not.

Also, FYI - the practice of labeling anyone "pro Intel" because they point out the gaping holes in the "cash grab" theory shows a lack of critical thinking. I find it interesting and telling that not a single one of you "cash grab theorists" - not one - has been able to answer 1) How requiring new sockets hurts non-enthusiasts, as was claimed in this thread earlier 2) How Intel would magically make so much money by requiring an incredibly tiny percentage of people to buy new boards to upgrade their CPUs. The closest someone came was "Intel will do anything for a few bucks," and that was easily disproven.

Your guess is as good as any person's off the street.

That's a silly statement. A theory based on education and experience is obviously better than a random guy's opinion. If you're having stomach pains and encounter a mechanic and a doctor in a parking lot, who are you going to listen to if they tell you what it MIGHT be without any further testing or diagnosis? The reverse is true - if your car doesn't start in a parking lot and you see a doctor and a mechanic, the mechanic is probably the guy you're going to believe.
 
Now, that doesn't mean Intel doesn't deserve some criticism. They had to know the direction CPUs were headed, so I'm a bit surprised they didn't anticipate the need for additional power and ground pins and include them in the initial spec.
To be fair, when they had the Z170s they were probably road-mapping something like this:

(2016) Quad core>Quad core>Quad core>Quad core>Quad core>Quad core>Quad core>Quad core>Quad core (2030)

Anyone remember the 270>170 sticker that was being sold?
 
Actually, none of us were the ones who brought up der8auer's engineering background - a "cash grab theorist" (my new name for you guys) brought it up in an attempt to shut down the argument and it backfired. At any rate, we're not dismissing der8auer's input. What we're saying is that just because he got it to work doesn't mean Intel is on a cash grab, as people here are taking his word as gospel. Big difference, and it is obviously lost on many folks in this thread. People here are pretending that because der8auer has it working it MUST mean Intel is cash grabbing, which is a logical fallacy. If der8auer overclocked a 9900K to 6 GHz, are you going to claim that all 9900Ks can clock to 6 GHz and, when you get one that doesn't, blame Intel for a "defective" product? Of course not.

Also, FYI - the practice of labeling anyone "pro Intel" because they point out the gaping holes in the "cash grab" theory shows a lack of critical thinking. I find it interesting and telling that not a single one of you "cash grab theorists" - not one - has been able to answer 1) How requiring new sockets hurts non-enthusiasts, as was claimed in this thread earlier 2) How Intel would magically make so much money by requiring an incredibly tiny percentage of people to buy new boards to upgrade their CPUs. The closest someone came was "Intel will do anything for a few bucks," and that was easily disproven.



That's a silly statement. A theory based on education and experience is obviously better than a random guy's opinion. If you're having stomach pains and encounter a mechanic and a doctor in a parking lot, who are you going to listen to if they tell you what it MIGHT be without any further testing or diagnosis? The reverse is true - if your car doesn't start in a parking lot and you see a doctor and a mechanic, the mechanic is probably the guy you're going to believe.
I didn't bring up the cash grab theory. But you lumped me in good.
 
Glad to see there are some actual engineers here who "get it" and don't just jump on the "Intel is trying to bilk us" bandwagon. And no, EE and ME coursework do not overlap, despite the old joke that ME is just EE without the imaginary numbers.

A car analogy that might work. Say the car manufacturer says you need to use a 250gph fuel pump, but in testing you find that out of the box, the car never needs more than 100gph. But there's this handy knob on the dash that can turn up the turbo boost (this IS a K-series CPU, so unlocked for anything from running totally stock to a mild overclock to extreme stuff). Even turning the knob all the way up, it only needs 200gph. Rip-off, you cry, the 250gph pump is $100 more than the 200gph pump. But the 200gph pump would be working at 100% capacity, while the 250gph pump would still have some margin. Mechanical wear vs. electrical, but what's going to last longer: the smaller one running flat out, or a slightly oversized one running more conservatively, assuming manufacturing quality was otherwise identical?

Or a real-world example. Fairbanks-Morse opposed-piston diesel engines powered many USN surface ships and submarines during WWII, and were generally praised for reliability and performance. So after the war, FM got into the railroad locomotive business, a great opportunity to maintain production of their diesel engines. And they more or less failed miserably, mostly over reliability issues. Say what? Well, in a ship at sea, the engines rarely ran flat out; absolute top speed was reserved for emergencies, and maximum sustained cruise was always something less than outright maximum speed. But in locomotive use, especially when pulling a hill, the engines had to constantly run flat out under maximum load. There was less cooling capacity, limited to what radiators and water tanks were on the locomotive, while the seagoing versions had a virtually limitless supply of cold seawater for cooling, even if indirectly through water-to-water heat exchangers. And because of the design of the opposed-piston engine, when a failure occurred in the lower piston, which was usually the case because this is where the exhaust ports were, the whole thing had to be disassembled to get to the bad cylinder. Much more difficult than repairing a failed cylinder in any of the other competing designs. In the shipboard environment, they just didn't have these types of failures with any frequency, so ease of repair wasn't a prime consideration.

Bottom line: does it work with a bunch of power pins taped off? Obviously. Does it meet specs, or is it going to be just as reliable long term? Odds are against it. Is it going to blow up within a week? Doubt it. But by operating right at spec with no headroom, or even over spec, it WILL shorten the life expectancy. It's that same cringe factor I have when I see a hobbyist justify dropping resistor values by saying "well, the LED can handle 25mA, so.." NO! You don't design to run at maximum 100% of the time. The NEC specifies less than the absolute maximum of the fuse rating for household branch circuits, too. Same reason: running at 100% all the time leaves absolutely no safety margin if something should ever change.
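To put numbers on that LED example, here's a quick sketch of resistor sizing with a derating factor (all values illustrative, not from any particular datasheet):

```python
# Series resistor sizing for an LED, with derating.
# All numbers are illustrative, not from a specific datasheet.
V_SUPPLY  = 5.0     # supply voltage, volts
V_LED     = 2.0     # LED forward voltage drop, volts
I_ABS_MAX = 0.025   # the "LED can handle 25mA" absolute maximum
DERATE    = 0.70    # design to 70% of abs-max, not 100%

i_design = I_ABS_MAX * DERATE                # 17.5 mA design current
r_min    = (V_SUPPLY - V_LED) / i_design     # Ohm's law: R = V / I
print(f"Design current {i_design*1000:.1f} mA -> resistor >= {r_min:.0f} ohms")
# Designing right at 25 mA gives (5-2)/0.025 = 120 ohms and zero margin;
# derating to ~17.5 mA calls for ~171 ohms and leaves headroom for
# supply drift, temperature, and part tolerance.
```

Same idea as the socket pins: design below the absolute maximum so normal variation doesn't push you over it.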

This is the EXACT reason I bought the Seasonic 1000W platinum PSU that Jonnyguru raved about in his review, when it first came out, and people all over the overclocking world were flaming and verbally harassing me hardcore, saying I wasted my money since all I needed for that system was a 600W maximum.

Internet know-it-alls... I used to call them "teenagers" but now I know older adults can be just as stupid and selfish and entitled. Now I just call them "jerks".
That 1000W PSU is pouring nice steady voltage into my 9900K and Vega 64 space heater right now, more than 6 years later.

Why the hell does everyone these days want to just buy the bare minimum to get by, and insult anyone who wants a little extra safety net?
 
This is the EXACT reason I bought the Seasonic 1000W platinum PSU that Jonnyguru raved about in his review, when it first came out, and people all over the overclocking world were flaming and verbally harassing me hardcore, saying I wasted my money since all I needed for that system was a 600W maximum.

Internet know-it-alls... I used to call them "teenagers" but now I know older adults can be just as stupid and selfish and entitled. Now I just call them "jerks".
That 1000W PSU is pouring nice steady voltage into my 9900K and Vega 64 space heater right now, more than 6 years later.

Why the hell does everyone these days want to just buy the bare minimum to get by, and insult anyone who wants a little extra safety net?

Yeah, I have a Seasonic 1kW power supply in my Vega 56 system and an 850-watt one in my PowerColor RX 580 system. No point in going lowball on the power, and also, I have seen cheap power supplies die a flaming death; never again. Both power supplies are a few years old now. (I bought the Seasonic one when I had 2 x Furies and an overclocked FX 8350 back at the end of 2016.)
 
Dang, the IDF is steaming now. The last guy asked where my EE degree is from even though he just quoted me saying I am not an engineer. Too hot-headed to even read. Seems unsafe.

Now they are deflecting debate with the wall of academia. We all know only those with economics degrees can talk about money. Only those with business degrees can talk about running a business...

And how old are you anyway?
You're acting like one of those LTT forum groupies who are just out of high school being a hotshot pro in college, or possibly spazzing online because your AP Bio or Chem or Calculus is too much for you.
Yes, I'm in full ad hominem mode now because, no offense, you are a freaking troll. Or just some clueless "adult" teenager (or pure teenager).
 
Just found some Asus Z390 boards have a hidden vcore offset.... so your golden 9900K might be just average XD


This chart is COMPLETE UTTER TOTAL BULLSHIT.
And I'm sorry if I'm breaking any rules by cussing.
There is *NO* 100mV offset. I spoke to Elmor in PM (a former Asus employee and an engineer) about this. The "offset" claim is once again from CLUELESS people who don't do their research (e.g., the fake know-it-alls in this thread). Asus redesigned their SIO resistors to properly show TRUE CPU vcore. True CPU vcore hasn't been shown properly by sensors in many years, if EVER. That 1.30V load vcore you thought you were getting on your Z270 and Z170 and Z68 boards, etc.? It was more like 1.20V-1.25V (the higher the current draw/amps, the lower the true voltage)! Remember the fake rumor about "too high loadline calibration causing load voltage to boost higher than idle voltage"? HOGWASH. That would imply a NEGATIVE RESISTANCE (negative loadline), which is NOT possible on any known voltage controller! The sensors were simply being affected by ground-plane and power-plane impedance! They've been misreporting the voltage like this for years.
(Note I am NOT talking about transient voltage overshoots! Those can be dangerous with a 0 mOhm loadline and you NEED a good oscilloscope to measure those; sensors can't pick them up.)

Proof?
https://www.overclock.net/forum/27686004-post2664.html

Compare the SIO voltage readout with the CPU ON-DIE sense voltage. Oh look, that's where that 100mV "rumor" came from.
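For anyone lost on the loadline part, the droop is just Ohm's law across the loadline resistance. A minimal sketch, with an assumed loadline value (the real figure depends on the board and LLC setting):

```python
# Vdroop sketch: V_die = V_set - I_load * R_loadline.
# The 0.8 mOhm loadline here is an assumption for illustration only.
def v_die(v_set, i_load, r_ll):
    """True voltage at the die under a given load current."""
    return v_set - i_load * r_ll

V_SET = 1.30       # what the BIOS / old sensors report
R_LL  = 0.0008     # assumed 0.8 mOhm effective loadline

for amps in (50, 100, 150):
    print(f"{amps:>3} A load -> {v_die(V_SET, amps, R_LL):.3f} V at the die")
# 50 A -> 1.260 V, 100 A -> 1.220 V, 150 A -> 1.180 V.
# More current always means lower true vcore, never higher; load voltage
# above idle would require negative resistance, as said above.
```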
 
This chart is COMPLETE UTTER TOTAL BULLSHIT.
And I'm sorry if I'm breaking any rules by cussing.
There is *NO* 100mV offset. I spoke to Elmor in PM (a former Asus employee and an engineer) about this. The "offset" claim is once again from CLUELESS people who don't do their research (e.g., the fake know-it-alls in this thread). Asus redesigned their SIO resistors to properly show TRUE CPU vcore. True CPU vcore hasn't been shown properly by sensors in many years, if EVER. That 1.30V load vcore you thought you were getting on your Z270 and Z170 and Z68 boards, etc.? It was more like 1.20V-1.25V (the higher the current draw/amps, the lower the true voltage)! Remember the fake rumor about "too high loadline calibration causing load voltage to boost higher than idle voltage"? HOGWASH. That would imply a NEGATIVE RESISTANCE (negative loadline), which is NOT possible on any known voltage controller! The sensors were simply being affected by ground-plane and power-plane impedance! They've been misreporting the voltage like this for years.
(Note I am NOT talking about transient voltage overshoots! Those can be dangerous with a 0 mOhm loadline and you NEED a good oscilloscope to measure those; sensors can't pick them up.)

Proof?
https://www.overclock.net/forum/27686004-post2664.html

Compare the SIO voltage readout with the CPU ON-DIE sense voltage. Oh look, that's where that 100mV "rumor" came from.

Good to know, thanks for clarifying my fakenews lol
 
So he solders a giant wire to the surface-contact end of a pin, ramps it up to 5A in open air, and says it can handle it. Then he overclocks ONE processor in ONE motherboard with a bunch of power pins taped off and proclaims there's no need for the extra pins.

What about the ground and power pins being less than a millimeter away from each other in the actual socket, with a 95°C processor sitting above it? Could there be an arc if there's 5A running through them?

What about the CPU side? How much material is there between the pad and the power/ground planes in the substrate? Can it handle 5A safely?

Did he read the socket specs to see what the average pin resistance has to be (0.019Ω) and check his setup to make sure that wasn't exceeded?
Did he conduct his test across a wide variety of motherboards, CPUs, and environmental conditions?
Does he take into account the effects of aging, poor-quality materials, power transients, inductance, resistance, capacitance, noise, dirt, corrosion, physical damage, etc.?

If it were a simple cash grab, then moving the CPU ID pin would be enough. Adding extra power pins is an engineering decision and was done for a reason. Now, that reason may only be that the probabilities didn't look good, but it was for a reason.
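On that 0.019Ω figure: if that really is the maximum average per-pin resistance, the per-pin drop and heat at his 5A demo current are easy to sanity-check (a quick sketch using the spec value quoted above):

```python
# Per-pin voltage drop and heat at the quoted max average pin resistance.
R_PIN = 0.019   # ohms, the socket-spec figure quoted above

for amps in (1.0, 2.5, 5.0):
    v_drop = amps * R_PIN       # V = I * R
    p_heat = amps**2 * R_PIN    # P = I^2 * R, dissipated in the pin itself
    print(f"{amps:.1f} A: {v_drop*1000:3.0f} mV drop, {p_heat*1000:3.0f} mW of heat")
# At 5 A that's ~95 mV and ~475 mW per pin; inside a plastic socket
# under a hot CPU, that's a very different situation from one pin
# soldered to a fat wire in open air.
```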
 
So he solders a giant wire to the surface-contact end of a pin, ramps it up to 5A in open air, and says it can handle it. Then he overclocks ONE processor in ONE motherboard with a bunch of power pins taped off and proclaims there's no need for the extra pins.

What about the ground and power pins being less than a millimeter away from each other in the actual socket, with a 95°C processor sitting above it? Could there be an arc if there's 5A running through them?

What about the CPU side? How much material is there between the pad and the power/ground planes in the substrate? Can it handle 5A safely?

Did he read the socket specs to see what the average pin resistance has to be (0.019Ω) and check his setup to make sure that wasn't exceeded?
Did he conduct his test across a wide variety of motherboards, CPUs, and environmental conditions?
Does he take into account the effects of aging, poor-quality materials, power transients, inductance, resistance, capacitance, noise, dirt, corrosion, physical damage, etc.?

If it were a simple cash grab, then moving the CPU ID pin would be enough. Adding extra power pins is an engineering decision and was done for a reason. Now, that reason may only be that the probabilities didn't look good, but it was for a reason.

The 5A test was just to show how robust the pins were on their own. That was not the end of the test.

He then blocked off 69 pins, far more than the 18-pin difference from the Z170 era.

I suppose he could have run the test for 5 years, left it outside in harsh weather to get corrosion, had it sent to a war zone, and then gotten a real EE degree before reporting his findings.

Then and only then would it not be a "cash grab".
 
The 5A test was just to show how robust the pins were on their own.
Except it doesn't show anything if he soldered the connection to the surface contact.
A mere metal-to-metal contact has substantially different electrical characteristics than a soldered joint.
Plus he had the pin in free air, not inside an insulating socket block. It matters.

And he used DC current, when the real power-delivery problem with modern CPUs is keeping the supply voltage stable at the chip in the face of very fast and large current-draw changes and non-trivial package wiring inductance. Extra pins help reduce the effect of that inductance (paralleling inductors reduces the total inductance).

The whole video is amateur-hour hogwash. It proves nothing.
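The paralleling point is easy to quantify. Treating N roughly identical pins as ideal inductors in parallel (a best case; real pins also have mutual coupling):

```python
# Effective inductance of N identical pins in parallel (idealized).
# The ~1 nH per-pin figure is an order-of-magnitude assumption.
L_PIN_NH = 1.0

for n_pins in (32, 64, 128):
    l_eff = L_PIN_NH / n_pins   # 1/L_eff = N * (1/L_pin) for identical pins
    print(f"{n_pins:>3} pins -> {l_eff*1000:.1f} pH effective inductance")
# Fewer pins means higher effective inductance, and the same dI/dt
# then produces a bigger voltage disturbance (V = L * dI/dt) at the die.
```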
 
Except it doesn't show anything if he soldered the connection to the surface contact.
A mere metal-to-metal contact has substantially different electrical characteristics than a soldered joint.
Plus he had the pin in free air, not inside an insulating socket block. It matters.

And he used DC current, when the real power-delivery problem with modern CPUs is keeping the supply voltage stable at the chip in the face of very fast and large current-draw changes and non-trivial package wiring inductance. Extra pins help reduce the effect of that inductance (paralleling inductors reduces the total inductance).

The whole video is amateur-hour hogwash. It proves nothing.

Ok, then ignore the entire 5A part if you want. He still taped the pins and tested again under load.

Ryan975 just mentioned that and you literally just quoted my response.
 
Ok, then ignore the entire 5A part if you want. He still taped the pins and tested again under load.
You say that like it means something, as if one guy running one board on a test bench for a week has relevance for people running mission-critical CPUs in hot environments for years.

I have at times run my car tires at 20 PSI for longer than I should have, but it would be stupid for the manufacturer to spec that (instead of 33 PSI) as the recommended inflation pressure.

If you still don't get the point... then I hope you're not involved in a decision-making capacity in the making of anything I rely on.
 
Except it doesn't show anything if he soldered the connection to the surface contact.
A mere metal-to-metal contact has substantially different electrical characteristics than a soldered joint.
Plus he had the pin in free air, not inside an insulating socket block. It matters.

And he used DC current, when the real power-delivery problem with modern CPUs is keeping the supply voltage stable at the chip in the face of very fast and large current-draw changes and non-trivial package wiring inductance. Extra pins help reduce the effect of that inductance (paralleling inductors reduces the total inductance).

The whole video is amateur-hour hogwash. It proves nothing.
Weird. I thought the point of the video was that he had the CPU working in the MB that Intel said it would not run in? Are you saying it was fake? To me it proves it can work. Long term? Well, that hasn't been tested yet.
 
Weird. I thought the point of the video was that he had the CPU working in the MB that Intel said it would not run in?
Wrong. der8auer was running a Z390 with an i9-9900K. Intel says that CPU will run in that motherboard.

He claims (incorrectly) to be "simulating a Z270" by taping off 18 pins. Note that he did not tape off any of the additional ground pins that the i9-9900K uses: just the power pins. So he at best only half-simulated Z270 power delivery. Any CPU packaging designer (or anyone who understands why package inductances are such a pain for high-speed digital circuits) can tell you that ground pins matter a lot, both for power delivery and signal integrity.

Seriously, did you even watch the video?
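For what it's worth, here's the back-of-envelope DC side of the taping argument. Every number below is an assumption for illustration (I don't have Intel's pin map or der8auer's exact current draw), not a spec:

```python
# Rough average DC current per power pin with some pins taped off.
# Pin count and package current are assumptions, not Intel figures.
TOTAL_AMPS = 150    # assumed heavy-overclock package current
POWER_PINS = 128    # assumed Vcc pin count on the socket
TAPED      = 69     # pins taped off in the video, per this thread

for active in (POWER_PINS, POWER_PINS - 18, POWER_PINS - TAPED):
    print(f"{active:>3} active power pins -> {TOTAL_AMPS/active:.2f} A per pin")
# Even at 59 remaining pins, the DC average stays well under the 5 A
# he demoed on a single pin. But as argued above, the DC average is
# only half the story; transients and the ground return are the rest.
```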
 
Wrong. der8auer was running a Z390 with an i9-9900K. Intel says that CPU will run in that motherboard.

He claims (incorrectly) to be "simulating a Z270" by taping off 18 pins. Note that he did not tape off any of the additional ground pins that the i9-9900K uses: just the power pins. So he at best only half-simulated Z270 power delivery. Any CPU packaging designer (or anyone who understands why package inductances are such a pain for high-speed digital circuits) can tell you that ground pins matter a lot, both for power delivery and signal integrity.

Seriously, did you even watch the video?

/thread
 
Wrong. der8auer was running a Z390 with an i9-9900K. Intel says that CPU will run in that motherboard.

He claims (incorrectly) to be "simulating a Z270" by taping off 18 pins. Note that he did not tape off any of the additional ground pins that the i9-9900K uses: just the power pins. So he at best only half-simulated Z270 power delivery. Any CPU packaging designer (or anyone who understands why package inductances are such a pain for high-speed digital circuits) can tell you that ground pins matter a lot, both for power delivery and signal integrity.

Seriously, did you even watch the video?

Why does it matter if he went after the positive or negative?

Long ago we guessed wrong and thus in conventional current the positive flows to the negative. Symbols and diagrams are drawn backwards to this day because of this.

Reality is it doesn't matter where you squeeze the pipe as long as it gets squeezed. The whole point was to test how much current each pin could take.

IMO his test and methodology are sound.
 
Why does it matter if he went after the positive or negative?
Long ago we guessed wrong and thus in conventional current the positive flows to the negative. Symbols and diagrams are drawn backwards to this day because of this.
Reality is it doesn't matter where you squeeze the pipe as long as it gets squeezed. The whole point was to test how much current each pin could take.
IMO his test and methodology are sound.
1) Ground matters because signal levels are referenced to ground, not the supply voltage. As a result, when your ground voltage at the chip bounces because of a current spike causing a voltage drop across the inductance of a package lead, everything goes to hell, because the signals coming into the chip weren't referenced to that bouncing ground voltage when they were generated.

2) der8auer ran his long-run test by reducing the supply pins to the Z270 number but left all the grounds. Therefore, even from a DC power delivery view, he only simulated half the restriction on power delivery running in a Z270 would impose. It may not matter where the pipe gets squeezed, but when the pipe gets squeezed in two places, and you only replicate one of those squeezes, you're not replicating the entirety of the squeezing.

The fact that you didn't realize the above two things makes it clear you're not even an electrician, much less an EE, and makes your opinion of der8auer's methodology one that should be taken very lightly and with a pound of salt.
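To put an order-of-magnitude number on point 1), ground bounce is just V = L * dI/dt. All values below are assumed; real figures would need the actual package and socket model:

```python
# Ground bounce sketch: V = L * dI/dt. Values assumed, order of magnitude.
L_EFF = 10e-12   # assumed 10 pH effective ground inductance via the socket
DI    = 100.0    # assumed 100 A load step (cores waking, AVX burst, etc.)
DT    = 10e-9    # over 10 nanoseconds

bounce = L_EFF * DI / DT
print(f"Ground bounce: {bounce*1000:.0f} mV")   # 100 mV
# Remove ground pins and L_EFF rises, so the bounce grows, eating
# directly into the noise margin of every ground-referenced signal.
```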
 
1) Ground matters because signal levels are referenced to ground, not the supply voltage. As a result, when your ground voltage at the chip bounces because of a current spike causing a voltage drop across the inductance of a package lead, everything goes to hell, because the signals coming into the chip weren't referenced to that bouncing ground voltage when they were generated.

2) der8auer ran his long-run test by reducing the supply pins to the Z270 number but left all the grounds. Therefore, even from a DC power delivery view, he only simulated half the restriction on power delivery running in a Z270 would impose. It may not matter where the pipe gets squeezed, but when the pipe gets squeezed in two places, and you only replicate one of those squeezes, you're not replicating the entirety of the squeezing.

The fact that you didn't realize the above two things makes it clear you're not even an electrician, much less an EE, and makes your opinion of der8auer's methodology one that should be taken very lightly and with a pound of salt.

It already has about 3 times as many ground pins as power pins:
[attached image: pins.PNG, a table of the socket's ground vs. power pin counts]


der8auer was correct in restricting the power pins, as I doubt the ground path was 3x as long.
 
It already has about 3 times as many ground pins as power pins:
der8auer was correct in restricting the power pins, as I doubt the ground path was 3x as long.
Nonsense. He should have restricted both power and ground. And why didn't he? Did he run out of tape?

Besides, his stress test was "small FFT": hardly a typical use of a CPU, and unlikely to generate the inter-chip signal traffic and resulting current spikes of real applications. It wouldn't surprise me if his "small FFT" workload ran entirely out of on-chip cache and never stressed I/O signal integrity.

His testing proved nothing about how necessary the extra power and ground pins are for reliable long-term execution of all applications in the full range of specified environmental conditions the chip is spec'd for. And that's what Intel is after, not "run small FFT at room temp for a night." Any HW validation engineer who did what he did and represented the results to his boss as being meaningful would probably and rightfully be fired.

It's as if he ran his car tires at 20 psi in the spring for a day with just one passenger for a load, and is now telling everyone that the extra 13 psi the manufacturer calls for is unnecessary.

Like you, der8auer doesn't seem to understand all the problems the extra pins may be intended to solve. It may not just be supply current magnitude, as he seemed to think. It may instead be providing a sufficiently low-inductance path that worst-case supply current dI/dt (change in current over time) doesn't cause big voltage bounces on both Vcc and ground, which can play havoc with reliability. And it may not have been just for this generation of CPUs, which are widely considered emergency stop-gap products developed only because the 10nm CPUs weren't ready for volume production when planned.
 
1) Ground matters because signal levels are referenced to ground, not the supply voltage. As a result, when your ground voltage at the chip bounces because of a current spike causing a voltage drop across the inductance of a package lead, everything goes to hell, because the signals coming into the chip weren't referenced to that bouncing ground voltage when they were generated.

Having a reference is just picking a potential in the circuit.

The chip wouldn't even function if there wasn't a plane between the pins and the chip.

Voltage bounces, which is why capacitors exist.

2) der8auer ran his long-run test by reducing the supply pins to the Z270 number but left all the grounds. Therefore, even from a DC power delivery view, he only simulated half the restriction on power delivery running in a Z270 would impose. It may not matter where the pipe gets squeezed, but when the pipe gets squeezed in two places, and you only replicate one of those squeezes, you're not replicating the entirety of the squeezing.

Anyone who's built a circuit understands a sacrificial component. Two circuit breakers in series do nothing to improve the circuit.

The fact that you didn't realize the above two things makes it clear you're not even an electrician, much less a EE, and makes your opinion of der8auer's methodology one that should be taken very lightly and with a pound of salt.

Yeah, I'm neither, but you seem to go after the messenger rather than the message. If you're going to go after qualifications, please state your qualifications.
 
I worked as an electrical and computer systems engineer for three decades, including 15 years at Intel designing CPUs.
As an engineer, I became a named inventor on dozens of patents, which led me to becoming a patent attorney.
Now I work with clients on a range of technologies, including CPUs and other high-speed digital circuits.

So, STFU.

But since this is the internet, I could be a genetically engineered smooth-coated collie with a genius IQ and the strength of 100 men. How would you know?

That said, I think my posts here provide adequate evidence of my claim that I possess pertinent engineering expertise.
Yours don't, and neither does der8auer's pin-taping video.

That's really impressive. Thank you for that.

With all that, I'm really surprised you can't see what this video is and what it isn't.

Yes, it isn't perfect, but it also isn't without merit.

My posts are not up to yours; however, you are quite dismissive of what someone with even a basic understanding of circuits would see.

I'm sure both you and Intel know exactly how much each pin on each side of the circuit can take. I wonder if that known tolerance would allow the chips to run on both boards?

In the end: should everyone without an EE degree just STFU and leave the board, or are we allowed to use what knowledge we have and participate in the conversation?
 
I worked as an electrical and computer systems engineer for three decades, including 15 years at Intel designing CPUs.
As an engineer, I became a named inventor on dozens of patents, which led me to becoming a patent attorney.
Now I work with clients on a range of technologies, including CPUs and other high-speed digital circuits.

So, STFU.

But since this is the internet, I could be a genetically engineered smooth-coated collie with a genius IQ and the strength of 100 men. How would you know?

That said, I think my posts here provide adequate evidence of my claim that I possess pertinent engineering expertise.
Yours don't, and neither does der8auer's pin-taping video.

This, my friends, is called ownage. The armchair engineers in here arguing to the contrary (you know who you are) have made fools of themselves by arguing with actual EEs, at least two of whom worked at Intel... on CPU designs. But hey, a YouTube celebrity who allegedly has this vast CPU experience (who only graduated in 2016, by the way) obviously knows more. LOL.

“Daaammnn,” indeed. This thread should probably be locked out of mercy. :D
 