An Analysis of the Z390 Socket Proves the Extra Pins Aren't Necessary

cageymaru

Der8auer on YouTube (Roman Hartung) has performed an in-depth analysis of the Z390 socket where he physically extracted the pins from a dead Z370 board and measured how much of a load each pin can take. After applying up to 5 amperes of current to a pin, he concluded that it can easily withstand the rigors of a daily load of 1.01 amperes. Then he taped off 18 pins on an i9-9900K to simulate it running in a Z270 motherboard. This increased the load on each remaining pin, but no adverse changes were observed in the socket or the board. He then taped off varying numbers of pins and tested the i9-9900K with up to 69 pins taped off, which created a 1.92 ampere per-pin load. Again, no changes were observed during testing that exceeded 6 hours. The conclusion is that the LGA-1151v2 socket is absolutely unnecessary.
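For anyone who wants to sanity-check the per-pin figures above, here's a rough back-of-the-envelope sketch (mine, not der8auer's). It assumes the ~146 VCC pins of LGA-1151v2 and the roughly 147 A of package current implied by the 1.01 A/pin baseline; both numbers are assumptions for illustration.

```python
# Back-of-the-envelope check of the per-pin current figures quoted above.
# Assumptions (mine, for illustration): ~146 VCC pins on LGA-1151v2 and the
# ~147 A of total package current implied by the 1.01 A/pin baseline figure.

TOTAL_VCC_PINS = 146
BASELINE_AMPS_PER_PIN = 1.01
package_current = TOTAL_VCC_PINS * BASELINE_AMPS_PER_PIN  # ~147.5 A total

for taped_off in (0, 18, 69):
    active_pins = TOTAL_VCC_PINS - taped_off
    amps_per_pin = package_current / active_pins
    print(f"{taped_off:>2} pins taped off -> {active_pins:>3} active pins, "
          f"{amps_per_pin:.2f} A per pin")

# Prints roughly:
#  0 pins taped off -> 146 active pins, 1.01 A per pin
# 18 pins taped off -> 128 active pins, 1.15 A per pin
# 69 pins taped off ->  77 active pins, 1.92 A per pin
```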

Roman Hartung has proven very clearly that the LGA-1151v2 socket is basically completely unnecessary. The pins withstand even a very limited power supply without any problems, there is no damage to the mainboard, the socket or the processor. This also proves once more that Intel probably didn't allow compatibility with the old motherboards for sales reasons.
 
It's important for videos like this to be created so that there is clear proof that Intel is purposely penalizing its own customers for buying its products. Yes, they can argue that the new boards also add more features and that's why they make the changes, but everyone knows they are just shamelessly torturing their own users. If AMD is able to drop the miracle that is Zen 3, we may finally have a leader in the CPU market everyone can get behind. CES can't come soon enough.
 
This is ancient news. AMD got big as a second-source licensed x86 clone maker for IBM. Once AMD started selling enough 486 CPUs (often dropped into Intel-chipset boards in IBM clones) that it was taking a noticeable bite out of their pocketbook, Intel decided to become a complete chipset platform company. Ever since, it has made it difficult to avoid changing to a matching chipset with each new processor release.
 
I think it also needs to be said that while I'm sure there are quite likely pins on AMD sockets over the years that have also been completely useless and unneeded, on the flip side, their sockets (and motherboards) have spanned many generations of processors. So in those instances it's more likely a case of future-proofing that ended up not being used than ass-hattery. Granted, who's to say that an Athlon 64 on S939 couldn't have functioned just as well in an AM3+ 980FX, but the fact remains that if they had added DDR2 slots to a 980FX board, then AM2 chips could've easily still worked :p
 
Damn, well I wonder how many times an actual jump in pins was a must versus backwards compatibility with an older chipset.
 
Damn, well I wonder how many times an actual jump in pins was a must versus backwards compatibility with an older chipset.

With Intel? Probably only necessary every couple of generations (significant changes to architecture / features). Incremental improvement releases shouldn't need a new socket; those are purely a revenue grab.
 
For those too lazy to read, the assertion here is that even the latest i9 9900k should be capable of running on the Z170 chipset (with some modification, of course).

I am never surprised by unlimited greed. I wouldn't be surprised if it was the motherboard manufacturers who begged Intel to go down this path just so they could rehash the same shit with more LED bling. Whatever keeps the cash flowing.
 
I skimmed the video. He taped/cut off power/ground pins and found it still works?

If that is all he did, that is not quite enough for me. There is a difference between designing something to a spec and testing to see if something works.
 
 
Didn't watch the video, but would taping off said pins effectively reduce the number of power supply phases that normally feed power to the CPU?

If so, I bet it will have an effect on stability, especially when overclocking, and it will also reduce the lifespan of the phases still in use since they will be worked harder.

Personally, I think that up to a point, the more power and ground pins the better.

Even though those "extra" pins are not necessary for the CPU to function, I kinda think that they are there for a purpose.

What about redundancy if nothing else?

What about testing the same theory on the previous boards? How much redundancy and/or power draw load balancing is there?

I would love for Intel to come back with an article explaining their design decisions.
 
No one should take this amateur analysis seriously. Did he analyze the socket and CPU under elevated temperature conditions? Did he artificially age the socket and CPU to introduce oxide, sulfide, or nitride contaminants at the socket/CPU interface? Did he run the test for years? Did he factor in the potential impact of socket or CPU warp over time or from manufacturing variations, which might cause some pins to lose contact or have reduced contact? Did he look at how much extra power was wasted by the increased heat generated by reducing the number of pins? Did he consider the effect of accelerated reaction rates due to pin heating on long-term reliability?

And let's crunch the numbers: the revised LGA 1151 adds 18 more power pins (going from 128 to 146).
The video demonstrates that each pin has a little more than 0.045 ohms of resistance (a 0.23 V voltage drop at 5 A).
That makes the aggregate power pin resistance of 1151v1 about 14% higher (0.35 milliohms) than that of 1151v2 (0.31 milliohms).
Doesn't sound like much, BUT those Coffee Lake processors are pulling 138 A through those pins.
The extra power pins reduce the resulting voltage drop across the pins (not to mention the drop across the wiring connected to those pins) from 48 mV to 42 mV, with a corresponding decrease in power dissipated just by the pins of almost 1 watt. And that's without overclocking. Sure, it's only 1 W, but it's 1 W in a place that's hard to get extra cooling to -- what's the thermal conductance of the material they make sockets from, anyway?
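If you want to check my math, here's a quick sketch of the same arithmetic in Python; it just reuses the rounded 0.045-ohm per-pin figure and the 138 A package current from above, so treat it as a sanity check rather than a thermal model.

```python
# Sanity check of the pin-resistance arithmetic above.
# Inputs are the rounded figures from the post: ~0.045 ohm per power pin
# (0.23 V drop at 5 A) and ~138 A of package current.

R_PIN = 0.045          # ohms per power pin
PACKAGE_CURRENT = 138  # amps pulled through the VCC pins

for name, pins in (("LGA-1151 v1", 128), ("LGA-1151 v2", 146)):
    r_total = R_PIN / pins                   # identical pins in parallel
    v_drop = PACKAGE_CURRENT * r_total       # voltage lost across the pins
    p_loss = PACKAGE_CURRENT ** 2 * r_total  # heat dissipated in the pins
    print(f"{name}: {r_total * 1e3:.3f} mOhm, "
          f"{v_drop * 1e3:.1f} mV drop, {p_loss:.2f} W in the socket")

# Prints roughly:
# LGA-1151 v1: 0.352 mOhm, 48.5 mV drop, 6.70 W in the socket
# LGA-1151 v2: 0.308 mOhm, 42.5 mV drop, 5.87 W in the socket
# i.e. about 0.8 W less heat dumped into the pins with the extra 18 pins.
```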

So it's one thing to say "there's no need for these extra pins if you're just going to run your system for a week."
But it's an entirely different matter when your customers expect the socket-CPU interface to not be an issue for three years.
Or when your customers expect to be able to feed 150A to an overclocked CPU without the socket melting.
 
With Intel? Probably only necessary every couple of generations (significant changes to architecture / features). Incremental improvement releases shouldn't need a new socket; those are purely a revenue grab.

Yeah, I wonder though. I mean, launching a new chipset and then going through marketing, etc., etc. ... wouldn't that hurt your bottom line, especially if it's a lower-selling stopgap product? Mobo manufacturers can always inflate profits with premium prosumer boards and lines like Asus's ROG lineup, but Intel can still only really sell two specific lines of chipsets, a B tier and an A tier... I might be missing a lot here between the lines.
 
Didn't watch the video, but would taping off said pins effectively reduce the number of power supply phases that normally feed power to the CPU?

If so, I bet it will have an effect on stability, especially when overclocking, and it will also reduce the lifespan of the phases still in use since they will be worked harder.

Personally, I think that up to a point, the more power and ground pins the better.

Even though those "extra" pins are not necessary for the CPU to function, I kinda think that they are there for a purpose.

What about redundancy if nothing else?

What about testing the same theory on the previous boards? How much redundancy and/or power draw load balancing is there?

I would love for Intel to come back with an article explaining their design decisions.

They did load testing to prove the pins could handle the load without the additional ones. This makes it seem like Intel wired in a thermostat for a furnace with jumper cables.

Edit: never mind, I'm way off base. It seems like a 14% increase in power pins, which doesn't sound unreasonable for doubling the core count over Skylake. Unless Skylake's power delivery was already way overkill.
 
I think it also needs to be said that while I'm sure there are quite likely pins on AMD sockets over the years that have also been completely useless and unneeded, on the flip side, their sockets (and motherboards) have spanned many generations of processors. So in those instances it's more likely a case of future-proofing that ended up not being used than ass-hattery. Granted, who's to say that an Athlon 64 on S939 couldn't have functioned just as well in an AM3+ 980FX, but the fact remains that if they had added DDR2 slots to a 980FX board, then AM2 chips could've easily still worked :p

The socket specs are rather similar, yes; AM1, 2, and 3 share the socket design with a pin count within +/- 1 or 2 or so.
But the Phenom II had both DDR2 and DDR3 memory controllers, which let you use one CPU in two sockets; a fun bit of trivia that made my rather peculiar setup possible:

Phenom II 1055T
8 GB DDR2
nForce 3 chipset
Voodoo 2 AGP or 9800 Pro AGP in Win 98SE :) or Win 7, perfectly compatible with anything.
 
It's nice when someone proves the obvious in a formal manner. The only Intel product I have any interest in is 3D XPoint, but Micron is expected to release their version this year, so yeah. Fuck Intel.
 
We all knew this the second Intel announced it. Funny thing is, if I didn't need to replace my mobo, I would have considered getting the 9900K to replace my 7700K. Looks like Intel shot themselves in the foot here.

Same here. I would be interested in getting some more cores, even though I don't need them. But I'm not going to replace a perfectly good, premium mainboard, especially one that's only been artificially limited.
 
This is ancient news. AMD got big as a second-source licensed x86 clone maker for IBM. Once AMD started selling enough 486 CPUs (often dropped into Intel-chipset boards in IBM clones) that it was taking a noticeable bite out of their pocketbook, Intel decided to become a complete chipset platform company. Ever since, it has made it difficult to avoid changing to a matching chipset with each new processor release.

Still, it would be nice if AMD and Intel had nothing to do with the chipset platform and instead someone like IBM created the platforms and all the specs. It's interesting; with several YouTubers a few months back on Discord, I brought up the point that there was no real way to compare CPUs and that the differences were arbitrary, because you can't test them correctly. With the scientific method you have to eliminate variables, meaning for a proper comparison only the CPUs should be different; but in the case of CPUs you are really testing the ecosystem, not the individual CPUs, unless they are on the same platform with the same constants. Hence comparing a Ryzen 2700 to a 9900K is a moot point, as there are way too many variables to account for in the result.
 
I can disconnect four of the eight spark plugs on my V8 and still make it home; that doesn't make the extra cylinders a cash grab.

That is a terrible analogy.

It would be closer to saying that you need a new truck frame because you want to install a V-8 after your V-6 died.

The new truck frame has a few more brackets that are not even needed to install the V-8.
 
That is a terrible analogy.

It would be closer to saying that you need a new truck frame because you want to install a V-8 after your V-6 died.

The new truck frame has a few more brackets that are not even needed to install the V-8.

They literally reduced the number of power pins and said Intel is screwing the customer because it works without them. This is exactly the same as dropping cylinders. Just because it works doesn't mean it's right.

Maybe Intel is jacking us around and it's a blatant 'fuck you, because we can', or maybe there is some obscene edge case that caused the v1 socket to burst into flames.
 
The socket specs are rather similar, yes; AM1, 2, and 3 share the socket design with a pin count within +/- 1 or 2 or so.
But the Phenom II had both DDR2 and DDR3 memory controllers, which let you use one CPU in two sockets; a fun bit of trivia that made my rather peculiar setup possible:

Phenom II 1055T
8 GB DDR2
nForce 3 chipset
Voodoo 2 AGP or 9800 Pro AGP in Win 98SE :) or Win 7, perfectly compatible with anything.
Indeed. A friend and I were trying to figure out why... I think it was his 1090T that wouldn't work in his old AM2+ board. I told him that mine was reporting (via AIDA64) to indeed have both the DDR2 and DDR3 controllers, so technically it should have worked fine. I'm sure it was all down to a BIOS incompatibility on his end. Even the Athlon II, which was still Phenom II based, had the DDR2 controller intact.

Here's another similar tidbit that I just came across recently that wouldn't surprise me if no one knows...
The Intel DDR4 CPUs still support DDR3, up until the 8th gen at least.
Examples: the 6700K and 7700K both list DDR3 under "Memory Types" :D
 
This is ancient news. AMD got big as a second-source licensed x86 clone maker for IBM. Once AMD started selling enough 486 CPUs (often dropped into Intel-chipset boards in IBM clones) that it was taking a noticeable bite out of their pocketbook, Intel decided to become a complete chipset platform company. Ever since, it has made it difficult to avoid changing to a matching chipset with each new processor release.

Well, I think the point here is that previously they would actually change the entire pinout, moving the location of many pins as well as physically adding or removing a couple, so that you physically couldn't use a different-generation processor. This time, literally all they did was unreserve 32 pins and use them for power (and add blocking in the BIOS). Had they been less lazy and added an extra pin and moved a few dozen around, nobody would be able to do this.
 
They literally reduced the number of power pins and said Intel is screwing the customer because it works without them. This is exactly the same as dropping cylinders. Just because it works doesn't mean it's right.

Maybe Intel is jacking us around and it's a blatant 'fuck you, because we can', or maybe there is some obscene edge case that caused the v1 socket to burst into flames.
Actually, it is a bad analogy, because in his testing there isn't any performance loss. You've lost 50% of the work in yours.
 
Indeed. A friend and I were trying to figure out why... I think it was his 1090T that wouldn't work in his old AM2+ board. I told him that mine was reporting (via AIDA64) to indeed have both the DDR2 and DDR3 controllers, so technically it should have worked fine. I'm sure it was all down to a BIOS incompatibility on his end. Even the Athlon II, which was still Phenom II based, had the DDR2 controller intact.

Here's another similar tidbit that I just came across recently that wouldn't surprise me if no one knows...
The Intel DDR4 CPUs still support DDR3, up until the 8th gen at least.
Examples: the 6700K and 7700K both list DDR3 under "Memory Types" :D

Please do take a look at Apple's laptops; they've been using LPDDR3 on the latest and greatest Intel chips.


I've seen those too; many are related to a bad connection to the socket, and not overcurrent per se - too little pressure.
A friend who was a little too careful had this issue twice before I stepped in and told him to man up when installing coolers, and his rig has been running fine ever since - and that was a stock 2600K!!

I've also seen many no-POST situations due to over-tight seating of the cooler on LGA... TBH, LGA is a bitch, because even when the CPU is seated it's a rather big pain in the ass, while PGA is just 'be careful before installing' and then it's OK.
 