How Intel's changing the future of power supplies with its ATX12VO spec

AMD is pushing HBM, metal APIs, chiplets, HSA, etc. Intel's got to do something innovative, right?

Dell, HP, and presumably other major OEMs have been doing proprietary versions of this for years. Finally producing a standard version, instead of every OEM doing its own proprietary nonsense while everyone else is stuck with a fat 24-wire cable full of mostly useless wires, is a step forward even if it's not quite my preferred design.

If I were designing the ATX v.next spec, I'd probably have gone with a 12-wire cable instead of 9, adding 1x 5V, 1x 3.3V, and a 4th ground, or possibly a 10-wire one adding a 3.3V line and making the 5V one carry both normal and standby power. I liked the 12-wire setup better because it's similar enough to the existing spec that you could mix old/new parts with a passive adapter plug (combining the two 5V rails would require some electronics in the plug to keep expected behavior on both sides).

Mostly I wanted to keep the 3.3/5V on the PSU side because I trust the PSU vendors to use quality, high-efficiency parts for the 3.3/5V hardware: they're rated on DC efficiency for 80 Plus, and most reviewers hook up oscilloscopes to monitor voltage stability. OTOH, for mobo vendors the new DC-DC converters are an extra cost item, and I suspect many of them will cheap out with low-efficiency junk that only just meets minimum ATX voltage regulation to save a few pennies.
 
They accept it because that's what current PSUs supply. But how many devices USE 12V? Memory is often 1.5V or less. CPU voltages keep getting lower as transistors shrink. Do SSDs use 12V, or do they mostly ignore those wires in favor of the 5V lines? Do GPUs use 12V as-is, or spend card space on voltage regulators to drop it to the voltages they need? At some point, the space needed for motherboard voltage converters and the requisite heat sinks will get close to the space needed for the actual computing parts.

I guess my question is: does it save motherboard space and heat to feed 12V (or 24/48V) to the board and let it convert down to 3.3V and lower voltages, or is it better to let the PSU do the conversion to 3.3V or 1.5V and use larger wires to feed the power to the board? If we're redesigning the power standard, nothing says we can't have several two- or four-pin low-voltage plugs that connect to the motherboard adjacent to the parts that use that voltage. There might be some space and trace savings over the current 20/24-pin standard near the edge of the board.

The potential users are rather spread out, though. 5V is used by the USB ports in the IO panel on the back of the board, along the bottom of the board for USB 2 headers, and often on the front edge for USB 3 headers. 3.3V is mostly used by the PCIe slots, where 10W of the 25W an x1 card (or 75W an x16 card) can draw from the mobo is 3.3V instead of 12V. (A few years ago, when there was a stink about some AMD cards trying to draw 100W through the mobo, testing showed a decent number of GPUs - IIRC about half - were taking the 10W of 3.3V from the mobo in addition to the 65W of 12V.)
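(If anyone wants to sanity-check those slot numbers, here's a rough back-of-the-envelope sketch in Python. The 3A on 3.3V and 5.5A on 12V limits are my recollection of the PCIe slot allowances, so treat them as assumptions.)

```python
# Rough PCIe slot power budget, using the (assumed) per-rail slot limits.
V_3V3, I_3V3_MAX = 3.3, 3.0    # up to ~3 A on the 3.3 V rail
V_12V, I_12V_MAX = 12.0, 5.5   # up to ~5.5 A on the 12 V rail for a 75 W slot

p_3v3 = V_3V3 * I_3V3_MAX      # ~9.9 W -> the "10 W of 3.3 V" figure above
p_12v = V_12V * I_12V_MAX      # ~66 W
print(f"3.3V budget: {p_3v3:.1f} W, 12V budget: {p_12v:.1f} W, "
      f"total: {p_3v3 + p_12v:.1f} W")   # ~75 W slot ceiling

# The GPUs mentioned above were reportedly pulling ~65 W of 12 V plus the
# full ~10 W of 3.3 V from the slot, i.e. right at that 75 W ceiling.
```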

3.3/5V is also used by a few other odds and ends: e.g., if you have a business-class machine with a TPM, that runs on 3.3V, and AFAIK the SuperIO chip (combining PS/2, serial, parallel, and floppy controllers onto a single chip was a wow feature in the early 90s) driving the PS/2 port that a decent number of mobos still offer runs on 5V, because it's a 20-30-year-old design. I don't think any of these draw enough power to really matter in terms of distribution preferences.

I don't think the location of the SATA power connectors is specced out, but odds are it'd be somewhere on the front or bottom edges of the board, since that's where all the other little cables coming off it are.
 
Using the motherboard for power distribution will drive increased layer count. Motherboard manufacturers won't like that one bit. Most of the PCIe products I design don't even use the 3.3V from the motherboard anymore, only +12V. Distributing only +12V makes sense, but having the motherboard do that has me scratching my head just a little. A nice, small +12V line with two pins makes good sense for external drives.
 
Oh, and a quick search of USB chipsets shows +5.0V is antique. I know most chipsets used to have 3.3V as a common I/O voltage (GPIO, NVM, etc.), but more and more it's 2.5/1.8V. Short answer: +5.0V is not the requirement it once was. Distributing +12V and doing point-of-load (POL) voltage regulation makes good sense on lots of levels.
 
Using the motherboard for power distribution will drive increased layer count. Motherboard manufacturers won't like that one bit. Most of the PCIe products I design don't even use the 3.3V from the motherboard anymore, only +12V. Distributing only +12V makes sense, but having the motherboard do that has me scratching my head just a little. A nice, small +12V line with two pins makes good sense for external drives.

Doing full system power distribution through the motherboard definitely doesn't make sense. But delivering the motherboard only 12 or 24V does, since motherboards already have regulators onboard for every voltage needed. The change would simply be swapping them out for ones that accept a higher input voltage. It's really not a big deal.

Like you said, USB chipsets don't really use +5V anymore. This goes to zero for USB3 because of the updated power requirements. A 24V supply would almost be ideal, as it would enable a simple step-down converter for the USB-PD 20V option.
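(To illustrate why 24V in would be convenient there: the sketch below just applies the ideal buck-converter relationship D = Vout/Vin to the USB-PD fixed voltages. The 24V rail is the hypothetical one from this discussion, not anything in the actual spec.)

```python
# Ideal (lossless) buck-converter duty cycles from a hypothetical 24 V rail
# down to the USB-PD fixed supply voltages.  A real converter needs some
# headroom above its output, which is why 24 V -> 20 V is a plain buck,
# while 12 V -> 20 V would need a boost or buck-boost stage instead.
V_IN = 24.0
for v_out in (5.0, 9.0, 15.0, 20.0):    # USB-PD fixed voltage levels
    duty = v_out / V_IN                 # D = Vout / Vin for an ideal buck
    print(f"{v_out:>4.0f} V out -> duty cycle ~{duty:.0%}")
```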

+5 and 3.3 made a lot of sense when CPUs were 33MHz and every component was wave-soldered. In today's systems, those supply rails often don't even get used by the components that run at those voltages, because the output from onboard regulators is so much cleaner.
 
One slightly clunky alternative:

Feed the motherboard with the 10-pin 12V cable. It just feeds the board and the shit on it (RAM, CPU, NVMe, USB, etc.). The GPU will continue to be powered by a stand-alone 12V cable (or two).

Modern computers don't need anything else in the case.

If you're an enthusiast and need spinning disks? They can easily make a 3.5" HDD-sized power converter box: 12V input, SATA output.

The 10W per drive isn't much, so a 50-60W converter would be able to power 4-5 spinning drives, or 10+ SSDs. It would probably need minimal cooling, too.

If you're running more than that, you're obviously an outlier compared with most builds, even enthusiast ones.
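(Spelling out that converter sizing, assuming roughly 10W per spinning drive and a few watts per SATA SSD; both per-device figures are ballpark guesses rather than measurements.)

```python
# Ballpark sizing for a hypothetical 12 V -> SATA power converter box.
W_PER_HDD = 10    # rough guess for a 3.5" spinning drive under load
W_PER_SSD = 4     # rough guess for a SATA SSD
CONVERTER_W = 60  # the 50-60 W box suggested above

print("max spinning drives:", CONVERTER_W // W_PER_HDD)  # 6 -> 4-5 with margin
print("max SATA SSDs      :", CONVERTER_W // W_PER_SSD)  # 15 -> "10+" easily
```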
 
Like I said in the last thread:

Dell is down to 8 pins in their motherboard connection. That's perfect; adopt that.

Fuck Intel and their attempt to reinvent the wheel.
 
This isn't really a big deal except for GPUs that already exceed the PCIe slot spec for max power, hence the need for the additional cables; so as long as there are also additional cables for that from the PSU, fine. The downside is all the legacy peripherals... so really, I would prefer a hybrid solution: one that implements the new spec but also allows the existing 12V, 5V, and 3.3V outputs to continue to be available directly from the PSU.
 
This isn't really a big deal except for GPUs that already exceed the PCIe slot spec for max power, hence the need for the additional cables; so as long as there are also additional cables for that from the PSU, fine. The downside is all the legacy peripherals... so really, I would prefer a hybrid solution: one that implements the new spec but also allows the existing 12V, 5V, and 3.3V outputs to continue to be available directly from the PSU.
How about we just leave it and trim that 24-pin down to something manageable, legacy be damned.
 
Intel's ATX standard has reigned supreme for too long; like the AT and XT formats before it, its time has come, and it too will go the way of the dodo.
 
How about we just leave it and trim that 24-pin down to something manageable, legacy be damned.

I simply wouldn't worry about legacy support. When I do new builds, I buy new parts. It really is that simple. I don't see any rational people bitching about how DDR4 doesn't work in their DDR3 boards.

If I'm running 20yo museum hardware, then I already expect some headaches when I need to replace a component. That's just part of the lifestyle of owning antiques.

Besides, even if a new standard were implemented tonight, it would still be a decade before PSUs on the old standard showed signs of getting scarce.
 
Dell, HP, and presumably other major OEMs have been doing proprietary versions of this for years. Finally producing a standard version, instead of every OEM doing its own proprietary nonsense while everyone else is stuck with a fat 24-wire cable full of mostly useless wires, is a step forward even if it's not quite my preferred design.

Yep, my Lenovo desktops only get 12V to the mobo... had to get a 24-pin adapter to use a regular PSU so I could power a video card (the OEM PSU didn't have enough juice on the 12V rail).

Also, all the servers I use are the same thing... 12V at a ton of amps and 5VSB at like 0.8A.
 
How about we just leave it and trim that 24-pin down to something manageable, legacy be damned.
I simply wouldn't worry about legacy support. When I do new builds, I buy new parts. It really is that simple. I don't see any rational people bitching about how DDR4 doesn't work in their DDR3 boards.

If I'm running 20yo museum hardware, then I already expect some headaches when I need to replace a component.

Currently running equipment that is only a few years old is not "legacy"; it's production. And I am not talking about RAM. A RAID array on a workstation mobo, and a high-end GPU: those need power, are new, and I'm not going to replace them just to get some new mobo/PSU. They are not even CLOSE to being "20yo"...
 
Currently running equipment that is only a few years old is not "legacy"; it's production. And I am not talking about RAM. A RAID array on a workstation mobo, and a high-end GPU: those need power, are new, and I'm not going to replace them just to get some new mobo/PSU. They are not even CLOSE to being "20yo"...
What would change for a GPU with this new spec? It would just connect to the same PCIe 12V cables it always has.
 
Currently running equipment that is only a few years old is not "legacy"; it's production. And I am not talking about RAM. A RAID array on a workstation mobo, and a high-end GPU: those need power, are new, and I'm not going to replace them just to get some new mobo/PSU. They are not even CLOSE to being "20yo"...
It's not like this is the first time we've changed PSU standards: LTX was good until it wasn't, and ATX will be good till it isn't. I remember having this exact same conversation on message boards back in the '90s about this very issue; the Intel ATX standard has had its time, and replacing it seems the right thing to do. Switching the main rail to 48V makes large parts of the setup cheaper and cooler on motherboards, and most importantly on GPUs, while requiring almost no changes to anything else.
The big vendors have already figured this out: Dell, HP, Lenovo, and Acer have all made the switch to 48V, but they are all different and all slightly proprietary at this stage. Getting a standard in place now helps everybody in the long run.
 
Currently running equipment that is only a few years old is not "legacy"; it's production. And I am not talking about RAM. A RAID array on a workstation mobo, and a high-end GPU: those need power, are new, and I'm not going to replace them just to get some new mobo/PSU. They are not even CLOSE to being "20yo"...

You said it yourself: you're not going to do piecemeal upgrades on that machine. So don't invent a problem where none exists. When it's time for a motherboard, you would buy one along with a new PSU.
 
It's not like this is the first time we've changed PSU standards: LTX was good until it wasn't, and ATX will be good till it isn't. I remember having this exact same conversation on message boards back in the '90s about this very issue; the Intel ATX standard has had its time, and replacing it seems the right thing to do. Switching the main rail to 48V makes large parts of the setup cheaper and cooler on motherboards, and most importantly on GPUs, while requiring almost no changes to anything else.
The big vendors have already figured this out: Dell, HP, Lenovo, and Acer have all made the switch to 48V, but they are all different and all slightly proprietary at this stage. Getting a standard in place now helps everybody in the long run.
But this spec doesn't include 48V.
 
Will solar or wind power become common enough for power supplies to use DC input power?
Seems silly to store power in batteries, use an inverter to convert to AC, then convert back to DC.
 
Will solar or wind power become common enough for power supplies to use DC input power?
Seems silly to store power in batteries, use an inverter to convert to AC, then convert back to DC.

IIRC, long-distance transfer of DC power suffers pretty big losses. To do a large-scale switch to it from AC would require much more distributed generation.
 
Will solar or wind power become common enough for power supplies to use DC input power?
Seems silly to store power in batteries, use an inverter to convert to AC, then convert back to DC.
I understand what you're saying, but I don't see solar becoming the primary energy source for homes at all in the future with current technology. It's always going to be supplemental, in my opinion, meaning the bulk of your energy is always going to need to travel across long distances.
 
I said it before and I'll say it again. This "solution" by Intel is a solution looking for a problem. All the while creating problems of its own.

The only real "solution" proposed here is moving what already works perfectly fine on PSUs to the motherboard. It's highly unlikely we'd get any cost savings on the PSU side in the process. However, motherboard complexity will go up, as obviously will motherboard cost. Not to mention that there are enough issues with complexity on motherboards as it is. Adding additional complexity, additional trace runs, additional electrical noise, and who knows what other problems to motherboards is not a solution. It's a problem looking to be created.

Kinda like BTX? I still have a BTX case. I could be wrong, but this might be Intel looking for something to patent so they can control the standard and/or make royalties off it for every PC that uses it. It's the same tactic that Nvidia uses.
 
I'm hoping that compatibility with existing power supplies will be as simple as using an adapter, 24-pin to 10-pin or whatever. I don't see any reason why the 12V coming from a regular PSU wouldn't still work.
 
I'm hoping that compatibility with existing power supplies will be as simple as using an adapter, 24-pin to 10-pin or whatever. I don't see any reason why the 12V coming from a regular PSU wouldn't still work.
If the 10-pin socket doesn't include +5V and so on, an adapter will have to be more complex than just a pin expander.
 
If the 10-pin socket doesn't include +5V and so on, an adapter will have to be more complex than just a pin expander.

Why? Couldn't the adapter simply feed from the +12V and ground pins on the 24-pin connector and ignore the other pins that it doesn't need? Maybe feed from the 4/8-pin EPS12V/ATX12V connector as well.
 
If the 10-pin socket doesn't include +5V and so on, an adapter will have to be more complex than just a pin expander.

If you plan to use a new-spec PSU with an old motherboard, you would need a power distribution block like the Seasonic Connect. For a new-spec motherboard to work with an old-spec PSU, a cheap 5V-to-12V adapter would be all that's necessary in the adapter harness.
 
Why? Couldn't the adapter simply feed from the +12V and ground pins on the 24-pin connector and ignore the other pins that it doesn't need? Maybe feed from the 4/8-pin EPS12V/ATX12V connector as well.

Oh, you're thinking about running a 24-pin PSU to a 10-pin motherboard. I was thinking you meant the other way 'round.
 
If you plan to use a new-spec PSU with an old motherboard, you would need a power distribution block like the Seasonic Connect. For a new-spec motherboard to work with an old-spec PSU, a cheap 5V-to-12V adapter would be all that's necessary in the adapter harness.
Yeah. For the second thing you mention, that's what I meant: you can't simply have a 24-to-10-pin thing like you can for current Dell/HP models; you need the voltage converter somewhere.
 
IIRC, long-distance transfer of DC power suffers pretty big losses. To do a large-scale switch to it from AC would require much more distributed generation.

It's the other way around; longer distances are where DC shines. The losses come from the AC-DC and DC-DC (different voltage) conversion steps, along with the higher cost of AC-DC conversion systems and the need for more sophisticated control/safety hardware (AC repeatedly passing through 0V makes it easier to stop a short or arc). With AC, your wire size and spacing are set by the peak voltage, which is ~40% (square root of 2) above the average (RMS) voltage, while the maximum power carried is controlled by the RMS level (P = V*I). When you switch to DC, your peak and average voltages are the same, meaning you can raise the voltage by 40%, significantly reducing resistive (I^2*R) losses for the same amount of power delivered, and giving you 40% more headroom to deliver additional power without having to run new transmission lines.

The high hardware costs for DC, especially in the past, were what let AC win. 100+ years ago, transformers made getting AC to very high voltages cheap and easy for Westinghouse's systems; lacking them, and having to run at much lower and less efficient voltages, was what killed Edison's DC grid. It was the can't-do-high-voltages problem, though, not anything fundamental to DC, that settled the war a century ago. With solid-state power electronics getting steadily cheaper, DC is now slowly retaking the longest-haul parts of the grid.
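(To put toy numbers on the peak-vs-RMS argument above: the figures below are made up purely to show the scaling, and they ignore three-phase transmission, converter-station losses, skin effect, and every other real-world detail.)

```python
import math

# Same line, same delivered power: run it at DC at the AC line's *peak*
# voltage instead of its RMS voltage, since insulation/clearance is set
# by the peak.  All numbers here are hypothetical.
V_RMS = 400e3                    # 400 kV RMS AC line (made up)
V_DC  = V_RMS * math.sqrt(2)     # ~566 kV DC for the same peak stress
P     = 1e9                      # deliver 1 GW either way (made up)
R     = 10.0                     # line resistance in ohms (made up)

loss_ac = (P / V_RMS) ** 2 * R   # I^2 * R with I = P / V
loss_dc = (P / V_DC) ** 2 * R

print(f"AC: {P / V_RMS:,.0f} A, {loss_ac / 1e6:.1f} MW lost")
print(f"DC: {P / V_DC:,.0f} A, {loss_dc / 1e6:.1f} MW lost")
print(f"DC resistive loss is {loss_dc / loss_ac:.0%} of the AC loss")
```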
 
Will solar or wind power become common enough for power supplies to use DC input power?
Seems silly to store power in batteries, use an inverter to convert to AC, then convert back to DC.

You can probably put DC into your power supply right now, as long as the voltage is high enough... the first thing that happens in an SMPS is that the AC goes through a rectifier anyway.
 
You can probably put DC into your power supply right now, as long as the voltage is high enough... the first thing that happens in an SMPS is that the AC goes through a rectifier anyway.
Would be interesting to try.

Also, they do make pico PSUs that take in low-voltage DC. Some require 12V input because they pass it straight through to the motherboard. Others have a regulated 12V output and can accept a range of input voltages, including common laptop power-brick voltages.
 
You can probably put DC into your power supply right now, as long as the voltage is high enough... the first thing that happens in an SMPS is that the AC goes through a rectifier anyway.

The APFC (active power factor correction) part might complain a little.
 
Dell & HP are already doing this. I can't remember whether my current work PC does, but my last one, from 2015, does: the PSU has two cables, an 8-pin mainboard connector and a 4-pin CPU EPS, and there's a 6-pin connector on the motherboard that provides two SATA power cables.

Yeah, it's been quite a few years now in their corporate desktops. It sounds dangerous in generic desktops, though. With HP and Dell, there isn't much space or incentive to expand. But you know enthusiasts are going to be buying SATA power splitters from Amazon to run a bunch of drives and so on. That's going to pull a lot of power through motherboard foil traces (instead of the current copper wire from the PSU) under a new standard.
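(To put a hypothetical number on "a lot of power": a quick estimate of the current those traces would carry, assuming roughly 10W per spinning drive, which is a guess rather than a measured figure.)

```python
# Hypothetical worst case: an enthusiast hangs a stack of drives off the
# board's drive-power header via splitters.  Per-drive wattage is a guess.
W_PER_HDD = 10
V_RAIL = 12.0

for n_drives in (2, 4, 8):
    amps = n_drives * W_PER_HDD / V_RAIL
    print(f"{n_drives} drives -> {n_drives * W_PER_HDD} W, "
          f"~{amps:.1f} A through the board's traces")
```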
 