
First ATX12VO Consumer Motherboard: The ASRock Z490 Phantom Gaming 4SR

/facepalm

I don't even know why you are arguing about this. You're literally making a mountain out of a grain of sand.
Who is arguing?

It's a standard; you can't deviate from it. If you do, it's not a standard anymore.
 
The increased capacity of the new 10-pin connector will let low end systems run entirely off it, not needing a 4-pin EPS or 6-pin PCIe for supplemental board power.

Well yes, I do believe I said that.

Not just Dell. HP and Lenovo are also doing it. But they've all got slightly different implementations.

Yeah. I mentioned that, probably in another thread. IIRC HP is using 10-pin instead of 8-pin connectors. (I just double-checked my old work Optiplex 3020 non-SFF. It's got an 8-pin motherboard connector from the PSU, then a 6-pin connector from the motherboard, next to the incoming power connector, feeding two cables each with two SATA power connectors. Meanwhile, just for fun, my current work machine, an Optiplex 5060 non-SFF, has the same SATA power connector, but the 8-pin from the PSU has been changed to a 6-pin.)
 
Well, the way this is shaping up, it's mostly a dodge to get around an energy-reduction law. Take a part of the PSU, move it over here, and boom, all is well again. All you did was move the issue to a different spot in the case and change some wires. This is the same as moving $20 from my left pocket to my right pocket and saying I made $20. It really reminds me of BTX: the processor is an inferno, so instead of improving our cooler tech, let's rearrange our boards. If we move this slot over here, and this part over here, we can put a huge fan to ram tons of air into our substandard heatsink so we don't have to replace it.
 
This will also be a boon to OEMs, as they can each have their own nonstandard PSU that you can only buy from them at a high price. So they will help their own bottom lines a lot. Kinda like going back in time to the '90s with nonstandard PSUs and riser cards again, woot. Plus tons more e-waste with these disposable computers; works out great for OEMs.
 
This will also be a boon to OEMs, as they can each have their own nonstandard PSU that you can only buy from them at a high price. So they will help their own bottom lines a lot. Kinda like going back in time to the '90s with nonstandard PSUs and riser cards again, woot. Plus tons more e-waste with these disposable computers; works out great for OEMs.

You mean the same OEMs who've been running their own proprietary 12V-only designs for the last half dozen or so years? If they want to keep their proprietary lock-in, all they need to do is ignore this standard like they have been ignoring the conventional ATX standard, and keep using their proprietary crap.
 
6 pin connectors are optional on current ATX boards.

My bad seeing it as 8

So it's a tie, 32 and 32. Or on some current ATX boards, 28 vs 32.

It's a tie if you don't count the optional 6 or second 8-pin connector on conventional boards, but do count the optional 6 (which is only needed in cases where you'd need an equivalent optional connector on a 24-pin board) on this 10-pin board, and also count both of the optional 4s as well.

Normal mid range boards: 24 + 8 vs 10 + 8 + 4 + 4 = 32 vs 26
High end overclocking/multiple GPU board: 24 + 8 + 8 vs 10 + 8 + 8 + 4 + 4 = 40 vs 34
Lower end OEM system with a full power CPU but no GPU or SATA devices: 24 + 4 vs 10 + 4 (EPS) = 28 vs 14
Very low end OEM system with a laptop-class CPU and no GPU or SATA devices: 24 + 4 vs 10 = 28 vs 10

For an equivalently capable system your baseline is 6 pins fewer, because you took 14 out of the main power connector and are only adding 8 back to support SATA devices. (Until SATA is dead on mainstream systems, at which point you're 14 ahead, just like on some low end OEM systems today.)
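If anyone wants to sanity check those totals, here's the same arithmetic as a throwaway Python snippet (the scenario labels are just mine; the pin counts are the ones listed above):

```python
# Pin-count comparison: conventional ATX vs ATX12VO, using the scenarios above.
# These are connector pin counts only, not electrical capacity.
scenarios = {
    # label: (ATX connector pin counts, ATX12VO connector pin counts)
    "mid range":        ([24, 8],    [10, 8, 4, 4]),
    "high end OC":      ([24, 8, 8], [10, 8, 8, 4, 4]),
    "low end OEM":      ([24, 4],    [10, 4]),
    "very low end OEM": ([24, 4],    [10]),
}

for label, (atx, atx12vo) in scenarios.items():
    print(f"{label:18s} ATX {sum(atx):2d} pins vs 12VO {sum(atx12vo):2d} pins "
          f"(difference {sum(atx) - sum(atx12vo)})")
```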
 
Who is arguing?

It's a standard; you can't deviate from it. If you do, it's not a standard anymore.

And new versions of the standard can require the PSU to be able to deliver more power to the connector. The limits for the PCIe GPU connectors aren't wire/connector melting limits (no-name garbage that will blow up if you actually try to draw as much power as the lying number on their box notwithstanding); they're capacity planning numbers so that you're not normally able to connect devices drawing more power than the PSU can safely deliver (again, no-name exploding garbage notwithstanding). And if you heavily overclocked your GPUs in the past, before NVidia/ATI added hard power limits, you were almost certainly drawing above the nominal 75/150W ratings of your connectors without letting the magic smoke out.

When you've got a breaking change in forward/reverse compatibility like 12VO is, you've also got room to require heavier gauge wire to go above what would be the melting limit in the old standard, as long as drawing above the old limit is only in spec when used for extra mobo power, not extra GPU power.
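To put some rough numbers behind the "not melting limits" point: assuming the usual three current-carrying 12V contacts on a 6/8-pin PCIe plug and a conservative ~8A per-contact rating for Mini-Fit Jr style terminals (both assumptions on my part, not anything pulled from the spec), the per-pin current works out to:

```python
# Rough per-contact current for PCIe aux power plugs at various loads.
# ASSUMPTIONS: 3 current-carrying 12 V contacts per plug, and a conservative
# ~8 A per-contact rating for Mini-Fit Jr style terminals.
CONTACTS = 3
RATED_AMPS = 8.0   # assumed conservative per-contact rating
VOLTS = 12.0

for label, watts in [("6-pin nominal (75 W)", 75),
                     ("8-pin nominal (150 W)", 150),
                     ("heavy OC through one plug (250 W)", 250)]:
    amps = watts / VOLTS / CONTACTS
    print(f"{label:34s} {amps:4.1f} A per contact "
          f"(headroom vs rating: {RATED_AMPS - amps:4.1f} A)")
```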
 
But yeah, this shifts some of the cost away from the power supply and to the motherboard. Assuming that all motherboards include all power switching on-board, you're still looking at less overall.

I replace my motherboard more often than my PSU. This means I'm buying the same hardware extra times. This costs more money. Motherboards also break, or are broken, way more often than power supply units. I've had RAM channels die, etc. So I'd need to replace a bunch of power circuitry unnecessarily because a RAM channel died; sounds a bit crap to me. My Corsair TX650 is powering its third rig. My Antec Earthwatts is powering its 4th.


Instead of having a PSU that can power a dozen or more SATA drives, you have a motherboard that can supply power to the number of SATA drives it can support. Same for USB, and anything else.

Yeah, need more ports - go buy another motherboard. Sounds like a plan from a company that wants to sell motherboard chipsets & processors, i.e. Intel. Why let your customers drop in an HBA for $50-75 when you can force them to spend $300+ on a new "high end storage" board - and hey, while they're at it, it might make them buy a new chip too...

This also ignores the pain in the ass that is rewiring a case. When I keep my SATA/SAS devices in my next build, know how much work I have to do on the power? Zero if I'm keeping the same PSU - those wires don't move. The SATA/SAS side? Well, depending on where they are on the board, I need to re-run all of the wires. Let's just add drive power connectors to this as well, why not make it more of a pain in the ass for me? Not to mention a dozen SATA power connectors on the board - I bet that will look great and be easy to do... Why route your SATA power on the back side of the case when you can have it hanging off the motherboard instead? Sounds like a plan... haha.

By tailoring the power production side, you're going to be more efficient.

Efficient measured how?
-Less power loss due to lower amperage on certain runs and more localized voltage conversion? Sure. (See the quick numbers after this list.)
-More cost efficient? Not sure I'd agree with that when looking at ALL of the costs.
-Time efficient? I doubt it; sounds like more labor time messing with things when you replace a part, and more things on one centralized device to go bad, forcing you to replace the motherboard more often. How long does it take to replace a fully modular PSU with one of the same model? About 5 minutes or less, and you don't even need to touch the wires except the connectors at the unit. How long will it take you to replace one of these new motherboards when a voltage component dies? A lot longer than 5 minutes.
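On that first point, the loss difference is just I²R; here's a quick illustration with made-up but plausible numbers (50W load, 10 milliohm round-trip cable resistance; neither is a measurement):

```python
# I^2 * R cable loss for the same load power delivered at 5 V vs 12 V.
# ASSUMPTIONS: 50 W load, 10 milliohm round-trip cable/connector resistance.
LOAD_WATTS = 50.0
CABLE_OHMS = 0.010

for volts in (5.0, 12.0):
    amps = LOAD_WATTS / volts
    loss_watts = amps ** 2 * CABLE_OHMS
    print(f"{volts:4.1f} V run: {amps:5.2f} A, {loss_watts:5.3f} W lost in the cabling")
```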


Seriously, 100% fuck this new PSU standard, Intel can stuff it up their you know where.
 
For an equivalently capable system your baseline is 6 pins fewer, because you took 14 out of the main power connector and are only adding 8 back to support SATA devices. (Until SATA is dead on mainstream systems, at which point you're 14 ahead, just like on some low end OEM systems today.)

Well, clear up something for me if you can, then, Dan. I mentioned way upthread that the original articles touted space savings as well. My point was that 10 + 8 + 6 + 4 + 4 isn't saving much space over 24 + 8, especially when you take into account the space you need to leave between sockets. Was/is Intel touting the space savings, or just the electrical simplicity?

Having one spec, whatever it is, would certainly be nicer than HP, Dell, and Lenovo all making up their own proprietary connectors, not to mention Dell having at least two different ones!
 
Why would you need a dozen cables? Two gives you 12 drives.

I've personally never seen a PSU that supported more than 4 SATA drives per cable, and that's not counting low-end stuff like the Dell motherboards that provide one socket with two 2-connector cables.
 
I replace my motherboard more often than my PSU. This means I'm buying the same hardware extra times.

You'd be updating it in the case of taxed PSU components. It's a possibility that some of your motherboard failures are power-related in any case.

However, as you pointed out, PSU components can last a long time. In which case you're not increasing rate of failure, whether it's on the board or in the PSU.

And you're not buying a bunch of PSU components you'll never need in the first place, which is raising your initial costs. Probably everyone reading this who runs a desktop has a PSU that can host two or three dozen hard drives, optical drives and then some. Those power components that no one uses cost money. That's why it's more efficient from an initial-cost standpoint, possibly even factoring in buying multiple sets of power components across multiple motherboards. And more energy efficient, particularly with PSUs that don't have DC-to-DC components and have discrete rails for different voltages.

Yeah, need more ports - go buy another motherboard.

That's already the case, this is nothing different.

Everything else I've already addressed.
 
Probably everyone reading this who runs a desktop has a PSU that can host two or three dozen hard drives, optical drives and then some. Those power components that no one uses cost money.

Crazy talk! That's how you get past the low-efficiency part of the PSU's power curve!
 
Jesus guys... the motherboard manufacturers already have to do voltage conversion on-board for the CPU and PCH.... you really think stepping down 12V to 5V or 3.3V with the amount of current needed for NVMe and SATA drives is going to be that difficult? They have to design the newer boards to handle 250 watt Intel processors!!!
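For a sense of scale (the per-drive figure below is a rough assumption, not a datasheet number), compare the current a CPU core VRM already handles with what a 12V-to-5V stage for a few SATA drives would need:

```python
# Rough current scale: CPU core VRM vs a small 12V->5V rail for SATA drives.
# ASSUMPTIONS: ~1.2 V core voltage, ~5 W of 5 V draw per drive (ballpark only).
cpu_watts, core_volts = 250, 1.2
drives, watts_per_drive = 4, 5.0

print(f"CPU core VRM:   ~{cpu_watts / core_volts:.0f} A at {core_volts} V")
print(f"SATA 5 V stage: ~{drives * watts_per_drive / 5.0:.0f} A at 5 V for {drives} drives")
```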
 
I still have to agree with the assessment that it's a solution in search of a problem. It might make sense at the low end if the big system builders didn't already have proprietary solutions in place that also allow them to make a killing on overpriced replacement units (I've had to buy a few, and it was either that or questionable rebuilds).

You'd be updating it in the case of taxed PSU components. It's a possibility that some of your motherboard failures are power-related in any case.

However, as you pointed out, PSU components can last a long time. In which case you're not increasing rate of failure, whether it's on the board or in the PSU.

And you're not buying a bunch of PSU components you'll never need in the first place, which is raising your initial costs. Probably everyone reading this who runs a desktop has a PSU that can host two or three dozen hard drives, optical drives and then some. Those power components that no one uses cost money. That's why it's more efficient from an initial-cost standpoint, possibly even factoring in buying multiple sets of power components across multiple motherboards. And more energy efficient, particularly with PSUs that don't have DC-to-DC components and have discrete rails for different voltages.



That's already the case, this is nothing different.

Everything else I've already addressed.

I've had and seen a lot more PSUs go belly up than MBs, and voltage regulation as well as complete failure on the secondary rails has been the problem at times. A good MB runs $200-300 while a good PSU can be had for about $100 if you don't go too crazy with the wattage, and it also usually takes me less than half the time to replace a PSU compared to a MB.

As for your last comment, there have always been controller cards to add capacity or features, but with this standard you'd be limited to whatever the MB supports.

Jesus guys... the motherboard manufacturers already have to do voltage conversion on-board for the CPU and PCH.... you really think stepping down 12V to 5V or 3.3V with the amount of current needed for NVMe and SATA drives is going to be that difficult? They have to design the newer boards to handle 250 watt Intel processors!!!

They do voltage regulation for things that they need to monitor and adjust on the fly, which makes sense. It's also worth mentioning that beefed-up power management is one of the things that can already drive up the price on MBs.
 
I still have to agree with the assessment that it's a solution in search of a problem. It might make sense at the low end if the big system builders didn't already have proprietary solutions in place that also allow them to make a killing on overpriced replacement units (I've had to buy a few, and it was either that or questionable rebuilds).



I've had and seen a lot more PSUs go belly up than MBs, and voltage regulation as well as complete failure on the secondary rails has been the problem at times. A good MB runs $200-300 while a good PSU can be had for about $100 if you don't go too crazy with the wattage, and it also usually takes me less than half the time to replace a PSU compared to a MB.

As for your last comment, there have always been controller cards to add capacity or features, but with this standard you'd be limited to whatever the MB supports.



They do voltage regulation for things that they need to monitor and adjust on the fly, which makes sense, and they're also not bucking it all the way down from 12V. It's also worth mentioning that beefed-up power management is one of the things that can already drive up the price on MBs.

It helps in smaller ITX and mATX builds, and if you're making it a standard there, you might as well expand it to ATX as well. I highly doubt this is intended to be a proprietary solution, just a different way of doing things. If people seem to think it makes sense, it will eventually become the standard that takes over.

I highly disagree with expansion being limited. In many systems nowadays, DIY servers are limited by the size of the 5v rail. Most consumer PSUs have 5v rails of similar capacity regardless of total wattage capabilities. With this method, RAID cards and motherboards capable of hosting lots of hard drives would build in sufficient 5v capacity to power however many drives they're designed to support, and the 12v capacity of a PSU won't go to waste. Even better, hard drives should move off of 5v and use only 12v.
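A rough illustration of that 5V bottleneck, using a ballpark ~1A of 5V per 3.5" drive (an assumption; check your drives' datasheets):

```python
# Rough drive-count ceiling imposed by a fixed 5 V rail on a conventional PSU.
# ASSUMPTION: ~1 A of 5 V per 3.5" drive for its electronics (ballpark only).
AMPS_5V_PER_DRIVE = 1.0

for rail_amps in (20, 25):   # typical consumer PSU 5 V rail ratings
    ceiling = int(rail_amps / AMPS_5V_PER_DRIVE)
    print(f"{rail_amps} A 5 V rail -> ~{ceiling} drives before the 5 V rail is the limit, "
          "no matter how big the 12 V side is")
```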

Also, I'm 99.9% certain that CPU VRMs convert 12v directly to the sub 1.5v most CPUs need.
 
It helps in smaller ITX and mATX builds, and if you're making it a standard there, you might as well expand it to ATX as well. I highly doubt this is intended to be a proprietary solution, just a different way of doing things. If people seem to think it makes sense, it will eventually become the standard that takes over.

I highly disagree with expansion being limited. In many systems nowadays, DIY servers are limited by the size of the 5v rail. Most consumer PSUs have 5v rails of similar capacity regardless of total wattage capabilities. With this method, RAID cards and motherboards capable of hosting lots of hard drives would build in sufficient 5v capacity to power however many drives they're designed to support, and the 12v capacity of a PSU won't go to waste. Even better, hard drives should move off of 5v and use only 12v.

Also, I'm 99.9% certain that CPU VRMs convert 12v directly to the sub 1.5v most CPUs need.
It will be the standard that eventually takes over; environmental regulations on OEM systems will see to that. The existing ATX spec isn't capable of meeting California's, Japan's, or the EU's upcoming low power state requirements. This spec solves that while being cost agnostic. Also, having Dell, HP, Lenovo, and Acer each use their own proprietary PSUs has been a nightmare for me maintenance-wise. My buildings have bad power; we are at the end of the line and close to dams, and when they dump dirty power, things blow. I need to have a handful of spare PSUs on standby, and having to order up the OEM ones to have on hand is both annoying and wasteful. Not that the spec revolves around me..... but I can't be the only person upset at having to deal with proprietary PSUs.
 
Sounds like an investment in power conditioning equipment would be a more cost-effective solution for you than stocking up on PSUs, proprietary or not.
They just blow along with them; the power company just cuts an annual check to pay for the damages. At least once a year they manage to blow out one of my larger 3500VA UPSes. They have to be stored in adjacent racks with a fire barrier now. The fun times of doing IT in the back end of Canada. We've started installing solar panels, and part of that project is installing a massive capacitor set in each building to smooth the power they generate before putting it back on the grid, so we don't get hit with line conditioning fees. They should by their nature solve our power issues, but they are like $20K each, not including install. But it is going to be a while before I stop stocking spare PSUs.
 
And new versions of the standard can require the PSU to be able to deliver more power to the connector. The limits for the PCIe GPU connectors aren't wire/connector melting limits (no-name garbage that will blow up if you actually try to draw as much power as the lying number on their box notwithstanding); they're capacity planning numbers so that you're not normally able to connect devices drawing more power than the PSU can safely deliver (again, no-name exploding garbage notwithstanding). And if you heavily overclocked your GPUs in the past, before NVidia/ATI added hard power limits, you were almost certainly drawing above the nominal 75/150W ratings of your connectors without letting the magic smoke out.

When you've got a breaking change in forward/reverse compatibility like 12VO is, you've also got room to require heavier gauge wire to go above what would be the melting limit in the old standard, as long as drawing above the old limit is only in spec when used for extra mobo power, not extra GPU power.
Great, so I should just be able to go home and unplug one of the 8-pin connectors and my RTX 2080 Ti should continue working without any issues, yes?
 
Well, clear up something for me if you can, then, Dan. I mentioned way upthread that the original articles touted space savings as well. My point was that 10 + 8 + 6 + 4 + 4 isn't saving much space over 24 + 8, especially when you take into account the space you need to leave between sockets. Was/is Intel touting the space savings, or just the electrical simplicity?

Having one spec, whatever it is, would certainly be nicer than HP, Dell, and Lenovo all making up their own proprietary connectors, not to mention Dell having at least two different ones!

As I've said more than once upthread, the +6 is a red herring. It means the board is drawing significantly more power than normal, and the equivalent traditional ATX board would be 24 + 8 + 6/8.

The footprint savings from 24 vs 10 + 4 + 4 is small (the +4/8 for EPS will be the same on either side except at the margins for low power systems, so I'm ignoring it from here on); but the primary focus of the spec isn't high end enthusiast boards that need to be able to power a dozen SATA drives (each +4 will support 2 strings of 3 connectors); it's OEM systems from Dell/etc that will at most need to power a single SATA SSD/HDD (and which will probably end up using some sort of combined power/data cable for faster assembly anyway), and which will probably end up not supporting SATA at all in the near future as the capacity of entry level m.2 drives increases and the already tiny price gap vs SATA shrinks to nothing. 24 vs 10 + 4 is a space savings, and 24 vs 10 will be a bigger one.

On the enthusiast side, despite ASRock launching it on a near flagship level product, I suspect 12VO will remain a niche option for at least a few more years and spread from SFF mITX systems upwards; it probably won't be common on high end boards until Intel/AMD drop SATA from their chipsets, at which point anyone wanting to build a storage server will need to either buy a board intended for that with PCIe-SATA controllers on board or provide connectivity via an add-on card that can also do the power conversion.
 
Great, so I should just be able to go home and unplug one of the 8-pin connectors and my RTX 2080 Ti should continue working without any issues, yes?

The PSU cable/connectors won't melt if you do so, and drawing 175W (assuming a non-OC 250W card) probably won't trigger an over-current shutoff from your PSU (assuming it's not a cheap piece of junk anyway). Your card may put itself into a reduced power mode (i.e. if the traces inside the PCB were only designed to draw ~30W from each of the 6 12V pins) or refuse to POST entirely. A GTX 260 I had did the latter to me years ago when I forgot to connect a modular cable on the PSU side, but I haven't made that mistake since so I don't know what newer cards will do.
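To put the trace-design point in rough numbers (reusing the ~30W-per-pin figure above, which is my design assumption, not a published spec):

```python
# Why the card might throttle with one 8-pin unplugged, in rough numbers.
# Figures from the post above: 250 W card, 75 W from the slot, and a PCB design
# budget of ~30 W per 12 V pin across two 8-pin plugs (6 pins total); the
# per-pin budget is an assumption, not a published spec.
card_watts, slot_watts = 250, 75
pins_per_plug, budget_watts_per_pin = 3, 30

aux_watts = card_watts - slot_watts              # 175 W through the aux plugs
per_pin_both = aux_watts / (2 * pins_per_plug)   # both 8-pins connected
per_pin_one = aux_watts / pins_per_plug          # only one 8-pin connected

print(f"Both plugs: {per_pin_both:.0f} W per pin vs ~{budget_watts_per_pin} W budget")
print(f"One plug:   {per_pin_one:.0f} W per pin vs ~{budget_watts_per_pin} W budget")
```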
 
Great, so I should just be able to go home and unplug one of the 8-pin connectors and my RTX 2080 Ti should continue working without any issues, yes?

If it weren't for the sense circuits that would actively limit power to the card, and assuming that the VRMs all have common connections, yes. Well, you would be near the limit of 3 12V connectors and thus not have any overclocking headroom, but from a technical standpoint the wires and connections are not the limiter.
 
I'll tell you in advance, "In retrospect, this was a mistake."

We just need to pump water to the board and let local hydroelectric dams convert that to solar, so it can run the windmills more efficiently.
An over-unity system that powers itself from the waste heat of the coal-stoked CPU. Problem solved...
 
I've had and seen a lot more PSUs go belly up than MBs, and voltage regulation as well as complete failure on the secondary rails has been the problem at times.

Well fine, good thing these don't have those.

As for your last comment, there have always been controller cards to add capacity or features, but with this standard you'd be limited to whatever the MB supports.

Yeah, that was my point.

I'll tell you in advance, "In retrospect, this was a mistake."

That's why it's already in use with so many computers. It was a mistake so bad that DIY building was the last holdout. The only group with the good sense to run a tower PC with six hard drives and 5 optical drives, because you never know how many DVDs of Fargo you'll have to burn at the same time.
 
Further down, the article mentions that.

What makes this perhaps a little strange is that although this new ATX12VO standard is aimed at the lower power parts of the market and the more embedded designs, ASRock has made this a full Z490 board. It supports overclocking and fast memory, it has better gold-pin plating in the DRAM slots, there’s an ALC1200 audio codec, 802.11ac Wi-Fi, and so on. That being said, the PCIe slot layout is only x16 from the CPU and x4 from the chipset, rather than an x8/x8 design.

It's not meant for Desktop grade components.

Intel must've been asleep or something when companies were already using power bricks that plugged into motherboards, rendering this thing obsolete right out of the gate.
 
That's already the case, this is nothing different.

Everything else I've already addressed.

I think he meant that in a normal PC you could throw in an HBA and have 8 or more ports and have the wiring to plug them in to power.

This new thing doesn't have a sata chain on the PSU.

You'd need a 12V-to-SATA off-board converter, which defeats the purpose of this new standard.

[Image: 12v.png]


See only 12v coming out.

Also I gotta have a laugh at them using a second power supply to power that drive on top of the 12v only PSU.
 
Further down, the article mentions that.



It's not meant for Desktop grade components.

Intel must've been asleep or something when companies were already using power bricks that plugged into motherboards, rendering this thing obsolete right out of the gate.

One motherboard design does not define the success or failure of the new standard. Nor does the one design dictate what subsequent designs will be like.

Also, what do you mean by desktop grade? There are plenty of high performance ITX and mATX boards that would not be able to be powered off of a DC jack; are those not desktop grade?
 
One motherboard design does not define the success or failure of the new standard. Nor does the one design dictate what subsequent designs will be like.

Also, what do you mean by desktop grade? There are plenty of high performance ITX and mATX boards that would not be able to be powered off of a DC jack; are those not desktop grade?
Did you bother reading the quote?
 
No you can't

PCIe x4 provides 25W, x16 75W. And if that's not enough then fine, break it out onto another board. That's alright. You're expanding functionality beyond what the board supports; that's when you should pay for it.

Not before, when you're paying for hardware you can't and won't use.
 
No you can't

You've already got 10W of 3.3V available (all PCIe cards have this per the spec), and would only need to add a 12V-to-5V converter to finish the job. As a bonus you could then run combined data/power cables from the card to the drives for less spaghetti to cable manage.
 
PCIe x4 provides 25W, x16 75W. And if that's not enough then fine, break it out onto another board. That's alright. You're expanding functionality beyond what the board supports; that's when you should pay for it.

Not before, when you're paying for hardware you can't and won't use.
NO YOU CAN'T

There isn't one HBA in existence that supports supplying power to a hard drive.
 
You've already got 10W of 3.3V available (all PCIe cards have this per the spec), and would only need to add a 12V-to-5V converter to finish the job. As a bonus you could then run combined data/power cables from the card to the drives for less spaghetti to cable manage.
So just add a converter. What an elegant solution.

They already did.

It's in current ATX power supplies.
 
There isn't one HBA in existence that supports supplying power to a hard drive.

So? Man, that's like saying you can't implement PCIe 5 motherboards because there are no PCIe 5 video cards. People will make new cards.
 
So? Man, that's like saying you can't implement PCIe 5 motherboards because there are no PCIe 5 video cards.
Is this where we have ended up? You want to add converters to take 12V and turn it into 5V from a PCI Express slot? Isn't a wire more robust than a PCI-e "finger"?

I have a case with 24 7200 RPM drives; what would the solution be for it?

As I said above, Intel hasn't solved a problem; they've just made new ones.

How about this: we leave the converter in the PSU and use cabling from that to power hard drives?

That's a great solution.
 
A GTX 260 I had did the latter to me years ago when I forgot to connect a modular cable on the PSU side, but I haven't made that mistake since so I don't know what newer cards will do.

My 560 Ti Boost would refuse to let the computer power on if the aux connector wasn't connected--at its BIOS init time it would just put up a message saying "power the machine off and plug in the connector" and stop.
 
My 560 Ti Boost would refuse to let the computer power on if the aux connector wasn't connected--at its BIOS init time it would just put up a message saying "power the machine off and plug in the connector" and stop.
GTX 1080 got upset about it too.
 