First ATX12VO Consumer Motherboard: The ASRock Z490 Phantom Gaming 4SR

What is the major point of this?

Better future-proofing, better cable management, fewer components, some cost and some power efficiency improvements, the possibility of modular power systems (particularly for servers) and a more consolidated standard.

Again, this isn't new. It's overdue by decades.
 
Better future-proofing, better cable management, fewer components, some cost and some power efficiency improvements, the possibility of modular power systems (particularly for servers) and a more consolidated standard.

Again, this isn't new. It's overdue by decades.
But if it gets all moved to the MB, then doesn't that make it more expensive and more prone to problems?
Cable management for the MB? 1 for the MB and 1 for the 4/8 pin.
Cost improvement? For the PS or MB? Or did that just shift it?
I am not sure I want all that on a server MB. Isn't that another point of failure that is harder to replace than just a PS?
 
It'll be more efficient since it's DC-to-DC, not a discrete rail. And if there's nothing drawing that power, it can be shut off entirely.

It might be more efficient than low end PSUs but since 100% efficiency is likely impossible it’s not going to be a real jump from already existing Platinum and Titanium rated power supplies.
 
It might be more efficient than low end PSUs but since 100% efficiency is likely impossible it’s not going to be a real jump from already existing Platinum and Titanium rated power supplies.
It's not about efficiency at 50-80% load, where you can already get 90% or more. It's about increasing efficiency in the 10-30% usage band, where even the best PSUs struggle to hit 80-90%. All because some government dude saw a report about 900 billion desktops sitting around 98% of the day sipping 62W from the wall when they only need 48W from the PSU, and decided he needed to solve this problem to save a bajillion watts of wasted power.
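For what it's worth, you can put rough numbers on that gap. A quick sketch using the figures above; the 90% low-load target and the 24/7 uptime are my own assumptions, not anything from the spec:

```python
# Back-of-the-envelope using the figures above (62 W at the wall,
# 48 W actually needed). The 90% target and 24/7 uptime are assumptions.
wall_watts = 62.0
load_watts = 48.0

efficiency = load_watts / wall_watts            # ~77% at this near-idle load
wasted_watts = wall_watts - load_watts          # 14 W lost as heat

# If a better low-load design hit, say, 90% efficiency at the same load:
better_wall_watts = load_watts / 0.90           # ~53.3 W from the wall
saved_watts = wall_watts - better_wall_watts    # ~8.7 W per machine

kwh_saved_per_machine = saved_watts * 24 * 365 / 1000
print(f"near-idle efficiency today: {efficiency:.1%}")
print(f"per-machine savings: {saved_watts:.1f} W, "
      f"~{kwh_saved_per_machine:.0f} kWh/year if left on 24/7")
```

Multiply that by every office desktop idling all day and the regulatory interest starts to make sense, even if it looks silly per machine.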
 
I am not sure I want all that on a server MB. Isn't that another point of failure that is harder to replace than just a PS?
Shrug. A lot of the server systems around me already have only two rails coming from the PSU: +12V and a standby (either +5VSB or +12VSB).
 
1. But if it gets all moved to the MB, then doesn't that make it more expensive and more prone to problems?
2. Cable management for the MB? 1 for the MB and 1 for the 4/8 pin.
3. Cost improvement? For the PS or MB? Or did that just shift it?

1. Same amount of stuff, if not less. A shift with a slight reduction likely.
2. Cables get to be right next to each other, not from two different components.
3. Half shift from PSU to MB, but the MB will only have enough power to run what it is limited to running, so an overall reduction. PSUs have way more than what anyone ever needs in order to cover all of their bases.

Ultimately this will lead to a reduction in costs. How much of that is realized by the consumer depends on the builders, though.

Servers are where things get really cool. You're not (always) going to put all the power supply stuff on the mobo; that doesn't have room for redundancy. So you have two 12V PSUs and two break-out boards for other hardware inside (if even necessary). Alternatively, you have two 12V PSUs for the entire rack. You could have them on separate breakers if necessary.

People are shrugging off 50 watts of saved power in this scenario. Which is crazy; can you imagine getting that kind of power efficiency added to a rack?

You don't have to be in accounting to appreciate that. You just have to work in a server room for a few days when the AC is out.
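To put a number on it, here's a quick sketch; the rack size, cooling overhead, and 24/7 duty cycle are my assumptions, not figures from anyone's spec:

```python
# Hypothetical rack: servers per rack, cooling overhead, and duty cycle
# below are illustrative assumptions only.
saved_per_server_w = 50
servers_per_rack = 40
cooling_overhead = 1.5          # each watt of IT load costs ~1.5x at the meter

rack_savings_w = saved_per_server_w * servers_per_rack * cooling_overhead
kwh_per_year = rack_savings_w * 24 * 365 / 1000
print(f"~{rack_savings_w / 1000:.1f} kW less draw per rack, "
      f"~{kwh_per_year:,.0f} kWh/year")
```

A few kilowatts per rack, around the clock, is the kind of number that shows up on both the power bill and the cooling load.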
 
Something that provides exactly zero benefit to the end-user and only negatives is “useless”. There are situations where something like this can be good, but not for higher end systems.

Many high end systems are going NVMe only with external NAS for large storage needs. No cables to run from board and you get a smaller main motherboard connector.

Maybe it's also time that hard drives should move off of 5v and run only off of 12v. Also get rid of the 3.3v so that we can stop using flimsy SATA connectors. Hard drive logic boards would probably consume less power with modern low voltage chips than old 5v chips.

The "If it ain't broke, don't fix it" mentality is exactly what stifles progress. Sometimes the progress is backwards (aka Bulldozer), but lessons learned can make something far better in the future. This is at worst a sidestep from what we currently have, but can lay the foundation for better things.
 
Better future-proofing, better cable management, fewer components, some cost and some power efficiency improvements, the possibility of modular power systems (particularly for servers) and a more consolidated standard.

Again, this isn't new. It's overdue by decades.

What better cable management? You're still going to have to run the same amount of cables except now you have to run two off the motherboard for SATA instead of one.

How so with fewer components? The PSU is still going to be there but there will be additional components needed for the motherboard.

Where is this mythical cost reduction going to come from? It's extremely unlikely PSUs will drop in price at all, especially since the vast majority of the PSU is going to be the same. Motherboards sure as hell aren't going to get any cheaper. In fact, the price will increase. It will have to increase because it's going to take quite a bit more engineering, with increased components on something which already has tons of components. Traces are going to be more difficult, engineering will be more difficult, and it's likely extra layers will be required. The amount of electrical noise from having to run extra power traces everywhere will only make things worse. All of this on a platform which already has more than enough issues with electrical noise.

I have a question for you. The one question most important to answer. What problem does this actually solve? I mean a real problem. Not an opinion or a personal want or desire. Actually, I have a second question which is almost as important. What new problems are we going to have to deal with due to this new attempt at a standard?
 
What better cable management? You're still going to have to run the same amount of cables except now you have to run two off the motherboard for SATA instead of one.
SATA drives are... on the way out. No desktop needs one. Going forward, a single NVMe drive with tiered flash (some mixture of MLC/TLC and QLC or denser) will provide all of the responsiveness, bandwidth, and capacity that your average desktop user could need.

More storage can easily be added through USB or a NAS, if needed beyond adding another NVMe drive.

Again, this is most desktop users.
 
1. Same amount of stuff, if not less. A shift with a slight reduction likely.
2. Cables get to be right next to each other, not from two different components.
3. Half shift from PSU to MB, but the MB will only have enough power to run what it is limited to running, so an overall reduction. PSUs have way more than what anyone ever needs in order to cover all of their bases.

Ultimately this will lead to a reduction in costs. How much of that is realized by the consumer depends on the builders, though.

Servers are where things get really cool. You're not (always) going to put all the power supply stuff on the mobo; that doesn't have room for redundancy. So you have two 12V PSUs and two break-out boards for other hardware inside (if even necessary). Alternatively, you have two 12V PSUs for the entire rack. You could have them on separate breakers if necessary.

People are shrugging off 50 watts of saved power in this scenario. Which is crazy; can you imagine getting that kind of power efficiency added to a rack?

You don't have to be in accounting to appreciate that. You just have to work in a server room for a few days when the AC is out.
Show me where SATA power is located on this board because I don't see it.
[Image: photo of the ASRock Z490 Phantom Gaming 4SR motherboard]

If you can't find it, then where are they going to put it? Seems like the real estate is already tapped out.

Never mind, I see it. Doesn't seem like the difference in positioning of the data and power lines is going to make cable management for SATA drives any better.
SATA drives are... on the way out. No desktop needs one. Going forward, a single NVMe drive with tiered flash (some mixture of MLC/TLC and QLC or denser) will provide all of the responsiveness, bandwidth, and capacity that your average desktop user could need.

More storage can easily be added through USB or a NAS, if needed beyond adding another NVMe drive.

Again, this is most desktop users.
You expect your average home PC user to set up what is effectively a separate computer to manage all of their large storage needs? Remember, enthusiast forums like this represent an extremely small percentage of users. SATA isn't going anywhere anytime soon.
 
You expect your average home PC user to set up what is effectively a separate computer to manage all of their large storage needs?
If their needs are truly large, then yes; but generally speaking, a USB drive or two would do for most. Even a two bay USB DAS would be fine for most, providing mirroring for some redundancy to protect against drive failure.

A two-bay Synology would provide all of the services a home user might require, including secure remote access, in a compact package.

Most people simply don't have that much data, and of the data they do have, continuous high-speed access is rarely a requirement. Where it is, there are very cost-effective solutions.

SATA isn't going anywhere anytime soon.
This is more of a prediction of the direction of technology on my part. 4TB NVMe drives are already available, and with the likes of Intel and others pushing for 'hybrid' drives that combine high-endurance, responsive flash (think Optane, if not just optimized SLC) with large amounts of lower-endurance flash like QLC that still has great read speeds, the need for internal SATA drives is really dying out.

USB 3 (any speed) is already faster than any single spinning drive. Latency is added of course, but if it's just for additional mass storage, there's very little need for internal spinners. Most laptops are already this way, and my gaming desktop, which will likely be reassembled shortly, will have no spinners in it. Big Fractal R5 with all of the bays tossed out; if I'd known that was how I'd end up using it, I would have bought a different case. Instead, it has three M.2 drives, at least two of which are NVMe.
 
Show me where SATA power is located on this board because I don't see it.
[Image: photo of the ASRock Z490 Phantom Gaming 4SR motherboard]
If you can't find it, then where are they going to put it? Seems like the real estate is already tapped out.

Never mind, I see it. Doesn't seem like the difference in positioning of the data and power lines is going to make cable management for SATA drives any better.

You expect your average home PC user to set up what is effectively a separate computer to manage all of their large storage needs? Remember, enthusiast forums like this represent an extremely small percentage of users. SATA isn't going anywhere anytime soon.
It's on the upper right edge. There are two 4-pin square connectors labeled SATA PWR. I guess you will need a special cable for that.
 
I think the basic idea here is that very few people need hard drives to be internal on desktop PCs. With OEMs like Dell, we have already seen these types of connectors for years.
 
What problem does this actually solve? I mean a real problem.

Having way more power supply than what is needed for a given machine. It's simpler, more efficient, cheaper, and sets a standard that is easier to future-proof.
 
It's on the upper right edge. There are two 4-pin square connectors labeled SATA PWR. I guess you will need a special cable for that.
Yes, I made an edit right after I posted the image. I don't see how this improves cable management when SATA data and power are still on completely opposite sides of the board. Also, is this implementation going to limit SATA to 2 available ports? That is going to be a no from me because I am still currently using 5 SATA devices in my PC.
 
I don't see how this improves cable management when SATA data and power are still on completely opposite sides of the board.

That is a very good point. They should put these close together.


Also, is this implementation going to limit SATA to 2 available ports?

Maybe not. I can see the break-out cable having up to 3 SATA power connectors each. Although this motherboard appears to have 2 SATA data connections, unless the connector on the right side close to the ASRock chipset heatsink is a 2-port right-angle SATA data connector. I can't tell from the picture.
 
Yes, I made an edit right after I posted the image. I don't see how this improves cable management when SATA data and power are still on completely opposite sides of the board. Also, is this implementation going to limit SATA to 2 available ports? That is going to be a no from me because I am still currently using 5 SATA devices in my PC.

First iteration, not completely fair to judge the new spec on just one motherboard.
 
Yes, I made an edit right after I posted the image. I don't see how this improves cable management when SATA data and power are still on completely opposite sides of the board. Also, is this implementation going to limit SATA to 2 available ports? That is going to be a no from me because I am still currently using 5 SATA devices in my PC.
Each 4-pin connector can power 4 drives (8 total for this board). There's a 6-pin option in the spec that can power 6 drives.
 
Sure. Then they added a six-pin and a pair of small 4-pins, and the spec doesn't even get rid of the need for an EPS connector. Nice savings!

Admittedly, on a cheap Dell with a 65W CPU and no SATA ports, there will be some saved space. Ugh.

The 10-pin connector provides 50% more useful (12V) power than the old 24-pin model. Powering high-end CPUs while eliminating the EPS connector would require moving all the 12V/ground pairs from the EPS into the new connector, along with the extra 12V needed to run multiple GPUs each taking 75W from the mobo. You'd end up with a new connector as big as the old 24-pin one, with a correspondingly stiff, more difficult-to-route cable, and you'd pay the price for the bigger connector and wires on all systems, not just the high-end ones that need it.

The increased capacity of the new 10-pin connector will let low-end systems run entirely off it, not needing a 4-pin EPS or 6-pin PCIe for supplemental board power. Meanwhile, if this board had been done conventionally, in addition to the 24-pin ATX connector it would have had either 2x 8-pin EPS connectors, or 1x EPS and 1x PCIe. Counting the pair of small 4-pin accessory connectors, the total board space footprint is minimally reduced; but maxed-out desktops aren't the initial target for it.

Longer term it's expected that SATA will wither away, and the need for the 3.3/5V legacy connectors will go away too. The OEM systems this will mostly be used in for the next few years will probably never include them to begin with, because bottom-end NVMe SSDs have reached near price parity with SATA models.
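The "50% more 12V" figure falls out of simple pin counting. A small sketch; the per-pin current is my assumption (roughly 8A per Mini-Fit terminal, which lines up with the 288W main-connector figure quoted later in the thread):

```python
# Where the "50% more 12V" claim comes from. The ~8 A per-pin rating is an
# assumption, chosen to match the 288 W main-connector figure cited later.
volts = 12
amps_per_pin = 8

old_24pin_12v_pins = 2      # legacy ATX 24-pin carries only two +12V pins
new_10pin_12v_pins = 3      # ATX12VO 10-pin carries three +12V pins

old_watts = old_24pin_12v_pins * volts * amps_per_pin   # 192 W
new_watts = new_10pin_12v_pins * volts * amps_per_pin   # 288 W
print(f"24-pin: {old_watts} W of 12V, 10-pin: {new_watts} W "
      f"(+{new_watts / old_watts - 1:.0%})")
```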
 
Many high end systems are going NVMe only with external NAS for large storage needs. No cables to run from board and you get a smaller main motherboard connector.

Maybe it's also time that hard drives should move off of 5v and run only off of 12v. Also get rid of the 3.3v so that we can stop using flimsy SATA connectors. Hard drive logic boards would probably consume less power with modern low voltage chips than old 5v chips.

The "If it ain't broke, don't fix it" mentality is exactly what stifles progress. Sometimes the progress is backwards (aka Bulldozer), but lessons learned can make something far better in the future. This is at worst a sidestep from what we currently have, but can lay the foundation for better things.

Change solely for the sake of change is bad. If you are making changes, there needs to be a real, actual benefit. I'm seeing exactly none for the high-end user. I only see complications and making motherboards even more prone to failure.
 
Sure, and like everything, it's a trade-off: you need however much space and however many components are required for that support. In Dell's volumes, will that outweigh the savings on the PSU? I dunno. Plus, Dell is already using a proprietary 10-pin power cable (or 8-pin; I'm not going to crack open my work PC tonight to check). Moving to this new ATX12VO is another cost. Maybe they'll judge it worth it, especially in the server space.

Not just Dell. HP and Lenovo are also doing it. But they've all got slightly different implementations. Intel's primary objective here was to unify the design to eliminate proprietary parts and extend the BOM savings down to smaller OEMs who are less able to design custom hardware.

If they had designed something for the general PC market, my choice would have been a 12-pin connector that added a 5V, a 3.3V, and a fourth ground wire, leaving the DC-DC converters, the multi-voltage Molex (the 12VO spec includes a 2-pin, 12V-only version of the Molex accessory cable), and SATA power on the PSU as optional connectors. That would allow for simpler PSU adapters and immediately free up a bit of board space on crowded enthusiast boards, while still letting entry-level OEMs build systems with single-cable PSUs because they don't install anything needing external power. But this wasn't a clean-sheet design on Intel's part, just a codification/standardization of what the big OEMs were already doing.

Edit: Although really, a ~12-pin "transitional" design like I suggested is something that should've been done about 15 years ago, when mobos first went to running almost everything off of 12V and the amount of current delivered by the legacy 3.3/5V wires plunged to a trickle.
 
It'll be more efficient since it's DC-to-DC, not a discrete rail. And if there's nothing drawing that power, it can be shut off entirely.

Except for some old, really low-end designs, just about any PSU on the market these days does 3.3/5V via DC-DC already. It's more efficient, avoids cross-load voltage problems, and as lower-voltage needs have declined, has a lower BOM cost than multiple rails.
 
Longer term it's expected that SATA will wither away, and the need for the 3.3/5v legacy connectors will go away too.

Eh, people said the same thing about spinning platter drives and look at all the hubbub going on today with the SMR stuff.

It'll be cheap to put four SATA controllers on a board, along with a 4-pin power cable somewhere. The only way SATA goes away is if some other standard, like internal USB 4/5/6... etc., takes over for power and data.

Which would be super-cool, but I don't know how cost-effective. No, USB controllers have to be dirt cheap, that just seems like a good idea to me. Not saying it's going to be the next thing, just wishful thinking on my part.

Change solely for the sake of change is bad.

It's not, though. It's about improving costs and efficiencies.
 
It's not going to improve cost. PSUs aren't that expensive as is and it WILL make motherboards cost more. As for efficiency, we'll see. I remain unconvinced.

A 5-pack of 12V-to-5V, 3A buck converters is as low as $8 on Amazon. I highly doubt implementing 5V and 3.3V converters will cost motherboard manufacturers more than $10 for the amperage needed, and most likely significantly less.
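For rough sizing, here's a sketch of how far a few of those stages would go; the load figures are illustrative assumptions (a handful of SATA drives plus USB peripherals), not spec values:

```python
# Rough sizing sketch. The 5V load figures below are illustrative
# assumptions, not numbers from the ATX12VO spec.
loads_5v_amps = {
    "4x SATA drive logic": 4 * 0.7,   # assume ~0.7 A of 5V per spinning drive
    "USB peripherals":     2.0,
}
total_5v_amps = sum(loads_5v_amps.values())              # ~4.8 A

converter_amps = 3.0
converters_needed = -(-total_5v_amps // converter_amps)  # ceiling division
print(f"{total_5v_amps:.1f} A of 5V -> {int(converters_needed)}x 3A buck stages")
```

Even doubling those numbers, you're only talking about a few extra converter stages, which is in line with the sub-$10 point above.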
 
Except for some old, really low-end designs, just about any PSU on the market these days does 3.3/5V via DC-DC already. It's more efficient, avoids cross-load voltage problems, and as lower-voltage needs have declined, has a lower BOM cost than multiple rails.
Eh, people said the same thing about spinning platter drives and look at all the hubbub going on today with the SMR stuff.

It'll be cheap to put four SATA controllers on a board, along with a 4-pin power cable somewhere. The only way SATA goes away is if some other standard, like internal USB 4/5/6... etc., takes over for power and data.

Which would be super-cool, but I don't know how cost-effective. No, USB controllers have to be dirt cheap, that just seems like a good idea to me. Not saying it's going to be the next thing, just wishful thinking on my part.

Spinning rust on SATA/SAS for storage servers will probably be around for another decade; but 5 years from now I don't expect to see SATA anywhere else. It'll follow the same path as legacy PCI, PATA, and Floppy connectors did. They'll stop being used on most OEM boards, then Intel/AMD will remove support from upcoming chipsets, then over the next year or two the number of general purpose enthusiast boards with them will trend to zero. The only question is if dedicated enthusiast storage server boards will still exist, or if anyone wanting to make a DIY NAS will need to buy a standalone SATA card to get the connections.
 
A 5-pack of 12V-to-5V, 3A buck converters is as low as $8 on Amazon. I highly doubt implementing 5V and 3.3V converters will cost motherboard manufacturers more than $10 for the amperage needed, and most likely significantly less.

It would need to be a hell of a lot less than that not to have a big impact on cost.
 
The 10-pin connector provides 50% more useful (12V) power than the old 24-pin model. Powering high-end CPUs while eliminating the EPS connector would require moving all the 12V/ground pairs from the EPS into the new connector, along with the extra 12V needed to run multiple GPUs each taking 75W from the mobo. You'd end up with a new connector as big as the old 24-pin one, with a correspondingly stiff, more difficult-to-route cable, and you'd pay the price for the bigger connector and wires on all systems, not just the high-end ones that need it.

The increased capacity of the new 10-pin connector will let low-end systems run entirely off it, not needing a 4-pin EPS or 6-pin PCIe for supplemental board power. Meanwhile, if this board had been done conventionally, in addition to the 24-pin ATX connector it would have had either 2x 8-pin EPS connectors, or 1x EPS and 1x PCIe. Counting the pair of small 4-pin accessory connectors, the total board space footprint is minimally reduced; but maxed-out desktops aren't the initial target for it.

Longer term it's expected that SATA will wither away, and the need for the 3.3/5V legacy connectors will go away too. The OEM systems this will mostly be used in for the next few years will probably never include them to begin with, because bottom-end NVMe SSDs have reached near price parity with SATA models.
That board still has an 8-pin EPS. High-end boards are still going to need up to two 8-pin EPS and two 6+2 PCI-E connectors for supplemental power. Nothing I've seen in the specification points to EPS and PCI-E supplemental power going away, unless I missed it.
 
So now all power supplies will come with a modular 24 pin cable?

Or do you have to tuck it?

Also have we moved from 20+4 to 10+6+4+4?

I'm not so good on the math but the latter is more.
 
That board still has an 8-pin EPS. High-end boards are still going to need up to two 8-pin EPS and two 6+2 PCI-E connectors for supplemental power. Nothing I've seen in the specification points to EPS and PCI-E supplemental power going away, unless I missed it.

For high-end boards it won't go away, although at the margin the new 10-pin connector providing more 12V than the 24-pin one did may allow systems that just barely needed one more supplemental connector to reduce the number needed. Low-end boards/systems - which massively outnumber the ones in the gaming systems we build - won't need extra power for PCIe and can benefit from using a smaller EPS connector, or none at all for SFF systems without a discrete GPU.
 
It would need to be a hell of a lot less than that not to have a big impact on cost.

If an Amazon seller can sell those for $8, the component costs are probably less than 50 cents to an OEM.

That board still has an 8-pin EPS. High-end boards are still going to need up to two 8-pin EPS and two 6+2 PCI-E connectors for supplemental power. Nothing I've seen in the specification points to EPS and PCI-E supplemental power going away, unless I missed it.

EPS requirements will depend on the CPU in question and the overclocking design. The main connector can supply up to 288 watts. 75 watts for each PCI-E card, 10-20 watts for NVMe, generous 50 watts for USB power, 50 watts for SATA power, and 30 watts for board power. A single PCI-E card system will not exceed the main connector power design, and most dual PCI-E card systems won't either. Each PCI-E connector can supply an additional 288 watts, so you would need to have a massive system to require the usage of 2 PCI-E connectors.
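Adding up that budget makes the point clearer. All of the figures below are the rough estimates from the paragraph above, not limits from the spec:

```python
# Tallying the rough budget from the post above (poster's estimates,
# not spec limits).
main_connector_w = 288

single_gpu_system = {
    "PCI-E slot (1 card)": 75,
    "NVMe":                20,
    "USB":                 50,
    "SATA power":          50,
    "board/misc":          30,
}
used = sum(single_gpu_system.values())   # 225 W
print(f"single-card build: {used} W used, "
      f"{main_connector_w - used} W of headroom")

# A second 75 W slot nominally pushes the total to 300 W, just past 288 W,
# which is why "most" (not all) dual-card systems still fit: real-world
# slot draw is usually well under the 75 W ceiling.
```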
 
So now all power supplies will come with a modular 24 pin cable?

Or do you have to tuck it?

Also have we moved from 20+4 to 10+6+4+4?

I'm not so good on the math but the latter is more.

20+4 + 8 EPS + 6/8 (onboard PCIe or 2nd EPS) vs 10 + 8 + 6 in and 4+4 out.
That's 38/40 vs 24+8. And the last 8 are indirectly counterbalanced by removing SATA cables from the PSU.

The total is much less because the 24-pin was full of 5/3.3V wires that have barely carried any power since the Pentium 4 and Athlon XP moved CPU power to the 12V rail.

For the boring OEM systems that this is initially primarily intended for, it'll be:

20+4 + 4/8 (EPS) vs 10 + 0/4/8 (optional EPS) + maybe 4 (if the system uses a SATA SSD, or possibly none if it uses M.2).
That's 28/32 vs 10-22.
 
20+4 + 8 EPS + 6/8 (onboard PCIe or 2nd EPS) vs 10 + 8 + 6 in and 4+4 out.
That's 38/40 vs 24+8. And the last 8 are indirectly counterbalanced by removing SATA cables from the PSU.

The total is much less because the 24-pin was full of 5/3.3V wires that have barely carried any power since the Pentium 4 and Athlon XP moved CPU power to the 12V rail.

For the boring OEM systems that this is initially primarily intended for, it'll be:

20+4 + 4/8 (EPS) vs 10 + 0/4/8 (optional EPS) + maybe 4 (if the system uses a SATA SSD, or possibly none if it uses M.2).
That's 28/32 vs 10-22.

Well a standard board is 24+8 or 24+4

This board is 10+8+8+4+4

We are talking motherboard connections.
 
Well a standard board is 24+8 or 24+4

This board is 10+8+8+4+4

We are talking motherboard connections.


In addition to this increase in number of motherboard connections, this will never be adopted by a big OEM like Dell.

You want to know why? Because Dell has already gone 8-pin ATX on the majority of their motherboards. It's proprietary, and it doesn't seem to be affecting their finances negatively.

They've been doing this for more than ten years now!
 
If an Amazon seller can sell those for $8, the component costs are probably less than 50 cents to an OEM.



EPS requirements will depend on the CPU in question and the overclocking design. The main connector can supply up to 288 watts. 75 watts for each PCI-E card, 10-20 watts for NVMe, generous 50 watts for USB power, 50 watts for SATA power, and 30 watts for board power. A single PCI-E card system will not exceed the main connector power design, and most dual PCI-E card systems won't either. Each PCI-E connector can supply an additional 288 watts, so you would need to have a massive system to require the usage of 2 PCI-E connectors.
Are we confusing things here? 8-pin PCI-E = 150W, 6-pin PCI-E = 75W, PCI-E slot = 75W. An RTX 2080 Ti requires the use of two 8-pin PCI-E supplementary power cables.
 
In addition to this increase in number of motherboard connections, this will never be adopted by a big OEM like Dell.

You want to know why? Because Dell has already gone 8-pin ATX on the majority of their motherboards. It's proprietary, and it doesn't seem to be affecting their finances negatively.

After BTX failed, the rest of the world transitioned to their own thing. That's why this is DOA.
TBF I like it for embedded solutions or ITX. But there have been DC power jacks on motherboards for ages.

This is just Intel trying to be ahead of a curve that no one wants.
 
Having way more power supply than what is needed for a given machine. It's simpler, more efficient, cheaper, and sets a standard that is easier to future-proof.

So, it doesn't solve any problems. It's a solution in search of a problem. I've been saying that ever since this spec was first posted here, and no one has shown me otherwise yet.

And not once has anyone shown how it's supposed to be cheaper. I've posted a number of times about multiple issues which will increase costs on the motherboard side and no one has even attempted to disprove what I've said. I'm guessing no one will touch it because they know it's not going to reduce costs anywhere, it's just going to increase costs despite multiple people preaching cost savings.
 
I've already said how I think it will reduce costs. You can choose not to believe it; that's fine.

We can see OEMs doing this already, and it's bone-stock practice for a lot of non-desktop computers including laptops. If that alone isn't enough to convince you that it does these things, I can see why you don't want to hear it from some rando on the internet.
 
Well a standard board is 24+8 or 24+4

This board is 10+8+8+4+4

We are talking motherboard connections.

You're not comparing equivalent boards.

For standard boards it's 24 + 8 in vs 10 + 8 in and some number of 4-pin outs (for the moment I'll assume normal is 2 out, but with only one board using the design out there, that's very much TBD), so you're at 32 vs 26. (Optionally as few as 18 if you don't need to support SATA.)

The 6 (not 8) for this ASRock board is extra power above standard and would need to be replicated on an equivalent old board.

A 12VO 10 + 8 board has more power than an old-fashioned 24 + 8 one, because the 24-pin is full of useless low-voltage wires and only has two 12V wires vs the 10-pin's three.

To have the same amount of power with a 24-pin connector, a board equivalent to this one would need a 24-pin and 2x 8-pin connectors, for 40 pins total vs 32.

The 10-pin connector throws out 15 obsolete wires and now-excess grounds and adds a third 12V; adding 8 pins to still provide low-voltage power for SATA still leaves it 6 pins ahead.
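If it helps to see the tallies in one place, here's a quick sketch; the two 4-pin SATA power outs follow this ASRock board, and other 12VO boards may differ:

```python
# Pin tallies from the comparison above. The "two 4-pin SATA power outs"
# figure follows this ASRock board; other 12VO boards may differ.
legacy_standard    = 24 + 8                # 32 pins in
atx12vo_standard   = 10 + 8 + 2 * 4        # 26 pins total (18 in + 8 out)

legacy_equivalent  = 24 + 2 * 8            # 40 pins to match this board's 12V budget
atx12vo_this_board = 10 + 8 + 6 + 2 * 4    # 32 pins on the ASRock Z490 PG 4SR

print(f"standard board: {legacy_standard} pins vs {atx12vo_standard}")
print(f"equal 12V budget: {legacy_equivalent} pins vs {atx12vo_this_board}")
```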
 
You're not comparing equivalent boards.

For standard boards it's 24 + 8 in vs 10 + 8 in and some number of 4-pin outs (for the moment I'll assume normal is 2 out, but with only one board using the design out there, that's very much TBD), so you're at 32 vs 26. (Optionally as few as 18 if you don't need to support SATA.)

The 6 (not 8) for this ASRock board is extra power above standard and would need to be replicated on an equivalent old board.

A 12VO 10 + 8 board has more power than an old-fashioned 24 + 8 one, because the 24-pin is full of useless low-voltage wires and only has two 12V wires vs the 10-pin's three.

To have the same amount of power with a 24-pin connector, a board equivalent to this one would need a 24-pin and 2x 8-pin connectors, for 40 pins total vs 32.

The 10-pin connector throws out 15 obsolete wires and now-excess grounds and adds a third 12V; adding 8 pins to still provide low-voltage power for SATA still leaves it 6 pins ahead.
6-pin connectors are optional on current ATX boards.

My bad for seeing it as 8.

So it's a tie, 32 and 32. Or on some current ATX boards, 28 vs 32.
 