Should you pay considerably more for high-end motherboards?

Yeah, I hate how high-end boards have features I don't care about, like fuckin'... built-in WiFi. Like, bitch, it's a fucking desktop. Who does competitive multiplayer gaming over WiFi, goddammit?
 
The only problem with the above post is that the price of a motherboard of a relative quality comparable to a $150 board of a decade ago has crept up significantly. You see, today's mainstream CPUs are much, much harder on a motherboard's VRMs than a six-core HEDT CPU of a decade ago ever was. You'd really need to spend $220 just to get a motherboard that matches yesteryear's $120 board in relative quality.

This is exactly why my most recent motherboards cost nearly $300. But while the motherboard in my previous AMD system had RGB LEDs on the I/O enclosure cover, my current system's Intel motherboard has no onboard RGB LEDs but will accommodate RGB memory DIMMs and RGB fans.
Exactly. I thought I would be OK spending $279 on a board to satisfy my nerdy needs. Nope, I was wrong. I now need to spend $500 to get a fully equipped board with all the latest features, and that isn't even considered high end, more like mid-to-high end. At least I know I can connect any M.2, any GPU, any USB, any speed of internet, and overclock my 12th-gen 12700KF to 5.2 GHz all-core with ease, or grab a current 13th-gen CPU and clock it to the absolute maximum, pushing 6 GHz on a core or two with the "instant 6 GHz" mode in the BIOS. And that's before the upcoming 13900KS, which should clock higher than any other CPU, with higher clocks on more cores. The VRM and heatsink package on higher-end boards like the Z790 Aorus Master can handle two or three times the maximum power a 13900K can push, so the board is essentially bombproof.
 
In answer to OP's question, I'd say it's worth buying the lowest-end board for the highest-end chipset. In my experience, the higher end you go, the more unnecessary features, blingy lights, racing stripes, and cheap plastic shrouds there are, and the price climbs steeply. Moreover, paying top dollar for better VRMs that will let you overclock 2-3% higher is simply not worth your money. That said, if you step down an entire chipset tier, you'll get a materially reduced feature set.

Get an X/Z670 Asus Prime and be done with it.
 
Overclocking also includes raising power limits and boost settings, even if you don't increase the peak clock speed. That still provides a performance uplift without requiring stability testing. You need a high-end board for that.

No you don't. You just have to do some research and see what boards support changing these (see the B660 Mortar board I was talking about earlier).
 
In answer to OP's question, I'd say it's worth buying the lowest-end board for the highest-end chipset. In my experience, the higher end you go, the more unnecessary features, blingy lights, racing stripes, and cheap plastic shrouds there are, and the price climbs steeply. Moreover, paying top dollar for better VRMs that will let you overclock 2-3% higher is simply not worth your money. That said, if you step down an entire chipset tier, you'll get a materially reduced feature set.

Get an X/Z670 Asus Prime and be done with it.

Eh... it depends. Some of the real bargain-basement boards don't have good VRMs at all and won't allow a sustained boost. ASRock and even the cheaper "P" variants of the Asus Prime boards have been suspect at times.

The MSI Z690-A board is a good example of what you're talking about though. Great VRMs and budget price.
 
I usually buy the 2nd or 3rd best mobo. I did buy the top-of-the-line Asus RIVE, and that was the last time I bought a top-of-the-line one. I'd only buy top of the line again if I had no other options.
 
Before, things were cheaper and simpler. Then I bought a really good board, and a decade later it’s still ticking.

Now, for my main workstations, I buy very good boards. Generally HEDT. For consumer I go middle of the pack (gaming/secondary systems).

My last two workstation boards carried $800 price tags.
 
No you don't. You just have to do some research and see what boards support changing these (see the B660 Mortar board I was talking about earlier).
That, and looking over the actual VRM setup (basically what Buildzoid does).
In some cases, the vendor just adds some man jewelry and upcharges.
In others, the difference is irrelevant (e.g. if you'll only ever need 250 A, there's no sense in paying for a (technically) better VRM that delivers 1.21 JigaAmps).
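For a rough sense of what "only needing 250 A" looks like next to a beefy VRM, here's a back-of-the-envelope sketch in Python. The wattage, voltage, stage count, and per-stage rating below are illustrative assumptions, not measured specs for any particular board or CPU:

cpu_power_w = 300        # assumed sustained package power (W)
vcore_v = 1.25           # assumed core voltage (V)
cpu_current_a = cpu_power_w / vcore_v       # current the CPU actually draws, ~240 A

vrm_stages = 16          # hypothetical power-stage count on a high-end board
per_stage_a = 70         # hypothetical per-stage current rating (A)
vrm_rated_a = vrm_stages * per_stage_a      # 1120 A on paper

print(f"CPU draw ~{cpu_current_a:.0f} A vs VRM rating {vrm_rated_a} A "
      f"({vrm_rated_a / cpu_current_a:.1f}x headroom)")

Once the paper rating is already several times anything the CPU can draw, paying extra for an even bigger number buys nothing.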
 
Really cheap boards can also work fine a decade later.

When it comes to longevity, the question easily becomes one of price, as prices keep going up.

Are you better off now (and on average over the span of use) with that $400 board bought in 2012, or with a $160 board in 2012 and a $260 board in 2017? (Or $130/$150/$170 boards in 2012/2016/2022.)

For situations where swapping out a computer is a really big deal, it can make a lot of sense, but I'm not sure the longevity argument works out that well; it's better to have needed those features right from the start for them to make sense.
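To put quick numbers on that trade-off, here's a tiny Python sketch using the prices from the post above (nominal dollars, ignoring resale value and inflation):

buy_once = 400                  # one $400 board in 2012
buy_twice = 160 + 260           # $160 in 2012 plus $260 in 2017
buy_thrice = 130 + 150 + 170    # $130/$150/$170 boards in 2012/2016/2022

print(buy_once, buy_twice, buy_thrice)   # 400 420 450

The totals land in the same ballpark, so the real question is whether one expensive board gives you more over its lifetime than two or three fresher, cheaper ones.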
 
Really cheap boards can also work fine a decade later.

When it comes to longevity, the question easily becomes one of price, as prices keep going up.

Are you better off now (and on average over the span of use) with that $400 board bought in 2012, or with a $160 board in 2012 and a $260 board in 2017? (Or $130/$150/$170 boards in 2012/2016/2022.)

For situations where swapping out a computer is a really big deal, it can make a lot of sense, but I'm not sure the longevity argument works out that well; it's better to have needed those features right from the start for them to make sense.
Some yes, some no. As for the pricing - not quite how my budgeting works, or my use cases work. I build a high end workstation and high end gaming system every 3-4 years, and then some lower-end "used" kit to fill in gaps. After that 3 year mark, high-end stuff drops down to the tier-2 uses, tier-2 to tier-3, and so on - till it dies, or has no easy "drop down" to fit into. Current workstation is a 3960X w. 128G on a Zenith II Extreme Alpha - it's showing no signs of slowing down and is approaching the 30 month old mark, so it'll keep going (unless Storm Peak is amazing) - same for the Gaming box, which is a 10700K w. 3090 on custom water (same age, plus a month).

Thus for me, it's "will a $400 board bought in 2012 serve well as a $400 board in 3 use cases during that decade, or would a $160 board do just as well?" - and the answer to that tends to be that the $400 board has a lot more flexibility in how it can be used (good audio solutions, better network cards (I run 10G), better connectivity (thunderbolt covers a multitude of sins), bifurcation, better VRMs for long-term sustained OCs, etc).

My previous was a x370 that got upgraded to x399 and is now a Plex / media box, at 5 years old, and is still running strong. I just scored a huge deal on another Zen2 TR bundle, but I may not even bother with the upgrade as the 1950X is doing just fine. Dunno. Previous gaming system went to the wife - it's 6 years old and kicking strong, good audio setup and the 1080 is fine for her use cases.

Basically, I use my systems for a very long time - the more expensive boards give me more options on how to do so. A basic board won't.
 
You use 10-gig right away, and there's no cheap board with it, so the question becomes more: is it better to keep re-buying 10-gig and good audio with each board, or to invest once in a 10G card and a DAC?

Also, what does the B box do? For the Plex/media box, for example, you probably don't want to run a sustained OC on it, and there's a chance that with the money saved you could have built a better Plex/media machine (one that uses less electricity, has a nice iGPU with modern codec support, etc.). A lot of people run Plex/media boxes on very cheap, very old motherboards (I do). Five years old is kind of "new".

There is comfort and fun in it, too (and it saves trouble).
 
You use 10-gig right away, and there's no cheap board with it, so the question becomes more: is it better to keep re-buying 10-gig and good audio with each board, or to invest once in a 10G card and a DAC?

Also, what does the B box do? For the Plex/media box, for example, you probably don't want to run a sustained OC on it, and there's a chance that with the money saved you could have built a better Plex/media machine (one that uses less electricity, has a nice iGPU with modern codec support, etc.). A lot of people run Plex/media boxes on very cheap, very old motherboards (I do). Five years old is kind of "new".

There is comfort and fun in it, too (and it saves trouble).
Buying 10G cards eats a slot, which means one less slot for SAS controllers or Optane cards. Buying a DAC means one less USB port for other things, which are sometimes at a premium in my setups, and it's something I have to buy multiples of instead of just leaving it built in. Also, I'm generally using 2x10G pretty fast, and that gets expensive if one isn't built in to start with (it doesn't save you anything, especially since ESXi is picky about cards).

My Plex server is a 1950X with 64G. It runs a PFsense router, domain controller, nested ESXi host for test/dev, a Virtual Center VM, Plex, has a 2080TI for occasional big-screen gaming, and serves as the central control center for the house and lab. It uses all 16 cores and is generally at 96% of RAM - so no, a smaller box wouldn't do the job. The upgrade plan for it is a 3965X (3970x with one bad core), 128G, and a pair of optane drives for a new workload coming on that needs the performance (clustered cassandra DB).
Hoplite is a 3950X with 64G - primarily runs server workloads, but a script will flip it over to a VR system. That one I fed a 10G card, as finding an X570 board with 10G was lower priority. It has a high-end USB controller in it for the VR setup (thank you, X570 USB bugs), a SAS controller, and a 10G card. Need to finally yank that USB controller out and put Optane in.
Legion is a 10980XE with 128G - summer system (workstation and gaming), runs server workloads during the winter/spring (and some of fall). Board is an X299 Designare 10G (2x 10G) - about to get 4 new NVMe drives (Optane) to do some NVMEoF tinkering :D

Yes, I'm an edge case. I just retired my old Phenom II a year ago...
 
Buying 10G cards eats a slot, which means one less slot for SAS controllers or Optane cards. Buying a DAC means one less USB port for other things, which are sometimes at a premium in my setups, and it's something I have to buy multiples of instead of just leaving it built in.
You sound like you actually use those features from the get-go and would use them regardless of longevity (and I would imagine once it becomes a B/C box all those issues disappear anyway). It's almost a different conversation.
 
You sound like you actually use those features from the get-go and would use them regardless of longevity (and I would imagine once it becomes a B/C box all those issues disappear anyway). It's almost a different conversation.
B-C boxes sometimes do audio, sometimes not - but they definitely start using slots like mad (part of why I tend to buy HEDT - my old x99 box has a 10G card, basic GPU, two SAS controllers, and if it could take one more, I'd feed it an optane drive for cache :p). I'm definitely an edge case - but having features gives me reason to find uses for those features, and once found, it's hard to let go of them (one of the reasons I'm praying for next-gen HEDT still). I'm honestly happy so many things get built into motherboards now - it keeps the external parts down to just what I need, and I can rely on quality parts on the board (if you buy them as part of it) that will last the life of the board. A good built-in DAC is a good built-in DAC, and they do exist, and that means no matter where that system goes, it has that capability. A cheap one means that it might suddenly need an external device (or card) to accomplish a task. All about flexibility. But again - I'm an edge case. Most people have one or two systems - there are 8 in my game room alone.

I just got given a Nuc 12 Extreme to do some testing on, and I'm already wondering if the two thunderbolt ports are enough (drive enclosure, PCIE enclosure for compatible 10G dual port card, second drive enclosure). And I'm trying to get FreeBSD running on it right now which is HILARIOUS, but that's a separate story (might need a third thunderbolt drive just for the install).
 
B-C boxes sometimes do audio, sometimes not - but they definitely start using slots like mad (part of why I tend to buy HEDT - my old x99 box has a 10G card, basic GPU, two SAS controllers, and if it could take one more, I'd feed it an optane drive for cache :p). I'm definitely an edge case - but having features gives me reason to find uses for those features, and once found, it's hard to let go of them (one of the reasons I'm praying for next-gen HEDT still). I'm honestly happy so many things get built into motherboards now - it keeps the external parts down to just what I need, and I can rely on quality parts on the board (if you buy them as part of it) that will last the life of the board. A good built-in DAC is a good built-in DAC, and they do exist, and that means no matter where that system goes, it has that capability. A cheap one means that it might suddenly need an external device (or card) to accomplish a task. All about flexibility. But again - I'm an edge case. Most people have one or two systems - there are 8 in my game room alone.

I just got given a Nuc 12 Extreme to do some testing on, and I'm already wondering if the two thunderbolt ports are enough (drive enclosure, PCIE enclosure for compatible 10G dual port card, second drive enclosure). And I'm trying to get FreeBSD running on it right now which is HILARIOUS, but that's a separate story (might need a third thunderbolt drive just for the install).
Agree, and this is another reason I don't think high-end consumer-platform motherboards are a good deal. All these modern platforms lack slots. Gone are the days when I could buy a board with 8 slots to plug stuff into. Today, I get perhaps 4... and only 2 or 3 are usable given how large GPUs have gotten and where the toasty M.2 slots are located. A very few of them do embed exciting features (isolated sound circuitry or a DAC; 10GbE controllers), but the price premium for what should be a lower-cost, integrated solution is such that adding cards would be less expensive--and the cards at least carry over to new platforms. In our recent upgrade of my wife's PC to a 13900K, it was nice being able to just move a dual-port 10GbE Intel card from her old PC to the new one, without having to restrict motherboard selection to the tiny, expensive minority with onboard 10GbE.

I'd like to see 'high end' platforms have 'high end' I/O flexibility as was the case in the past, but I suspect those days are gone. TBH even my TRX40 system has very little PCIe expansion-card flexibility despite the enormous number of PCIe lanes at its disposal. The physical boards simply lack the slots.

I miss having this kind of I/O:

https://www.bhphotovideo.com/c/product/1625507-REG/asus_pro_ws_wrx80e_sage_se.html
 
Agree, and this is another reason I don't think high-end consumer-platform motherboards are a good deal. All these modern platforms lack slots. Gone are the days when I could buy a board with 8 slots to plug stuff into. Today, I get perhaps 4... and only 2 or 3 are usable given how large GPUs have gotten and where the toasty M.2 slots are located. A very few of them do embed exciting features (isolated sound circuitry or a DAC; 10GbE controllers), but the price premium for what should be a lower-cost, integrated solution is such that adding cards would be less expensive--and the cards at least carry over to new platforms. In our recent upgrade of my wife's PC to a 13900K, it was nice being able to just move a dual-port 10GbE Intel card from her old PC to the new one, without having to restrict motherboard selection to the tiny, expensive minority with onboard 10GbE.

I'd like to see 'high end' platforms have 'high end' I/O flexibility as was the case in the past, but I suspect those days are gone. TBH even my TRX40 system has very little PCIe expansion-card flexibility despite the enormous number of PCIe lanes at its disposal. The physical boards simply lack the slots.

I miss having this kind of I/O:

https://www.bhphotovideo.com/c/product/1625507-REG/asus_pro_ws_wrx80e_sage_se.html
I do see the point in "move over the card" - but a lot of the time that system still needs said card (since it's sticking around, just doing something ~new~), so I'd be constantly buying dual- or single-port 10G cards, or DACs, etc. If it's built in, it just goes where the box goes - and things like network cards especially are ALWAYS needed.
 
Agree, and this is another reason I don't think high-end consumer-platform motherboards are a good deal. All these modern platforms lack slots. Gone are the days when I could buy a board with 8 slots to plug stuff into. Today, I get perhaps 4... and only 2 or 3 are usable given how large GPUs have gotten and where the toasty M.2 slots are located. A very few of them do embed exciting features (isolated sound circuitry or a DAC; 10GbE controllers), but the price premium for what should be a lower-cost, integrated solution is such that adding cards would be less expensive--and the cards at least carry over to new platforms. In our recent upgrade of my wife's PC to a 13900K, it was nice being able to just move a dual-port 10GbE Intel card from her old PC to the new one, without having to restrict motherboard selection to the tiny, expensive minority with onboard 10GbE.

I'd like to see 'high end' platforms have 'high end' I/O flexibility as was the case in the past, but I suspect those days are gone. TBH even my TRX40 system has very little PCIe expansion-card flexibility despite the enormous number of PCIe lanes at its disposal. The physical boards simply lack the slots.

I miss having this kind of I/O:

https://www.bhphotovideo.com/c/product/1625507-REG/asus_pro_ws_wrx80e_sage_se.html
Oh, and I'd buy a Sage, but that was just Zen3 - I want Zen4/5 Threadripper, plz!
 
Hey lopoetve what do you use all the hardware for?
By Tier / hostname / time of year:
Room 1:
T1 - Forge (3960X/128G/6800XT) - Nested virtualization + windows workstation (photos/video/office tasks) + secondary gaming / Offline in summer (650W under load makes a LOT o' heat, since it's at 4.4Ghz all core :p). Nested host gets 8c/64G.
T1 - Soverign (10700K/32G/3090 all on custom loop) - Gaming box/entertainment Box, no work stuff allowed (loaner for friends when they come over).
T2 - Spartan (1950X/64G/2080TI) - Control Center + Plex + 4k gaming (couch and TV) + DC + vCenter + PFSense + Nested ESXi host (basically the top of rack stuff + bits and bobs) / Same thing in summer. Nested host gets 6c/32G.
T2 - Hoplite (3950X/64G/3070) - ESXi host + VR Gaming (reboot script) / Same thing in summer.
T2 - Cataphract (6900K/64G / 2xSAS / 8 1T SSDs) - Storage box and ESXi host year round.
T2 - Gladiator / Praetorian (1700X/10900K ITX boxes 64G) - Both straight ESXi hosts, compute only, run home stuff and lab management software year round.
T1 - Kali (Nuc 12 Extreme w. 12900/3060TI) - FreeBSD Dev box (working on getting it working) / summer occasional 1080P gaming box (doesn't put out much heat, unlike Forge/Sovereign), if I want to go hide in the man cave.

Room 2:
T2 - Legion (10980XE/128G/3080) - Winter ESXi host, Summer it's the Linux version of Forge (dual boots to windows for gaming or windows software). Also where I test Linux software, since in ESXi it boots the local NVMe as a VM so I always have a Linux box around.
T3 - Hun (7940X/128G/480) - Year round ESXi host running an S3 target for deep archive (bunch o' spinners) for the lab, plus passthrough of the 480 as an emulation / arcade cabinet (plan is for it to be all hidden inside!). Under construction once I decide on upgrading spartan or not.
T3 - Zizka (3400G/16G) - Small NAS w. Optane SLOG/Spinners for feeding Legion and Hun with storage. Also runs a PFSense box to link to the others.

Room 3:
Tmixed - Excalibur/Durandal/Mjolnir/Masamune/Rifle - ESXi hosts running production-esque workloads. Masamune is the equal to Spartan from above (controls the DC room).
T3 - Dreadnought - Older storage server feeding the 4 above (mjolnir also does, using mixed Optane/8T spinners).
Not-Named - Backup and security appliance from the company I work for currently.

Wife's:
T2 - Normandy (6700K/1080) - her VR and gaming box.

Explaining: I used to work for VMware back in the day - still have a lot of contacts and do a lot of tinkering and bleeding edge work for them, and after that I worked for Dell - I run what would be considered a MASSIVE home lab (this is one of multiple sites) - I'm the archive and prototype location before we push software out to the other sites. I'm also working on a bit of FreeBSD dev work (bored), VMware dev work (less bored, more work), and then work for ... work (they subsidize part of the power), which is in the security and backup space. I'm also tinkering with NVMeOF, RDMA, and some other advanced storage capabilities (where my career started) because I can. I move from a small game room / man cave in the winter/spring to an outer office in summer/early fall, because the sun blasts the game room during the summer and cooks me if I'm not careful (and there's no AC in the game room). Everything is scripted - during the week if I don't need it, we run in low-power mode with 1 intel and 1 AMD box in Room 1, and just Hun/Zizka in 2, and just Mjolnir/Dread in 3. If I need it, a script brings it all up to full power in about 30 minutes.

Best part - run a different script, and 8 of those are Lan-Party capable systems, so I can have people over to game without running hardware or cables or anything! Switch back with a single script again - with the exception of Cataphract, it all boots from WOL and flips right back. All running 10G with Wireless Mesh between rooms, BGP for routing and IPSec to link to the other sites. OpenVPN accessible over the internet.
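(For anyone curious, the wake-up side of that scripting is nothing exotic. Here's a minimal Python sketch of the idea: a Wake-on-LAN "magic packet" is just 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. The MAC addresses below are placeholders, not my actual hosts.)

import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Build the magic packet: 6x 0xFF followed by the MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MACs for illustration only.
for mac in ("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"):
    wake(mac)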

I also might have gone a little overkill...
 
By Tier / hostname / time of year:
Room 1:
T1 - Forge (3960X/128G/6800XT) - Nested virtualization + windows workstation (photos/video/office tasks) + secondary gaming / Offline in summer (650W under load makes a LOT o' heat, since it's at 4.4Ghz all core :p). Nested host gets 8c/64G.
T1 - Soverign (10700K/32G/3090 all on custom loop) - Gaming box/entertainment Box, no work stuff allowed (loaner for friends when they come over).
T2 - Spartan (1950X/64G/2080TI) - Control Center + Plex + 4k gaming (couch and TV) + DC + vCenter + PFSense + Nested ESXi host (basically the top of rack stuff + bits and bobs) / Same thing in summer. Nested host gets 6c/32G.
T2 - Hoplite (3950X/64G/3070) - ESXi host + VR Gaming (reboot script) / Same thing in summer.
T2 - Cataphract (6900K/64G / 2xSAS / 8 1T SSDs) - Storage box and ESXi host year round.
T2 - Gladiator / Praetorian (1700X/10900K ITX boxes 64G) - Both straight ESXi hosts, compute only, run home stuff and lab management software year round.
T1 - Kali (Nuc 12 Extreme w. 12900/3060TI) - FreeBSD Dev box (working on getting it working) / summer occasional 1080P gaming box (doesn't put out much heat, unlike Forge/Sovereign), if I want to go hide in the man cave.

Room 2:
T2 - Legion (10980XE/128G/3080) - Winter ESXi host, Summer it's the Linux version of Forge (dual boots to windows for gaming or windows software). Also where I test Linux software, since in ESXi it boots the local NVMe as a VM so I always have a Linux box around.
T3 - Hun (7940X/128G/480) - Year round ESXi host running an S3 target for deep archive (bunch o' spinners) for the lab, plus passthrough of the 480 as an emulation / arcade cabinet (plan is for it to be all hidden inside!). Under construction once I decide on upgrading spartan or not.
T3 - Zizka (3400G/16G) - Small NAS w. Optane SLOG/Spinners for feeding Legion and Hun with storage. Also runs a PFSense box to link to the others.

Room 3:
Tmixed - Excalibur/Durandal/Mjolnir/Masamune/Rifle - ESXi hosts running production-esque workloads. Masamune is the equal to Spartan from above (controls the DC room).
T3 - Dreadnought - Older storage server feeding the 4 above (mjolnir also does, using mixed Optane/8T spinners).
Not-Named - Backup and security appliance from the company I work for currently.

Wife's:
T2 - Normandy (6700K/1080) - her VR and gaming box.

Explaining: I used to work for VMware back in the day - still have a lot of contacts and do a lot of tinkering and bleeding edge work for them, and after that I worked for Dell - I run what would be considered a MASSIVE home lab (this is one of multiple sites) - I'm the archive and prototype location before we push software out to the other sites. I'm also working on a bit of FreeBSD dev work (bored), VMware dev work (less bored, more work), and then work for ... work (they subsidize part of the power), which is in the security and backup space. I'm also tinkering with NVMeOF, RDMA, and some other advanced storage capabilities (where my career started) because I can. I move from a small game room / man cave in the winter/spring to an outer office in summer/early fall, because the sun blasts the game room during the summer and cooks me if I'm not careful (and there's no AC in the game room). Everything is scripted - during the week if I don't need it, we run in low-power mode with 1 intel and 1 AMD box in Room 1, and just Hun/Zizka in 2, and just Mjolnir/Dread in 3. If I need it, a script brings it all up to full power in about 30 minutes.

Best part - run a different script, and 8 of those are Lan-Party capable systems, so I can have people over to game without running hardware or cables or anything! Switch back with a single script again - with the exception of Cataphract, it all boots from WOL and flips right back. All running 10G with Wireless Mesh between rooms, BGP for routing and IPSec to link to the other sites. OpenVPN accessible over the internet.

I also might have gone a little overkill...
hahahaha-----upgrade your wife's computer, jeez!
 
I feel like it's getting tough to find an ideal motherboard these days. The high-end ones cost a small fortune and rarely offer anything I care about. Yet at the same time, the low-end (and most mid-range) boards are all missing things I want/need for basic 2-3 year "future-proofing."
The B650Es are the closest thing, and they're still $300-ish for the good ones.
 
For over 10 years, I always bought Asus Maximus Hero boards which cost around $650. They were great and never gave me any trouble. However, I went with an Asus Strix for my new build. At $500, it wasn't cheap, but it does everything I want and is built like a tank. Damn thing feels like it weighs 10 pounds.
 
I'm in the same boat coming from an Asus Rampage + 5930k.

My gripe is these new boards don't have enough SATA ports.

I have:
4x WD 10TB
4x 850 Pro SSDs
1 Blu-ray drive (can eliminate, just liked the novelty)

No boards today seem to have more than 6 SATA ports on Intel's Z790 platform. I didn't realize the Extreme models had doubled in price.

I was leaning toward the 13900 but am now debating whether AMD is the better platform.

Anyone have any recommendations here on some good boards? I loved the Rampage line: its headphone jack could power my headsets, it overclocked like a tank (I'm on air), and it was intuitive.

Never thought a seasoned builder like me would be stuck with limited options. Guess I stepped away for too long because I liked my setup so much!
 
No boards today seem to have more than 6 SATA ports on Intel's Z790 platform. I didn't realize the Extreme models had doubled in price.
A simple PCIe-to-SATA card could do it. Otherwise, PCPartPicker can show boards with 8 or more SATA ports (with the popularity of NAS boxes and M.2, and the virtual disappearance of optical drives, they will indeed get rare):

https://pcpartpicker.com/products/motherboard/#K=8,13&sort=price&page=1

Manually verifying that all the ports stay active when every M.2 slot you'll need is populated may be required.
 
A simple PCIe-to-SATA card could do it. Otherwise, PCPartPicker can show boards with 8 or more SATA ports (with the popularity of NAS boxes and M.2, and the virtual disappearance of optical drives, they will indeed get rare):

https://pcpartpicker.com/products/motherboard/#K=8,13&sort=price&page=1

Manually verifying that all the ports stay active when every M.2 slot you'll need is populated may be required.
Thanks. So basically, on some boards the PCIe lanes may deactivate with more M.2 drives installed if I start converting what I have. That makes the Asus AM5 Extreme boards on the AMD side really interesting.

For Intel, does that make the Z790 series more flexible since it has more lanes than the Z690? Or is that different? Sorry if this sounds too n00b. Just getting back in the game and am trying to do lots of reading.
 
So basically, on some boards the PCIe lanes may deactivate with more M.2 drives installed if I start converting what I have
Sometimes using more M.2 slots in x4 mode costs you lanes from the x16 slot, and sometimes two SATA ports are shared with an M.2 slot, so you can only use one or the other. If you're getting close to the limit, it's wise to validate the actual model you have in mind before buying.

But if you convert, the SATA port requirement goes down, and platforms are starting to come with a lot of M.2 connectivity.

Depending on what your mass storage does, going the NAS route instead of worrying about (and repurchasing) a high-end motherboard is worth considering as well.
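To put numbers on the drive list from a few posts up: 4 HDDs + 4 SSDs + 1 Blu-ray is 9 SATA devices against the typical 6 onboard ports. A trivial Python sketch of that budget (device counts taken from the earlier post; 6 ports assumed as the typical Z790 figure):

sata_devices = 4 + 4 + 1   # WD HDDs + 850 Pro SSDs + Blu-ray drive
onboard_ports = 6          # typical Z790 board, per the posts above
overflow = max(0, sata_devices - onboard_ports)
print(overflow, "devices need an add-in card, a NAS, or M.2 conversion")   # 3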
 
By Tier / hostname / time of year:
Room 1:
T1 - Forge (3960X/128G/6800XT) - Nested virtualization + windows workstation (photos/video/office tasks) + secondary gaming / Offline in summer (650W under load makes a LOT o' heat, since it's at 4.4Ghz all core :p). Nested host gets 8c/64G.
T1 - Soverign (10700K/32G/3090 all on custom loop) - Gaming box/entertainment Box, no work stuff allowed (loaner for friends when they come over).
T2 - Spartan (1950X/64G/2080TI) - Control Center + Plex + 4k gaming (couch and TV) + DC + vCenter + PFSense + Nested ESXi host (basically the top of rack stuff + bits and bobs) / Same thing in summer. Nested host gets 6c/32G.
T2 - Hoplite (3950X/64G/3070) - ESXi host + VR Gaming (reboot script) / Same thing in summer.
T2 - Cataphract (6900K/64G / 2xSAS / 8 1T SSDs) - Storage box and ESXi host year round.
T2 - Gladiator / Praetorian (1700X/10900K ITX boxes 64G) - Both straight ESXi hosts, compute only, run home stuff and lab management software year round.
T1 - Kali (Nuc 12 Extreme w. 12900/3060TI) - FreeBSD Dev box (working on getting it working) / summer occasional 1080P gaming box (doesn't put out much heat, unlike Forge/Sovereign), if I want to go hide in the man cave.

Room 2:
T2 - Legion (10980XE/128G/3080) - Winter ESXi host, Summer it's the Linux version of Forge (dual boots to windows for gaming or windows software). Also where I test Linux software, since in ESXi it boots the local NVMe as a VM so I always have a Linux box around.
T3 - Hun (7940X/128G/480) - Year round ESXi host running an S3 target for deep archive (bunch o' spinners) for the lab, plus passthrough of the 480 as an emulation / arcade cabinet (plan is for it to be all hidden inside!). Under construction once I decide on upgrading spartan or not.
T3 - Zizka (3400G/16G) - Small NAS w. Optane SLOG/Spinners for feeding Legion and Hun with storage. Also runs a PFSense box to link to the others.

Room 3:
Tmixed - Excalibur/Durandal/Mjolnir/Masamune/Rifle - ESXi hosts running production-esque workloads. Masamune is the equal to Spartan from above (controls the DC room).
T3 - Dreadnought - Older storage server feeding the 4 above (mjolnir also does, using mixed Optane/8T spinners).
Not-Named - Backup and security appliance from the company I work for currently.

Wife's:
T2 - Normandy (6700K/1080) - her VR and gaming box.
It looks like 4-5 modern boxes could replace all that.
 
It looks like 4-5 modern boxes could replace all that.
Those... are mostly pretty modern? The oldest is Zen1, or the X99 Haswell (which is all storage passthrough - it's got 16 more SSD slots available for when I need more). Most are Zen2 or Skylake... Zen 3 doesn't really offer any change, and Zen4 is currently a step backwards for me (as is Alder Lake/Raptor Lake) except for the purely gaming box.

Plus given the mixed uses, every time I hear someone say that, I scratch my head and go "how, precisely, would you do that?" Still can't stuff more than 128G of DDR4 in a box, or 64G of DDR5 (without massively compromising performance). Also not really higher core densities out there unless you go higher end on a couple of the systems - which wouldn't help, since I'm generally RAM limited on the big boys more than anything.
 
Those... are mostly pretty modern? The oldest is Zen1, or the X99 Haswell (which is all storage passthrough - it's got 16 more SSD slots available for when I need more). Most are Zen2 or Skylake... Zen 3 doesn't really offer any change, and Zen4 is currently a step backwards for me (as is Alder Lake/Raptor Lake) except for the purely gaming box.

Plus given the mixed uses, every time I hear someone say that, I scratch my head and go "how, precisely, would you do that?" Still can't stuff more than 128G of DDR4 in a box, or 64G of DDR5 (without massively compromising performance). Also not really higher core densities out there unless you go higher end on a couple of the systems - which wouldn't help, since I'm generally RAM limited on the big boys more than anything.
You have three separate storage systems that could be consolidated into one. Other than the two gaming systems, I don’t know enough details about the VM systems to know how much they could be consolidated, but two to three 128 GB 7950X or so workstations look sufficient, or a dual CPU server board with >=384 GB or so RAM. Less power, less heat, less space.
 
You have three separate storage systems that could be consolidated into one. Other than the two gaming systems, I don’t know enough details about the VM systems to know how much they could be consolidated, but two to three 128 GB 7950X or so workstations look sufficient, or a dual CPU server board with >=384 GB or so RAM. Less power, less heat, less space.
I see what you're thinking - trick is - upgrading to 7950s (even if I went with 128G, more on that in a bit) doesn't buy me anything anytime soon - and costs quite a bit. No ROI even over 3-5 years on that change, given that the rest is already in place (I wouldn't buy the current setup new, mind you - not with current options - but a lot of this was put together over the last 3 years). Plus, I wouldn't have enough slots to really make it work for long (remember, 2x10G and a SAS controller in a lot of those systems, at a minimum - they require x8 electric slots, so I'd be buying expensive boards too).

Storage:
They're in three different rooms, and actually it's more than that -
Room 1:
Yggdrasil (12T Synology NAS - personal stuff and media processing cache). This is an all-spinner now - 3T drives x5. About 80% full.
Spartan (Plex mass-store (25T currently)) About 90% full.
Cataphract (6T All flash VSA with replication/etc for VMware). About 25% full, but that's going to go up to 50% next week.

So one for performance, one for personal photos/video/media I'm working on/etc, and then the Plex server. Combining those doesn't seem to make a ton of sense, does it?

Room 2:
Mjolnir (35T Optane/Spinner for VMware)
Dreadnought (5T VSA with tiering - this one is probably going away soon - for VMware)
So a single storage system for the cluster there.

Room 3:
Hun (~50-60T S3 target for backups) - this is intentionally in a different room from 1/2, since it's the long-term target for backups for stuff in both of those rooms.
Zizka (3T for VMware).
Backups and one small, low-power storage system for here. I could put another Synology in - but I had the parts for Zizka lying around doing nothing, so... ~shrug~. I could also put the storage layer on Hun for performance, but then we hit issues with updates as I have to bring down everything in there to patch it.

I try to avoid combining uses on a NAS - speed is contrary to capacity, and personal stuff is contrary to professional stuff when it comes to managing backups (especially since we use that environment for demos!). I am probably ditching Dreadnought, as it's not worth running it for much longer - and the 5T of space on it isn't anything of note.
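Quick Python tally of the figures above, just to show the scale (rough numbers: Hun is quoted as a 50-60T range so I've taken the midpoint, and raw vs. usable is all mixed together):

storage_tb = {
    "Yggdrasil": 12, "Spartan": 25, "Cataphract": 6,   # Room 1
    "Mjolnir": 35, "Dreadnought": 5,                   # Room 2
    "Hun": 55, "Zizka": 3,                             # Room 3 (midpoint of 50-60T)
}
print(sum(storage_tb.values()), "TB, give or take")    # ~141 TB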

As for the servers in room 1 - 7950X doesn't really work - putting 128G of DDR5 on a board is almost impossible with any kind of speed, since it REALLY hates more than 1DPC on consumer kit. So if I wanted to modernize Gladiator/Praetorian I could, but it wouldn't actually buy me anything - those aren't CPU bound, and they'd still be stuck at 64G of RAM unless I wanted to drop the speeds significantly - or when we finally get denser DDR5 sticks. Same for Hoplite - I can't go above 64G easily now (the 3950 is a bit picky), and jumping to DDR5 would have me limited the same way. In theory, if I go ahead and do the upgrade to Spartan, I could yank Gladiator out and modernize it with a 5950 and 128G later on if needed - but I'm slightly lazy, and just haven't gotten to it yet. Plus it's mostly a backup host for when I have to do maintenance.

In terms of the servers in Room 3.... wellllll... Yeah, we need more than 384. Our standard build is:

[attached image: standard server build spec]
 
I see what you're thinking - trick is - upgrading to 7950s (even if I went with 128G, more on that in a bit) doesn't buy me anything anytime soon - and costs quite a bit. No ROI even over 3-5 years on that change, given that the rest is already in place (I wouldn't buy the current setup new, mind you - not with current options - but a lot of this was put together over the last 3 years). Plus, I wouldn't have enough slots to really make it work for long (remember, 2x10G and a SAS controller in a lot of those systems, at a minimum - they require x8 electric slots, so I'd be buying expensive boards too).
Fair, it's often best to use what one already has.

Storage:
They're in three different rooms, and actually it's more than that -
Room 1:
Yggdrasil (12T Synology NAS - personal stuff and media processing cache). This is an all-spinner now - 3T drives x5. About 80% full.
Spartan (Plex mass-store (25T currently)) About 90% full.
Cataphract (6T All flash VSA with replication/etc for VMware). About 25% full, but that's going to go up to 50% next week.

So one for performance, one for personal photos/video/media I'm working on/etc, and then the Plex server. Combining those doesn't seem to make a ton of sense, does it?

Room 2:
Mjolnir (35T Optane/Spinner for VMware)
Dreadnought (5T VSA with tiering - this one is probably going away soon - for VMware)
So a single storage system for the cluster there.

Room 3:
Hun (~50-60T S3 target for backups) - this is intentionally in a different room from 1/2, since it's the long-term target for backups for stuff in both of those rooms.
Zizka (3T for VMware).
Backups and one small, low-power storage system for here. I could put another Synology in - but I had the parts for Zizka lying around doing nothing, so... ~shrug~. I could also put the storage layer on Hun for performance, but then we hit issues with updates as I have to bring down everything in there to patch it.

I try to avoid combining uses on a NAS - speed is contrary to capacity, and personal stuff is contrary to professional stuff when it comes to managing backups (especially since we use that environment for demos!). I am probably ditching Dreadnought, as it's not worth running it for much longer - and the 5T of space on it isn't anything of note.
A single ZFS NAS with segmented datasets by speed/size needs would handle that nicely.

As for the servers in room 1 - 7950X doesn't really work - putting 128G of DDR5 on a board is almost impossible with any kind of speed, since it REALLY hates more than 1DPC on consumer kit. So if I wanted to modernize Gladiator/Praetorian I could, but it wouldn't actually buy me anything - those aren't CPU bound, and they'd still be stuck at 64G of RAM unless I wanted to drop the speeds significantly - or when we finally get denser DDR5 sticks. Same for Hoplite - I can't go above 64G easily now (the 3950 is a bit picky), and jumping to DDR5 would have me limited the same way. In theory, if I go ahead and do the upgrade to Spartan, I could yank Gladiator out and modernize it with a 5950 and 128G later on if needed - but I'm slightly lazy, and just haven't gotten to it yet. Plus it's mostly a backup host for when I have to do maintenance.
If you're truly utilizing those resources, wouldn't Epyc builds be more suitable? OK, reusing old stuff and all, but at some point the sheer number of boxes and the power usage get too annoying.

In terms of the servers in Room 3.... wellllll... Yeah, we need more than 384. Our standard build is:

[attached image: standard server build spec]
Ok, that wasn't apparent from the previous post.
 
Fair, it's often best to use what one already has.


A single ZFS NAS with segmented datasets by speed/size needs would handle that nicely.
Thought about it. Issues I see:

No ZFS system with a SRM SRA available. I'd have to go actually buy a true SAN to pull that off - and outside of DDN Tegile, I'd be missing features (even Synology/QNAP doesn't have an SRA that is up to date). The VSA that is running on Cataphract (and was going to run on Dreadnought) is a full enterprise bit of software - replication and coordinated failover and all. :) Now if I ditched the MSCS DB clusters I could skip that - might be worth thinking about - but then we get farther down the line. I could also skip replicating them from my site to Site A and just keep them, but I'd have to move some DBs around to make that work.

To build that, we'd also need a 4U style case - not enough bays otherwise to pull it off - and since a lot of these systems are in normal rooms in the house, that means ~loud~. We're talking about what will soon be close to 150T of storage, a decent chunk of that all-flash. So now we're talking a dedicated storage box or SAN - a monster machine with probably 4 of my optane drives for various SLOG/L2ARC to make performance work. Those, plus 10G cards, plus SAS controllers - we're definitely talking an enterprise system here, so we're probably looking at about $5-8k for the hardware alone. Which takes us to point 3:

That mixes my personal stuff with professional. I'm not the only one using this environment; the Synology is not documented anywhere other than in the IPAM system, so no one else can go poking at what I have stored on there. Stick it on a shared system and there are other folks who would have access (but the Synology isn't the issue you're thinking about here either).

The fundamental issue - it's three different rooms 50' apart. Wireless mesh between them @ 100mbit - not dropping 2x10G to 100mbit for VM storage, especially over L3 (this is routed between each room over BGP), and I don't feel like trying to run fibre between rooms, as the layout isn't precisely conducive to that (nor is the age of the house - we're poking holes boys!). I've got optics and all that could do it, but I'd have to switch out from RJ45 in Room 1/2 to SR OM3, or find a different switch that supports mixed RJ/SR (or use a LOT of adapters)... Not really fond of trying to do 10G over that distance with 10GBASE. Room 3 is already Fibre (generally TwinAx). That's more money again :p Even if I tried RJ45 for that, I'd have to do LAGs either way to get the performance, and I ~hate~ bonds with the passion of a thousand burned experiences. Especially for storage traffic. And now I'm trunking that VLAN around all over the place too.

If you're truly utilizing those resources, wouldn't Epyc builds be more suitable? OK, reusing old stuff and all, but at some point the sheer number of boxes and the power usage get too annoying.


Ok, that wasn't apparent from the previous post.
Sure - but an Epyc build can just be a server. It's a really crappy workstation, VR system, etc. I MUST have 4 nodes of compute for management - two of our sites run those as dedicated servers, but instead, I can build a couple of multi-purpose boxes here and knock out management AND my plex server AND my workstation AND my VR box - without double buying hardware :D

The way I approached it was this:
I have to have the above 4 nodes of management (3 + failover). I need a gaming box and hellacious workstation for some of my dev work. I ~hate~ trying to run VR off of my main gaming box because it often requires the HDMI port / etc, which is already in use for a monitor and audio, so if I want to play VR games, I have to swap cables around (and deal with having the sensors sitting near the monitors/etc and getting blocked). I did that for 4 years and it sucked - so I'm building a VR box too - it's the one thing my wife and I play together and we love it. Also need a VM storage box, and I want the plex server detached from anything I normally interact with, because I ~break~ stuff and the family will complain. Plus I need a router, domain controller tied to local storage, etc... Thus we ended up with 5 boxes to handle those workloads - workstation, gaming box, VR box, storage server, Plex/control. 4 nodes of management - just they can do two things at once!

Then I added the two pure-servers as I got parts cheap or on trade for work/help/etc, and just kinda threw them together. I don't use them all the time - we often run cut down to the workstation, plex server, and storage box (and Hun or Legion) - that's enough with some workloads off to run everything (for now, once I upgrade the NSX controllers we'll need more RAM than those three can give).

Shrinking down generally loses us capabilities. We shut down parts of the lab when we don't need them, and over the weekends. And in the summer, we go low-power for peak energy cost times.

I wouldn't ever recommend doing this for normal people. Folks try to shoehorn NUCs into this a lot of the time :D

I never said I was normal. :D
 
lopoetve: Rough tally of ~150TB? Glad I'm not the only one with excessive home storage (288TB in sig, full setup replicated at another site as well) :D
 
lopoetve: Rough tally of ~150TB? Glad I'm not the only one with excessive home storage (288TB in sig, full setup replicated at another site as well) :D
Haha! Yup! I have a lot of weird things I do. And access to a lot of hardware.

For site D (my site)- prototype and archival, it’s about 150T active and archive, plus 30T backup appliance.


Site A is ~90T, split equally between all flash and all NVMe VSAN. High speed low drag compute. Backups are software - archive to AWS Glacier.

Site B is ~350T raw, but fully half of that is a massive hybrid VSAN so it’s only 180-200 usable. Normal server workloads and training. 60T backup appliance. This is our main “here, go play” location.

Site C is 80T effective. Small site. This might get another 40 and 4 more nodes. Debating on that. I only have half a rack to work with here and the appliance in question is … loud.

Site Z will be about 130 all flash. He powered up everything at first - that was an $800 power bill. Oops?
 
That is A LOT of storage, and good golly Ms Molly @ 130T of flash!
Then again, you're using it for respectable purposes, unlike myself (pretty much media storage only) :D
 