Motherboards for consumers who aren't gamers?

RedOak

The PC tech is evolving so quickly that even similar threads are out of date now, so I thought that I'd run this one by the [pardon the pun] "board" to get the latest...

Okay, after more than 10 years away from building/operating our own PCs, I've found a need to run so-called "abandonware" that was never intended for Mac OS X. After contemplating our options, including the possibility of reviving one of my wife's abandoned PC laptops, I decided to dig through our storage and, eventually, I found the last PC "tower" that I built (circa 2004). It was built around an AMD CPU and, until I started doing research on what Intel's been doing for the past 10 years or so, I had forgotten why I turned my back on the mega-giant chip company that loves lakes and, instead, spent our hard-earned on AMD.

After spending so many years away from Windows X World, I consider myself to be back at rookie status, but this is what quite a few hours on the interwebs turned up about what Intel's been up to. In short, I've discovered that the folks in Santa Clara wrung as many dollars as possible out of a measly quad-core (4) CPU architecture for over a decade...and they probably would've continued with this extremely profitable business model had it not been for AMD's Ryzen 7 1700 CPU -- with 8 cores and 24 PCIe lanes of expansion support for $329 -- which was rolled out to the consumer PC market in March of 2017.

Fast forward to the summer of 2022 and, yeah, [read: yet another freakin' lake] "Alder Lake" offers the consumer up to (16) "total" cores...but what's this? Only (20) lanes of PCIe expansion support?!? Seriously, I can't find a graphics card that doesn't require at least eight (8) PCIe lanes...so 8 divided by 20 equals...40 percent!?! Let's say you need Thunderbolt support, which we do, there goes another (4) PCIe lanes. How about that nice USB 3.2 Gen 2x2 port on the front of your new motherboard? Yep, another (4) PCIe lanes...so now we're up to sixteen (16) PCIe lanes and I haven't even gotten into those M.2 NVMe SSD slots on the new motherboards, and at (4) PCIe lanes per SSD...well, I think you should get the point by now.
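If it helps, here's my back-of-the-envelope tally as a quick Python sketch (the per-device lane counts are my own assumptions about what eats CPU lanes, not anything Intel publishes this way):

```python
# Tally of the lane budget as I reasoned it above. The per-device
# lane counts are assumptions; whether each device really consumes
# CPU lanes (rather than chipset lanes) is a separate question.
cpu_lanes = 20  # Alder Lake's advertised CPU PCIe lanes

assumed_consumers = {
    "graphics card (minimum)": 8,
    "Thunderbolt": 4,
    "USB 3.2 Gen 2x2": 4,
    "one M.2 NVMe SSD": 4,
}

used = sum(assumed_consumers.values())
print(f"GPU alone: {8 / cpu_lanes:.0%} of the budget")   # 40%
print(f"All of the above: {used} of {cpu_lanes} lanes")  # 20 of 20
print(f"Lanes left over: {cpu_lanes - used}")            # 0
```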

Put plainly, Intel had to be forced by their only real competition, AMD, to open up the CPU core chest, but why have they chosen to cap their expansion support at only twenty (20) lanes? It's certainly not because they don't know how. If you take a look at the specifications for the i9-10980XE CPU, for example, you will quickly see that Intel designed a CPU with (18) cores and support for forty-eight (48) PCIe expansion lanes back in late-2019! Sure, the 12th generation Intel CPUs are faster than their 10th generation processors, but one still has to ask oneself about dollars spent vs expandability because, after all, Intel isn't exactly giving away their i9-12XXX processors, are they?

So this leads me to the question at bar...Is there such a thing as a PC motherboard -- anywhere near the consumer level -- that isn't based on the insulting concept of "shared resources"?

I'm aware of the HEDT (High-End Desktop) and some of the workstation motherboards, but I haven't yet been able to find a PC motherboard that fits inside of the huge gap between the gaming market and the corporate market...and, needless to say, I'm slowly beginning to remember why I built that AMD system back in 2004. ;)
 
Interesting perspective. In my opinion, it seems most motherboards are oriented toward gaming or workstation use, or strike a good middle ground covering some of both and everything else. Non-enthusiast boards can still handle far beyond what most average PC users would throw at them.
I'm not overly familiar with Intel chipsets since H110/Z170 (a Skylake i5-6500 was the last one I had), other than Z690 and B660. The H110 was totally sufficient for home use and some moderate gaming at the time. I think it ultimately comes down to preference, because I'd say pretty much any decent board on the market can handle everything the average PC user would need. Only the most hardcore gamers, enthusiasts, or people who rely on extreme processing or rendering capabilities really need the highest-end boards, chipsets, etc.
 
You can look at what marketing often calls "creator" boards for a lot of connectivity.

For example:
https://www.asus.com/Motherboards-Components/Motherboards/ProArt/ProArt-Z690-CREATOR-WIFI/
https://latestintech.com/asus-proart-z690-creator-wifi-motherboard-review/

You can buy motherboards that come with 10Gb Ethernet, Thunderbolt, USB 3.2 Gen 2x2, and 4 M.2 slots, leaving the rest of the lanes for whatever you want (there is some sharing, like some of the extra M.2 ports disabling SATA ports, which makes sense too).

Two PCIe 5.0 x16 slots (running x16 or x8/x8) plus a PCIe 3.0 x16,
a PCIe 3.0 x4, and two PCIe 3.0 x1.

They tend to cost a lot.

And if you are ok with previous generation and buying into a death platform, there is the ThreadRipper non-pro platform.
 
At least some of what has happened, has happened because the resources being shared are so much faster. In 2004 you had PCIe 1.0 assuming it was using PCIe at all. Today's PCIe 3.0 is about 4 times faster and PCIe 4.0 is another 2x (8 times faster than 1.0). Given that in real life, many of the PCIe resources aren't flooding the bus anyway, it makes a LOT of sense to reduce the CPU lane count - which reduces the pin count needed for CPU lanes and lets you use the pins for something else - and push some of the PCIe services out to the chipset.
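To put rough numbers on that, here's a quick sketch (the per-lane figures are approximations; PCIe 3.0 changed its encoding so the real numbers aren't an exact doubling, but they're close):

```python
# Approximate per-lane PCIe throughput by generation (MB/s).
# PCIe 1.0 moves roughly 250 MB/s per lane, and each generation
# roughly doubles the one before it.
per_lane = {gen: 250 * 2 ** (gen - 1) for gen in range(1, 6)}

for gen, mb_s in per_lane.items():
    print(f"PCIe {gen}.0: ~{mb_s} MB/s per lane ({mb_s // 250}x PCIe 1.0)")

# So 20 lanes of PCIe 4.0 carry as much traffic as 160 lanes
# of PCIe 1.0 would have.
print(f"20 lanes @ 4.0 ~= {20 * per_lane[4] // per_lane[1]} lanes @ 1.0")
```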

The corporate market outside of servers is arguably much less resource intensive than gaming. You don't need fancy graphics, integrated graphics is more than enough to run a desktop. You don't need tons of storage because most of the corporate data is on on-prem servers or in the cloud somewhere.

I can put together a desktop with massively better performance, both CPU and graphics, and many times more and faster storage, using 20 lanes off the CPU today, than one could with any number of lanes you care to imagine back in 2004.
 
And if you are ok with previous generation and buying into a death platform, there is the ThreadRipper non-pro platform.

Thank you for the post. Would you mind elaborating on what "...a death platform..." is?
 
I can put together a desktop with massively better performance, both CPU and graphics, and many times more and faster storage, using 20 lanes off the CPU today, than one could with any number of lanes you care to imagine back in 2004.

Sure, the 12th generation Intel CPUs are faster than their 10th generation processors, but one still has to ask oneself about dollars spent vs expandability because, after all, Intel isn't exactly giving away their i9-12XXX processors, are they?

I never claimed to be Ernest Hemingway, so perhaps it was my writing...but I think that you may have missed the point. As I mentioned in the sentence above, there's no doubt in my mind that the newer components are "faster" than the components that came before them, but that's not what I prefaced my question with. I prefaced it with Intel's obvious decision to keep folks who need expandability from being able to do much of it. I have no feeling for any of these capitalist monsters, but I would like to get some bang for [what I see as] the big bucks that Intel demands.

Regardless, I'm not looking for the "fastest," I'm looking for a reasonably fast system that allocates its resources to more than just super-fast graphics. Considering that Intel's "K" suffixed CPUs have reasonable graphics onboard, perhaps there's a motherboard out there that "splits" the eight (8) PCIe lanes that gamers require and, instead, adds an additional PCIe slot with less active PCIe lanes? I don't know, that's why I came here to ask folks with much more "modern" PC experience than I have.
 
Thank you for the post. Would you mind elaborating on what "...a death platform..." is?
By that I mean a platform that is unlikely to have new CPUs made for it, which could be a big deal for some at the price of those motherboards, and something to think about if you buy a large amount of expensive DDR4.

perhaps there's a motherboard out there that "splits" the eight (8) PCIe lanes that gamers require and, instead, adds an additional PCIe slot with less active PCIe lanes?
That's common, I think: a PCIe slot can either run at x16, or run at x8 and give x8 to a different slot if the mobo detects that something is connected in it. I think the Asus motherboard linked just above does that.
 
I never claimed to be Ernest Hemingway, so perhaps it was my writing...but I think that you may have missed the point. As I mentioned in the sentence above, there's no doubt in my mind that the newer components are "faster" than the components that came before them, but that's not what I prefaced my question with. I prefaced it with Intel's obvious decision to keep folks who need expandability from being able to do much of it. I have no feeling for any of these capitalist monsters, but I would like to get some bang for [what I see as] the big bucks that Intel demands.
Yes, but who needs expandability, and for what? Not for storage, unless you need more than a few TB on the desktop. Not for networking, unless you need multiple 10gbit ethernet connections. GPU compute, maybe, but then you don't really need the x16 interface. Not for audio, motherboards have onboard audio now, and if you want quality you'll be using an outboard USB DAC anyway. Maybe a capture card, but it's easy to find a mobo with a suitable slot, and it's not going to flood the chipset-to-CPU link by any means. So what exactly are you going to want to put there? If you're in the "unless" category, you are almost certainly running a large server, and then you are looking at Xeon or Epyc, which do in fact have plenty of additional PCIe lanes.

Regardless, I'm not looking for the "fastest," I'm looking for a reasonably fast system that allocates its resources to more than just super-fast graphics. Considering that Intel's "K" suffixed CPUs have reasonable graphics onboard, perhaps there's a motherboard out there that "splits" the eight (8) PCIe lanes that gamers require and, instead, adds an additional PCIe slot with less active PCIe lanes? I don't know, that's why I came here to ask folks with much more "modern" PC experience than I have.

Not to be too repetitive, but what other resources are you looking for? You can certainly find motherboards which will run in a double x8 mode instead of a single x16, plus whatever additional PCIe lanes you can get out of the chipset. The CPU-chipset connection is fast enough that "bandwidth sharing" need no longer be a dirty phrase.
 
I read an article last night on PC Mag dot com about the newest "lake" CPUs, Alder Lake, and the new Z690 chipset used with it. Here (below) is an excerpt from the article:

[Screenshot: excerpt from the PCMag article on Alder Lake and the Z690 chipset]


As the author cited the (20) PCIe lane limit that Intel chose for their so-called "top end" consumer-level CPUs, it was shocking to read that (16) of those (20) lanes are pretty much for the gamers only, which means that expansion options for non-gamers, like us, have been limited even more than I thought. I'm actually finding this hard to believe, so I'd like to ask if anyone has built a "K"-suffixed PC recently -- i.e., a PC with an i7-12700K CPU and a Z690 chipset -- and has gone the non-graphics AIC route? If you have, were you able to use the PCIe slot(s) normally occupied by the graphics card(s) for other purposes?

We, for example, wish to add multiple M.2 NVMe PCIe SSDs to our new PC build in the future, which will require a large number of PCIe lanes/bandwidth. Here's the entire article, if you care to read it.
 
We, for example, wish to add multiple M.2 NVMe PCIe SSDs to our new PC build in the future,
There are CPU lanes and chipset lanes as well, I think. From your article: "The Z690 chipset itself now has support for up to 12 PCIe 4.0 lanes."

If you look at typical lower-priced Z690 motherboards, they offer 3 or 4 M.2 slots for SSDs. Needing more than 4 drives is quite a bit, but they will also support 4-drive expansion cards (some higher-priced motherboards ship with them).

If you look at the ProArt above:

PCIe x16 Slots: 3

M.2 Slots

  • 2242/2260/2280/22110 M-key
  • 2242/2260/2280 M-key
  • 2242/2260/2280/22110 M-key
  • 2242/2260/2280 M-key
  • 2230 E-key

SATA 6.0 Gb/s: 8

Onboard Ethernet

  • 1 x 2.5 Gb/s (Intel)
  • 1 x 10 Gb/s (Marvell)

2 x Thunderbolt 4 USB Type-C® ports.

6 x USB 3.2 Gen 2.

It is not obvious to me how you'd run out of connectivity with a workload that isn't worth going the Threadripper Pro/Xeon workstation route or the server Epyc/Xeon options. You can go Xeon workstation with 64 lanes; if the above isn't enough, chances are you are the type of enterprise that will pay the Xeon/Apple high price if you are forced to (and I guess Apple will not look that pricey when you compare it to the Threadripper Pro/Xeon line).

 
The excerpt that I placed into my last post (Post no. 9) is predicated on the future, and PCIe 5.0, specifically. I started this thread with future expansion options in mind, so what the author of the article had to say about how PCIe 5.0 can and cannot be used is what I'm concerned with at the moment. In other words, the 5th generation of M.2 NVMe PCIe SSDs is on the way, so we would like to find a board that will allow us to use more than one of them in the future.

Regardless, the question about whether or not anyone has built a PC recently that didn't include a graphics card is still of great interest to me. Specifically, I'd like to know if anyone has utilized the PCIe slot normally used for a graphics card (the primary PCIe slot) for other purposes?
 
RedOak,

To clarify the PCI-e lane count on the Alder Lake platform:

The Alder Lake CPUs themselves have 16 PCI-e 5.0 lanes plus 12 PCI-e 4.0 lanes. Of those, the PCI-e 5.0 lanes go directly to either the primary PCI-e x16 slot or bifurcated into multiple combinations of x8 and/or x4 slots, while four of the PCI-e 4.0 lanes go to the primary m.2 SSD slot. The remaining eight PCI-e 4.0 lanes are used to connect via DMI to the motherboard's core logic chipset. The Z690 and H670 chipsets utilize all eight of those PCI-e 4.0 DMI lanes while the B660 and H610 chipsets utilize only four of those lanes.

In the case of the Z690 chipset, you have the capability of having up to 12 PCI-e 4.0 lanes and/or 16 PCI-e 3.0 lanes. However, all 28 theoretical Z690 chipset lanes share the exact same DMI 4.0 x8 throughput and bandwidth, so the maximum throughput of the entire chipset is equivalent to eight PCI-e 4.0 lanes (or PCI-e 4.0 x8).
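To see what that sharing means in bandwidth terms, here's a quick sketch (the per-lane throughput figures are rounded approximations):

```python
# The Z690 sharing arithmetic described above (all approximate).
# The chipset can expose up to 12 PCIe 4.0 and 16 PCIe 3.0 lanes,
# but everything funnels through a DMI 4.0 x8 uplink to the CPU.
GB_S_PER_LANE = {3: 1.0, 4: 2.0}  # rough per-lane throughput, GB/s

downstream = 12 * GB_S_PER_LANE[4] + 16 * GB_S_PER_LANE[3]
uplink = 8 * GB_S_PER_LANE[4]

print(f"Theoretical chipset downstream: ~{downstream:.0f} GB/s")  # ~40
print(f"DMI 4.0 x8 uplink to the CPU:   ~{uplink:.0f} GB/s")      # ~16
print(f"Oversubscription:               ~{downstream / uplink:.1f}:1")
# Roughly 2.5:1 - fine in practice, since chipset devices rarely
# all run flat out at the same time.
```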
 
The excerpt that I placed into my last post (Post no. 9) is predicated on the future, and PCIe 5.0, specifically. I started this thread with future expansion options in mind, so what the author of the article had to say about how PCIe 5.0 can and cannot be used is what I'm concerned with at the moment. In other words, the 5th generation of M.2 NVMe PCIe SSDs is on the way, so we would like to find a board that will allow us to use more than one of them in the future.

I have trouble keeping a PCIe 4.0 SSD busy, and I'm a DBMS developer. I'm not at all worried about a PCIe 5.0 SSD, much less having multiples of them. But, we'll leave aside the question of how many people can actually keep multiple m.2 drives busy simultaneously, for now. (and, if you can't keep them busy simultaneously in theory, then running most of them through the chipset should be fine.)

Regardless, the question about whether or not anyone has built a PC recently that didn't include a graphics card is still of great interest to me. Specifically, I'd like to know if anyone has utilized the PCIe slot normally used for a graphics card (the primary PCIe slot) for other purposes?

Out of 8 non-Mac computers in the house, 7 would not include a graphics card if more Ryzen CPU's came with basic integrated graphics. Those 7 have junk video just to display a desktop (GT 710's, HD 6450's). The eighth probably could go the same way; I have an RX550 in it now and I've never done anything that actually needed it.

If I didn't need the x16 CPU-connected slot for a GPU, I strongly suspect that it would remain empty in most of the machines. There's a couple that are built around low-end B450 mobos, and it's not out of the question that I might want that PCIe slot for an m.2-to-PCIe card so that I can add a second NVMe drive. For now, if I end up needing more storage, I'll add a SATA drive.

That's a long-winded way of answering your question with a "no" for me.
 
The excerpt that I placed into my last post (Post no. 9) is predicated on the future, and PCIe 5.0, specifically. I started this thread with future expansion options in mind, so what the author of the article had to say about how PCIe 5.0 can and cannot be used is what I'm concerned with at the moment. In other words, the 5th generation of M.2 NVMe PCIe SSDs is on the way, so we would like to find a board that will allow us to use more than one of them in the future.

Regardless, the question about whether or not anyone has built a PC recently that didn't include a graphics card is still of great interest to me. Specifically, I'd like to know if anyone has utilized the PCIe slot normally used for a graphics card (the primary PCIe slot) for other purposes?
Yes

The PCIe x16 slots that graphics cards fit in can be used for ANY type of PCIe card. They are not coded or marked in some way to only allow graphics cards. You could toss a dual-slot NVMe card in a PCIe x16 slot if you want...or a 10Gb dual-port SFP card if you want...You could put a PCIe x2 sound card in one if you wanted.

If you are wanting to build a new system and keep it for 10 years, that has never been a good way to go. Technology changes. You are better off buying what you need now, and then upgrading in 5 years or so to a new platform vs say building a high end rig thinking you are future proofing.

To ask, and maybe I missed it: what do you plan to initially use in this system? What add-on cards do you plan to use that have you so concerned modern CPUs cannot handle what you plan to throw at them?

I get the urge to want to build with the latest and greatest, and AM4 is a dead socket, but I still built my X570 rig with a 5950X AMD processor in it knowing it will last me 3-5 years if I want it to. By that time, AM5 should be mature and the bugs worked out and heck, maybe Intel won't need a nuclear power plant to power their CPUs and an iceberg to cool them.
 
The PC tech is evolving so quickly that even similar threads are out of date now, so I thought that I'd run this one by the [pardon the pun] "board" to get the latest...

Okay, after more than 10 years away from building/operating our own PCs, I've found a need to run so-called "abandonware" that was never intended for Mac OS X. After contemplating our options, including the possibility of reviving one of my wife's abandoned PC laptops, I decided to dig through our storage and, eventually, I found the last PC "tower" that I built (circa 2004). It was built around an AMD CPU and, until I started doing research on what Intel's been doing for the past 10 years or so, I had forgotten why I turned my back on the mega-giant chip company that loves lakes and, instead, spent our hard-earned on AMD.

After spending so many years away from Windows X World, I consider myself to be back at rookie status, but this is what quite a few hours on the interwebs turned up about what Intel's been up to. In short, I've discovered that the folks in Santa Clara wrung as many dollars as possible out of a measly quad-core (4) CPU architecture for over a decade...and they probably would've continued with this extremely profitable business model had it not been for AMD's Ryzen 7 1700 CPU -- with 8 cores and 24 PCIe lanes of expansion support for $329 -- which was rolled out to the consumer PC market in March of 2017.

Fast forward to the summer of 2022 and, yeah, [read: yet another freakin' lake] "Alder Lake" offers the consumer up to (16) "total" cores...but what's this? Only (20) lanes of PCIe expansion support?!? Seriously, I can't find a graphics card that doesn't require at least eight (8) PCIe lanes...so 8 divided by 20 equals...40 percent!?! Let's say you need Thunderbolt support, which we do, there goes another (4) PCIe lanes. How about that nice USB 3.2 Gen 2x2 port on the front of your new motherboard? Yep, another (4) PCIe lanes...so now we're up to sixteen (16) PCIe lanes and I haven't even gotten into those M.2 NVMe SSD slots on the new motherboards, and at (4) PCIe lanes per SSD...well, I think you should get the point by now.
That's not quite how that works - most of that is built into the CPU now in one form or another - including the USB controllers (although some have additional ones tied to the chipset lanes) - so you don't quite have to count lanes like that.

28 Lanes base. 8 for connection to chipset, leaves 20 (the number you see advertised). 4 for the first M2 slot (direct attached to CPU). 16 left. Those are either 16x/0x, or 8x/8x, depending on your board, hardware choices, etc. Which is plenty, since you can't even saturate PCIE3 with a GPU right now, never mind 4 or 5. For most use cases, at least.
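Here's that lane accounting as a quick sketch, if it helps to see it spelled out (the numbers are just the ones above):

```python
# Alder Lake CPU lane accounting, per the breakdown above.
total_lanes = 28             # 16 PCIe 5.0 + 12 PCIe 4.0 on the CPU
to_chipset = 8               # DMI link down to the chipset
advertised = total_lanes - to_chipset    # the "20 lanes" on the box
to_primary_m2 = 4            # CPU-attached M.2 slot
for_slots = advertised - to_primary_m2   # left for the x16 slot(s)

print(f"Advertised CPU lanes: {advertised}")  # 20
print(f"Left for PCIe slots:  {for_slots}")   # 16
# Those 16 run as a single x16, or split x8/x8 across two slots,
# depending on the board and what's plugged in.
```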
Put plainly, Intel had to be forced by their only real competition, AMD, to open up the CPU core chest, but why have they chosen to cap their expansion support at only twenty (20) lanes?
Because with networking, USB, sound, TB, etc all built into the CPU, the majority of people don't need cards for anything else. I'm one of the edge cases, and even then for my gaming box, consumer lane counts are just fine.
It's certainly not because they don't know how. If you take a look at the specifications for the i9-10980XE CPU, for example, you will quickly see that Intel designed a CPU with (18) cores and support for forty-eight (48) PCIe expansion lanes back in late-2019!
The 10980XE is a Skylake Xeon chopped down for desktop support. If you look at the Xeon-W processors, they get even more lanes (and hexa or octa channel memory too). If you want that much expansion (and I'm one of the folks that uses it), HEDT or Workstation class boards are the way to go. You also pay for them, since Threadripper/TR Pro/X299/C621 are ~literally~ server CPUs chopped down (or even just an unlocked server part). The question is - what do you need more slots for? What will you use them for?
Sure, the 12th generation Intel CPUs are faster than their 10th generation processors, but one still has to ask oneself about dollars spent vs expandability because, after all, Intel isn't exactly giving away their i9-12XXX processors, are they?
No, but they have more expansion (technically) as the chipset link is now x8 (and faster), and the chipset has another set of lanes it can use.
So this leads me to the question at bar...Is there such a thing as a PC motherboard -- anywhere near the consumer level -- that isn't based on the insulting concept of "shared resources"?
No. HEDT or Workstation only at that level - but again, what do you need the expansion for?
I'm aware of the HEDT (High-End Desktop) and some of the workstation motherboards, but I haven't yet been able to find a PC motherboard that fits inside of the huge gap between the gaming market and the corporate market...and, needless to say, I'm slowly beginning to remember why I built that AMD system back in 2004. ;)
AMD has the same limitation. 24 lanes, 4 for chipset, 4 for NVMe 1, 16 for the first two PCIE slots (16/0 or 8/8). This is a consumer platform limitation on both sides - but again, what will you be putting into those slots? I use a bunch of 10G NICs and SAS controllers, but I ~don't generally pay for them~, and I need them for work.
I never claimed to be Ernest Hemingway, so perhaps it was my writing...but I think that you may have missed the point. As I mentioned in the sentence above, there's no doubt in my mind that the newer components are "faster" than the components that came before them, but that's not what I prefaced my question with. I prefaced it with Intel's obvious decision to keep folks who need expandability from being able to do much of it. I have no feeling for any of these capitalist monsters, but I would like to get some bang for [what I see as] the big bucks that Intel demands.
Consumer chips are pretty cheap - I've got a 10980XE, 3960X, 1950X, 6800K, 7940X, etc. - those are expensive.
Regardless, I'm not looking for the "fastest," I'm looking for a reasonably fast system that allocates its resources to more than just super-fast graphics. Considering that Intel's "K" suffixed CPUs have reasonable graphics onboard, perhaps there's a motherboard out there that "splits" the eight (8) PCIe lanes that gamers require and, instead, adds an additional PCIe slot with less active PCIe lanes? I don't know, that's why I came here to ask folks with much more "modern" PC experience than I have.
You can stick whatever you want in that first slot. I have a 10900K box that uses the first slot for an Intel X710 dual-port 10G nic. Works great. The second slot is a SAS controller. Third is currently unused. You can use a PCIE slot for anything - you get either 16x/0x, or 8x/8x, which is MORE than quick enough for anything we have that can ~fit~ into a slot - unless you're doing Optane VROC or the like, which... well, at that level, you're not bitching about limitations in consumer parts - you've got 10k in drives alone.
The excerpt that I placed into my last post (Post no. 9) is predicated on the future, and PCIe 5.0, specifically. I started this thread with future expansion options in mind, so what the author of the article had to say about how PCIe 5.0 can and cannot be used is what I'm concerned with at the moment. In other words, the 5th generation of M.2 NVMe PCIe SSDs is on the way, so we would like to find a board that will allow us to use more than one of them in the future.
My main 10th gen system has 3 NVMe drives in it. 2 of them are provided for by the chipset - which is more than fast enough for what I use them for. If you want more direct connected - get Threadripper or x299. Or wait for whatever comes with Sapphire Rapids.
Regardless, the question about whether or not anyone has built a PC recently that didn't include a graphics card is still of great interest to me. Specifically, I'd like to know if anyone has utilized the PCIe slot normally used for a graphics card (the primary PCIe slot) for other purposes?
Yes. Plenty. I have a lot of compute-only nodes that don't have graphics cards, and use the first slot for non-GPU based workloads. It's a PCIE slot - but at the same time, 99% of the folks even here on [H] don't have my use cases - I'm buying bloody persistent memory right now for a new C621E Sage box - nor even use cases that require more than one or possibly two cards.
 
I have trouble keeping a PCIe 4.0 SSD busy, and I'm a DBMS developer. I'm not at all worried about a PCIe 5.0 SSD, much less having multiples of them. But, we'll leave aside the question of how many people can actually keep multiple m.2 drives busy simultaneously, for now. (and, if you can't keep them busy simultaneously in theory, then running most of them through the chipset should be fine.)
Bingo. I do, but I'm doing NVMeOF work! My next build is 256G of DDR4 RDIMM + 1TB of PMEM + 4T of Optane + 4T of NVMe. It's for testing some storage workloads and various tiering algorithms. That's all going into a dual-socket Sage board with 2x 6248 Xeons (unless I get my hands on 8260Ls). 4x 10G for ROCE.
 
Bingo. I do, but I'm doing NVMeOF work! My next build is 256G of DDR4 RDIMM + 1TB of PMEM + 4T of Optane + 4T of NVMe. It's for testing some storage workloads and various tiering algorithms. That's all going into a dual-socket Sage board with 2x 6248 Xeons (unless I get my hands on 8260Ls). 4x 10G for ROCE.
aaaand, that's not a typical desktop build. :) To paraphrase Pauli, it's not even not a typical desktop build. (Not even if desktop CPU's could do RDIMM!) If you were working in the AMD world, I expect you'd be dealing with Epyc, not Ryzen. In the Intel world, you're working with Xeon, not Core i7 or i9. Building that many PCIe lanes into a desktop CPU just to satisfy the 1 in 10,000,000 that wants to do that sort of thing would be silly.

I can kinda-sorta understand a certain amount of angst if one is worried about adding NVMe storage, since each NVMe SSD wants 4 lanes. What one needs to look at, though, is why so many drives. Capacity? 2 TB consumer NVMe SSDs are commonplace and you can get more, for a price, and if you really need that much more space, you're still into hard drives, which are SATA. (Or enterprise SSDs, which tend to be u.2 form factor, which isn't supported by desktop mobos anyway, because no desktops need it.)

Performance? I defy most users to reliably distinguish a PCIe 4.0 NVMe SSD from a SATA SSD in most situations, never mind what the numbers look like on paper. (Yes, I agree, there are definitely situations where the NVMe drive makes itself known vs SATA, but it's rarely the 10x that the sequential numbers might suggest. PCIe 3.0 vs 4.0, not so much, unless you're regularly transferring tens of GB sequentially. Just work the numbers; it's hard to tell 1/3 sec from 1/6 sec.) PCIe 4.0 vs 5.0 will be even harder to distinguish subjectively; can you tell 1/6 sec from 1/12 sec? It's increasingly rare that sequential transfer rates are the long pole in the tent, outside of corporate data volumes.

Bottom line is that the real limitation on adding m.2 NVMe drives these days is the physical mounting capability of the motherboard, not the lack of dedicated PCIe lanes. For the vast majority of users, CPU lanes and chipset lanes will show effectively identical performance.
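If you want to work those numbers yourself, here's a tiny sketch (the drive speeds are ballpark sequential figures I'm assuming; the PCIe 5.0 entry is a guess for first-generation drives):

```python
# Time to read 1 GB sequentially at ballpark drive speeds (MB/s).
# Speeds are assumptions, and real workloads are rarely pure
# sequential reads - which is the whole point.
speeds_mb_s = {
    "SATA SSD": 550,
    "PCIe 3.0 NVMe": 3000,
    "PCIe 4.0 NVMe": 6000,
    "PCIe 5.0 NVMe": 12000,  # assumed for first-generation drives
}

file_mb = 1024  # 1 GB
for drive, speed in speeds_mb_s.items():
    print(f"{drive:>14}: {file_mb / speed:5.2f} s")
# SATA ~1.9 s; the NVMe drives land near 1/3, 1/6, and 1/12 s -
# good luck telling those last three apart by feel.
```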

I don't want to come off sounding like an Intel (or AMD) apologist. For instance, Intel's segmenting ECC away from the desktop was just evil IMO. But I can't fault them for limiting the CPU PCIe lanes when the 20 from the CPU and the additional lanes from the chipset really do satisfy the performance needs of 99.999% of desktops.

I see a rough analogy to ATX PC cases as they've evolved. I have a secondary machine built into a Fractal Design Define S that I bought used for a song. At least 1/3 of the case is completely wasted, because I have no hard drives. I have a pair of NVMe m.2 drives on the mobo and a couple of SATA drives hiding on a bulkhead somewhere near the PSU. The rest of the box is empty. As recently as 5 years ago I might have needed that space, and I might have whinged about expandability and drive mounting points and why didn't Fractal give me 10 drive slots instead of just 3 or 4. Today, it's all just air.
 
Yes

The PCIe x16 slots that graphics cards fit in can be used for ANY type of PCIe card. They are not coded or marked in some way to only allow graphics cards. You could toss a dual-slot NVMe card in a PCIe x16 slot if you want...or a 10Gb dual-port SFP card if you want...You could put a PCIe x2 sound card in one if you wanted.

If you are wanting to build a new system and keep it for 10 years, that has never been a good way to go. Technology changes. You are better off buying what you need now, and then upgrading in 5 years or so to a new platform vs say building a high end rig thinking you are future proofing.

To ask, and maybe I missed it: what do you plan to initially use in this system? What add-on cards do you plan to use that have you so concerned modern CPUs cannot handle what you plan to throw at them?

I get the urge to want to build with the latest and greatest, and AM4 is a dead socket, but I still built my X570 rig with a 5950X AMD processor in it knowing it will last me 3-5 years if I want it to. By that time, AM5 should be mature and the bugs worked out and heck, maybe Intel won't need a nuclear power plant to power their CPUs and an iceberg to cool them.
Actually, you can't just plug multiple NVMe drives into a single slot. The BIOS has to support, and be configured for, PCI-E bifurcation in order to split up the lanes. The good news is that, when it is supported, you can get up to four drives on one breakout card.

Whether or not it's supported probably depends on the motherboard. I gather it is supported on a lot of Asus' recent Intel boards.
https://www.asus.com/support/FAQ/1037507/

Not sure about AMD's consumer grade platforms. I know the WRX80E Sage supports it, because I've used it, but that's a thousand dollar motherboard that calls for a multi-thousand dollar CPU, and won't necessarily fit in every regular ATX case.
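For what it's worth, here's a little sketch of the usual ways a BIOS can carve up that x16 slot (which of these modes is actually offered varies by board and BIOS, so treat the list as illustrative):

```python
# Common bifurcation modes for a CPU-attached x16 slot. A 4-drive
# M.2 breakout card needs the x4/x4/x4/x4 mode; which modes a given
# BIOS offers varies by motherboard.
modes = {
    "x16": [16],
    "x8/x8": [8, 8],
    "x8/x4/x4": [8, 4, 4],
    "x4/x4/x4/x4": [4, 4, 4, 4],
}

for name, widths in modes.items():
    assert sum(widths) == 16  # bifurcation splits lanes, never adds any
    print(f"{name:>12}: {len(widths)} device(s) behind the one slot")
```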
 
Not sure about AMD's consumer grade platforms. I know the WRX80E Sage supports it, because I've used it, but that's a thousand dollar motherboard that calls for a multi-thousand dollar CPU, and won't necessarily fit in every regular ATX case.
RazorWind ..... Would you please elaborate a bit more about the WRX80E Sage system you mentioned? More details about the components attached to the motherboard, for example, would be very helpful.
 
RazorWind ..... Would you please elaborate a bit more about the WRX80E Sage system you mentioned? More details about the components attached to the motherboard, for example, would be very helpful.
He’s got a thread here. He has 4 breakout cards for (IIRC) 16 total NVMe drives for drone data and scanning. It’s a work system in the back of a van stuffed into a rack or pelican case.
 
RazorWind ..... Would you please elaborate a bit more about the WRX80E Sage system you mentioned? More details about the components attached to the motherboard, for example, would be very helpful.
I actually made a whole thread about it a few weeks ago.
https://hardforum.com/threads/pci-e...se-wifi-how-crazy-can-i-actually-get.2019992/

The cost of doing that was about $15K, though. It'd be hard to justify for any sort of personal application unless you happen to be Scrooge McDuck-level well-heeled.
 
Was about to post the link to your thread. That system is being used for what it was designed for. Most consumers don’t need that. Hell, most prosumers don’t!
 
Was about to post the link to your thread. That system is being used for what it was designed for. Most consumers don’t need that. Hell, most prosumers don’t!
Seriously. That motherboard is super cool, and I'd love to own one myself, but I can't imagine what I'd do with it. With the exception of the storage, it's overkill even for most of what we do in the office.

Unrelated, something the OP may not be aware of is that most motherboards have additional M.2 slots that are wired up to the PCH (it'll always be the south bridge to me!), which is in turn hooked up to the CPU with something approximating a PCI-E x4 link. You can totally add at least one or two more drives this way, and if what you're looking for is lots of drives, you actually might do better to consider SATA anyway.
 
That’s why I keep asking what he wants to do with it. I’m buying the Xeon version of the sage- but I’m only buying a board. Everything else is provided by work for it. I couldn’t justify it any other way- even I don’t need that much for anything I normally do on my own.
 
RazorWind ..... First things first, the WRX80E Sage system you put together is quite the PC workstation. Great job! As I mentioned in the OP, I've been away from PCs for over a decade, so please go easy on an old vet. As I researched the AMD Threadripper Pro (TP) 3955WX, I saw other Threadripper CPUs that didn't have "Pro" in their name. Further research also showed that the TP 3955WX can be had (used) for a decent price. I wrote a fairly detailed message to Gill Boyd, the YT workstation guy, and he recommended a WRX80-based system to us; hence the questions about your build.

The PC project in my mind is predicated on several different uses. Our Mac stuff is getting a bit long in the tooth, so I imagined a new PC as serving office functions at some point, but the need for NAS type functions comes in because we have a rather massive audio & video collection that needs to be converted to digital (our desire to make physical space). I'm also into upmixing audio from high-resolution stereo to multiple channels, which invariably results in a lot of rather large files. The Mrs is into video editing, which also renders a lot of large files.

Regardless, I don't wish to spend a ridiculous amount of money on a NAS device, so I also imagined the new PC as serving that purpose as well. To give you a better idea of how much storage we'll be needing, we already have about 12TB of data stored away and we're only getting started...and, as the storage trend seems to be going in the PCIe NVMe SSD direction, I trust you can understand why future PCIe 5.0 tech is a consideration.

So, as I hope you can see, we are truly "tweeners," somewhere between a consumer gaming system and a HEDT system. The WRX80 platform looks pretty interesting, but, as ever, it comes down to how much money one is willing to spend, aye?

EDIT: Would you kindly elaborate on what "PCH" means? Are you referring to what I know as "DMI"?
 
Yes

The PCIe x16 slots that graphics cards fit in can be used for ANY type of PCIe card. They are not coded or marked in some way to only allow graphics cards. You could toss a dual-slot NVMe card in a PCIe x16 slot if you want...or a 10Gb dual-port SFP card if you want...You could put a PCIe x2 sound card in one if you wanted.
MrGuvernment ..... Thank you for writing back and, essentially, debunking the PC Mag article that I quoted earlier in this thread. Another dude that I bumped into on another forum told me that half of the "tech articles" he reads these days are written by folks who research Reddit for their support and, given what I'm learning in this thread, he may be right.
 
RazorWind ..... First things first, the WRX80E Sage system you put together is quite the PC workstation. Great job! As I mentioned in the OP, I've been away from PCs for over a decade, so please go easy on an old vet. As I researched the AMD Threadripper Pro (TP) 3955WX, I saw other Threadripper CPUs that didn't have "Pro" in their name. Further research also showed that the TP 3955WX can be had (used) for a decent price. I wrote a fairly detailed message to Gill Boyd, the YT workstation guy, and he recommended a WRX80-based system to us; hence the questions about your build.
Used, the procs aren’t bad. Motherboards are still very expensive though.

The PC project in my mind is predicated on several different uses. Our Mac stuff is getting a bit long in the tooth, so I imagined a new PC as serving office functions at some point, but the need for NAS type functions comes in because we have a rather massive audio & video collection that needs to be converted to digital (our desire to make physical space).
Buy a NAS.
I'm also into upmixing audio from high-resolution stereo to multiple channels, which invariably results in a lot of rather large files. The Mrs is into video editing, which also renders a lot of large files.
Definitely buy a NAS.
Regardless, I don't wish to spend a ridiculous amount of money on a NAS device, so I also imagined the new PC as serving that purpose as well.
As someone who has 8 different repurposed systems serving as NAS devices, and two dedicated NAS (Synology) - buy a NAS. They’re cheaper than you think per G, and more reliable because you’re not trying to dual purpose the box. I do it because I’m emulating various devices outside the norm- but when I just need raw throughput storage like you do? I buy a NAS.
To give you a better idea of how much storage we'll be needing, we already have about 12TB of data stored away and we're only getting started...and, as the storage trend seems to be going in the PCIe NVMe SSD direction, I trust you can understand why future PCIe 5.0 tech is a consideration.
For ultra high performance or absurd throughput definitely. You need fast scratch space and long term storage. That matches my Threadripper box - it has two NVMe drives and then is hooked up to 40T of NAS storage via 10G Ethernet. Because most of that is idle until I pull it over and do something with it.
So, as I hope you can see, we are truly "tweeners," somewhere between a consumer gaming system and a HEDT system. The WRX80 platform looks pretty interesting, but, as ever, it comes down to how much money one is willing to spend, aye?
Ayup. For what you’re doing all NVMe is a waste. Imho of course.
EDIT: Would you kindly elaborate on what "PCH" means? Are you referring to what I know as "DMI"?
It’s what used to be the south bridge. It’s a second set of connected devices via a link back to the CPU. Usually an extra PCIE slot too.
 
RazorWind ..... First things first, the WRX80E Sage system you put together is quite the PC workstation. Great job! As I mentioned in the OP, I've been away from PCs for over a decade, so please go easy on an old vet. As I researched the AMD Threadripper Pro (TP) 3955WX, I saw other Threadripper CPUs that didn't have "Pro" in their name. Further research also showed that the TP 3955WX can be had (used) for a decent price. I wrote a fairly detailed message to Gill Boyd, the YT workstation guy, and he recommended a WRX80-based system to us; hence the questions about your build.

The PC project in my mind is predicated on several different uses. Our Mac stuff is getting a bit long in the tooth, so I imagined a new PC as serving office functions at some point, but the need for NAS type functions comes in because we have a rather massive audio & video collection that needs to be converted to digital (our desire to make physical space). I'm also into upmixing audio from high-resolution stereo to multiple channels, which invariably results in a lot of rather large files. The Mrs is into video editing, which also renders a lot of large files.

Regardless, I don't wish to spend a ridiculous amount of money on a NAS device, so I also imagined the new PC as serving that purpose as well. To give you a better idea of how much storage we'll be needing, we already have about 12TB of data stored away and we're only getting started...and, as the storage trend seems to be going in the PCIe NVMe SSD direction, I trust you can understand why future PCIe 5.0 tech is a consideration.

So, as I hope you can see, we are truly "tweeners," somewhere between a consumer gaming system and a HEDT system. The WRX80 platform looks pretty interesting, but, as ever, it comes down to how much money one is willing to spend, aye?

EDIT: Would you kindly elaborate on what "PCH" means? Are you referring to what I know as "DMI"?
I agree with lopoetve - get a NAS, and then handle the client tasks with appropriate hardware for the clients.

My use case with the bazillion NVME drives involved a need for that machine to be semi-portable and hardened against damage from rough handling. If you just need a whackton of storage, there are simpler and cheaper ways to do it than what I did there.

Edit: Also get 10G NICs and switches. It'll be worth it.
 