Why are the latest DDR5 motherboards so tiny?

rkd29980

Limp Gawd
Joined
Oct 19, 2015
Messages
181
So I have been looking at the EATX and ATX Intel LGA 1700 motherboards from ASRock and Asus, and they ALL have at most a measly two PCIe 5.0 x16 slots; some also have one additional PCIe 4.0 x16 slot, for a total of three.

It also seems the new AMD DDR5 motherboards are just as limited, if not more so!

What gives?

Was it like this when DDR4 motherboards first came out? My current PC is DDR3 with 16GB, which is why I am looking to upgrade. I skipped the DDR4 generation completely, and I have to say, I'm not impressed by what I'm seeing available with DDR5 hardware.

When do you think we will see motherboards with at least 6 or more full PCIe slots like this? https://www.asrock.com/mb/Intel/X79 Extreme11/index.us.asp
 
Not sure. My uninformed guess would be that there aren't enough PCIe lanes on the CPU to support more slots, and/or they know graphics cards are going to be 3 or 4 slots wide and block them all, and/or the heatsinks for the multiple M.2 slots take the space. It may be one of these, all of them, or none, lmao
 
You will see six full slots if HEDT ever returns... otherwise, PCIe lanes are generally the limiting factor. IMO, AMD has done a better job on the PCIe lane front than Intel has. My X570 board has three full x16 PCIe 4.0 slots and two x1 PCIe 4.0 slots. It also has one NVMe slot at PCIe 4.0 from the CPU and two more at PCIe 4.0 from the chipset (shared). The new AM5 X670E boards also have three NVMe slots and three x16 PCIe slots. I believe at least one of each is PCIe 5.0, both an x16 slot and an NVMe slot.
 
My Asus Crosshair VIII Hero X570 only has two M.2 slots. I thought it was kind of crappy, since a lot of others come with three slots. I'm actually using both at the moment as well. One is tied directly to the CPU and the other to the chipset. I do think every board on the full-featured chipset should be required to have three.
 
So I have been looking at the EATX and ATX Intel LGA 1700 motherboards from ASRock and Asus, and they ALL have at most a measly two PCIe 5.0 x16 slots; some also have one additional PCIe 4.0 x16 slot, for a total of three.

It also seems the new AMD DDR5 motherboards are just as limited, if not more so!

What gives?

Was it like this when DDR4 motherboards first came out? My current PC is DDR3 with 16GB, which is why I am looking to upgrade. I skipped the DDR4 generation completely, and I have to say, I'm not impressed by what I'm seeing available with DDR5 hardware.

When do you think we will see motherboards with at least 6 or more full PCIe slots like this? https://www.asrock.com/mb/Intel/X79 Extreme11/index.us.asp
The slots are still there; they have just taken a different form. Your lanes are being taken up by M.2 slots, which are, in essence, mini PCIe slots. Also, M.2 is a stupid form factor for the desktop, as it takes up a ton of space on the PCB. The connector is small, but the device lies flat on the PCB, taking up about as much room as a conventional PCIe slot does, or more. With two, three, or more M.2 slots on some of these boards, there isn't any physical space for more PCIe slots even if you had the lanes for them.

Besides, how many PCIe slots could you possibly need? Again, non-HEDT platforms lack the lanes to run much more than what we are given anyway. You are going to use a single GPU that's going to take up two or three slots of space. You could use two with NVLink, but that's for niche applications and uses. You might need one more slot for a controller or something. What else? Modern PCs generally don't need a ton of add-in boards. If you want better audio or network ports, buy a higher end board. Obviously, there are some limitations there, but it is what it is.
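For a rough sense of the lane math here, a quick Python sketch; the lane counts are illustrative assumptions, not any specific board's layout:

Code:
# Rough lane-budget sketch with illustrative numbers (not any specific board).
# Each M.2 NVMe slot typically consumes 4 PCIe lanes, the same as an x4 slot.
cpu_lanes = 20       # assumed mainstream CPU: x16 for the GPU slot + x4 spare
chipset_lanes = 12   # assumed lanes the chipset can hand out (uplink shared)

gpu_slot = 16        # one x16 slot for the graphics card
m2_slots = 3         # three M.2 slots at x4 each

used = gpu_slot + m2_slots * 4
total = cpu_lanes + chipset_lanes
print(f"GPU + M.2 use {used} of {total} lanes")       # 28 of 32
print(f"{total - used} lanes left for extra slots")   # 4 -> one x4 slot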

The X79 board you linked with 6 PCIe slots was an HEDT motherboard. The HEDT market is pretty much dead now. Outside of some very niche scenarios, there is no justification for HEDT as a gaming platform anymore. It has no inherent advantage and quite a few disadvantages. The last real HEDT platform we got was during the Ryzen 3000 series days, and it had reached costs of around $1,000 or more for the motherboards and $2,000+ for the CPUs. With SLI dead and gone, the need for 32 PCIe lanes just for graphics cards died off. Mainstream CPUs and their platforms provide more PCIe lanes than they did before, making HEDT an absurdly expensive proposition with little to no upside for the gamer or even most enthusiasts. HEDT still exists, but pretty much in an OEM-only type of format. At the DIY and enthusiast level, it's pretty much dead.

The demand for more lanes and more cores on the mainstream side (and fervent competition between AMD and Intel) led to the mainstream segment overlapping with HEDT. HEDT ended up getting pushed up to ridiculous core counts and PCIe lane counts that became more costly and much harder to justify. Essentially, the high-end mainstream segment ate the mid-range to lower-end HEDT market alive. That, and a shift in how the lanes are used (more and more are going to M.2 slots), is why you won't see motherboards with six traditional expansion slots. We may never see something like that again.
 
how many PCIe slots could you possibly need? You might need one more slot for a controller or something. What else?

Most controllers use a USB or COM port, not a PCIe slot. My Xbox One controller uses USB.
Other things I needed more PCIe slots for include a capture card for streaming PS3 games, a PCIe-to-M.2 adapter since my board didn't include an M.2 slot, and an expansion card to add USB-C to my PC. That one makes it really convenient for plugging in peripherals in the dark, because USB-C can't be accidentally plugged in upside down.
 
Most controllers use a USB or COM port, not a PCIe slot. My Xbox One controller uses USB.
Other things I needed more PCIe slots for include a capture card for streaming PS3 games, a PCIe-to-M.2 adapter since my board didn't include an M.2 slot, and an expansion card to add USB-C to my PC. That one makes it really convenient for plugging in peripherals in the dark, because USB-C can't be accidentally plugged in upside down.
Not the kind of controller I was talking about. I was talking about network or storage controllers. The capture card was one device I hadn't considered, but you'd have a slot for that. Every motherboard made since at least 2015 has built-in M.2 slots. Any motherboard made in the last couple of years or more will have at least one built-in USB 3.x Type-C port. You obviously haven't really looked at motherboards in a while, as COM ports haven't been a thing in a very long time. Controllers have been USB pretty much exclusively since about the early Xbox 360 days. That was almost two decades ago at this point. Based on what you've said, you need precisely two PCIe slots. I've not seen any motherboards with fewer than that. You'd have one for your GPU and one for your capture card. Everything else would be built into the motherboard itself.
 
Most controllers use a USB or COM port, not a PCIe slot. My Xbox One controller uses USB.
Other things I needed more PCIe slots for include a capture card for streaming PS3 games, a PCIe-to-M.2 adapter since my board didn't include an M.2 slot, and an expansion card to add USB-C to my PC. That one makes it really convenient for plugging in peripherals in the dark, because USB-C can't be accidentally plugged in upside down.
Most of that would be built into a modern motherboard.
 
So I have been looking at the EATX and ATX Intel LGA 1700 motherboards from ASRock and Asus, and they ALL have at most a measly two PCIe 5.0 x16 slots; some also have one additional PCIe 4.0 x16 slot, for a total of three.

It also seems the new AMD DDR5 motherboards are just as limited, if not more so!

What gives?

Was it like this when DDR4 motherboards first came out? My current PC is DDR3 with 16GB, which is why I am looking to upgrade. I skipped the DDR4 generation completely, and I have to say, I'm not impressed by what I'm seeing available with DDR5 hardware.

When do you think we will see motherboards with at least 6 or more full PCIe slots like this? https://www.asrock.com/mb/Intel/X79 Extreme11/index.us.asp
Not enough PCIe lanes: you couldn't feed more slots even if you wanted to. The older boards used PLX chips (basically PCIe switches) to effectively handle more slots.
 
Not enough PCIe lanes: you couldn't feed more slots even if you wanted to. The older boards used PLX chips (basically PCIe switches) to effectively handle more slots.
PLX chips also came with their own issues. They greatly increased the cost of the motherboards, and they increased latencies across the PCIe bus. There were plenty of tests back in the day showing that this negatively impacted frame rates. You were better off sticking with x8 PCIe lanes for your graphics card(s) (even with SLI) on a mainstream system than having x16/x16 via a PLX chip. While many people overstated the performance hit, it did lead some manufacturers to offer a separate PCIe x16 slot that bypassed the PLX chip entirely for situations where only one GPU was to be used. What PLX chips really did for you was make things more convenient by handling all the switching, allowing automatic allocation of lanes to the slots in a variety of ways. This gave you more potential configurations and better distribution of the lanes, albeit at the cost of latency and higher board prices. Eventually, motherboard manufacturers stopped bothering with PLX chips entirely. I haven't seen one on a motherboard in a long time now; I don't think I've seen one since the X99 and Z170 days.
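To illustrate the trade-off, a toy Python model of what a PLX-style switch does: each slot gets a full-width link, but they share the uplink when both are busy. The per-lane figure is an approximate PCIe 3.0 number, and the 50/50 split is an idealized assumption:

Code:
# Toy model of a PLX-style PCIe switch: two x16 downstream slots share one
# x16 uplink to the CPU.
GB_PER_LANE = 0.985  # PCIe 3.0: roughly 1 GB/s per lane after encoding overhead

uplink_lanes = 16
slots = {"x16 slot A": 16, "x16 slot B": 16}

uplink_bw = uplink_lanes * GB_PER_LANE
for name, lanes in slots.items():
    burst = lanes * GB_PER_LANE        # each slot links at full x16...
    shared = uplink_bw / len(slots)    # ...but together they share the uplink
    print(f"{name}: {burst:.1f} GB/s link, ~{shared:.1f} GB/s when both are busy")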
 
PLX chips also came with their own issues. They greatly increased the cost of the motherboards, and they increased latencies across the PCIe bus. There were plenty of tests back in the day showing that this negatively impacted frame rates. You were better off sticking with x8 PCIe lanes for your graphics card(s) (even with SLI) on a mainstream system than having x16/x16 via a PLX chip. While many people overstated the performance hit, it did lead some manufacturers to offer a separate PCIe x16 slot that bypassed the PLX chip entirely for situations where only one GPU was to be used. What PLX chips really did for you was make things more convenient by handling all the switching, allowing automatic allocation of lanes to the slots in a variety of ways. This gave you more potential configurations and better distribution of the lanes, albeit at the cost of latency and higher board prices. Eventually, motherboard manufacturers stopped bothering with PLX chips entirely. I haven't seen one on a motherboard in a long time now; I don't think I've seen one since the X99 and Z170 days.
Sure, I was thinking more of the other crap you might want to attach. I'm less worried about PCIe latency on a SAS controller, for instance, than I would be on a GPU, or on an extra 10G network card, etc. I'm the oddball that uses my systems as servers too, so I like the extra connections (but I'm an EXTREME edge case, especially these days). Not GPUs, but the other fiddly bits :D

There was one Z490 board with them from Supermicro (might have been Z390). It was supposed to be a unique board for folks like me. I just bought HEDT instead.
 
Check out Level1Techs on YouTube. Wendell does all kinds of workstation stuff. He's an ultra mega nerd. I enjoy his content. He'll be able to answer your questions.
 
What gives?
Nobody is filling those slots. Or I should say, most people aren't. You're using either onboard or USB audio, onboard LAN, onboard WiFi... the only real add-in is the GPU, so manufacturers are directing more of those precious lanes to fast storage.
 
Nobody is filling those slots. Or I should say, most people aren't. You're using either onboard or USB audio, onboard LAN, onboard WiFi... the only real add-in is the GPU, so manufacturers are directing more of those precious lanes to fast storage.
Well, I'm OK with an onboard NIC, but onboard audio, never. I absolutely need a Sound Blaster card. Onboard audio is miserable in comparison. I've had a Sound Blaster card for the past 15 years and will have it no other way; there is a massive difference.
Interesting how I've never needed the third slot, though, lol
 
Well, I'm OK with an onboard NIC, but onboard audio, never. I absolutely need a Sound Blaster card. Onboard audio is miserable in comparison. I've had a Sound Blaster card for the past 15 years and will have it no other way; there is a massive difference.
Interesting how I've never needed the third slot, though, lol
I hear you, it's just that most folks don't bother. The onboard is “good enough”.
 
Well, I'm OK with an onboard NIC, but onboard audio, never. I absolutely need a Sound Blaster card. Onboard audio is miserable in comparison. I've had a Sound Blaster card for the past 15 years and will have it no other way; there is a massive difference.
Interesting how I've never needed the third slot, though, lol
Depends on the quality of the board you buy. Both of my main systems have a high-end DAC included for headphones and excellent integrated audio, and both motherboards cost well north of $500 (heck, one of them was $800, but it's HEDT). For that matter, given the change in the Windows sound subsystem, most folks are just using USB sound cards now if they don't like the integrated audio. Everything has to go through that code path ANYWAY, so you might as well make it external and have more options that way too.
 
Depends on the quality of the board you buy. Both of my main systems have a high-end DAC included for headphones and excellent integrated audio, and both motherboards cost well north of $500 (heck, one of them was $800, but it's HEDT). For that matter, given the change in the Windows sound subsystem, most folks are just using USB sound cards now if they don't like the integrated audio. Everything has to go through that code path ANYWAY, so you might as well make it external and have more options that way too.
The funny thing is I'm still "old school". I have always gone with direct inputs into the sound card: the optical out for the 5.1 and the 3.5mm out for the headphones. I have never tried any USB DAC style headphones. Lol.
 
I tried the onboard audio for the first time on my 5.1 setup on my newer AMD rig, and it was poop. Sound Blaster all the way. Sure, the signals have to be processed in software now and have been for ages, but the DACs still matter, and onboard audio has shit DACs most of the time. I have had Sound Blaster cards since the late '90s... lol. Currently on an AE-7 using the analog outs to my 5.1 setup, and it sounds wonderful. A somewhat notable upgrade over the SBx-Z I had previously.
 
I tried the onboard audio for the first time on my 5.1 setup on my newer AMD rig, and it was poop. Sound Blaster all the way. Sure, the signals have to be processed in software now and have been for ages, but the DACs still matter, and onboard audio has shit DACs most of the time. I have had Sound Blaster cards since the late '90s... lol. Currently on an AE-7 using the analog outs to my 5.1 setup, and it sounds wonderful. A somewhat notable upgrade over the SBx-Z I had previously.
I have my second rig connected to an SB ZX from the analog outs to the Z5500, so my main rig can connect the SB AE-5 to the same Z5500 with optical digital. Both rigs' headphones use the 3.5mm out. This way I've got both rigs hooked up to the Z5500 with full 5.1, and both rigs' headphones get max amplification. I can't say enough good things about the direct PCIe Sound Blasters. They are monstrous, lol. So powerful I can never even dream of going more than halfway up on volume. And with the Z5500 juiced up by either Sound Blaster, forget it; if I turn it up more than half, I think it'll blow my house down, lmao
 
I think the demand for significant PCIe is handled by the server market. I would guess that most use cases that demand many PCIe devices also need the significant core counts and memory bandwidth that server CPUs cater to: 128 lanes of PCIe 5.0, ~460 GB/s of 12-channel memory bandwidth, and 16-96 cores per socket on the latest AMD Epyc 9004 series. You miss out on the crazy per-core performance of modern desktop CPUs, but scaling out to multiple boxes for additional compute and storage is where I'd go. If you can accommodate the fastest GPU + a 100 Gbps NIC + the best boot drive money can buy... you can likely move rendering, storage, etc. to separate computers and give up minimal real-world performance.
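For anyone checking that ~460 GB/s figure, it falls straight out of the channel math, assuming DDR5-4800 (a minimal sketch):

Code:
# Where the ~460 GB/s figure comes from, assuming DDR5-4800 across 12 channels.
transfers_per_sec = 4.8e9  # DDR5-4800 = 4.8 GT/s
bytes_per_transfer = 8     # one 64-bit channel moves 8 bytes per transfer
channels = 12

bw_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{bw_gb_s:.1f} GB/s")  # -> 460.8 GB/s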
 
I have a little FiiO (spelling?) outboard sound card/DAC/whatever USB device connected to a tiny amp connected to some old Sony bookshelf speakers. It does the job for me, and then I can disable the onboard sound card in the BIOS, so I'm not using a PCIe slot. I think good ole FrgMstr posted a how-to here about it back in the day, except he added a subwoofer to the mix.
 
I have X-Fis (Titaniums, modded XtremeMusics, modded XtremeGamers) in all my systems (except the HTPC, for obvious reasons).
Had a chuckle when I recently built the 7900X system in my sig (a PCIe Gen 1 card in a Gen 5 platform).
 
Also something to consider: PCIe 5.0 has double the bandwidth of PCIe 4.0, so an x4 PCIe 5.0 slot has the same bandwidth as an x8 PCIe 4.0 slot; don't forget that aspect. Outside of graphics cards (down the line) and maybe storage (or networking?), how much bandwidth does one need in a PCIe slot? A capture card or a 10GbE NIC only requires so much bandwidth; there's no sense in having a full x16 slot for a NIC (as an example). So I personally think it's less an issue of how many full-size PCIe slots a board comes with going forward than how many x4 and maybe x8 slots come included. You really only need one x16 slot for high-end graphics; the rest can be x4, maybe x8 (or a mix, of course).
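A quick sketch of that doubling point, using approximate per-lane figures: an x4 gen 5, an x8 gen 4, and an x16 gen 3 slot all land at roughly the same usable bandwidth.

Code:
# Approximate usable one-way bandwidth per lane, in GB/s, after encoding
# overhead (gen 3 runs 8 GT/s with 128b/130b; each newer gen doubles the rate).
per_lane = {3: 0.985, 4: 1.969, 5: 3.938}

def slot_bw(gen: int, lanes: int) -> float:
    return per_lane[gen] * lanes

print(f"x4  gen 5: {slot_bw(5, 4):.1f} GB/s")   # ~15.8
print(f"x8  gen 4: {slot_bw(4, 8):.1f} GB/s")   # ~15.8 (same bandwidth)
print(f"x16 gen 3: {slot_bw(3, 16):.1f} GB/s")  # ~15.8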

If you want to have fun installing expansion cards, pull a dusty ol' system out of the closet and toss in a sound card, modem, NIC, USB 2.0 controller, and a video card in AGP trim; the sky's the limit!

Edit: As an aside to this, moving forward I've found myself looking more and more at ITX and mATX motherboards for future builds, as I just don't need the amount of slot expansion a full-sized board has. 99% of what I want/need is onboard these days, so outside of a video card (in my main system), having more is just... flexing? Even in my Plex server, the only card I'd like to add is one for some additional SATA ports for drive expansion. My pfSense box has an ITX board, which is perfect because the only card I have/need in it is a dual-port gigabit NIC.

Also, I've had a Lian Li O11 on my desk for quite a while now, and I'd like my next system to be just a bit... smaller :LOL:
 
If you want to have fun installing expansion cards, pull a dusty ol' system out of the closet and toss in a sound card, modem, NIC, USB 2.0 controller, and a video card in AGP trim; the sky's the limit!
No point half-a*sing it. Pull out a really dusty ol' system and toss in an FDD controller, HDD controller, serial/parallel controller, RAM expansion, NIC, sound card, gameport card, hard-card, etc. :D
 
No point half-a*sing it. Pull out a really dusty ol' system and toss in an FDD controller, HDD controller, serial/parallel controller, RAM expansion, NIC, sound card, gameport card, hard-card, etc. :D
Don't forget about using a board that supports a CPU cache expansion card! Mmm that's the good stuff right there 🤩
 
So I have been looking at the EATX and ATX Intel LGA 1700 motherboards from ASRock and Asus, and they ALL have at most a measly two PCIe 5.0 x16 slots; some also have one additional PCIe 4.0 x16 slot, for a total of three.
How many active PCIe 5.0 cards of any type are available?

How many PCIe 5.0 NVMe drives are available?

Crossfire / SLI are both dead.

What do you need all these PCIe 5.0 lanes for?
 
Not the kind of controller I was talking about. I was talking about network or storage controllers. The capture card was one device I hadn't considered, but you'd have a slot for that. Every motherboard made since at least 2015 has built-in M.2 slots. Any motherboard made in the last couple of years or more will have at least one built-in USB 3.x Type-C port. You obviously haven't really looked at motherboards in a while, as COM ports haven't been a thing in a very long time. Controllers have been USB pretty much exclusively since about the early Xbox 360 days. That was almost two decades ago at this point. Based on what you've said, you need precisely two PCIe slots. I've not seen any motherboards with fewer than that. You'd have one for your GPU and one for your capture card. Everything else would be built into the motherboard itself.

OP here.

So in my current PC, I have my 980 Ti, a Mellanox ConnectX-2 10G NIC, a USB 3.1 Type-C card, and a card to add more SATA ports.

On the new build I am planning, I will have an RTX 30-series card, and I will keep the Mellanox ConnectX-2. I can eliminate the USB 3.1 card, since the new motherboard will have plenty of those and some USB4 ports built in. I only briefly used the SATA card, since my Z97 Extreme6 has 10 SATA ports and has served me well, so I could eliminate that as well since it is old as fuck. But looking at these new Z690 and Z790 motherboards, they only have 6 SATA ports, so to use all 10 of my drives and to add more (I do have servers and am a general data hoarder), I would need to keep the SATA card and start using it.

So that would use up all three PCIe slots on one of these new $500+ DDR5 motherboards and leave me with no room for a capture card or any other cards I may want or need if I decide to get into game streaming or start making YouTube videos. It leaves no room for a dedicated sound card either.

Which takes me back to my original post. What gives? Come on, manufacturers, don't be assholes!
 
OP here.

So in my current PC, I have my 980 Ti, a Mellanox ConnectX-2 10G NIC, a USB 3.1 Type-C card, and a card to add more SATA ports.

On the new build I am planning, I will have an RTX 30-series card, and I will keep the Mellanox ConnectX-2. I can eliminate the USB 3.1 card, since the new motherboard will have plenty of those and some USB4 ports built in. I only briefly used the SATA card, since my Z97 Extreme6 has 10 SATA ports and has served me well, so I could eliminate that as well since it is old as fuck. But looking at these new Z690 and Z790 motherboards, they only have 6 SATA ports, so to use all 10 of my drives and to add more (I do have servers and am a general data hoarder), I would need to keep the SATA card and start using it.

So that would use up all three PCIe slots on one of these new $500+ DDR5 motherboards and leave me with no room for a capture card or any other cards I may want or need if I decide to get into game streaming or start making YouTube videos. It leaves no room for a dedicated sound card either.

Which takes me back to my original post. What gives? Come on, manufacturers, don't be assholes!
I know we want to have the cake and eat it too, but the several M.2 slots on board eat up PCIe lanes too. Something has to give. My guess is the manufacturers are using the lanes, just not in the way you'd prefer. I'd eliminate the need for additional SATA by moving the card to a server if possible, or by looking at other server upgrades. Since you have 10G connectivity, you'll get excellent performance accessing SATA over Ethernet via iSCSI or network shares.

I did what I'm recommending many years ago. Now my desktops have minimal storage internally, and the server does server things. Virtualization, DVR/NVR, and storage all moved out of the client machines.
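Some back-of-the-envelope math on why 10GbE is plenty for SATA-class storage; the overhead factors here are rough assumptions:

Code:
# Back-of-the-envelope: can a 10GbE link keep up with SATA storage?
sata3_gb_s = 6 * (8 / 10) / 8   # 6 Gb/s link, 8b/10b encoding -> 0.6 GB/s cap
ten_gbe_gb_s = 10 / 8 * 0.94    # 10 Gb/s minus rough protocol overhead

hdd_gb_s = 0.25                 # a fast HDD streams ~250 MB/s sequentially
print(f"10GbE: ~{ten_gbe_gb_s:.2f} GB/s vs one SATA SSD cap: {sata3_gb_s:.2f} GB/s")
print(f"~{ten_gbe_gb_s / hdd_gb_s:.0f} HDDs streaming at once to saturate the link")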
 
I know we want to have the cake and eat it too, but the several M.2 slots on board eat up PCIe lanes too. Something has to give. My guess is the manufacturers are using the lanes, just not in the way you'd prefer. I'd eliminate the need for additional SATA by moving the card to a server if possible, or by looking at other server upgrades. Since you have 10G connectivity, you'll get excellent performance accessing SATA over Ethernet via iSCSI or network shares.

I did what I'm recommending many years ago. Now my desktops have minimal storage internally, and the server does server things. Virtualization, DVR/NVR, and storage all moved out of the client machines.

But I don't use or care about M.2 drives. I get that they are faster, but 2.5" SATA SSDs are still plenty fast and a lot cheaper, and can you even RAID M.2/PCIe drives?

I would rather have many cheap, high-capacity mechanical SATA drives in a RAID than several really fast, overpriced M.2 SSDs. With my current motherboard, when I used the M.2 slot, it disabled several of my SATA ports, so I now have a 500GB M.2 SSD that I wasted money on many years ago and actually forgot about until now. If these newer motherboards do the same thing, letting you use either the M.2 slots OR the SATA ports but not both, then fuck the M.2.

And yes, while I have servers, I only power them on when I need them. I only really use them for storing backups, media, and things I would want to access rarely. Everything else I like to have on hand locally on a bunch of SATA drives.
 
But I don't use or care about M.2 drives. I get that they are faster, but 2.5" SATA SSDs are still plenty fast and a lot cheaper, and can you even RAID M.2/PCIe drives?

I would rather have many cheap, high-capacity mechanical SATA drives in a RAID than several really fast, overpriced M.2 SSDs. With my current motherboard, when I used the M.2 slot, it disabled several of my SATA ports, so I now have a 500GB M.2 SSD that I wasted money on many years ago and actually forgot about until now. If these newer motherboards do the same thing, letting you use either the M.2 slots OR the SATA ports but not both, then fuck the M.2.

And yes, while I have servers, I only power them on when I need them. I only really use them for storing backups, media, and things I would want to access rarely. Everything else I like to have on hand locally on a bunch of SATA drives.
To answer your first question: yes, you can RAID NVMe drives. My mobo supports RAID 0/1/10 on the M.2 slots.
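For the capacity/speed trade-offs between those levels, an idealized Python sketch; the drive count and speeds are made-up illustrative numbers, and real-world scaling is lower:

Code:
# Idealized view of RAID 0/1/10 across identical NVMe drives.
def raid_summary(level: str, drives: int, size_tb: float, read_gb_s: float):
    if level == "0":        # striping: all capacity, throughput scales
        cap = drives * size_tb
    elif level == "1":      # mirroring: usable capacity of a single drive
        cap = size_tb
    elif level == "10":     # striped mirrors: half the raw capacity
        cap = drives / 2 * size_tb
    reads = drives * read_gb_s  # in theory, reads can hit every member
    print(f"RAID {level}: {cap:g} TB usable, up to ~{reads:g} GB/s reads")

for lvl in ("0", "1", "10"):
    raid_summary(lvl, drives=4, size_tb=2, read_gb_s=3.5)  # four 2TB gen4 drives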
 
But I don't use or care about M.2 drives. I get that they are faster, but 2.5" SATA SSDs are still plenty fast and a lot cheaper, and can you even RAID M.2/PCIe drives?

I would rather have many cheap, high-capacity mechanical SATA drives in a RAID than several really fast, overpriced M.2 SSDs. With my current motherboard, when I used the M.2 slot, it disabled several of my SATA ports, so I now have a 500GB M.2 SSD that I wasted money on many years ago and actually forgot about until now. If these newer motherboards do the same thing, letting you use either the M.2 slots OR the SATA ports but not both, then fuck the M.2.

And yes, while I have servers, I only power them on when I need them. I only really use them for storing backups, media, and things I would want to access rarely. Everything else I like to have on hand locally on a bunch of SATA drives.
I think you might need to get caught up on storage options. M.2 is not "overpriced" anymore; you can get a Samsung 980 1TB M.2 for $120, while an 870 SSD is $160 (CAN pricing). You can also RAID NVMe drives with most boards' built-in RAID.
It seems you want server-level hardware connections on your desktop; not happening. Maybe you need a NAS...
 
But I don't use or care about M.2 drives. I get that they are faster, but 2.5" SATA SSDs are still plenty fast and a lot cheaper, and can you even RAID M.2/PCIe drives?

I would rather have many cheap, high-capacity mechanical SATA drives in a RAID than several really fast, overpriced M.2 SSDs. With my current motherboard, when I used the M.2 slot, it disabled several of my SATA ports, so I now have a 500GB M.2 SSD that I wasted money on many years ago and actually forgot about until now. If these newer motherboards do the same thing, letting you use either the M.2 slots OR the SATA ports but not both, then fuck the M.2.

And yes, while I have servers, I only power them on when I need them. I only really use them for storing backups, media, and things I would want to access rarely. Everything else I like to have on hand locally on a bunch of SATA drives.
A single decent Gen 3 NVMe drive will still crush any SSD or HDD RAID setup. HDDs are cheap for the capacity you can get now. No reason to have multiple in a normal system nowadays. When I built my NAS, I got rid of all the 1, 2, and 4TB drives I had in my main system. Most people don't need 4 SATA ports, let alone 10. Get with the times, old man.
 
Supermicro has a Z790 board with four slots, although two of them are 3.0 x1.

https://www.supermicro.com/en/products/motherboard/C9Z790-CGW

They also have a pair of W680 boards with four slots, two of which are 3.0 x4, plus a PCI slot for your older cards.

https://www.supermicro.com/en/products/compare?sku=X13SAE-F,X13SAE
Since when does Supermicro put lighting on their boards?! NGL tho, that's kinda sexy...

 
But I don't use or care about M.2 drives. I get that they are faster, but 2.5" SATA SSDs are still plenty fast and a lot cheaper, and can you even RAID M.2/PCIe drives?

I would rather have many cheap, high-capacity mechanical SATA drives in a RAID than several really fast, overpriced M.2 SSDs. With my current motherboard, when I used the M.2 slot, it disabled several of my SATA ports, so I now have a 500GB M.2 SSD that I wasted money on many years ago and actually forgot about until now. If these newer motherboards do the same thing, letting you use either the M.2 slots OR the SATA ports but not both, then fuck the M.2.

And yes, while I have servers, I only power them on when I need them. I only really use them for storing backups, media, and things I would want to access rarely. Everything else I like to have on hand locally on a bunch of SATA drives.
If you don't like M.2, you can use an M.2-to-U.2 converter. You can also get M.2-to-SATA converters cheap. I read about a 10-gigabit M.2 NIC being made a while back.

 
I think you might need to get caught up on storage options. M.2 is not "overpriced" anymore; you can get a Samsung 980 1TB M.2 for $120, while an 870 SSD is $160 (CAN pricing). You can also RAID NVMe drives with most boards' built-in RAID.
It seems you want server-level hardware connections on your desktop; not happening. Maybe you need a NAS...

No. As a data hoarder who has literally spent many thousands of dollars on hard drives in just the last decade and is well up on storage trends, I think it is you who needs to "get caught up on storage options".

Just looking at Newegg and nowhere else, the cheapest 8TB M.2 SSD is $1,075, while the cheapest 2.5" 8TB SSD is $640. I got my 8TB QVO off of eBay for $450, and those who are patient and willing to shop around for deals and holiday sales can always get a 2.5" SSD for half the price of an M.2 SSD. And those willing to deal hunt and/or take a chance on a used, refurbished, or shucked drive can get a 20TB for around $300.
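Running those quoted prices through a quick price-per-terabyte calculation:

Code:
# Price-per-terabyte using the numbers quoted above: (price_usd, capacity_tb).
drives = {
    '8TB M.2 SSD (Newegg)': (1075, 8),
    '8TB 2.5" SSD (Newegg)': (640, 8),
    '8TB QVO (eBay)': (450, 8),
    '20TB HDD (used/shucked)': (300, 20),
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.0f}/TB")
# -> ~$134, $80, $56, and $15 per TB respectively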

So yes, my statement still stands: M.2 drives are way overpriced.
 
No. As a data hoarder who has literally spent many thousands of dollars on hard drives in just the last decade and is well up on storage trends, I think it is you who needs to "get caught up on storage options".

Just looking at Newegg and nowhere else, the cheapest 8TB M.2 SSD is $1,075, while the cheapest 2.5" 8TB SSD is $640. I got my 8TB QVO off of eBay for $450, and those who are patient and willing to shop around for deals and holiday sales can always get a 2.5" SSD for half the price of an M.2 SSD. And those willing to deal hunt and/or take a chance on a used, refurbished, or shucked drive can get a 20TB for around $300.

So yes, my statement still stands: M.2 drives are way overpriced.
M.2, meaning NVMe, is more expensive per gigabyte because it vastly outperforms and/or outlasts those other options. You're not comparing things properly, or you're trying to do something in a less efficient way than available technology can do it.
There's a reason only the slowest mainstream Intel CPUs support ECC: they don't want the i3, i5, and i7 cannibalizing the server market. Celeron and Pentium CPUs have supported ECC memory for years, but none of the higher-end stuff will. Motherboard manufacturers probably tailor their motherboards as a result. You probably want to look at the W680 chipset motherboards.

 