Good use for an X299 box?

lopoetve

Yet another lockdown-inspired build... did the gaming system and the workstation in May and June, did the HTPC in August... the VR system is wrapping up shortly... which leaves me bored again - just as we go into lockdown two.
So, with an X570, TRX40, X399, and Z490 already done... all that’s left is to build an X299 system! Because why the hell not. But I can’t think of what to use it for... the local Micro Center has 10980s in stock and on sale.

Arcade cabinet setup?
Hardline watercooling learner box?
Extra ESXi host?
Portable (LOL) LAN box?

Trying to think of something creative to do that would give me a reason to build one last machine.
 
Mine was a Kubernetes lab.
128GB RAM, 3x NVMe drives, 4 SSDs.

I ran a discrete Redis 5.0 install as a dummy cache on a dedicated Samsung 960 Pro.

Ironically, I had to run MQ on the other 960 Pro because I was testing DR scenarios with an eye toward preserving data in flight.

The N-tier web application, key-value stores, and Mongo ran on the SSDs.
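If you want to copy the "one workload per dedicated drive" part of that layout, local PersistentVolumes are one way to wire it up in Kubernetes: expose each NVMe mount as its own PV pinned to the node that physically has the drive, then have the Redis/MQ pods claim them. A rough sketch with the Python kubernetes client - the node name, mount path, and storage class below are placeholders, not details from my actual lab:

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder values: hostname, mount path, and storage class stand in for
# whatever the real node and drive layout look like.
pv = client.V1PersistentVolume(
    api_version="v1",
    kind="PersistentVolume",
    metadata=client.V1ObjectMeta(name="redis-960pro"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "512Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-nvme",  # assumes a matching StorageClass exists
        # The local source is the mount point of the dedicated 960 Pro.
        local=client.V1LocalVolumeSource(path="/mnt/nvme-960pro-0"),
        # Node affinity ties the PV to the one node that has the drive installed.
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[
                    client.V1NodeSelectorTerm(
                        match_expressions=[
                            client.V1NodeSelectorRequirement(
                                key="kubernetes.io/hostname",
                                operator="In",
                                values=["x299-node-0"],
                            )
                        ]
                    )
                ]
            )
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(body=pv)
```

The Redis StatefulSet then just requests a PVC against that storage class, and the scheduler keeps the pod on the node with the drive.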
 
Oh now that’s a fascinating possibility I hadn’t considered, and actually need... damned brilliant idea!
 
Dammit. And there are even things I need to do on the CSI driver side. Why didn’t I think of that?!?
 
I mean, what would you like to learn that can take advantage of the PCIe bandwidth, RAM density, and core count?

I haven't spoken to any of the guys I used to work with at a gaming publisher lately, but cracking the Intel multithreading dev guide was a big topic earlier this year.

A significant percentage of the full-stack/BI/DBA set all dove into machine learning.

Lots of VMware guys have to start planning for hybrid, or at least drafting public cloud migration POCs.

Kubernetes is everywhere, like Linux was everywhere 20 years ago. It's just an OS for all intents and purposes. The actual workflow of Docker containers, from initial commit through prod, is an arc we all benefit from understanding.

CI/CD is something you can work on. People still cling to one recipe and don't account for issues in build time, security checks, quality checks, etc., through multiple environments. That workflow in itself is the hot ticket right now.
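To make the multi-environment gate idea concrete, here's a rough sketch in Python - every stage command and environment name is a placeholder, it's just the shape of the workflow: each environment gets its own gate list and each stage gets timed, instead of one recipe blindly reused everywhere.

```python
import subprocess
import sys
import time

# Placeholder commands -- swap in your real build, scan, and test tooling.
STAGES = {
    "build":          ["./scripts/build.sh"],
    "unit-tests":     ["./scripts/run_unit_tests.sh"],
    "security-scan":  ["./scripts/security_scan.sh"],
    "quality-checks": ["./scripts/quality_checks.sh"],
    "deploy":         ["./scripts/deploy.sh"],
}

# Each environment enforces its own gates instead of one recipe for everything.
ENV_GATES = {
    "dev":   ["build", "unit-tests", "deploy"],
    "stage": ["build", "unit-tests", "security-scan", "quality-checks", "deploy"],
    "prod":  ["build", "unit-tests", "security-scan", "quality-checks", "deploy"],
}

def run_stage(env: str, stage: str) -> None:
    # Time each stage so build-time regressions show up, and stop promotion on failure.
    start = time.monotonic()
    result = subprocess.run(STAGES[stage] + ["--env", env])
    elapsed = time.monotonic() - start
    print(f"[{env}] {stage}: {elapsed:.1f}s, exit={result.returncode}")
    if result.returncode != 0:
        sys.exit(f"[{env}] {stage} failed -- stopping promotion")

if __name__ == "__main__":
    # Promote the same commit through each environment in order.
    for env in ("dev", "stage", "prod"):
        for stage in ENV_GATES[env]:
            run_stage(env, stage)
```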

We can all learn more networking.

We can all learn more storage tier architecting for a given DB and datastore use case.

Even the desktop support guys get to learn how AD/LDAP/ID federation works remotely, at scale, at a given quality level.
 
So I’m a datacenter architect. Up until last week, I ran a lab with 1,800 cores/40TB of RAM (and about a PB of all-flash storage), all for my use and a couple of others’. Virtualized networking, a full BGP and spine-leaf stack, you name it. I’ve designed and architected container solutions... just never spent much actual hands-on time with Kubernetes for some reason - that’s why this excites me.
Cloud and VDI expert, just... never got the hands-on time with the container side once Kube came out. Lots and lots of Docker time, but... yeah. So there’s a gap I can fill!!!
 
Oh, you mean a Plex server or the like? That's on the X399 as an HTPC already :)
 
X399 as an HTPC? Isn't that a little bit overkill? I think I might have a few more build ideas for you. First, the passive HTPC build (i.e. no moving parts unless you want a Blu-ray drive) that actually sits next to the TV. Just move the X399 Plex server to the machine room next to the multi-socket machines where it belongs. That's the other build idea. You haven't mentioned building an EPYC or Xeon machine, or anything about rack mount yet... :D

I'd suggest something to do with an X299 box, but we have pretty different ideas about professional development. I'm a software engineer in the financial trading industry, so I'd use it as part of a group of machines for playing around with high-performance network and client-server programming, or just as a primary desktop, which is how I'm using mine. Containers? I have a setup in AWS for doing functional testing, but that's about all I do with virtualization. In production our stuff mostly runs on bare-metal Linux machines (aside from a few customers that insist on Windows) with isolcpus set, and our apps pin threads to the isolated cores.
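If anyone wants to poke at the pinning part on a home Linux box: reserve a few cores on the kernel command line (e.g. isolcpus=4-7) and have each worker thread set its own affinity. Rough Python sketch below - the core numbers are made up, and a real low-latency app would more likely do this in C/C++ via pthread_setaffinity_np:

```python
import os
import threading

# Hypothetical core list: assumes the kernel was booted with isolcpus=4-7,
# so the scheduler leaves these cores free for explicitly pinned threads.
ISOLATED_CORES = [4, 5, 6, 7]

def worker(core: int) -> None:
    # pid 0 applies the mask to the calling task, so each worker pins itself
    # (sched_setaffinity is per-thread on Linux).
    os.sched_setaffinity(0, {core})
    print(f"worker pinned to core {core}: affinity={os.sched_getaffinity(0)}")
    # ... the latency-sensitive loop (market data handling, etc.) would run here ...

threads = [threading.Thread(target=worker, args=(c,)) for c in ISOLATED_CORES]
for t in threads:
    t.start()
for t in threads:
    t.join()
```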

I built a 10980 box last month. I was looking at a 10940, then Micro Center put 10980s on sale. I'd rather have a 3xxx Threadripper, but that's +$600 and overkill, so a waste of money. I don't need that many cores; 12 or 14 would be enough, and 18 is plenty. My cheap option would have been an R9 3900X or 5900X, but they don't have enough PCIe lanes. I don't play twitch games, so enough PCIe lanes > a little more clock speed.
 
all that’s left is to build an X299 system! Because why the hell not. But I can’t think of what to use it for
For the sake of humanity, it's better to leave the older and cheaper builds to the folks who really need them. With the current pandemic, parts availability is getting worse, on top of the stupidly high prices for everything, including old parts.
 
X399 as an HTPC? Isn't that a little bit overkill? I think I might have a few more build ideas for you. First, the passive HTPC build (i.e. no moving parts unless you want a Blu-ray drive) that actually sits next to the TV. Just move the X399 Plex server to the machine room next to the multi-socket machines where it belongs. That's the other build idea. You haven't mentioned building an EPYC or Xeon machine, or anything about rack mount yet... :D

I'd suggest something to do with an X299 box, but we have pretty different ideas about professional development. I'm a software engineer in the financial trading industry, so I'd use it as part of a group of machines for playing around with high-performance network and client-server programming, or just as a primary desktop, which is how I'm using mine. Containers? I have a setup in AWS for doing functional testing, but that's about all I do with virtualization. In production our stuff mostly runs on bare-metal Linux machines (aside from a few customers that insist on Windows) with isolcpus set, and our apps pin threads to the isolated cores.

I built a 10980 box last month. I was looking at a 10940, then Micro Center put 10980s on sale. I'd rather have a 3xxx Threadripper, but that's +$600 and overkill, so a waste of money. I don't need that many cores; 12 or 14 would be enough, and 18 is plenty. My cheap option would have been an R9 3900X or 5900X, but they don't have enough PCIe lanes. I don't play twitch games, so enough PCIe lanes > a little more clock speed.
Every system has dual purposes. The X399 runs Plex, a nested ESXi host, and several VMs for managing media, and then doubles as a 4K console-like gaming system when I need it to (2080 Ti) - while not gaming, it's doing a LOT of other things constantly, and it's Threadripper, so I can keep chucking more RAM and drives at it as it grows (currently 50GB of RAM allocated just to VMs, out of the 64GB on the system). My room-scale VR system runs another nested ESXi host, an SSD storage appliance, and a backup device when not playing VR. And so on, and so on.

Trojan - Core Server - Xeon v3; also runs a storage appliance (auto-tiering + replication), a domain controller, and pfSense. The core server of the house, basically.
Forge - Threadripper 3960X + 5700 XT - Content creation, management, gaming (my workstation). Nested ESXi as well. Currently running the second domain controller, but I plan on moving that off to Agamemnon (see below). This is where I do all my serious work, but that's mostly VMs and simulations.
Sovereign - Z490 + 3080 - 1440p high-end gaming; backup Intel system when I need to test code on Intel (boots from an external SSD in that case).
Spartan - X399 Threadripper 1950X + 2080 Ti - Media server, 4K big-screen gaming, nested ESXi, media download tools, etc.
Hoplite - X570 + 3950X + GTX 970 - VR room-scale, Flex-1, nested ESXi. If I get a 6800 XT for Forge, the 5700 XT will flow down here.

Centurion - In design - Arcade emulation in a stand-up console. No idea what goes into this - might be X299, might be something else. Will also run Flex-2, which is the second node of an ultra-high-speed storage layer (needs 3 nodes). May run Kubernetes on this and do it as X299 - Centurion sounds like the name for an Intel box. :p
Mongol - In design; no idea yet, but Flex-3 will run on this. Still debating the name, even, or what the second use case is. Won't build it till I figure out two uses. Leaning towards this being an X399 system with a 2950X - there's one for sale in the FS/FT forum for a good price. Got a spare GTX 970 for it as well.

Agamemnon - old IBM Laptop, ESXi bare-metal, will have the second domain controller and virtualized network controllers on it soon.
Achilles - old IBM Laptop, ESXi bare-metal, will have the rest of the virtualized network controllers on it soon.
Lexington/Yorktown/Hornet/Saratoga - Avoton Atom mini-servers, all running bare-metal ESXi. These run the virtualized network that the prior two machines "control".

I do a mix of stuff on all my boxes, just because I can only "use" one at a time, so they do "useful things" when I'm not sitting in front of them. Plus, my friends don't have gaming laptops - we do regular game nights, and I can just whack each box over to being a spare gaming system in about 2 minutes with some scripts, so they can do whatever, reboot when done, and it's back to useful things!
 
For the sake of humanity, it's better to leave the older and cheaper builds to the folks who really need them. With the current pandemic, parts availability is getting worse, on top of the stupidly high prices for everything, including old parts.
There's a debate there between sending funds to folks who need funds (especially if their kit has been sitting on here for a while), and letting someone else take it, for sure. FWIW, I regularly sell my lightly used gear on here for insanely low prices for just that reason, but right now I've lost my main lab and need SOMETHING to build before I go insane. I also help folks buy kit all the time - I'm lucky in that I have the funds to build all this stuff, but not everyone is. So I help when I can. :)
 
I originally got into distributed computing (Folding@home, BOINC, etc.) many years ago because I like messing around with a variety of hardware, really enjoy building PCs, and couldn't think of any other way to justify purchasing as much PC equipment as I'd like, since I wouldn't have had a use for all of it. With COVID being such an issue, there are a lot of projects for that too, which is cool to be a part of.

The most interesting part of your list of machines, to me, is that with how much hardware you have, you still found a use for some old IBM laptops :ROFLMAO:
 
I originally got into distributed computing (Folding@home, BOINC, etc.) many years ago because I like messing around with a variety of hardware, really enjoy building PCs, and couldn't think of any other way to justify purchasing as much PC equipment as I'd like, since I wouldn't have had a use for all of it. With COVID being such an issue, there are a lot of projects for that too, which is cool to be a part of.

The most interesting part of your list of machines, to me, is that with how much hardware you have, you still found a use for some old IBM laptops :ROFLMAO:
Needed two more systems with decent local storage. They were paired up as a mobile build server for some enterprise IT equipment for a long time - they travelled in an old roller bag with a switch and 18 Ethernet cables. It was always weird getting that through security at the airport 🤣
 
So I’m a datacenter architect. Up until last week, I ran a lab with 1,800 cores/40TB of RAM (and about a PB of all-flash storage), all for my use and a couple of others’. Virtualized networking, a full BGP and spine-leaf stack, you name it. I’ve designed and architected container solutions... just never spent much actual hands-on time with Kubernetes for some reason - that’s why this excites me.
Cloud and VDI expert, just... never got the hands-on time with the container side once Kube came out. Lots and lots of Docker time, but... yeah. So there’s a gap I can fill!!!
There's a lot of technical debt on the build/release side to dig into with K8s.
Networking is different.
Simple issues, like devs not understanding that all the tools they took for granted don't work on ARM Macs, are fun to architect around.

It'll power a lot of knowledge for the next 5-10 years, because I deal with devs who don't get that K8s in and of itself is essentially an OS that lets them jump over the fact that they can't manage build artifacts properly.
 