unlimited budget server/workstation, EPYC 9174F?

I thought I told you to get the P5800X, what the hell. This build got way too soft
Oops! I meant to Google it and got distracted. My guess was something to do with a RAID controller, but now I see. Oh damn, I SEE! 1.6TB max size is 400GB less than I had intended, but could still be OK.

How is the P5800X expected to compare to PCIe 5.0 SSDs? 7k sequential RW for Optane versus probably 12k for the SSDs, but random RW will probably still be 3x+ IOPS. That sound right?

Optane does what it does using PCIe 3.0. Any idea if there are plans for a newer Optane? I'd be annoyed to drop $3.7k just to have something better come out right away!

Optane is dead. Not sure if there’s much out there in the market still 😂
Sorry if you are joking, what do you mean? It's dead? I see the P5800X for sale in a number of places.
 
Intel just discontinued Optane this year. The P5800X is, I believe, the latest/highest-end one they released. It should be PCIe 4.0. It's crazy expensive per gigabyte, the form factor is weird, and the speed benefits only seem to make sense for certain workloads (hence poor sales/discontinuation). It also has very high endurance. I basically just wanted someone who could test stuff for me :p Supposedly the random read/write speeds at low queue depths are great for SQL databases and potentially for crypto nodes, but info is hard to come by since no one buys these for consumer use.

Here's a couple of 1:1 reviews:
https://www.tomshardware.com/reviews/crucial-p3-plus-ssd-review-capacity-on-the-cheap/2
https://www.tomshardware.com/reviews/intel-optane-ssd-dc-p5800x-review/2
 
Sorry if you are joking, what do you mean? It's dead? I see the P5800X for sale in a number of places.
Intel discontinued the lines and killed the fabs. Optane is dead. If you want one - I can help you get one. PM me.
 
I’ve got 10 of them…? 😂
 
I thought I told you to get the P5800X, what the hell. This build got way too soft :(



Is that what generally causes the long boot times? I don't have a lot of memory by volume in my Supermicro boards, but all slots are filled and it only takes maybe 2 mins to boot.
BIOS being silly.
 
I've been reading about Optane a lot, it's pretty dope. Optane is as fast or faster than anything else right now. The gap is closing with NAND but only in some ways. It may lose to PCIe 5.0 SSDs for sequential RW, though maybe not sustained, but should continue to be faster at QD1 and for random I/O.

Need to use a PCIe to U.2 card. Some people suggest this one. Does it matter if it goes in a CPU or chipset PCIe slot? I've got both open, since I have just the RTX4090 and these slots:
CPU
2 x PCIe 5.0 x16 slots (support x16 or x8/x8 modes)
Chipset
1 x PCIe 4.0 x16 slot (supports x2 mode)
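Napkin math on the slot bandwidth, in case it helps frame the question (my own rough numbers after encoding overhead, so take with a grain of salt):
Code:
# Rough per-direction PCIe bandwidth vs. what the drive can push.
# Numbers are approximate (GB/s per lane after 128b/130b encoding).
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def slot_bw(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

drive_seq_read = 7.2  # GB/s, roughly the P5800X's rated sequential read

for gen, lanes, label in [("5.0", 16, "CPU x16 slot"),
                          ("4.0", 4,  "the drive's own PCIe 4.0 x4 link"),
                          ("4.0", 2,  "chipset slot in x2 mode")]:
    bw = slot_bw(gen, lanes)
    verdict = "fine" if bw >= drive_seq_read else "bottleneck"
    print(f"{label:34s} ~{bw:5.1f} GB/s -> {verdict} for a ~{drive_seq_read} GB/s drive")
If that's roughly right, the x2-mode chipset slot alone would already cap the drive, never mind the shared chipset uplink.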
 
Eh. Depends on the hardware. If you're going professional it tends to be OK. Consumer is harder. To me both Zen 4 and Alder Lake/Raptor Lake are transitional platforms. But the professional versions are old right now.

Even if you go with prosumer/pro gear there's a chance of an undiscovered problem with a new platform. I would never want to go back to tearing my hair out as well as wasting days over some stupid quirk of a new platform.

I will say this rarely happens, but with how many really bad security bugs CPUs have had recently, I wouldn't trust new platforms at all until they've been out in the wild and proven for at least 2-3 years.

Giving up the extra 10-20% performance wouldn't be a dealbreaker for me compared to the stability and peace of mind.
 
~definitely~ CPU if you have a choice.
True, but a lot of times it's more than 10-20%. And it can be argued that new platforms don't have security bugs - they ain't been found yet! :D
 
Second alternative - you can use an M.2 -> U.2 adapter too. Although since you generally only get one CPU-attached M.2 slot, that may not be ideal.
 
Almost all the ones I've had with server CPUs tend to be on the very long end. But most of them have been HP or Lenovo branded. The one Sage board we tested had piles of PMEM in it, and that takes forever too.
I have an HP Z6 G4 workstation I just grabbed off someone the other week. I'm booted into the OS in about 2 mins or less from power on, and that's with an Intel Xeon Silver 4114, 64GB of ECC RAM, 4 HDDs, and 1 M.2 drive.
 
I had a thought: RAID 0 is great, but with SSDs it has the downsides of 1) increased risk, and 2) worse random RW. The P5800X mitigates both of those, yes? Is it possible to run two P5800X in RAID 0? Are we getting HARD yet?

On the CPU I've got one free PCIe slot and two free M.2 slots. The M.2 to U.2 adapters provide a U.2 plug, so there would be a cable to the drives. Is that OK? Wendell from Level1Techs says, "a cabled adapter will not work", but isn't the drive designed to be connected to U.2 with a cable anyway? The next post down mentions M.2 to U.2 "was definitely slower".

There are bifurcation cards, so I could split eg the CPU x16 into multiple U.2 plugs. That'd still require a cable though. It might be easier to find a quality card like that (eg SuperMicro has one) than it is to find a quality M.2 to U.2 adapter.

On the chipset I've got one free PCIe slot. If I want to use two PCIe slots for RAID 0, one would be on the CPU and one on the chipset. Is that OK? I can hardly find any talk about the difference.
 
An optane drive is the one thing that could actually saturate the PCH links. It’s only x4 after all.

Can you do RAID0 of them? Yep. In fact most memory configurations of it do just that - across 12-24 devices! It's overkill except as a DRAM alternative though 😂😂
 
Can you do RAID0 of them? Yep.
Sign me up! :D

An optane drive is the one thing that could actually saturate the PCH links. It’s only x4 after all.
Would performance still be good if I have one P5800X on the chipset and one on the CPU? It's either that or put both on the CPU using M.2 to U.2 adapters + cables. I can't tell which is better.

lopoetve see DMs, pretty please!
 
That's always SUPER hard to say - will it work? Oh definitely. Will it work ~well~? Depends on what you're trying to do. It's the one drive set where you can hit a limit for sure on the architecture of consumer platforms, it's just hard to say ~where~ or how much you will (or won't) notice. :D
 
Alright, that's fair enough. Knowing it'll work is helpful. I'll have to just try it and see which setup works best!
 
If you're buying RAM for a Zen 4 CPU, it only supports unbuffered RAM. Registered RAM won't work, to the best of my knowledge.
 
Aye, where were you days ago!? ;) Registered was the only DDR5 ECC I could find in stock. Thankfully I cancelled the order once I figured out it wasn't going to work. For now I'll go with G.Skill Trident Z5 2x16GB, 6400MHz, CL32-39-39-102 (F5-6400J3239G16GA2-TZ5RK), which is from the mobo QVL. I'd much rather have 32GB ECC, so I'll keep my eye out for that in time.

Updated build:
Code:
CPU: Ryzen 7950X
Cooler: Noctua NH-D15 Chromax Black
Mobo: Asus ProArt X670E
RAM: G.Skill Trident Z5 2x16GB, 6400MHz, CL32-39-39-102
Main storage: 2x Optane P5800X, 1600GB, RAID 0
Mass storage: Crucial P3 Plus, 4TB, M.2
GPU: Asus Strix RTX 4090 OC
PSU: Corsair AX1600i
Case: Fractal Meshify 2
Case fans: 6x Noctua NF-A14 Black
Paste: Corsair XTM70
 
Are you planning to add any more PCIe devices/storage? Not sure, but it might make sense to move to another platform just so you can get more PCIe lanes.
 
Yeah, I might add more storage later. If the Optanes are best connected via PCIe, there are three M.2 x4 slots and one x2 available. If the Optanes are better via M.2, there are two PCIe x16 slots, one M.2 x4, and one x2 available. Either way that's probably enough.

I changed mass storage to: Seagate FireCuda 530, 4TB, heatsink

I'll do another build mid-2023 that will be more of a server. It'd be fitting for that one to be more nuts. I've already got the case for it. It's a Storinator Q30, but I had Protocase customize it to fit E-ATX, 6x fans 120->140mm, and a faceplate that hides the fan screws. The faceplate is Cerakote'd to match Ubiquiti silver. It'll be a NAS and run server processes similar to what I described for this current build (it's in a different location). The software builds it does in VMs take longer and are more intense than the builds I do on my workstation.

 
Aye, there seem to be a number of single-drive cards. I only found the one above that supports two AND is also PCIe 4.0. I have high hopes for it!

This Level1Techs thread mentions the second drive getting x1, as you mentioned, but that was a PCIe 3.0 AIC.
 
If you have a lot of budget, I would frankly rethink using your workstation as a NAS (if you even need a NAS): having the NAS down because you're updating your workstation's Linux or anything like that, how much power it would burn staying awake 24/7 so you can reach your NAS while on vacation if you aren't confident in your wake-over-internet setup, and so on. You probably already have a machine more than strong enough for NAS duty lying around, or will after adding the new machine.

From your description, you do not seem to need the PCIe lanes or much, if any, of what EPYC or Threadripper Pro would bring?
Integrating a NAS with a workstation/server essentially makes it DAS.
So the benefits of a segmented and optimized route for internal devices as well as external access are kinda pointless here.
 
An optane drive is the one thing that could actually saturate the PCH links. It’s only x4 after all.
Isn't the x4 link between chipset and CPU on my X670E motherboard a PCIe 5.0 link? IIUC, that'd be equivalent to x8 PCIe 4.0. Edit: never mind, seems it's PCIe 4.0.
 
Is it possible to run two P5800X in RAID 0? Are we getting HARD yet?
As a complete peasant in regard to almost everything, but especially RAID and Optane, I'm not sure I get a semi.

Optane feels to me like an exercise in reducing latency and boosting IOPS while still having more-than-OK bandwidth, while RAID 0 seems to be the opposite exercise.

Would go straight Optane for cache, programs, and data that are collections of small files, and regular NVMe RAID 0 for larger files, to go hard. My first reflex for a dual-Optane machine would be one as the OS/apps drive and the other as a cache drive.
 
I'm also relatively a noob, but here are my thoughts.

Many reviews show Optane as roughly equivalent to the current best SSDs in every way, except for a few ways where it beats them handily. Sometimes by 4-5x (random reads), sometimes 100-300x+ (endurance). Not all the reviews show sustained write performance. SSDs can hit similar 6k+ speeds, but not sustained.

The only important metric for me is what will actually be noticeable in workstation usage. The PCMark 10 tests stand out for that by measuring scripted real workloads in various software. The performance test has it at double a normal score, though that isn't very tangible. The bandwidth test shows the cumulative differences result in roughly twice the bandwidth for "real-ish usage". Another real-world test is SPECworkstation, and those results for Optane used a CPU with 50% fewer cores than the other drives on the chart. At any rate, altogether this gives me confidence Optane will give some sort of noticeable real-life improvement.

Usage varies of course. Some of what I do deals with a lot of small files (eg compiling OpenJDK, 64k files), so that may benefit a little more than usual. My expectation is that a single P5800X will be noticeably faster than any SSD. I don't expect an earth-shattering difference, just that things open faster, compilation/etc completes sooner, and generally I'll wait slightly less for things to happen. Optane is the only way to get this bit of extra performance. Once budget is removed from the equation, there's no reason not to do it.

At this point, I'm already sold on a single P5800X. Now put two in RAID 0, roughly doubling RW to 11-12k MB/s with negligible failure risk and hopefully losing little of the great performance in other aspects (latency/random/etc). This destroys current SSDs, which can't benefit from RAID 0 without suffering poor random performance and increased failure risk. PCIe 5.0 SSDs will compare to a single P5800X much like current SSDs do (similar burst RW; weaker sustained, random, and latency). RAID 0 lets the Optane advantages persist through the PCIe 5.0 SSD generation while not being worse in any aspect.
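Quick back-of-the-envelope of what I'm assuming the stripe buys me (spec-sheet-ish numbers, not measurements):
Code:
# Rough RAID 0 expectations for 2x P5800X (assumed single-drive numbers).
seq_read_mbs = 7200     # MB/s sequential read for one drive (approx. rating)
rand_qd1_lat_us = 6     # ~microseconds per 4K read at QD1 (approximate)
drives = 2

# Sequential: big requests get split across both drives, so throughput
# roughly scales with drive count (until PCIe or the CPU gets in the way).
print("sequential read ~", seq_read_mbs * drives, "MB/s")

# QD1 random: each 4K request still lands on one drive and the next request
# waits for it, so striping doesn't reduce latency at all.
print("QD1 4K random ~", int(1_000_000 / rand_qd1_lat_us), "IOPS, regardless of drive count")
So the doubling should really only show up for big/parallel transfers; the low-queue-depth random behavior should stay a single-drive story.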

With 3.2TB of Optane, I can put nearly everything on those. I'll still have a decent SSD (Seagate FireCuda 530, 4TB) for other storage (backups and other seldom-accessed large files).

I just need the P5800Xs and we can see how it actually goes down! Where did lopoetve gooooo?! :D
 
Usage varies of course. Some of what I do deals with a lot of small files (eg compiling OpenJDK, 64k files), so that may benefit a little more than usual. My expectation is that a single P5800X will be noticeably faster than any SSD. I don't expect an earth-shattering difference, just that things open faster, compilation/etc completes sooner, and generally I'll wait slightly less for things to happen.
Could be different in some use cases, but in general compilation has so much compute relative to the (extremely small by modern standards) text file sizes that the difference between even a RAM drive and a 5400 RPM HDD can be nil, as everything fits in RAM (and/or the disk cache), and if it doesn't, the cost of a drive read is minimal relative to the compute.

Maybe it changes over time with these superfast machines, but back in the day we were talking margin-of-error differences going from an external USB drive to an internal SSD:
http://blog.kdgregory.com/2014/06/is-that-ssd-really-helping-your-build.html

Hard drive reviews virtually never have compile benchmarks, and I imagine that's the reason.

I could try to do some benchmarks in that regard.
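If I do, it would probably be something minimal like this (the paths and build command are made up, and ideally you drop the page cache between runs so RAM caching doesn't hide the drive):
Code:
# Sketch: time the same build from a directory on each drive and compare.
import os
import shutil
import subprocess
import time

SOURCE = os.path.expanduser("~/src/some-project")          # hypothetical project
TARGETS = {"optane": "/mnt/optane/bench", "nvme": "/mnt/nvme/bench"}  # made-up mounts
BUILD_CMD = ["make", "-j16"]                                # whatever the project uses

for name, path in TARGETS.items():
    if os.path.exists(path):
        shutil.rmtree(path)
    shutil.copytree(SOURCE, path)      # the copy itself also exercises small-file writes
    start = time.perf_counter()
    subprocess.run(BUILD_CMD, cwd=path, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print(f"{name}: build took {time.perf_counter() - start:.1f} s")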
 
I saw mention (by Wendell @ Level1Techs) of compiling Linux and LLVM with Genoa (something about Linux not being big enough), but of course that was about CPU. I agree, Optane might not help compile times much. At the very least when I copy a folder with many tens of thousands of files, it takes a while. It probably helps there and in general other places, at least a little -- hopefully enough to be noticeable.

It'll be hard to quantify how "worth it" RAID 0 Optane really is. If it's impossible to notice, that'd be lame; otherwise worth is subjective. I'm fine doing it not knowing for sure, as long as it's reasonable to expect it to be fast. I like that it should be basically as fast as possible. If it could possibly make a difference for me, then I'm sure to have the benefit. That's probably common reasoning for dorking around with overclocking or otherwise trying to extract the last bit of performance.
 
What the hell, recent news says Intel released new Optane drives! P5810X in 400GB and 800GB. What are the chances after all this time, I finally stumble upon the P5800X, order some, and then BOOM new drives are released!

I've seen old news about a planned P5801X that presumably got shit-canned when Intel gave up on Optane. This P5810X is different, albeit with a suspiciously similar model number.

Intel Ark pages are here: P5810X 400GB and P5810X 800GB. For comparison: P5800X 800GB (which I've ordered). Better, here's a comparison all on one page. TLDR; the new 800GB is slightly worse than the old one. The 400GB compared to my 800GB has basically identical specs, except the 400GB draws more power, is heavier at 147g vs 140g, and has 1.38M IOPS random write vs 1.35M.

I assume they changed the model number because it's either better or worse than the old model in some ways, and it does seem worse. I could return my P5800X (they haven't arrived yet, can be returned until end of January), but it doesn't seem worthwhile. Maybe these new models will be priced more competitively? Does anyone have any info on P5810X pricing?
 
Things are getting spicy 'round here!

If only I had a #%@$ing case to put all this in! It comes tomorrow. My patience is already thin, I might just fire it up without a case -- sleep is overrated. The dual U.2 card hasn't arrived, but I have a single U.2 card. I think I'll briefly run off the FireCuda to get a feel for the speed, then move to 1 Optane, then both. I'm not planning an in-depth review, but I can do some quick benchmarks of each.

FireCuda 530 and CPU are in the mobo already. I removed all the M.2 heatsinks, since the FireCuda has its own and that seems better, and I don't need them over the unused M.2 slots. Also removed some silly covers that aren't doing anything except blocking airflow, especially over the chipset. Plus less of the gold/brown color is welcome. At least there's no RGB.

I've been way too obsessed with this build as I wait for deliveries. Reading up on all the things, I came across claims that delidding can drop temps 20C. Well, that delta is stock + paste versus delid + LM. The performance difference is only +50-100MHz, so not worth the hassle and risk, but modding stuff is kind of fun! The last time I delidded was a 9900KS using der8auer's tool. His Ryzen 9 tool isn't ready yet, but I'm sure I can delid without it no problem, using guitar wire to cut the glue and a hot air station to desolder the indium.

Look at me, starting out wanting badass server parts and stability with ECC, ending with consumer parts and a delid!

The problem is that without the IHS, the CPU retention bracket has to go. I could keep the stock backplate and use the Noctua NH-D15 cooler to hold the CPU down. It isn't ideal since it only has 2 screws rather than 4, plus it's super tall so it has a lot of leverage, but I'm pretty sure it could work. The cooler has plastic standoffs that would be easy to shorten by the height of the IHS.
 
Look at me, starting out wanting badass server parts and stability with ECC, ending with consumer parts and a delid!
Don't do it. Save that for the gaming/toy machine. Let us know how the optane performs with those adapters.
 
None of this is making any sense to me.
Having an unstable work machine is going to cost you more money than the extra performance is worth.

The Optane drives don't directly connect to any consumer PC parts, so you need an adapter for them. The adapters are all kinda janky from what I've seen though, and while a 1x Optane adapter seems to work fine, I haven't found any feedback on the 2x ones showing that both drives operate at full PCIe speeds. Since these are really fast drives it's possible this could result in reduced performance, but I'm not sure, so basically I want him to let us know how it goes.
 
I agree.

Even tho I focused on AWS after 2015, I still talked to guys I worked with still focused on datacenter compute.
Both camps have to architect data tiers that have to contend with gpu acceleration or keep very horizontal transactions at edge iops speeds for consumption.

I hung out with a L4 AWS colo engineer at the last Summit just to get a read on how much gear they were cooking for certain services.
Dumpsters full of enterprise components literally used up.

But the mix of build here is not driven by those use cases.

It’s not like having an argument with a network admin about using R instead of python bc legit his tooling is scanning that many CIDR blocks and needs to deliver answers about population contention in real time. I’m like either improve your logic bc it’s affecting execution time, or get to know a new lang that’ll help do the facepalm math for you.

If I need a gpu host, it’s going to leverage a stack of A5000 in its chassis.

If I need a ram host for commodity virtualization then we’re talking clusters of them so I can use minified vms at scale.

I guess you can exceed consumer gaming cpus for frequency, then decide to apply that pegged core spec to density of cores….but why?

I mean if you wanted to prove your methods in Rust vs Go against production datasets in a lake house sure.
But most actual computer scientists aren’t using local machines for that.
That’s literally just counting one lang vs another one producing say columnar data, and visualizing the drag race over time.

You could also visualize the entire chain of data like: https://www.cs.ucdavis.edu/~ma/abs.html

A lot of this work is done on an M1 air or a Thinkpad running Ubuntu, then you ship the job and go back to whatever you were doing.

Just confused by the point of the build bc I’ve heard of some builds that try to address unique use cases before.

This build doesn't seem to conform to anything I've built, or any problems I've seen.

Like, you're better off buying a recent ThinkStation for $3k from a MAANG employee that just got canned and calling it a day.
I suspect a bunch of the FinOps noobs I see wandering around SF will have all sorts of stacked vendor workstations up for grabs the way the exchanges are going.

The links are weird, you wanna talk about sustained write performance then Bytedance’s 10pg paper on the recommendation engine used for TikTok is an amusing read.
 
It's not so complex: I want to dork around making a hybrid workstation/server and I don't have a budget limit doing it. It is used for work, but it's also plain fun to optimize and use a fast setup. Computer go BRRRR

The dual U.2 card works! I'll post more and results soon, but first I want to test RAID 0. Some questions:

Do I want to use "read cache" for my array?

Do I want to use "write back cache"? I'd guess not, as it's scary that a crash before the write cache is written to disk could be disastrous. NAND SSDs buffer with DRAM, but probably they have enough capacitance to flush on power loss.

What's a good stripe size for my P5800Xs?
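The way I currently picture striping (correct me if I'm wrong): a 4K random read still lands on a single drive, while a big sequential read spans both, regardless of whether the stripe is 64K or 128K. Toy calculation:
Code:
# Toy model: how many drives does a single I/O touch for a given stripe size?
def drives_touched(offset_kib: int, size_kib: int, stripe_kib: int, drives: int = 2) -> int:
    first_stripe = offset_kib // stripe_kib
    last_stripe = (offset_kib + size_kib - 1) // stripe_kib
    return min(drives, last_stripe - first_stripe + 1)

for stripe in (64, 128):
    print(f"stripe {stripe}K: 4K random read -> {drives_touched(0, 4, stripe)} drive(s), "
          f"1M sequential read -> {drives_touched(0, 1024, stripe)} drive(s)")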

In other news, what do you think of this thing?
https://www.graidtech.com/product/sr-1010/
 
I can’t believe you guys aren’t trying to stop this OP at this point.
It sounds like mania with a credit card and no conceivable value, besides Homer Simpson designing his perfect car.
 
You've been a member for 9 years and you think we're going to try to stop someone from building an over-the-top machine? This isn't Bogleheads ;)

Sounds like he's got some stuff working already, it's way better than Homer's car.
 
somebrains, cool trolling. Now do a bit more OT rambling and we'll be convinced of your smarts.

I went with no read or write caching and a 64KB stripe size. I'll try turning the disk benchmarks into charts soon!
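Rough plan for the charts, assuming I run fio with --output-format=json and just pull the numbers out of each result file (the filenames here are placeholders for whatever configs I end up testing):
Code:
# Sketch: turn a few fio JSON results into a quick bar chart.
import json
import matplotlib.pyplot as plt

RESULTS = {                       # placeholder result files, one per config
    "FireCuda 530": "firecuda_randread.json",
    "1x P5800X": "optane1_randread.json",
    "2x P5800X R0": "optane2_randread.json",
}

labels, iops = [], []
for name, path in RESULTS.items():
    with open(path) as f:
        job = json.load(f)["jobs"][0]   # first (only) fio job in the file
    labels.append(name)
    iops.append(job["read"]["iops"])

plt.bar(labels, iops)
plt.ylabel("4K random read IOPS")
plt.title("QD1 random read")
plt.savefig("randread_qd1.png", dpi=150)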
 
Have you decided on a case yet? There are some cases out there in 4U footprints that can also be rackmounted if you decide on making a server out of it later.
sorry for encouraging him, but I miss fun builds like this.
 