New Server build

My current file server has been rock solid for the last 10 years, but it's really showing its age when trying to stream 4K rips. Plus, most of the drives are over 7 years old, so I am starting to feel the pressing need to build now and migrate while it's still functional.

The current server is six 6TB WD Blacks in RAID; I would like to at least double the capacity. Looking for the best bang for my buck, but still pretty fast storage.

The main use is going to be a Plex server, but I would like to try new things, like maybe local cloud storage for the family. I will also be storing my Steam library for the first time, since next year we are going to be building rural with Starlink internet, so I won't be able to just download a game in 15 minutes like I do now.

The wishlist is something that can saturate my 2.5Gb switch, with future plans for 10Gb when the new house is built and wired.

Thinking about repurposing my current CPU/motherboard, but I am open to all suggestions.
 
At 10 years on your current server, it's definitely the right time to replace the CPU/MB. What are you running now? If you're interested in the big stuff, Skylake SP Xeons and Zen 2 Epycs are going pretty dang cheap.

For HDDs, I'd suggest looking at the big WD (Red Plus/Pro/Gold) or Seagate (Exos) drives depending on your budget and the deals being offered. Traditional hardware RAID is dead, but Unraid and ZFS/RAIDZ(2) are a thing if you're willing to put in the work for them (they have dedicated threads on this board, and they also benefit from lots of RAM and caching SSDs). Depending on the age of your RAID card (if you have one), you might want to snag a modernish PCIe 3.0 LSI/Avago/Broadcom HBA off of fleabay for cheap.

A big standalone SATA or NVMe SSD would work great for a Steam cache.
 
I'm in the process of upgrading my home server as well. In my old server I used mirrored disks (ZFS) for redundancy. In my new server I want, at least for some data, the speed of SSDs. But I sure don't want to pay for mirrored SSD storage! Realising that I want data redundancy but not necessarily high availability, I'm going to set up single-disk, non-redundant ZFS pools for the SSD storage, and set up automatic migration of the data (ZFS snapshots + send/recv) to HDDs (maybe once a day for personal storage and once a week for media libraries?). So a kind of automated, online/first-line backup rather than seamless failover.

If an SSD fails I can then switch over to using the migrated filesystem(s) on the corresponding HDD while waiting for the replacement SSD, with some but not a lot of downtime, and at most 24 hours (or whatever) of data loss. I gain SSD speed + cost savings. I will also reuse my existing HDDs for the replication, since speed isn't important in that role, and they can be easily replaced when they fail. (Obviously there will also be offline backups.)
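For illustration, a minimal sketch of that snapshot + send/recv flow; the pool names (`fast` on SSD, `slow` on HDD) and the dataset `home` are made up:

```
#!/bin/sh
# Nightly SSD -> HDD replication sketch; pool/dataset names are hypothetical.
today=$(date +%Y%m%d)
prev=$(cat /var/run/last-repl-snap 2>/dev/null)

# Snapshot the fast (SSD) dataset
zfs snapshot fast/home@${today}

if [ -n "$prev" ]; then
    # Incremental send against the last replicated snapshot
    zfs send -i fast/home@${prev} fast/home@${today} | zfs recv -F slow/home
else
    # First run: full send
    zfs send fast/home@${today} | zfs recv -F slow/home
fi

echo "${today}" > /var/run/last-repl-snap
```

Run it from cron at whatever cadence each dataset needs (daily for personal data, weekly for media, etc.).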

This solution might not be right for everyone, but it's an idea to mull over at least. :)
 
I would not continue to use a 10-year-old platform, for 1) reliability concerns and 2) performance. I'm a big fan of socket AM4 for stuff like this. All the CPUs that aren't APUs support ECC (regular APUs don't, but the Pro APUs like the Pro 5650GE do). Most motherboards support unbuffered DDR4 ECC, except for MSI's. ASRock Rack even has proper server boards with IPMI and video output without needing an iGPU (via the ASPEED AST2500 BMC), such as the X470D4U.

I've had two setups running TrueNAS Core (I think I started when it was still called FreeNAS...): an X370 Taichi / 2700X and now an X570S Aero G / Pro 5650GE (both with 128GB Nemix unbuffered ECC), and they've been rock solid. In the X370 Taichi system I had some hiccups that were my own fault when I initially did the build (my launch Ryzen 1700 didn't like FreeBSD, and my 10 onboard SATA ports, some from a third-party ASMedia controller, didn't like all being populated), but moving to a 2700X and an LSI HBA totally solved everything.

For storage I went with the Seagate 20TB Exos models that have been on sale for a long while at $279.99. Five-year warranty, enterprise quality, so far so good, but I have plenty of redundancy and backups if the array were to fail for whatever reason. I know many people don't trust Seagate, but it's hard to beat that deal in terms of price/capacity on new drives. Not sure what your budget is; obviously you can buy whatever drives you feel comfortable with. ZFS planning is "fun": you have to consider whether you want to mirror vdevs and/or run RAIDZ2/RAIDZ3, etc. You don't have to use TrueNAS Core or anything FreeBSD-based; Linux-based OSes have pretty good OpenZFS support nowadays, so choose whatever OS fits your use case best. Optionally get a GPU if you're doing something like Plex and need extra transcoding horsepower the CPU can't provide. I would at least encourage ECC memory, but it's ultimately your choice.
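For example, an 8-wide RAIDZ2 (hypothetical pool name and disk IDs) gives you the capacity of six drives with any two allowed to fail:

```
# Hypothetical pool "tank" built from eight disks as a single RAIDZ2 vdev.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-EXOS_1 /dev/disk/by-id/ata-EXOS_2 \
    /dev/disk/by-id/ata-EXOS_3 /dev/disk/by-id/ata-EXOS_4 \
    /dev/disk/by-id/ata-EXOS_5 /dev/disk/by-id/ata-EXOS_6 \
    /dev/disk/by-id/ata-EXOS_7 /dev/disk/by-id/ata-EXOS_8
```

On TrueNAS you would do the same thing through the pool-creation UI rather than the CLI, but the resulting layout is identical.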

To "saturate" 2.5GbE it's pretty easy, your read spead on the array will only need to be around 312 MB/s. To "saturate" 10GbE you're looking at around 1250 MB/s. I'm bottlenecked on network transfers pretty hard right now, but I've been too lazy to swap in a 10GbE NIC. I started with the onboard 2.5GbE just to get it up and running and said screw it. It works fine for what I'm doing.

Last thing I'll mention: hosting a Steam library is fine. Just keep in mind you probably (definitely?) can't do this as an SMB share; you'll have to create a zvol and set it up as an iSCSI target for your gaming machine.
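Roughly, the zvol side looks like this (names are made up; the iSCSI target itself is configured in the NAS OS, e.g. TrueNAS under Sharing, and mounted from Windows with the built-in iSCSI Initiator):

```
# Create a 1TB sparse (-s) zvol to back the Steam library.
zfs create -s -V 1T tank/steam

# The block device then appears as /dev/zvol/tank/steam and can be
# exported as an iSCSI LUN. On the Windows side it gets formatted NTFS
# and looks like a local disk, which is what Steam expects.
```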
 
I second some things already said.

Look into ZFS. And if you have throughput problems already, then don't do RAIDZ; do mirrors and stripes only. 4K video does not saturate even 1 Gb/sec, though.
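A minimal sketch of that layout (pool and device names are placeholders); each `mirror` pair is a vdev, and ZFS stripes across all of them:

```
# Three 2-way mirrors striped together: capacity of three drives,
# better random I/O and much faster resilvers than RAIDZ.
zpool create tank \
    mirror ada0 ada1 \
    mirror ada2 ada3 \
    mirror ada4 ada5
```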

AM4 with ECC memory is a very attractive platform for small servers.

I still use Toshiba HDs.
 
4K video does not saturate even 1 Gb/sec, though.

Uncompressed 4K video does in fact saturate a 1 Gb/sec link.
The bandwidth needed for video doesn't have to do with the resolution of the video. It has to do with how high the video's bitrate is.
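Rough numbers, assuming 24-bit colour at 30 fps for the uncompressed case:

```
# Uncompressed 3840x2160, 24-bit colour, 30 fps:
#   3840 * 2160 * 24 * 30 ≈ 5.97 Gb/s  -> swamps 1 GbE about 6x over
# A UHD Blu-ray rip (HEVC) peaks at roughly 100 Mb/s
#   -> ~10% of 1 GbE, which is why compressed 4K streams fine
```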
 
Uncompressed 4K video does in fact saturate a 1 Gb/sec link.
The bandwidth needed for video doesn't have to do with the resolution of the video. It has to do with how high the video's bitrate is.

Right, of course. I was thinking about compressed video, with the bitrates of streaming services or 4K Blu-ray rips.

Uncompressed 4K is probably rare on file servers.
 
Last thing I'll mention: hosting a Steam library is fine. Just keep in mind you probably (definitely?) can't do this as an SMB share; you'll have to create a zvol and set it up as an iSCSI target for your gaming machine.
Good info. I have done zero research into Steam shares yet; I just know I'd like the option to store as much as possible.
Any good articles/guides on using SSDs as a cache? I have zero knowledge there and would love to research it.
 
AM4 with ECC memory is a very attractive platform for small servers.
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually put a 10Gb NIC into.
 
Good info. I have done zero research into Steam shares yet; I just know I'd like the option to store as much as possible.
Any good articles/guides on using SSDs as a cache? I have zero knowledge there and would love to research it.
I'd recommend against using any sort of cache SSD. Just get as much RAM as you can and you'll be fine. You can read more about the types of ZFS cache drives here: https://www.45drives.com/community/articles/zfs-caching/
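If you want to check whether extra caching would even help before buying hardware, OpenZFS ships some introspection tools (pool name is a placeholder):

```
# Summarise ARC size and hit ratios; a high hit ratio means an
# L2ARC SSD would mostly sit idle.
arc_summary

# Watch per-vdev load while streaming to see if the disks are
# actually the bottleneck (refreshes every 5 seconds).
zpool iostat -v tank 5
```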
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually put a 10Gb NIC into.
And yeah, sorry, I saw bitnick mention it, mixed that up with you, and thought you were already aware and familiar with ZFS.

Basically any motherboard that isn't from MSI, fits your budget, and has the features you want. ASRock Rack has many boards with IPMI and video output without the need for an APU or separate video card. Here is a Newegg search that has most of them, or look on ASRock Rack's website under server motherboards and filter by CPU socket AMD AM4. These are hands down the best for a server, since with IPMI you can control it remotely, even to the point of navigating the BIOS, booting ISOs, etc.

If you don't want one of the ASRock Rack boards, pretty much anything from Asus/ASRock/Gigabyte will work with unbuffered ECC. You can check the motherboard support page to verify; maybe some very low-end ones won't work. I have used the ASRock X370 Taichi (repurposed desktop board) and the Gigabyte X570S Aero G (I liked this one for the 2.5GbE, iGPU output capability, 4 NVMe slots and the PCIe layout). For a 10GbE NIC later, I'd make sure the board has a free PCIe x4 slot (they are usually physical x16 slots at the bottom of the board or whatever).

Another random piece of advice: ideally connect your hard drives to an LSI HBA that's flashed in IT mode. ZFS needs direct access to the drives, so normal RAID controllers won't work, and I feel like motherboard SATA ports can be hit or miss (ESPECIALLY if you're using a third-party controller from ASMedia or whatever). Here is an example of what you would want. It can handle 8 SATA/SAS drives hooked up to it and will take one of your PCIe slots.
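Once the card is in, it's worth confirming it is really in IT mode and that every drive shows up; something like this, assuming the LSI sas2flash utility and a Linux host:

```
# List installed LSI controllers; "IT" should appear in the
# firmware product id if the card was flashed correctly.
sas2flash -listall

# Confirm all eight drives are visible to the OS.
lsblk -o NAME,SIZE,MODEL,SERIAL
```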
 
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually put a 10Gb NIC into.

I use the Asus Prime X570. And I have a dual 10 Gb/s Ethernet card in there.
 
Researching that one now. Does the video card matter that much for transcoding in Plex? I have my 5600 XT that's going to be replaced, and I was thinking of putting it into the server if it'd be a big improvement.

Who has the best drive warranty these days? I have heard nightmares about WD the last few years, but my WD Blacks have been great and I'm thinking of going with them again, unless there's another drive with better performance/dependability.
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?

Specs so far I'm thinking:

Asus Prime X570
64GB RAM
Ryzen 5 5600 (can pick up locally for $100); if it's not sufficient I will upgrade
8x 12TB Seagate IronWolfs
WD Black boot drive
5600 XT (already have)

Looking for a PCIe solution for more SATA ports.
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?
Looking for a PCIe solution for more SATA ports.

Hardware seems OK. For more HD ports I would use a 12G, 8-port LSI HBA with a 9300 chipset. You can connect 6G SATA disks or (2x MPIO) 12G SAS disks.

Do you plan to use a barebones filer?
An option is an all-in-one config with a virtualizer as the base and all services, including storage, on guest VMs like BSD, Linux, OSX or Solaris.
I prefer the ultra-minimalistic web-based ESXi as the base; Proxmox is another option.

Regarding the OS, you have the choice of BSD, Linux or Solaris.
Mainstream is Linux. Best ZFS integration/lowest resource needs is Solaris (native ZFS) or a free Solaris fork like OmniOS (OpenZFS).

The main advantage of a Solaris-based filer, beside easy up/downgrades, is the OS/kernel-based SMB server. Unlike Linux or Samba-based solutions, it offers full NFSv4 ACL integration into the ZFS filesystem (a superset of Windows NTFS ACLs, POSIX ACLs and classic Unix permissions), with Windows SIDs as user/owner file references and local Windows-compatible SMB groups. It is also much easier to configure than Samba via smb.conf, and it allows a backup/restore/move of ZFS filesystems with AD permissions intact, without additional uid->SID idmappings.
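For illustration, enabling the kernel SMB server on an illumos/OmniOS box is roughly this (the dataset name is made up):

```
# Enable the kernel SMB service (this is not Samba).
svcadm enable -r smb/server

# Share a dataset; the share shows up in Windows as "media".
zfs set sharesmb=name=media tank/media
```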

Btw: if you prefer ZFS on Linux, avoid OpenZFS 2.2 until the bug situation becomes clearer:
https://github.com/openzfs/zfs/issues
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?

Looking for a PCIe solution for more SATA ports.
An LSI HBA for the 8 drives; I linked one on eBay in a prior post. It NEEDS to be running in IT mode: ZFS needs direct access to the drives, and if the card is in traditional RAID mode you're bound for trouble.

If I was starting fresh I'd probably choose TrueNAS Scale (Debian-based) over TrueNAS Core (FreeBSD-based). They basically have feature parity, but on Scale you can easily spin up Docker containers (soooo many easily deployable things) vs. iocage jails on Core. VMs are better on Scale too. The plugins are poorly maintained on Core, so you have to take the time to learn basic FreeBSD operation/management and maintain your own stuff. It's not that bad and a good bit of software is supported, but it's nowhere near the ease and amount available via Docker containers.
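As a taste of the Docker side, the official Plex image can be started with something like the following (the dataset paths and the claim token are placeholders):

```
# Run the official Plex container; /mnt/tank/... are made-up dataset paths.
docker run -d --name plex \
  --network=host \
  -e PLEX_CLAIM="claim-XXXX" \
  -v /mnt/tank/plex-config:/config \
  -v /mnt/tank/media:/data \
  plexinc/pms-docker
```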

Scale will also be easier for GPU passthrough for transcoding Plex or whatever. You'll probably need a separate GPU (at least initially) for video output, since you're not using an APU with integrated graphics or a motherboard with a dedicated BMC. After you get it set up, I think you can run headless and just use the web interface/SSH for management.
 
Hard to add much to the already great advice. I also have some 10+ year-old servers; I've been trying to get the customers to upgrade for a few years now. I echo the others: yes, it's time to replace and upgrade the hardware. Also, there's so much better support and monitoring tooling for a current server compared to one that is 10 years old.

Do take the time to consider the cost versus performance factors as well.
 
Picked up a few items in decent Black Friday sales; Canadian prices are in the crapper.

As far as software goes, I am completely in over my head. The current server is running Windows Server with a basic RAID card, and I am one step above a Linux noob, so most of it is foreign to me.

Since Plex is the number one use of this server, what's the best resource for researching how to set up GPU passthrough?

Will most likely start off using the onboard network port, but I'd like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
 
Will most likely start off using the onboard network port, but I'd like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
NICs are basically plug and play. Just have the drivers downloaded in advance to save yourself any hassle. That said, if you're going straight 10Gb and not looking at intermediate NBASE-T options, an Intel X520 or X540 is a great option. Both are available in RJ45 and SFP+. NBASE-T would be either an AQC107 or the more expensive Intel X550 (with caveats).

Edit: Forgot that the X520/X540 are x8 cards for some dumb reason. If that's an issue, go for the AQC107 or X550, which are only x4.

What hardware did you buy?
 
I picked up a Ryzen 5 5600
MSI B550 Tomahawk
2TB WD Blue M.2
Corsair CX650M

Going to do more research on RAM. I have 16TB to get it up and running; no crazy deals on HDDs yet, going to keep looking.
 
Since Plex is the number one use of this server, what's the best resource for researching how to set up GPU passthrough?
Well, first you should probably decide which OS you're going to use before looking for a guide. But assuming you go with TrueNAS Scale (which is what I would personally wholeheartedly recommend), it should be as simple as installing the Plex plugin and, in the resource reservation section of the settings, selecting the 5600 XT.

[Screenshot: Plex app resource settings showing the GPU selection]


Then in the Plex settings, enable "Use hardware acceleration when available":

[Screenshot: Plex transcoder settings with "Use hardware acceleration when available" enabled]


I think it should work with a 5600 XT, which you mentioned you have. Most people are using Nvidia or Intel integrated graphics, but maybe research it a little just to verify it will work. There are lots of good resources on the TrueNAS community forums (stick to the SCALE subforum, as the CORE subforum is FreeBSD-based and configured a lot differently). Another thing: I don't even use hardware transcoding on my Plex, and I'm using a 5650GE, which is slower than a 5600. All my content is 720p/1080p HEVC, so most clients direct play, but if a transcode needs to happen the CPU can easily handle 3-4 at the same time, with a fair bit of other software running in the background (various things running "natively" in FreeBSD jails plus stuff in Linux virtual machines). If you have 4K content it will be more demanding. You might be surprised; try it without hardware (GPU) transcoding and monitor your CPU usage. Then you can just throw in the cheapest potato GPU just to have video output for when you need physical access to the server.

Get ready to set basic permissions for your datasets. You will need to make sure that you (probably as a user with SMB share access), Plex, and any programs that need the files (Sonarr, Radarr, whatever) can read/write data in the dataset.
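One common way to do that (user/group names and the path are made up) is a shared group plus the setgid bit:

```
# Create a shared 'media' group and add the service users to it.
groupadd media
usermod -aG media plex
usermod -aG media sonarr

# Group-own the dataset and let the group read/write; the setgid
# bit (2) keeps new files and directories in the 'media' group.
chown -R root:media /mnt/tank/media
chmod -R 2775 /mnt/tank/media
```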

Will most likely start off using the onboard network port, but I'd like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
Debian Linux (the underlying OS) will most likely be plug and play with pretty much ANY 10GbE NIC, so just get whatever. I'm a fan of SFP+ NICs; assuming your switch is in close proximity and has SFP+ ports, you can just run a DAC cable. SFP+ NICs can usually be had for less money, and the same goes for switches.
 
I have been running TrueNAS/FreeNAS/NAS4Free for over a decade. I've never run ECC memory; Lawrence Systems on YouTube concurs that ECC memory is a "nice to have" in the home but not needed. As long as you run a scrub job you should be fine. I have MP3s from the Napster days that I rarely touch on that server, and I haven't lost one yet.
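For reference, a scrub job is a one-liner in cron if your OS doesn't schedule one already (TrueNAS has a built-in scrub task in the UI; the pool name is a placeholder):

```
# /etc/crontab entry: scrub the pool at 03:00 on the 1st of each month.
0 3 1 * * root /sbin/zpool scrub tank

# Check the result afterwards.
zpool status tank
```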

I run Intel "T" series processors, used from [H] or ebay, 35w, perfect. New motherboard, used ram, new pico-PSUs and new SSDs or HDDs. Never had a problem.

My NAS is also now an NFS server for my Proxmox box to hold the VMs. No issues on a 10Gbps link between them.
 
I've never run ECC memory; Lawrence Systems on YouTube concurs that ECC memory is a "nice to have" in the home but not needed. As long as you run a scrub job you should be fine. I have MP3s from the Napster days that I rarely touch on that server, and I haven't lost one yet.

When you buy a car for private use, do you call a safety feature like an airbag "nice to have"?
If you want to be prepared for an accident as well as possible, use all available/affordable safety options.

The risk of memory errors that lead to data manipulation or corruption is not very high as a percentage of I/O or RAM size. But it is a statistical quantity, which means it scales with time, RAM size/usage, and read/write I/O: a certain number of problems per year. If you are lucky, it does not affect critical data, but it may result in a -100,000 instead of a +1,000,000, or some other wrong data. If you wait long enough, you will have errors for sure.

But what can happen on a RAM error? The best case is a kernel panic (no data corruption in ZFS thanks to copy-on-write; the RAM write cache is lost unless you use sync writes). But a bit flip can also occur while data is being processed, after the read checksum has been verified or before the write checksum is computed. In that case you have bad data, and even ZFS cannot do anything about it. If it happens before the write, you have bad data on the pool with correct checksums; a scrub cannot detect such problems because the checksums are correct.

So, if you like good data, use ECC. Missing ECC is the only way to lose data with ZFS, besides bugs, human error, and hardware running amok.
 
The random chance to suffer a flipped bit from cosmic rays is one thing.

DIMMs or DIMM slots going bad or overheating or whatever are quite another. There can be masses of bit errors from that one day to the next. With ECC you will be warned about this condition.

Or in other words: if you don't have ECC you can't tell whether you need ECC.
 
With masses of bit errors you have a very high chance of a kernel panic. Also, the number of detected checksum errors will increase to a "too many errors" level, which means ZFS will take the disks offline. This is what I have seen more than once with unreliable RAM.

But you are right: without ECC you will never be told when RAM errors have happened. Or, as we say in German, "Was ich nicht weiß, macht mich nicht heiß" (what I do not know will not hurt me).
 