General Purpose Server 30TB+

demechman

What are some resources I could look into for building a server in the 30-90TB range that would allow for downloading, Plex streaming, and possibly game server hosting?

I was considering an older rack server but I am not up to date on all the hardware and options.

Budget for this is around $5K, so this is more a performance and longevity requirement than a cost one. I am also less familiar with Linux, but I am willing to learn if I have to go that route; I would prefer a Windows solution so I can run some Usenet applications and other things as needed.
 
How much do you anticipate your storage needs growing in the next 3-5 years? Is heat and/or noise an issue for where you plan on storing the server? Is electricity priced higher than average where you are?
 
I am figuring on about 5TB of usage per year, with content resolution doubling every 4-5 years, so my guess is a 100TB server would last me about 10 years. I am planning on a basement location where heat and noise can be minimized.

I am interested in the process of selecting from all the various server options, and things like 10GbE and HDD shelves are still exotic to me. I am just starting the learning process on all the pros and cons, and I cannot find decent articles on the subject. I read a lot of "look how I bought this cheap stuff and made a thing" posts; I would prefer something a little more prosumer. I had a Synology but really hated the interface and the inability to update applications or try new ones for downloading and serving up media.
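As a quick sanity check on that estimate, here's a rough sketch (the 5TB/year starting rate and the 5-year doubling interval are just the figures above, not predictions):

```python
# Rough capacity projection: usage starts at 5 TB/year and the yearly
# rate doubles every 5 years, per the estimate above.
def projected_usage_tb(years: int, start_rate_tb: float = 5.0,
                       doubling_years: int = 5) -> float:
    total, rate = 0.0, start_rate_tb
    for year in range(1, years + 1):
        total += rate
        if year % doubling_years == 0:
            rate *= 2
    return total

for y in (5, 10, 12):
    print(f"after {y} years: ~{projected_usage_tb(y):.0f} TB used")
# ~25 TB at 5 years, ~75 TB at 10, ~115 TB at 12 -- so ~100 TB of usable
# space covers roughly a decade under this assumption.
```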
 
You can get as fancy or as simple as you'd like.
My file "servers" have been simple JBOD setups for the past decade (see HTPC in sig, have 2x of those in separate locations)
Started out way back when with a bunch of Promise IDE HAs + ~200GB HDDs, now on 16x 8TB He8s.
 
Well, since it sounds like you are going to run this headless, I would recommend running ESXi as your host, which would let you run both a Windows guest for your Win apps and a *nix guest with a ZFS pool (on a passed-through HBA) as your storage. If you ONLY want to go Windows, then I would suggest hardware RAID 6: a 24-bay enclosure, a Xeon or Threadripper CPU, and ECC RAM. 12x 10TB drives would yield you 100TB of available, dual-parity storage and leave another 12 slots to expand as time goes on. Hardware RAID 6 will let you expand the array one drive at a time as needed without wiping your data and starting over; ZFS is a bit more complicated, and you may need to add multiple drives at a time depending on your initial Z2 setup.
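To put rough numbers on that (just a sketch; the 10TB drives and the 6-wide second vdev are examples, and it assumes the traditional rule that an existing RAIDZ vdev can't be widened):

```python
# Usable capacity of a dual-parity group: (drives - 2) * drive size.
# The same formula applies to hardware RAID 6 and to a single RAIDZ2 vdev.
def dual_parity_tb(drives: int, drive_tb: float) -> float:
    return (drives - 2) * drive_tb

print(dual_parity_tb(12, 10))                          # 100 TB from 12x 10TB, as above

# Hardware RAID 6 online expansion: grow one drive at a time.
print(dual_parity_tb(13, 10))                          # 110 TB after adding a 13th drive

# ZFS without vdev expansion: grow the pool by adding a whole new RAIDZ2
# vdev, e.g. six more drives, two of which are again parity.
print(dual_parity_tb(12, 10) + dual_parity_tb(6, 10))  # 140 TB, but six drives at once
```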
 
r/DataHoarder/ will have all the info and reading you need.

You're being pretty vague in your questioning. Technologies you should read into and understand:

JBOD
NAS
SAN
RAID
ZFS
Storage Spaces

Big storage is fun...

$5K is enough to play with. But expect to spend 60% of that on drives alone.
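As a rough check on that split, assuming something like $25-35/TB street pricing for large drives (an assumption about pricing, not a quote):

```python
# Rough drive-budget check: how much raw capacity does 60% of $5K buy,
# assuming roughly $25-35/TB street pricing for large drives?
budget = 5000
drive_budget = 0.6 * budget   # = $3,000
for price_per_tb in (25, 30, 35):
    print(f"${price_per_tb}/TB -> ~{drive_budget / price_per_tb:.0f} TB raw")
# At ~$30/TB, $3,000 lands right around 100 TB raw, before parity.
```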

---------

I'm a solution design architect.

Personal deployment is 120TB of highly available storage for Plex.

Largest deployment designed was 300TB available storage for a local government.
 
So far this is what I have come up with.

Supermicro case - CSE-846BE1C-R1K28B - $1700
RAID card - LSI 9280 - $150 (internal connections?)

I am a little lost on the best choice for a motherboard and processor. I assume I will want ECC memory, so that may limit things.

But this would put me around $2500 for hardware and leave room for $4100 for hard drives, or about $6500 for the project.

Any suggestions for a hardware RAID 6 controller? As an alternative to a fully integrated system, I am considering a disk shelf and a separate server connected via a SAS expander. This would keep the hard drives separate from the server, and I could get a 24-bay disk shelf, fill half of it with 10TB drives, and expand as needed for a much longer service life. I assume this would increase my budget, which isn't a big deal.

If I wanted to custom build my hardware for the server, are there any configurations online I could research to understand incompatibilities?
 
Personally, I like the Areca cards; an 1882 would be a great addition if you intend to stick with Windows and go hardware RAID. For spinning rust there is no reason to go SAS3, so you can stay with SAS2 for now. You can get a lightly used 24-bay SAS2 Supermicro enclosure/expander combo for 1/3 to 1/2 of the price you quoted above, so you might want to think about that.
 
I assume you are referring to ServerSupply or eBay to find used hardware. Any suggestions on a case model? I assume a regular consumer motherboard and processor will work, but would it be worth it to get a server-grade setup?
 

Just read this - a couple of thoughts...

Keep the server and storage in one case (think a Supermicro 4U 24/36-bay server). I am currently using SAS to attach two additional 4U 24-bay enclosures to my main server, and while it works pretty well, the added complexity of hardware issues that cause drives in the external enclosures to drop out can be a PITA to deal with. Luckily I'm using SnapRAID with StableBit DrivePool, so I don't have any RAID corruption issues.

It is possible to use a consumer mobo/CPU in a Supermicro case (as well as a consumer PSU to avoid the noise that the server PSUs generate), but you will need an adapter to connect the power/reset/HD/etc. connectors. Which one you need will depend on the chassis that you choose.

If you go the route of a consumer mobo/CPU in a Supermicro case, make sure the mobo has plenty of 4-pin fan headers. The case fans that come with these server-grade systems can run very fast and very loud. Connecting all the case fans (at least 5 of them for the cases I have - SC846TQ) will allow you to tune down the fan speed.

The biggest compatibility issue you need to be sure to research/address is the SAS controller capability.

If you eventually want to get to a 24-drive array, you will need at least double parity (RAID 6), or else go non-RAID and configure the drives as JBOD. For my arrays using SnapRAID (3TB drives x24, and 4TB drives x24, twice) I do triple parity. I've actually had 2 drives fail in the same array, and while rebuilding I lost 1 more. Luckily I didn't have any data loss, but one of the benefits of SnapRAID is that you would only ever lose the content on the drive that fails (if you can't rebuild); RAID arrays put the entire array at risk. That wasn't such a big deal back in the day when it was 16x 1TB drives, but 16x 12TB drives would make me very hesitant to do a standard RAID setup.
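For a back-of-the-envelope feel for why drive size changes the risk, here's a sketch using the common 1-per-1e14-bits URE spec-sheet figure (a worst-case assumption; real drives usually do better, and extra parity softens it). With one of 16 drives already dead, the other 15 have to be read back cleanly:

```python
import math

# Odds of hitting at least one unrecoverable read error (URE) while reading
# every surviving drive during a rebuild, using the 1-per-1e14-bits spec
# figure and a simple Poisson approximation.
def rebuild_ure_risk(surviving_drives: int, drive_tb: float,
                     ure_per_bit: float = 1e-14) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # TB -> bits
    return 1 - math.exp(-ure_per_bit * bits_read)

print(f"15x 1TB survivors:  {rebuild_ure_risk(15, 1):.0%}")    # ~70%
print(f"15x 12TB survivors: {rebuild_ure_risk(15, 12):.1%}")   # ~100%
```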

Good luck - building a larger media server is a lot of fun, but not without its gotchas and growing pains.
 
You can find a lot of servers, HBAs, NICs, etc. that are still quite fast (Sandy Bridge or newer) on eBay.
 
$5K as a budget for your solution may be doable, but keep in mind: anything you buy off eBay is already deprecated, throwaway hardware, meaning it is most likely 4 years old or older. That may not be a big deal now, but getting drivers to function with a modern OS can be a nightmare. As an example, you can still "buy" Emulex LPe11000 cards, but driver support for them is dead past Windows 2008. You could easily score some HP G4 or G5 server for a "steal of a price" only to find out that later versions of Windows/Linux do not function due to lack of hardware driver support.

Maybe instead of building one giant, complicated monster machine, why not do something with 3 or 4 Intel NUCs with a few external drives attached to them? Upgrades will be easier, and storage management will be much simpler. It would be way easier in 3 years to replace the NUCs with new ones and just reattach your external storage. Is it as sexy as having a rack in your basement? No... actually, let's just be honest with ourselves: there is nothing sexy about having a server rack in your house.
 
I do this all the time for clients.

A 12-bay Dell R510 with a Xeon, ~16GB of RAM, a Dell PERC H700 RAID card (supported almost universally), redundant power supplies, and 12 trays is about $500 on eBay with a 90-day warranty.

I then buy 12 drives, set up the RAID, set up the OS, and go.

You can buy 3-4 of those Dell units for the price of putting together a sled. The Dell will have better driver support.
 
Why would you spend $1700 on that case? You can get them used on eBay for about 1/3 of that with the correct backplane. You are better off getting last-gen servers being pulled from a datacenter for newer hardware, something like an E5 Xeon with 32-64GB of RAM. You should be able to find similar hardware in the case you want for around $800.

RAID card and drives will be up to you, depending on the OS you choose. I went with ZFS, running a 6x 4TB Z2 array for now, along with 4x 400GB Intel S3700 SSDs in RAID 10 for VM storage. I also have an Intel S3500 SSD as a cache drive in front of my Z2 array, since I built it with slower/lower-powered 2.5" drives in a Supermicro 826 case.


And DO NOT GET HP ANYTHING for server hardware. You cannot get any drivers/firmware/support without an enterprise license. Avoid that shit like the plague. Dell or Supermicro.
 
I do a lot of UCS work these days, but I've always been a fan of HP's stuff. It just sucks because it's getting to the point where all the big-name vendors are putting drivers and updates behind a paywall these days. The server version of pay-to-win. =(
 
I had no clue about the paywalled firmware and software updates. That is some shady stuff. I think I may seriously consider the Unraid approach for this build; the more I look at my requirements, the more I think it will be the best solution given the VM options. Since I will be doing the build in the fall, I am not going to speculate on the hardware, as I am sure there will be some deals the closer I get. I am thinking a 24-bay Supermicro with an Intel base. I may do some mining, game servers, and media/backup stuff in a single case.
 