Storage Server Build Planning - Feedback Appreciated

pirivan

First, let me say thanks in advance for any feedback, I always appreciate the advice of the knowledgeable users on this forum.

Background: I have an existing Norco 4020 (original version) with an Areca 1880i, an old-school HP SAS expander, an older Intel motherboard/CPU, and Hitachi 2TB drives. I purchased 7x new 8TB WD Red drives to build a larger array, only to find, to my dismay, that they do not physically fit into all the slots of my Norco.

So, given that I have an aging system with an issue, I decided it is time to explore some new options. My parameters are loosely as follows:
1. Support 10Gbps Ethernet in the future. It doesn't need to be built in, but it needs to be 'upgradeable' to 10Gbps.
2. I need to be able to connect backup enclosures that go offsite, with at least 10 bays between two enclosures. Currently I use 2x 5-bay AMS eSATA backup enclosures that have worked great connecting via a PCI-E x1 card. I would love to re-use these, but I am open to replacing them with USB 3.0 enclosures IF the per-enclosure price is reasonable.
3. It needs to be quiet as far as a storage server is concerned. I need to be able to make it quiet with appropriate fans if it is not via the stock cooling.
4. Performance isn't a huge concern given that this will primarily just be a storage server, but with 7x 8TB drives to start with I'd like rebuild times to be within reason. Mostly it will just stream files, but I could see transcoding from 4K to 1080p being required in the future for certain devices, until all the clients can handle 4K streaming.
5. Price is a factor, of course, and I am aiming to keep it under $1400 or so. Given the price of good RAID controllers generally, I was planning to re-use my 1880i to save money.
6. Hot swappable drive bays. I wrestled with the idea of tricking out a Fractal R5 or something but ultimately it just did not feel like the best way to meet my needs.
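On the rebuild-time point in #4, a quick back-of-envelope floor (a sketch only; the ~150MB/s average sequential speed is an assumption, and real rebuilds under load run considerably slower):

```shell
# Best-case time just to read/write one 8TB drive end to end.
# The 150MB/s average is an assumed number, not a measured one.
DRIVE_MB=$((8 * 1000 * 1000))       # 8TB expressed in MB
SPEED_MBS=150                       # assumed average rebuild speed, MB/s
HOURS_MIN=$((DRIVE_MB / SPEED_MBS / 3600))
echo "Best-case rebuild time: ~${HOURS_MIN} hours"
```

So even an idle array is looking at half a day or more per rebuild at this capacity; anything competing for I/O stretches that further.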

After some initial research, here are some solutions that met my criteria. I am interested to hear feedback on these to see if I am missing something important and/or if there are better solutions I should be aware of.

Solution 1: Entirely New Server Build
---------------------------------------------------------
The plan here would be to purchase a new Norco 4224 and fill it with brand new parts. Here is an EXAMPLE of some parts I located to get a rough cost estimate (I am not married to these or convinced that they are the best):

1. Replacement 120mm fans ($25.95x3): NF-F12-iPPC-3000
2. Replacement 80mm fans ($5.40x2): MASSCOOL FD08025S1M4 80mm Case Cooling Fan
3. Case ($429.99): NORCO RPC-4224 -> I think this comes with the 120mm fan wall now
4. PSU $120-$180: Not sure here, something fully modular, quiet, 850W, maybe Seasonic etc.
5. Motherboard $285-$342: No idea here, just picked a couple of SM boards out of a hat, ideally they would allow the HP SAS expander to still work: SUPERMICRO MBD-X11SAT-O or Supermicro X11SAT-F
6. RAM $138: Kingston ValueRAM 16GB (1x16G) DDR4 2133 ECC DIMM KVR21E15D8/16
7. CPU $270: Intel Xeon E3-1230 v6 Kaby Lake 3.5 GHz 8MB LGA 1151 72W BX80677E31230V6

Ideally, with this build I would keep my existing 1880i and could consider replacing the HP SAS expander with a more recent version (HP 12G SAS Expander - $245). Without factoring in cables or the SAS expander, I am looking at roughly $1500 for this option.

Pros:
1. It's mostly all new parts and thus increases the resale value down the road.
2. I can re-use my old eSATA card and backup enclosures, and it has plenty of PCI-E slots for installing a USB-C/Thunderbolt 3 card or a 10Gbps Ethernet PCI-E card down the road.
Cons:
1. I have read less than stellar things about the Norco 4224 backplanes in some of the reviews, and I am already well aware of their "build quality" overall (it leaves something to be desired, and their drive cage sizing is what put me in this position to start with). Also, I understand that their support is pretty useless.
2. It's been a long time, but cabling it all up in the old 4020 was a bit of a PITA, so I am not looking forward to doing that all over again in a similar case.
---------------------------------------------------------

Solution 2: Used SuperMicro Build
---------------------------------------------------------
This plan would be to purchase a used SuperMicro 846BA-R920B that comes pre-installed with dual CPUs, 2x SQ PSUs (so they should be 'quiet', as I understand it), and 48GB of RAM, and then replace the fans with quieter options. Here is the 'parts' list:

1. Case/CPU/RAM/Motherboard/PSU ($1098): 4U X9DRI-LN4F+ 24 bay SAS3 2x Xeon E5-2680 8 Core 2.7GHz 64GB SATADom 48GB SQ PS
2. Replacement fans: No idea here; I would need to research what people are using to 'quiet' these cases that is relatively straightforward to install (120mm fans ($25.95x?): NF-F12-iPPC-3000?)

Again, ideally I would re-use my 1880i and possibly replace the SAS expander with a new one (HP 12G SAS Expander - $245). Looking at somewhere around $1200, not factoring in whatever SAS cables I will need. It's possible that I could make a lower offer for the server, but who knows what they might entertain in terms of price.

Pros:
1. I understand SM cases are fantastic and I could escape some of the jankiness of the Norco cases
2. The system would mostly be built. I would need to replace the fans and install and cable up the RAID card/SAS expander, but hopefully (I have no idea how SM cases are to work in) it wouldn't be too rough.
3. It should have just enough space for me to install a PCI-E x1 eSATA card, a 10Gbps Ethernet card, and the RAID card/SAS expander, and to slot in a USB 3.0 card as well.
4. Appears to be a good 'value' if you part out the RAM/CPU/motherboard/case/SQ PSU costs
Cons:
1. I believe this is a CPU/platform from 2011, so any kind of resale value by the time I am done with it is likely approaching zero.
2. Again, given that the platform is aging, finding recent drivers for the motherboard etc. could be problematic.
3. No USB 3.0 onboard
4. I would have no warranty for just about anything in the system
---------------------------------------------------------

Solution 3: DS1817+
---------------------------------------------------------
This is a bit of a wild card. It would be a very quiet and compact solution for my storage needs, and I have an existing single-bay Synology, so I am familiar with the platform. Normally I would consider the 12-bay version, but it has not been upgraded recently and has no 10Gbps support, plus it's pricey, so it's out. Parts:
1. DS1817+: $949.99
2. 5 Bay USB enclosures: $179 (x2)

So here I would need to buy the device and the USB enclosures, and that is it. They do have the DX517 expansion units, but those are a massive ripoff; $470 for 5 bays seems ridiculous to me.

Pros:
1. Simple to install, setup and configure.
2. Has support (though I have heard it is pretty awful)
3. Lower power utilization than a 4U server and should be reasonably nice looking and quiet out of the box
4. 10Gbps expansion support via a card
Cons:
1. Right away I would fill 7 of the 8 bays, so there isn't any way I could keep this for many years and keep expanding the array. I would have to dump/resell/buy larger-capacity drives, a unit with more bays, or an expansion unit.
2. Expensive cost per drive bay you get.
3. Low-powered CPU compared to the other options; could it even transcode 4K if I wanted it to? Hard to say how bad rebuild times might be.
4. Requires me to purchase new USB 3.0 backup enclosures. It's POSSIBLE that the DS1817+ would work with the 5-bay eSATA enclosures that I have, but I haven't seen them work with anything except the PCI-E x1 adapters they shipped with, so this feels pretty unlikely.
5. Will any 5-bay USB 3.0 enclosures actually WORK with this unit for backups? Or will I have to connect them to another PC on the network and then do the backups over 1Gbps from the NAS -> PC -> enclosure (a PITA), or buy a DX517 expansion unit (overpriced)?

If you made it this far, I thank you; my apologies for the length. At a minimum, it was helpful for me to write out my thought process. At this point I am leaning toward option 2, though I am a bit leery of "upgrading" to a system of that age, and of how easily I can really make it a "quiet" server. Option 3 is of course appealing given that it would look svelte and be "simple", but it may be pretty limiting in a variety of ways.

I am interested to hear what people think if they have weighed similar plans/builds. Thanks again!
 
Just to throw in some more confusion: the amount of hardware I have gone through in my home lab is epic, finding out what was good and not good. I have used Dell, Supermicro, Norco, ATX cases customized to be rack mount, and Chenbro. By far my FAVORITE turned out to be Chenbro. It is every bit as good as Supermicro, with more options, lower cost, and more standardized internals, and it is so much better than Norco that they should not be compared. Chenbro cases range from 4 slots to 24 slots, and sleds are available for 3.5" and 2.5" drives.

The Areca is an amazing card, but cost-wise, after trying EVERY possible combination of RAID from Adaptec, LSI, and Areca, and even some more home-centric SATA cards, and also trying all the common (and some very uncommon) SAS expanders, I ended up on reflashed Dell PERC H310 cards. If you are using the RAID features built into the Areca, keep it. But expanders still get expensive, and I am a broke foo. The PERC HBA cards are amazing for handing buckets of disks to the OS.

Continuing on: possibly the best deal on a home-use server right now is the AMD Ryzen platform with ECC memory (Gigabyte ATX board). Tons of cores and FULL VM support: IOMMU, VT-d, etc. Now, there are lots of other (even cheaper) options, like a used Supermicro G34 board or something similar, and the low-power G34 CPUs are not expensive used, so making it fairly quiet would be easyish.

I probably only have $800 in my build: a 16-bay Chenbro, 16 AMD cores on a G34 socket, 96GB of ECC RAM, a 10G fiber NIC, four 1G copper NICs, and two PERC H310 HBAs, not counting the hard drives. And there are still a couple of open PCIe slots if you needed USB 3.0 or something extra.

Also, gotta plug Proxmox for your new build; might as well make it a VM host too, right?
 
Zedicus, thanks for the feedback!

In terms of Chenbro, could you point me toward some 4U 24-slot (or at least 20-slot) cases that are affordable? I am striking out doing some initial searching; what I am finding seems pretty pricey... Maybe even 16-bay, I suppose. Can you get 'quiet' PSUs for the Chenbro like you can for the SM? It looks like the Chenbros use the more non-generic hot-swappable PSUs, like the SM and unlike the Norco.

Given that I already have an Areca 1880i card, I will probably stick with it as my RAID controller, but is there a better/cheaper SAS expander than the HP 12G for $245 that you would recommend to go with the Areca?

In terms of Ryzen, the Ryzen 7 1700 looks pretty decent at $315 for 8 cores. However, I then of course need a solid Ryzen motherboard that includes a minimum of 4 PCI-E slots (Areca, SAS expander, and 10Gbps Ethernet card, and it's full already, with no room for the PCI-E x1 eSATA card). I have been out of the AMD game for a LONG time, so I am not sure what the most 'stable' manufacturer is considered to be. Maybe something like the GIGABYTE AORUS GA-AX370-Gaming K7.

I would be slightly concerned about the SAS expander working with a consumer AMD board. This is ancient history now, but years ago I had to do a ton of research to locate a motherboard that would properly power the HP SAS expander; many 'consumer' motherboards wouldn't provide the proper power to the slot, and it would not function properly (it's how I ended up with an Intel server-style motherboard and a Xeon processor in my current build). I am not at all opposed to Ryzen from a performance standpoint (it looks fantastic), just from a compatibility/quality standpoint for the motherboard.
 
You will have to watch eBay for a Chenbro case. I will admit that at retail they are about the same cost as Supermicro, but used or overstock-new the Chenbro cost drops a ton. Looky here:

As for the AMD Ryzen build: AMD designs a server CPU and then does their best to make it perform as a desktop part, so all their CPUs have the ability to do things like ECC RAM and full virtualization; it is just up to the motherboard to allow or deny those features. That said, I have had every brand of expander over the years, and even when I had an HP expander it fired up on every AMD board I put it in. I have NOT tried one on a Ryzen build, but I would say the odds are in your favor.

Yes, that Gigabyte board would be good. One concern with desktop boards and RAM is that while they will use ECC when configured correctly, it MUST be unbuffered (non-registered). As for the PCIe slot count, they do have a board with 4 full-length slots, but I am not seeing it with sales availability anywhere at the moment. If you have half-height cards you could always plug them into a 1x slot with an adapter, provided you are not bandwidth-limited.

Also, the PSU in Chenbro cases is caged. What that means is that even if it has a hot-swap module, the cage will come out and a standard ATX PSU can be fitted.

(Note: the eBay link appears to be hot-swap PSU only, as it is a custom design, but that chassis is a custom one that is not exactly what you were looking for; it is just an example anyway.)
 
I would avoid USB, eSATA, hardware RAID, and expander solutions without SAS disks.
My suggestion would be

- Use a storage case with a passive backplane (no expander)
- If you want to add an external SAS JBOD backup case, use one with SAS or SATA disks and an expander

- Use a mainboard like https://www.supermicro.nl/products/motherboard/Xeon/D/X10SDV-2C-7TP4F.cfm
This is the dual-core version. It comes with a 16-channel LSI SAS/SATA HBA, 10GbE, and can hold up to 128GB of ECC RAM.

The same board is available with more cores if needed, but for storage it is mainly RAM that counts, when used as cache.

Use ZFS!
This offers software RAID over the LSI HBA, with best-of-all performance and data security.
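As a minimal sketch of this suggestion, a software-RAID pool over the HBA might look like the following; the device names (da0..da6) and the pool name 'tank' are placeholders and vary per system, and exact flags differ by platform:

```shell
# Hypothetical raidz2 pool across 7x8TB disks attached directly to the LSI HBA
# (no hardware RAID involved); run as root, device names are placeholders.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
zfs set compression=lz4 tank    # cheap win for compressible data
zpool status tank               # verify the vdev layout
```

With raidz2, any two of the seven disks can fail without data loss, which is the software equivalent of the RAID6 role the Areca would otherwise play.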

Regarding the OS, use a storage-optimized regular enterprise OS.
ZFS originated and is native on Oracle Solaris, with the best integration of OS, ZFS, and storage services, especially when it comes to Windows-like ACL permissions and the integration of ZFS snaps with Windows "previous versions".
This is the fastest and most feature-rich ZFS option, but for commercial use it is quite expensive.

OmniOS and OpenIndiana are free Solaris forks. Aside from encryption and ultra-fast sequential resilvering, they are comparable to Oracle Solaris (a little slower in my tests).

For them I offer a web UI for easy storage management; see my setup how-to:
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf
 
The server variant of Ryzen (Naples) is due to be released in a few weeks. It might be worth waiting for if you can.
 
Franko, yeah, I am weighing my options now (so many choices), so I may end up waiting that long just due to analysis paralysis! It will be interesting to see if there are SuperMicro motherboard options for Ryzen (Naples).

In case anyone reading this was curious about an update: I am currently weighing options from Mr. Rackables on eBay. They have some essentially prebuilt Xeon solutions in SuperMicro 846 chassis that look pretty simple to get up and running; swap in Noctua fans and it's almost ready to roll after installing the Areca 1880i. It appears that some of the SM 846 chassis have backplanes with expanders built in, so it's possible that I could eliminate the HP SAS expander from the equation if I went with a BPN-SAS2-846EL1 or BPN-SAS2-826EL1 backplane in an SM chassis and just cable directly from the 1880i to the backplane. However, I would want to confirm that the Areca 1880i actually works with the SuperMicro BPN-SAS2-846EL1 or BPN-SAS2-826EL1 backplane expanders before I pulled the trigger on that solution.

I am also somewhat concerned about the noise from an SM 846 chassis, even with the SQ PSUs inserted. Hopefully, after replacing the fans with Noctuas, along with the SQ PSUs, it will have an acceptable noise level from a few feet away (I am trying to get it to desktop noise levels, essentially). My assumption is that the PSUs are simply redundant, so I could run with a single PSU to reduce noise as well.

Some users (like _Gea) are recommending ditching the Areca RAID controller card and moving to simply using the onboard LSI controller (flashed to IT mode to work as an HBA) connected to a SAS expander backplane, without ANY RAID controller, and instead using FreeNAS/ZFS or one of the other 'software RAID' solutions. The advantage there is the 'portability' of the ZFS pool between the multiple OSes that support ZFS, and fewer direct ties between your array and a specific piece of hardware like the RAID controller. The disadvantage is that if anything goes wrong, I know zero about Linux and would immediately have to hop into the forums to ask questions, whereas I can do some troubleshooting on my own if I stick with the Areca and Windows. Also, I have some real concerns about the eSATA PCI-E x1 port multiplier cards working properly in Linux, plus the backup drives I use are all NTFS, so I am sure I would need to reformat them. So I have a few concerns to consider if I want to move to FreeNAS etc.
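For what it's worth, the pool 'portability' argument boils down to ZFS export/import; a hedged sketch, with 'tank' as a placeholder pool name:

```shell
# On the old box: cleanly detach the pool from the OS.
zpool export tank

# Physically move the disks (and/or the HBA) to the new ZFS-capable box, then:
zpool import          # with no argument, lists pools available for import
zpool import tank     # attach the pool under its old name
```

Because the on-disk format is what matters, the same disks can move between FreeNAS, OmniOS, Linux with ZFS, etc., with no RAID-controller metadata to worry about.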
 
Just a quick note on your Option #3: Synology has updated the DS2415 ... but now it is the DS3617xs+. It is in their business product line and not really intended for home use. The price will likely make your eyes bug out, but then again, it is designed for business use. It does, however, support 2 additional 12-bay expansions and PCIe network cards up to 2x 40GbE. The DS2415 can do link aggregation, so you would be able to get 4GbE from it.

If I were in your situation with your budget, I would probably lean towards the building of a new machine.

Personally, I would go towards the Synology for noise, power consumption, warranty, and support reasons. If I found that it was not handling the transcoding of 4K to 1080p content sufficiently, I would pick up an Intel NUC or something similar and use it to run my media server software.
 
Durpity, thanks for the heads up. The Synologys are quite nice little packages; it's just that the per-bay price you pay is quite high (especially when you take into account the CPU performance level you are getting with it). The points you made are absolutely why I had Synology on the list, primarily the noise and power consumption (also ease of setup). The DS3617xs+ does look great, but phew, that's a high price compared to the 2415+. I am still leaning toward rolling my own again with an older Mr. Rackables SuperMicro system; I just need to find the time to do a little more research and decide on specific parts/plans.
 
No problem, pirivan. One of the other questions to ask yourself is: do you really need 10GbE speeds? Streaming 4K content takes around 25Mbps, so even considering future-proofing for 8K content, a 1Gbps connection would be able to handle 8-10 concurrent streams. Unless you're dealing with mega-files for video editing and/or medical records, 1Gbps will likely be all that you need for the next 10+ years.
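That arithmetic can be sketched quickly; the ~100Mbps figure used for an 8K stream is an assumption (roughly 4x the 4K rate), not a measured number:

```shell
# How many concurrent streams fit in a 1Gbps link, ignoring protocol overhead.
LINK_MBPS=1000
FOURK_MBPS=25      # typical 4K stream bitrate from the post
EIGHTK_MBPS=100    # assumed 8K bitrate, ~4x the 4K figure
echo "Concurrent 4K streams: $((LINK_MBPS / FOURK_MBPS))"
echo "Concurrent 8K streams: $((LINK_MBPS / EIGHTK_MBPS))"
```

Even with generous headroom for overhead, a single gigabit link covers far more simultaneous streams than a typical household generates.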

As long as the SuperMicro isn't going to be in your bedroom or living room, the noise shouldn't be too bad. However, they do tend to sound about like a gaming tower even under low load, and it will get loud under load even with upgraded (quieter) fans.
 
- 10G is mainly wanted for data copy and backup of multiple terabytes, which takes ages at 1G.
- A 19" SuperMicro case is too loud even in the next room; these are server cases for server rooms.

A regular tower case where you can place large, low-speed fans would be better suited for private rooms;
see some build examples: http://www.napp-it.org/doc/downloads/napp-it_build_examples.pdf
 
Durpity and _Gea

I appreciate the feedback; good food for thought. I agree that 10Gbps is absolutely overkill for anything I can or will do with it in the next few years; I am not even planning to purchase a 10Gbps card for some time. However, with the per-port price on 10Gbps switches slowly approaching reasonable, I believe it is possible that in 3 years or so I might be interested in purchasing a 10Gbps switch, not because I need it but more because, hey, it's cool, and the massive speed increase would be neat (so not at all necessary, just for "fun").

Given that my previous server has lasted for 5 years or so (and would have gone longer if the drives worked), it felt prudent to make sure I could slot in a PCI-E 10Gbps NIC at some time in the future. That's why the DS1817+ was appealing for the Synology option given that it would support that addition.

In terms of noise, I am a bit concerned, but reports seem to vary on how quiet the SM cases are after installing the SQ PSUs and Noctua fans; some people seem to think they are pretty quiet, but it appears that you feel this might not be the case. This will be going into an office (where two gaming PCs already reside at acceptable noise levels), where it will be close to me, so I don't want it to be too annoying with very high-pitched fans etc. The Norco 4020 currently sits in the same location the new server will go, and it has Masscool fans installed that produce an extremely acceptable level of 'white' background noise. My assumption has been that after 'quieting' the SM case down it will be similar.

I considered using a Fractal R5 (which are fantastic cases), but 11 drive bays just felt like too few to commit to a tower case that consumes that much physical space and would only leave me with a few free slots initially. Still, perhaps it is worth reconsidering. I almost pulled the trigger on a Mr. Rackables custom-built SM server; I am still negotiating a bit on price/configuration, so I still have a little time to think about it.
 