Help me pick my third mass-store server

lopoetve

Extremely [H]
Joined
Oct 11, 2001
Messages
33,883
I need a third production server to run an application that runs as a VM and takes a pile of drives. I'd prefer to use 2.5" drives for it, since you can stack the suckers into a 5 1/4" -> 2.5" bay adapter if at all possible, but I've had a bitch of a time finding systems with 5 1/4" bays left, and I'm not sure which way to go.

Right now for this I've got a Dell T20 stuffed full of drives (having no external 5 1/4" bay sucks - I'm literally duct-taping shit in), and an old Sandy Bridge gaming system going into a special Cooler Master case that'll fit under a shelf.

With the remainder of the resources this system will support normal VM workloads in this cluster, which are a mix of transcoding applications (the mass-store holds raw video/audio prior to recoding), a plex server (feeding from the transcoded data), some game servers, and a mix of other things that are RAM heavy but light on CPU. Trying to keep the entire cluster to at least Ivy Bridge era Xeons. I'd also like this system to be as small and silent as I can get it, since there are 9 other systems in this room where I work/game, and the cooler/quieter it can be the better - also it needs to be a tower as I don't have a rack.

Needs to support a minimum of 6 drives, and at least 4x 1GbE connections. IPMI/IPKVM would be nice, but I have one last KVM connection left as well. I also get a major discount on all Dell hardware, because yeah.

Option 1:
Xeon-D SoC based system. Lower power/cooling than some of the other options, but I'd have to build it instead of just buying something and adding drives/RAM. Not sure if I really feel like building another system, but I'd have a real IPMI via the SM boards which is nice.

Option 2:
Dell T330. Everything I need, add the bay and fill with drives and you're done. It's BIG though - I've got a T310 that runs another set of workloads, and it takes as much space as I'd want to give up to a single system.

Option 3:
Dell T30. Same issues as the T20 I currently have, but more modern, and just... well, stuff shit in again. Not my favorite idea, and no real IPMI either since those don't have DRAC, but hey - it's small.

Option 4:
Buy another used gaming system circa Sandy Bridge/Ivy Bridge from someone on here, strip out the video card, and just use the motherboard. No IPMI again, which means another KVM connection I've got to use. Cheaper than the alternatives, I'm sure, and I can put a Corsair water-cooling AIO in to keep it quiet.

Thoughts?
 
I'm not really an expert around the VM stuff, but from my experience on the storage stuff for home use, I'd probably go for #1 or #2. I have a couple Dell servers at home, including the T20. If you need a bunch of disks, I think the T30 would be right out. I like the T330 option: it's easy and it's an assembled unit. I have a T110ii for my storage server, and I've really liked it--within its limitations. However, next time I'm building one, I'm probably going to go with a more purpose-driven system such as your option 1. You can get a more purpose-built machine for about the same cost. Option 4 I'd only consider if I was looking for a system that didn't need to be all that reliable, and low cost was the greatest concern. The turn-key servers seem to be pretty hassle free (with the exception of my first T20).

There are others that are more qualified to answer for sure, but these would be my thoughts.
 
That's a lot of how I was thinking so far - the existing Sandy Bridge system is only there because I have the hardware; I've been using it for years and trust it quite well. I just wish the T330 wasn't so ~big~
 
Isn't there an updated successor to the T110ii? It has limitations similar to the T20, but not as many. Mine has two full-size bays you could install 2.5" adapter bays into for what you need. I also seem to recall there was an option to get internal racks for 2.5" drives on the T110ii instead of 3.5" drives. If that is possible, it would be a very good option. The T110ii is a pretty small unit, very close in size to the T20, and overall fairly quiet. The options are better on the T110ii series, and I believe they are ESXi-approved--as opposed to the T20 series. I have not really looked at Dell options since I got these, probably about two years ago now, but that's what I would look for first if the T330 is too large.

There's also the related Lenovo units, and I recall some of them have quite a few drive bays already--but they are probably similarly sized to the T330, and I am still a little jittery about Lenovo firmware issues.
 
Yeah, but same issues on the T130, the current generation of those - no optical bay for swapping, and only 4 drive bays. I'd have to get out the duct tape again to secure the last two drives, and I'm somewhat loath to do that. Doesn't look like there's a 2.5" rack kit for them - they switched the optical to a slim design in a custom bay, and then just went with a 4-pack of 3.5" drive bays.

I'm always REALLY hesitant on anything Lenovo made these days - they're just not keeping up. The ML30 would do the job, but I just can't bring myself to buy HP given who I work for.
 
Oh, yeah, I wouldn't bother with that then. I'd probably lean heavily towards the DIY approach. Building is sometimes a hassle, but it shouldn't take too long. The worst part is probably picking out the parts, but I'd probably just find a SuperMicro with the CPU, network and drive controllers I wanted in whatever form factor and be done. The actual build itself shouldn't take more than an hour, if you're careful. For a storage system, I like that Silverstone (?) case that came out a while back with 8 full-size bays in mini-ITX. You could put what, 48 2.5" drives in it?
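The "48 drives" guess presumably comes from cage math like this - a rough sketch where the 6-in-1 cage capacity is an assumption about common 5.25"-to-2.5" cages, not a published spec for that case:

```python
# If each full-size bay takes a 6-in-1 2.5" cage, the count works out to 48.
# drives_per_cage is an assumption about the cage hardware, not the case itself.
full_size_bays = 8    # bays mentioned for the Silverstone case
drives_per_cage = 6   # assumed 6-in-1 2.5" cage per bay
total_drives = full_size_bays * drives_per_cage  # 48
```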

I've been wary of HP because of some of the firmware issues (paid subscriptions or whatever), but I haven't followed it too closely. I'd probably go DIY for the next server, since the only other option is Dell, and to get exactly what I want out of the box with them I'm usually looking at a significantly higher cost.
 
That Silverstone sounds perfect. Got a link?

Silverstone DS380. It even has additional internal 2.5" drive mounts as well. I haven't used one, but it's on the list if I rebuild.

SilentPC Review: here

Amazon: here


ETA: Huh, looks like they've also released an even smaller case for just 2.5" drives with the cages already: here
 
That does mean I have to stick with the Xeon-D though, since there's no ITX board for E3 v5 that I can seem to find... Hmm. Choices, choices.
 
Option #4 alternate:
Based on what you proposed for option 4, I'm guessing a 32GB RAM limit isn't a problem.
I'd go with a PowerEdge T110 i or ii and toss an HBA and a quad-port NIC in there.
It has 2x 5.25" bays to hold anywhere from 8 to 16 2.5" drives w/ 5.25" to 2.5" cages.
You have 4x 3.5" bays left over that can easily hold 8 more 2.5" drives.
Though not full IPMI, it does have the simple functions to power on/off remotely.
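The bay math in that suggestion can be sanity-checked with a quick sketch; the per-cage and per-bay capacities here are assumptions based on common adapters (4-8 drives per 5.25" cage, 2 per 3.5" adapter), not T110 specs:

```python
# Rough check of how many 2.5" drives the suggested T110 layout could hold.
def t110_drive_capacity(per_525_cage: int, per_35_bay: int = 2) -> int:
    bays_525 = 2   # two external 5.25" bays
    bays_35 = 4    # four 3.5" bays left over
    return bays_525 * per_525_cage + bays_35 * per_35_bay

low = t110_drive_capacity(per_525_cage=4)   # 8 in the 5.25" bays + 8 more = 16
high = t110_drive_capacity(per_525_cage=8)  # 16 in the 5.25" bays + 8 more = 24
```

Those endpoints match the 8-16 drives quoted for the 5.25" bays plus the 8 extra in the 3.5" bays.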

^^ That's what I had a few years back.

T110s trend around $100-150... the real cost will be to max out the memory.

EDIT: The above is for Gen 1 (Nehalem)
T110 Gen 2's (Sandy Bridge) look to go for $200+ for Xeon E3-1220 series.

What size 2.5" drives are you looking at using (mm) and what capacity (TB)?
 
Ram isn't the issue, but the T110ii are all on older CPUs - I'm trying to avoid buying something that is approaching or past EOL. I already made that mistake once with my microserver, would like to avoid doing it again. :)

I'm planning on using Seagate enterprise 2.5" SATA drives - they're slightly thicker (12.5mm, I believe) than your average 2.5" drive, but will still fit in many bays. 1TB per drive, for a usable capacity of ~12TB.
 
Xeon-D seems to be the best fit. There is a mITX E5 board out there, but it uses a narrow ILM, so cooling options are limited.
 
Not using ZFS for this - it doesn't have the redundancy requirements I need, and it's far more focused on tier-2 workloads, while this is a tier-3/tier-4 mass-store. I've got a ZFS system for tier-2 (it's actually where the output from the processing on these will go live), and a Synology / Unity for tier-1. Your case links are sweet though - I'm looking through options there; maybe a small SM + E3 v5 would be a better way to go in that Fractal box.

edit: Redundancy/scaling requirements, I should say. The application I'm using is a scale-out app.
 
Okay... may as well eliminate options 3 and 4 then, since you have been there and done that.


So 12-16 2.5" 12.5mm enterprise drives... but you mention the system/board you seek should support a minimum of 6 drives. That does not add up - please fill in the blanks.

Do you have, or plan to purchase, an HBA/RAID card, or are you looking for on-board support?
 
Hmm... if you are considering the Fractal Define R4/R5, then you may as well get the PowerEdge T330, as the R4/R5 aren't that much smaller, and you said it has everything you want + you get a Dell discount.
 
Crap, it looked smaller, but you're right. That bugger is bigger than I thought. Far bigger. BAH. Okay, hmm...


Scale-out. There are three nodes to start (up to 12), with 6 drives per node :) I've got servers 1 + 2; I'm looking for 3. Server 1 is the T20 that I'm hacking up to make work (which I don't want to do again); server 2 is spare parts I had lying around, going into a case that will fit into a very specific spot - can't easily repeat that.
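Putting the node count together with the ~12TB usable figure mentioned earlier, the rough numbers work out like this - a back-of-envelope sketch where the implied overhead is simply back-calculated, and says nothing about the app's actual protection scheme:

```python
# Back-of-envelope check of the cluster capacity numbers in the thread.
nodes = 3
drives_per_node = 6
tb_per_drive = 1.0
raw_tb = nodes * drives_per_node * tb_per_drive   # 18.0 TB raw across the cluster
usable_tb = 12.0                                  # "12T, ish" quoted earlier
protection_overhead = 1 - usable_tb / raw_tb      # ~1/3 lost to scale-out protection
```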

What I really need is something T20/T30 sized but with a single 5.25" bay (dual preferred, but I can make single work).
 
Understood now... I am a fan of Dell myself, but sometimes you have to build what you want.
My attempt, without a budget in mind and just meeting needs:

In Win Z589 micro ATX Case
http://www.newegg.com/Product/Product.aspx?Item=N82E16811108211
Smaller footprint than T20 with 2 x 5.25" external bays.
Power supply included; probably not the best quality, but it should last - just pray it's quiet.


Supermicro X11SSH-LN4F-O
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182996
Just because it's new and not close to EOL ;)
It has 6+2 SATA ports (6 normal + 2 DOM)
4 x 1Gbit NICs (Intel i210)
Up to 64GB RAM if you ever need that.

Intel Xeon E3-1230 v5 SkyLake 3.4 GHz
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117613
Overkill from what you mentioned, but it was only $50 more than the E3-1220 v5 - you can swap and save a lil $


Samsung DDR4-2133 16GB/2Gx72 ECC CL15
http://www.superbiiz.com/detail.php?name=D416GE21S
Buy two now and two later as needed.
Or... get 8GB modules to save $, but be sure you won't want more than 32GB RAM
 
Isilon? :p
 
Not really an issue. My eventual plans for this lab will be almost 12 total servers. Right now I'm at 3 in the management cluster (A1SAi), 3 in Production, with a fourth and fifth about to stand up. :D
 
Wow. Can't imagine what workload you are running that would require all that. Guessing this is for more than just personal use.

Maybe you should skip the a la carte model and go straight for a VBlock :p
 
A lot of it is testing, a bit of software development, demoing software/hardware packages, scripting development, app packaging testing, etc. Bit of everything :) I've kept the power and cooling low by carefully picking parts so far - the A1SAi are all passively cooled, and most of the rest are lower-power E3 Xeons. The Unity is the most power hungry thing in the lab - as long as I leave the Tintri turned off. Turning it on would get too noisy though.

Vblock won't fit ;)

Hell, I've got 96 ports of 1G already in here, and I'm debating how to get to 10G in an affordable manner (and no, I'm not going Mellanox - may add that for Infiniband at some point if I have a need, but I want a real 10G switch).
 