Your Thoughts on My Redundant Server Plans

Beta4Me

I originally wanted to put together a server to store my ever-growing media collection. This sort of snowballed into more than just a media server--it would also be the server for HARC Technology (my business).

So now the plan is...
2 x Head Nodes
1 x 4U JBOD Chassis/Storage Shelf (45 x 3.5" Disk Slots)
1 x 2U JBOD Chassis/Storage Shelf (12 x 3.5" Disk Slots)

(NB: I might be missing some totally obvious things, so please point them out. Also, I have very little idea what I should set up for VMware as far as networking etc. goes, so I haven't included that and would really appreciate any advice on it.)


SPECIFICATIONS

Head Nodes (x2)
SuperMicro 826A-R1200LPB
SuperMicro X8DTH-6F
2 x Intel Xeon E5645 Hex-Core 2.40GHz CPU
12 x Crucial 8GB DDR3-1333 Reg ECC 1.5V RAM (96GB Total)
2 x Intel 320 SSD 40GB (RAID1 Mirror for ESXi)
3 x LSI SAS 9205-8e HBA (2 for 4U & 1 for 2U using dual-uplinks)
Intel 10GbE AF DA Dual Port Server Adapter (2 x 10GbE) [Is this a good choice?]
2 x Intel Gigabit Quad Port Server Adapter (2 x 4 x 1GbE) [Which one? :confused: I350, ET, ET2, EF...]
(So this gives 10 x 1GbE + 2 x 10GbE in total: 2 x 1GbE onboard + 8 x 1GbE from the quad-port cards + 2 x 10GbE from the dual-port card.)

4U Storage Shelf
SuperMicro 847E26-RJBOD1
15 x Seagate Constellation ES.2 3TB 64MB
(HDDs: 2 x RAID-Z2 (5+2) + 1 x Hot Spare = 10 x Data + 4 x Parity + 1 x Hot Spare)
(NB: In the future, I would expand my pool by adding blocks of 15 drives based on the same structure as above, so that I can expand to the full 45 drives including 3 hot spares.)
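To make the capacity math explicit, here's a rough sketch (decimal TB, ZFS metadata/slop overhead ignored, so real figures will come in a little lower):

```python
# Rough usable-capacity sketch for the planned 4U pool layout.
# Assumes 3 TB (decimal) drives and ignores ZFS metadata/slop overhead.

DRIVE_TB = 3

def usable_tb(vdevs, data_per_vdev):
    """Usable capacity counts data drives only (parity and spares excluded)."""
    return vdevs * data_per_vdev * DRIVE_TB

# Initial block: 2 x RAID-Z2 (5 data + 2 parity) + 1 hot spare = 15 drives
initial = usable_tb(vdevs=2, data_per_vdev=5)   # 30 TB usable

# Fully populated 4U (45 slots): 3 such blocks = 6 vdevs + 3 spares
full = usable_tb(vdevs=6, data_per_vdev=5)      # 90 TB usable

print(f"Initial 15-drive block: {initial} TB usable (10 data + 4 parity + 1 spare)")
print(f"Full 45-drive chassis:  {full} TB usable (30 data + 12 parity + 3 spares)")
```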


2U Storage Shelf
SuperMicro 826E26-R1200LPB
8 x SSD for L2ARC [I'm thinking: Intel 320 SSD 160GB]
4 x SSD for ZIL (2 mirrored pairs) [I'm thinking: Intel 311 SSD 20GB. Thoughts?]
12 x LSISS9252 (One interposer for each SATA SSD)

Software
On each Head Node:
VMware vSphere Hypervisor/ESXi
Solaris 11 (VM) in Active/Passive configuration [or comparable OS--which should I be going for?] - 2 x 10GbE (1 to switch + 1 crossover between nodes)
VMs Load-Balanced between Nodes:
SBS 2011 Standard (SBS 2011 Premium) - 2 x 1GbE (teamed)
WS 2008 R2 for SQL + VMware vCenter (SBS 2011 Premium) - 1 x 1GbE
WS 2008 R2 for RDS - 1 x 1GbE
WS 2008 R2 for Lync Server - 1 x 1GbE
WHS 2011 - 2 x 1GbE (teamed)
Miscellaneous Temporarily-Run Testing VMs (mostly Windows 7) - 1 x 1GbE
(2 x on-board 1GbE for VMware use) [I'm thinking 1 for management and 1 crossover between nodes for vMotion? No idea what I should be doing here...]
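Just to check the allocation actually adds up to the ports available on each node, a quick tally (the groupings are exactly as listed above):

```python
# Quick tally of the planned per-node NIC allocation against the ports available:
# 2 x onboard 1GbE + 8 x 1GbE (two quad-port cards) + 2 x 10GbE (dual-port card).

allocation_1gbe = {
    "SBS 2011 Standard (teamed)":       2,
    "WS 2008 R2 - SQL + vCenter":       1,
    "WS 2008 R2 - RDS":                 1,
    "WS 2008 R2 - Lync":                1,
    "WHS 2011 (teamed)":                2,
    "Testing VMs":                      1,
    "ESXi management + vMotion":        2,  # the 2 onboard ports
}
allocation_10gbe = {
    "Solaris 11 storage VM (switch + crossover)": 2,
}

assert sum(allocation_1gbe.values()) == 10   # 2 onboard + 8 from the quad-port cards
assert sum(allocation_10gbe.values()) == 2   # dual-port 10GbE card

print("1GbE ports used: ", sum(allocation_1gbe.values()), "of 10")
print("10GbE ports used:", sum(allocation_10gbe.values()), "of 2")
```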


Notes/Questions
- We have MAPS + TechNet etc. for all the Microsoft production & testing licenses, and for VMware we have NFR, so those costs aren't an issue. All other software, though, would need to be acquired.
- VMware Networking & NIC allocation? (As mentioned above.)
- As these are the E26 versions of the chassis, all drives have dual SAS2 paths (MPIO), one to each head node.
- Is Solaris 11 the right choice for a ZFS storage OS to: share media storage directly to clients, maintain the data volume for SBS and hold the VM store for VMware? How do I go about implementing dual-instance redundancy?
- What do I need to do/have to allow for auto-vMotion of VMs from one node to another if one fails? How do I implement vMotion for auto-load-balancing?
- Are my specifications sufficient/overkill/appropriate?
- I'm sure more questions will come... ;)

If any other information is required, please let me know! I appreciate all help and advice, especially as this setup is as much for learning (and I've got a lot of that to do) as it is for production and testing :D THANKS!
 
Oh, I'm also going to be running Dynamics CRM 2011 & MYOB Premier with both shared out (and MSO 2010 Pro Plus) via RDS.
With this setup, I really don't have a plan for backup. What do you think I should implement? Thanks :)
 

What kind of budget do you have for your backup plans? Optimally, for both security and backup-window (time) considerations, I would suggest a disk-to-disk-to-tape infrastructure, with the tape library cassettes sent offsite on a rotating schedule. What kind of workday do you have time-wise: just 9-5 (yielding ~14 hours for backup), or two shifts?
 
There is practically no budget left... :(

Business hours vary wildly, but it's safe to say that work gets done (and/or the servers would get used) between as early as 8am and as late as midnight. So call "backup time" 12am-8am.

Thinking about it, there are about four types of data: the VMs, shared/user data, media, and whatever else is in the zpool (e.g. software deployments/installers).

- Shared/user data will have online backup
- Media I'm not that fussed about, especially as it's the bulk of the storage space, and will back up however I can
- VMs backed up to another PC with lots of cheap HDDs with Veeam (then this needs to go off-site somehow, so tape or removable disk?)
- Everything else from the zpool (no idea what to do with it.)
 
Well, with almost no budget, that is going to put you in a bit of a bind. With 15 x 3TB drives, assuming 2 drives lost to parity, you have a maximum of ~40TB to back up in a window of at minimum 8 hours. That gives you a required rate of 5TB/hour, which means you need a constant ~1.4GB/second. You have a few choices: you could either build a second ~45TB ZFS box, or you could go with a straight tape library. Keep in mind that most tape libraries are specced at 2:1 compression, and most of your media is incompressible, so with LTO-5, for example, you would need a library that could handle a minimum of 30 tapes. Any way you go, this is going to cost a lot more than no budget (the tape drive(s)/library alone are going to be in the 15K+ range).
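To spell the arithmetic out (decimal units, real-world overheads and compression ignored; just a rough sketch):

```python
# Back-of-envelope throughput needed for a nightly full backup.
# Decimal units throughout (1 TB = 1,000,000 MB); overheads ignored.

def required_mb_per_sec(capacity_tb, window_hours):
    return capacity_tb * 1_000_000 / (window_hours * 3600)

# ~40 TB in an 8-hour window
print(round(required_mb_per_sec(40, 8)))   # ~1389 MB/s, i.e. ~1.4 GB/s
```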
 
It's actually 30TB of usable space as per the info in the OP, so that works out to roughly 1GB/s.
Based on the data split in my previous post, I reckon I'm looking at no more than 3TB to back up (shared/user data going to online backup and the media collection ignored). I should be able to deal with that on 2 x 4TB 3.5" external HDDs :)
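Same back-of-envelope approach for my revised numbers (again just a rough sketch, decimal units):

```python
# Revised figures: 30 TB of usable pool space, but only ~3 TB actually worth backing up.

def required_mb_per_sec(capacity_tb, window_hours):
    return capacity_tb * 1_000_000 / (window_hours * 3600)

print(round(required_mb_per_sec(30, 8)))  # ~1042 MB/s if the whole pool were backed up
print(round(required_mb_per_sec(3, 8)))   # ~104 MB/s for ~3 TB, manageable on external HDDs
```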
 
WRT networking... one port dedicated to storage access on each node should do the trick. I'm sure your mobo has two NICs already... So I would have a high-performance switch set up with two bonded NICs coming from the storage box and one NIC from each node. Maybe look into enabling jumbo frames to boost performance a little bit. The other NIC on each node could connect to your normal network, and maybe add an additional NIC (if needed) to connect your storage server to your normal network.

I've run Openfiler in situations like this with great results.
 

I'm not exactly sure what you're suggesting here?
There is no "storage box". There are two heads connected via SAS to the shelves.
 
With crazy HDD prices and a limited budget, I'm now thinking that I might start with a single head node (and therefore skip the 10GbE card), along with smaller drives in greater numbers (i.e. 24 x 1TB instead of 15 x 3TB). That way I completely fill the 4U chassis' front slots & backplane, can put all the SSDs in the rear slots/backplane, and eliminate the cost of the second chassis and third HBA.
When we decide to upgrade the HDDs, we can shuffle things around to match the initial configuration plan, and we can also add the second head for HA.
Thoughts on this and everything else? :) Thanks!
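For reference, a rough comparison of the original drive plan vs. this idea. The 24 x 1TB grouping in the sketch is just one possible layout (e.g. 3 x RAID-Z2 of 6 data + 2 parity); I haven't settled on the vdev structure, so treat it as illustrative only:

```python
# Rough comparison of the two drive plans (decimal TB, ZFS overhead ignored).
# The 24 x 1TB grouping is purely illustrative: 3 x RAID-Z2 (6 data + 2 parity).

def usable_tb(vdevs, data_per_vdev, drive_tb):
    return vdevs * data_per_vdev * drive_tb

original = usable_tb(vdevs=2, data_per_vdev=5, drive_tb=3)  # 15 x 3TB plan -> 30 TB
revised  = usable_tb(vdevs=3, data_per_vdev=6, drive_tb=1)  # 24 x 1TB idea -> 18 TB

print(f"Original 15 x 3TB (2 x Z2 5+2 + spare): {original} TB usable")
print(f"Revised  24 x 1TB (e.g. 3 x Z2 6+2):    {revised} TB usable, more spindles")
```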
 

Thoughts:

If you are going to defer some of this, your initial plan was poor. Given the rate at which HDD prices were traditionally falling, your plan should have been to buy hard drives as needed rather than all at once.

Given the number of SSDs in your plan, it appeared that cost was not a design consideration. I suppose cost should not be a design consideration now.

I think the time it will take to change out the hard drives in the future may make buying smaller drives now the more expensive decision.
 
To save tens if not hundreds of thousands, instead of going tape, just go with removable disks. Have backup jobs back up to a disk that sits in a drive dock. Get some drive cases off eBay to store them in (like a tape case, but for an HDD) and bring those offsite.

That's what I've been doing at home, and it works great. You may want to get multiple docks so you can write to multiple disks at once. Hard drives are cheaper than tapes, and you don't need a second mortgage to buy a tape drive; those things are insanely expensive. I wanted to go tape when I originally set up my backup strategy, but it just made no sense financially. Now hard drives are expensive, but it still ends up cheaper.
 

If you had a look at my post above, you'd see I've actually suggested increasing the HDD count but decreasing the drive size, keeping performance up with a somewhat higher spindle count while decreasing overall storage to keep the cost down.

Cost is always a design consideration unless you happen to have the luxury of a totally unlimited budget. I'm working within the framework of a budget which a 50% HDD price rise has over-stretched. All I'm doing is putting my SSDs into the (effectively empty) rear backplane, which saves the cost of an HBA and a chassis until we add more drives.

I don't think a price change, a slight budget decrease, and changing plans to work within this are a reason to criticise my initial specs.
 
^ This, re the removable-disk suggestion above. I like the idea of the dock stations and drive cases. Better than just using 'external drives.'
 