Major Server Expansion

Hello all!

I am asking for some expertise, as I want to start consolidating my VM applications and storage onto a single machine, and I know people here have more experience than I do. I am currently sharing a RAID 1 setup off my primary desktop, using it as the share drive for all data storage on the network and for running my VMs.

First, I am not worried about price, just wife approval, which is currently being negotiated.

Primary Purpose:
Expandable NAS with SSD boot drives and a data storage array. I would like to start with about 20TB of storage space and eventually expand to 50TB or more.

Secondary Purpose:
I would also like to use it as a VM server to host applications such as a web server, pfSense, a Minecraft server, and eventually an email server. I am currently running two of these off my primary desktop.

Current hardware for possible re-use:
2x 3TB Seagate HDDs
1x 2TB Seagate HDD

The current network is FiOS with a bridged modem acting as the MoCA bridge, and the primary router is an ASUS RT-N66U. Everything is wired with Cat6. This server will likely go into a different room, so I may need to add another switch.

Since I want the NAS to be expandable, I was thinking of something like a quarter or half rack with 1U rack-mount equipment, but I'm open to other suggestions. I'm also open to any OS: I have more experience with Windows, but I have no problem learning Linux or UNIX, as I always welcome a chance to learn. I have used Linux and UNIX servers before.
 
Correct me if I've missed something, but this sounds like a typical ESXi AIO (all-in-one) build to me?
There should be some similar build threads around, but off the top of my head:
Xeon E3-1230 v3, a Supermicro board, 32GB ECC DDR3, your choice of high-capacity chassis, and an LSI HBA passed through to a NAS VM running your choice of ZFS-capable OS. Maybe a few mirrored SSDs for the VMs, and sets of 10 drives in RAIDZ2 with a few hot spares for the "storage" (assuming that has no particular performance requirements, just capacity). Some rough capacity numbers are sketched below.
Am I wrong?
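
To put rough numbers on that layout (back-of-the-envelope only, and the 4TB drive size is just an example, not a recommendation): a 10-disk RAIDZ2 vdev leaves 8 data disks of usable space, so one vdev of 4TB drives lands around 29 TiB, and adding a second identical vdev later would take you past the 50TB goal. Quick sketch:

Code:
# Back-of-the-envelope usable capacity for RAIDZ2 vdevs.
# Drive size and vdev width are examples, not a recommendation.

TB = 1000**4    # drives are sold in decimal terabytes
TiB = 1024**4   # what the OS reports

def raidz2_usable_tib(drives_per_vdev, drive_size_tb, vdevs=1):
    # RAIDZ2 loses two drives per vdev to parity; metadata, padding and
    # the "keep pools under ~80% full" guideline are ignored here.
    data_drives = drives_per_vdev - 2
    return data_drives * drive_size_tb * TB * vdevs / TiB

print(round(raidz2_usable_tib(10, 4), 1))           # one 10x4TB vdev: ~29.1 TiB
print(round(raidz2_usable_tib(10, 4, vdevs=2), 1))  # grow later with a 2nd vdev: ~58.2 TiB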
 
Wow, the forums did not like me today while I was at work. It cut out quite a bit of my post >.<.

Yes, that was part of another section listing certain items to purchase, plus the question of whether to do an ESX or an ESXi build. I have researched the difference, but I was wondering if anyone had personal experience with ESXi, as I have seen several builds with ESX.

Second, I had a list of hardware too. Here it is, since it didn't post the first time.
MB: Supermicro X10SLM-F
CPU: Intel Xeon E3-1225 v3 3.2GHz
RAM: 32GB ECC
LSI HBA: TBD (not sure, will have to research more)
OS: ESX or ESXi (TBD) with Solaris ZFS
HDD: TBD, open to suggestions
PSU: Seasonic 650W Gold/Platinum
 
Oh really, a ready-to-use ZFS appliance? Hmm, seems a bit too good to be true. Have you noticed any limitations?
 
The free version is for end users; it allows commercial use and does not restrict capacity or any OmniOS/Solaris features. Napp-it free is not crippleware and is used in production. Napp-it is not an open-source project, but all sources are open and commented.

There is a pro version that allows selling or bundling, improves GUI performance with background agents, and offers extras like ACL management, high-speed remote replication, real-time monitoring, email support, and access to bug fixes. This is where the money for development comes from.
 
Forget about ESX; everyone now uses ESXi 5.1/5.5u1.
ESX is no longer developed: the latest version was 4.1.
 
Consider two boxes instead: one as your file server / NAS, and one as your VM server.
 

With two separate boxes, you have a lot of single points of failure, like:
- your VM server
- your NAS
- your network and cabling

You also need a very fast network (i.e. 10Gb) between them if performance matters.
This can be done, and it is the usual pro config, but then with a redundant high-performance network and several ESXi and SAN servers.

If you need some availability, I would also prefer two boxes, but both configured as All-In-One / napp-in-one, where the NAS/SAN is a virtual SAN used as local but shared NFS storage. This offers the best performance and lowest latency, as internal transfers are done in software within the ESXi virtual switch, and every box is completely independent of the other and can act as a failover and backup system for the other. You only need storage pass-through, a slightly faster CPU, and some more RAM for the storage VM. Since the second system is the backup, I would place it in a different room/location.

From the outside, such a config behaves like two ESXi servers and two SAN servers with shared storage connected via a 10GbE network.
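
On the "each box is a backup and failover for the other" point: plain ZFS snapshot replication is the underlying mechanism (napp-it's replication extension automates the same idea). A minimal sketch of doing it by hand, with made-up pool/dataset/host names, driven from Python:

Code:
# Minimal ZFS snapshot-and-replicate sketch between two AIO boxes.
# Pool, dataset and host names are made up; adjust to your own setup.
import subprocess
from datetime import datetime

SRC = "tank/vmstore"         # dataset on the primary box
DST_HOST = "backup-aio"      # the second AIO box, reachable over SSH
DST = "tank/vmstore-backup"  # target dataset on the backup box

snap = "{}@repl-{}".format(SRC, datetime.now().strftime("%Y%m%d-%H%M%S"))

# 1) Take a consistent snapshot on the source pool.
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2) Stream it to the second box: zfs send | ssh ... zfs receive
#    (switch to incremental sends with 'zfs send -i' once the first full copy exists).
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", DST_HOST, "zfs", "receive", "-F", DST],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()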
 
I think I will begin with a single unit and eventually add a second, identical unit as a redundant backup. I will have to look further into napp-it, but from a quick scan it looks quite good.

Thanks for the feedback! :D
 
I'm using napp-it with Solaris under ESXi now. It is not exactly AIO per _Gea's concept, as my VM storage is on an SSD directly (and thus lacks the protection offered by ZFS), but I'm quite happy with it. Currently I'm looking into moving to a complete AIO setup.
 
I'm currently running two setups, one AIO (working perfectly) and one two-box solution. The latter is having some IOPS performance problems. My hardware is old and not the best in town, so that might be the reason for the slow performance. With a hardware upgrade and a RAID 1 SSD pool, my VMs will probably run smoother.
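
For what it's worth, the mirrored SSD pool itself is only a couple of commands once the SSDs are visible to the storage VM. Device names, pool name and the dataset below are placeholders, so treat this as a sketch, not a recipe:

Code:
# Sketch: create a mirrored (RAID1-equivalent) SSD pool for VM storage
# and export it over NFS so ESXi can mount it as a datastore.
# Device and pool names are placeholders; check your actual device IDs first.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zpool", "create", "vmpool", "mirror", "c1t0d0", "c1t1d0")  # two-way SSD mirror
run("zfs", "create", "vmpool/esxi")                             # dataset for the datastore
run("zfs", "set", "sharenfs=on", "vmpool/esxi")                 # OmniOS/Solaris NFS export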

As far as disks go, I would choose Hitachi. According to Backblaze, they don't die as often as Seagate or WD drives:
http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/

Matej
 