VMware (shared) storage for home lab ESXi cluster

vFX
Hi,

In my home lab I have a 2-node ESXi 5.5 cluster (E5620 + 24GB RAM each).
The shared storage is provided by a Synology RS814 (NFS) with 4x WD RE4 1TB (RAID 10).
Link aggregation bond on the NAS, a LAG group on the Cisco SG300, and "Route based on IP hash" load balancing on the vSwitch. Everything works fine.
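For reference, the vSwitch side of that boils down to one policy setting. A rough sketch (the vSwitch name is assumed, and note that a standard vSwitch needs a static LAG on the SG300, not LACP):

Code:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0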

Now I have the opportunity to sell the RS814, but I'm not sure what I should buy to replace it.

I first looked at a better Synology (with VAAI support), like the RS814+ or the new DS1815+, but I feel kind of stupid spending about $1000 on a Synology.

Then I started thinking about a custom server + FreeNAS, maybe based on the Avoton platform?

And what about a couple of HP MicroServer Gen8s in "some sort" of failover storage configuration? I had that idea because an Avoton motherboard costs more than a complete (entry-level) MicroServer Gen8...

I'm open to any suggestions :)
 
Those new Avoton boards look like they'd make a great storage platform. I vote build your own storage server - the off-the-shelf NAS devices have always looked overpriced and underperforming to me.
 
If you're looking to save money, just build your own box. Possibilities are endless.

I ran with Windows Server 2012 R2, tiered Storage Spaces, and Starwind iSCSI Free for a while. Worked great.

<shameless plug>I have a Synology DS1812+ for sale as well. :)</shameless plug>
 
I'd vote to build your own as well; you might want to mention what kind of RAID you want to run (keep it RAID 10 or go with something like ZFS).
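For what it's worth, ZFS gives you the RAID 10 layout too: a stripe of mirror vdevs is the ZFS equivalent. A quick sketch (disk names below are placeholders):

Code:
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # stripe of mirrors = "RAID 10" in ZFS terms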
 
1x 3U Supermicro 16-bay server, X7DB8+, 2x Intel quad-core L5420 + 16GB
1x QLogic dual-port Fibre Channel PCI-E HBA
2x IBM ServeRAID M1015 PCI-E 8-port SAS controllers
2x QLogic QLE2460 Fibre Channel HBAs
2x 1m LC-LC duplex 50/125 multimode fiber optic patch cables
4x SATA breakout cables

$631.73 total

Running OmniOS + napp-it, you would have a 16-bay Fibre Channel head that could serve FC LUNs to each server at 4Gb/sec. Bigger and badder than any pre-made unit out there. Complete overkill, but you won't need to build another storage server for quite a while.
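napp-it has a web UI for most of this, but underneath it's COMSTAR. A rough sketch of the manual equivalent (pool name, zvol size, and GUID are placeholders, and the QLogic ports must be bound to the qlt target-mode driver):

Code:
zfs create -V 500G tank/esxi-lun0                  # backing zvol on the pool
svcadm enable stmf                                  # COMSTAR framework
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0     # make it a logical unit (prints a GUID)
stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx   # export the LU (GUID from create-lu output)
fcadm hba-port                                      # confirm the FC ports are in target mode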

You could even go with a dual-port 10Gb NIC like a Brocade 1020 in the storage server and a Brocade 1010 in each ESXi box using direct-attach cables, but it would be a bit more expensive. Only $100 more or so.

I run a similar setup for 4 servers at home. Check it out here.
 
That is a killer storage build for the price. Really wouldn't need much time to get it online either.
 
Thank you all for the suggestions.

bds1904, thank you for taking the time to put together a complete config.

Update: I have gotten 3x ConnectX-2 IB/VPI (40Gbit) cards + cables, and I also have another server with hardware very similar to the ESXi nodes. The only difference is the CPU, which is a Nehalem E5530 (it's actually a dual-socket server).

I also have a lot of 4GB RAM modules, so 24GB if I use just one CPU and 48GB if I decide to go for the dual.

That said, I'm very undecided about:

1. the software / OS
2. how to use the ConnectX-2 cards: "direct" IB? IP over IB? Other options? (I don't have much experience with InfiniBand at the moment)

I don't have enough storage to go over 10Gbit, so I don't necessarily need to run the cards at 40Gbit.
 
You could always avoid shared storage and try out VSAN... I'm slowly migrating to this setup now.
Or local storage with a flash tier for cache. Maybe some Pernix lovin'.

I understand the need for shared storage for HA and fully automated DRS, but if you just need "migration" ability, vCenter supports shared-nothing migrations.

Just a thought to start conversations....

Nick
@vmnick0
 
StarWind single-node non-HA setup, or their free two-node clustered VSA appliance for VMware?

Did you try the failover feature with two Synology boxes?

If you're looking to save money, just build your own box. Possibilities are endless.

I ran with Windows Server 2012 R2, tiered Storage Spaces, and Starwind iSCSI Free for a while. Worked great.

<shameless plug>I have a Synology DS1812+ for sale as well. :)</shameless plug>
 
The only problem with this setup is that it's a single-controller appliance, so it's a single point of failure, and you also cannot effectively use memory for an aggressive write-back cache.

But if the uptime and performance are fine for you, $600 is a killer price :)

( ... )

$631.73 total

Running OmniOS + napp-it, you would have a 16-bay Fibre Channel head that could serve FC LUNs to each server at 4Gb/sec. Bigger and badder than any pre-made unit out there. Complete overkill, but you won't need to build another storage server for quite a while.

You could even go with a dual-port 10Gb NIC like a Brocade 1020 in the storage server and a Brocade 1010 in each ESXi box using direct-attach cables, but it would be a bit more expensive. Only $100 more or so.

I run a similar setup for 4 servers at home. Check it out here.
 
you also cannot effectively use memory for an aggressive write-back cache

Huh? Write-back cache in memory works just fine in my setup, as does a ZIL if you go that route. One of my pools has a dedicated ZIL device, the other doesn't.

Care to explain a little?

Just for reference, with the write-back cache enabled I can max out two 4Gb links (360MB/sec) simultaneously with ease, read and write. If I disable it, use a ZIL, and force sync writes, everything goes to an SSD ZIL just fine.
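For anyone following along, the ZFS side of that is only a couple of commands. A sketch (pool/dataset names and the disk ID are made up):

Code:
zpool add tank log c1t5d0            # dedicate an SSD as the separate ZIL (SLOG) device
zfs set sync=always tank/vmstore     # push every write through the ZIL, not just sync requests
zfs set sync=standard tank/vmstore   # back to the default: honor the client's sync semantics
zpool status tank                    # the SSD shows up under "logs"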
 
Love my Synology boxes. Sure, they cost a little more, but they work great, have a very good UI, and I don't have to screw with anything. Plus they do VAAI for NFS now with DSM 5.1.
 
I played a bit with StarWind for HA until I discovered you need Windows Datacenter or Enterprise. $$$. No thanks...
 
I think you can do either; it depends on how much time you want to dedicate to managing it. I use both a prebuilt NAS and Nexenta: the prebuilt NAS (an EMC PX4-300d with SSD cache) for infrastructure VMs and Nexenta for my cloud storage, but I would say I'm not the "average" case; I also have all Supermicro sleds. Having "enterprise" features and systems is great, but there are also downsides to that: noise, heat, electrical, etc.

If you just want something that works, is validated on VMware, and is easy to manage, Synology is where it's at. But if you want something that can deliver more performance with additional data services and you care less about ease of management, you can go Nexenta CE, OI w/ napp-it, FreeNAS, etc.
 
2. how to use the ConnectX-2 cards: "direct" IB? IP over IB? Other options? (I don't have much experience with InfiniBand at the moment)

I would love it if ESXi supported IPoIB, but it is not the case.
 
Something I'm looking at for non-HA shared storage is a simple CentOS 7 box with 2 SSDs in RAID 0, an LVM thin pool, and targetcli for serving up iSCSI targets. I haven't looked into LVM SSD caching, but that could be a possibility if I needed the space (as an alternative to ZFS). I'm not sure whether CentOS 7 has all of the support necessary for VAAI; I know that's supported by the upstream LIO target framework, at the least.
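For the curious, the targetcli part is only a handful of commands. A sketch, with the volume group, sizes, and IQNs below made up for illustration:

Code:
lvcreate -L 400G -T vg_ssd/thinpool                       # thin pool on the RAID 0 volume group
lvcreate -V 200G -T vg_ssd/thinpool -n esxi_lun0          # thin LV to export
targetcli /backstores/block create name=esxi_lun0 dev=/dev/vg_ssd/esxi_lun0
targetcli /iscsi create iqn.2014-12.lab.local:storage.target0
targetcli /iscsi/iqn.2014-12.lab.local:storage.target0/tpg1/luns create /backstores/block/esxi_lun0
targetcli /iscsi/iqn.2014-12.lab.local:storage.target0/tpg1/acls create iqn.1998-01.com.vmware:esxi-host1
targetcli saveconfig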
 
I would love it if ESXi supported IPoIB, but it is not the case.

Define supported? Mellanox provides the OFED IPoIB driver, and I have used ConnectX cards with the Mellanox OFED drivers. The only issue I really had was due to my switch not having an OpenSM subnet manager. Raphael's ESXi build worked, although it wasn't stable for VSAN, but once I used an isolated dummy box as the OpenSM manager I didn't have any issues.

Using it for vMotion was badass! Infra/storage was unstable till I got OpenSM on a separate box so I could reboot hosts with ease.

Now I have 56Gb IPoIB gear running iSCSI/NFS/VSAN and vMotion with no issues. The secret sauce is in the switch (OpenSM)!!
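If anyone wants to replicate the standalone subnet manager, it's a tiny amount of work on any Linux box with an IB HCA. A sketch, assuming RHEL/CentOS package names:

Code:
yum install -y opensm infiniband-diags   # subnet manager + fabric diagnostics
systemctl enable opensm && systemctl start opensm
sminfo                                   # should report an active SM on the fabric
ibstat                                   # HCA ports should go from Initializing to Active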
 
Sorry for the delayed response ((

A block-level write-back cache in RAM works fine for you right up to the point when your storage reboots (CPU failure, RAM failure, power failure, etc.).

Then you have an extremely hard time getting back gigabytes of the last transactions' data (and, more importantly, metadata!!)

Huh? Write-back cache in memory works just fine in my setup, as does a ZIL if you go that route. One of my pools has a dedicated ZIL device, the other doesn't.

Care to explain a little?

Just for reference, with the write-back cache enabled I can max out two 4Gb links (360MB/sec) simultaneously with ease, read and write. If I disable it, use a ZIL, and force sync writes, everything goes to an SSD ZIL just fine.
 
This is a false statement.....

StarWind works with the free Hyper-V Server and VMware ESXi just fine.

I played a bit with StarWind for HA until I discovered you need Windows Datacenter or Enterprise. $$$. No thanks...
 
The same in the test lab here.

We got 56Gb converged (IB & Ethernet) gear from Mellanox and discovered we cannot do native 40GbE, only 10GbE (( IP-over-IB, however, does 33-35Gbps, which is damn close to native 40GbE performance.

IP-over-IB does not suck anymore )) at least with Mellanox ))
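On the Linux side, most of that IPoIB throughput comes down to two host settings. A sketch (the interface name and peer address are made up):

Code:
echo connected > /sys/class/net/ib0/mode   # connected mode instead of datagram
ip link set ib0 mtu 65520                  # connected mode allows the large IPoIB MTU
iperf3 -c 192.168.50.1 -P 4                # quick multi-stream sanity check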

Define supported? Mellanox provides the OFED IPoIB driver, and I have used ConnectX cards with the Mellanox OFED drivers. The only issue I really had was due to my switch not having an OpenSM subnet manager. Raphael's ESXi build worked, although it wasn't stable for VSAN, but once I used an isolated dummy box as the OpenSM manager I didn't have any issues.

Using it for vMotion was badass! Infra/storage was unstable till I got OpenSM on a separate box so I could reboot hosts with ease.

Now I have 56Gb IPoIB gear running iSCSI/NFS/VSAN and vMotion with no issues. The secret sauce is in the switch (OpenSM)!!
 
This is a false statement.....

StarWind works with the free Hyper-V Server and VMware ESXi just fine.

Can you give more details, please? I could swear that when I looked at the HA stuff, it required running on two Windows Datacenter or Enterprise platforms. Maybe something changed since then, or maybe I was just not looking in the right place :)
 
That's easy )) see the 'system requirements' section on their site:

https://www.starwindsoftware.com/system-requirements

"StarWind Software recommends using the latest Server-class Windows Operating Systems. StarWind supports all Windows Operating Systems from Windows Server 2008 to Windows Server 2012 R2, including Server Core editions and free Microsoft Hyper-V Server."

The free Hyper-V Server is listed, so nobody stops you from using it bare metal or inside an ESXi virtual machine ))

P.S. Desktop operating systems are not on the HCL but can also be used for test & development.

I personally used to spin up their clustered two-node installation on a pair of Windows 8 VMs managed by VMware Fusion on my MacBook Pro.

The lack of a native iSCSI initiator on Macs is killing me (( The SNS stuff is damn expensive and the ported open-source one is junk ((

Can you give more details, please? I could swear that when I looked at the HA stuff, it required running on two Windows Datacenter or Enterprise platforms. Maybe something changed since then, or maybe I was just not looking in the right place :)
 
I'm somewhat confused. I downloaded the v8 installer and put it on a Windows 2008 R2 VM to work from. I then fired up the 'deploy VSAN' wizard. I have the choice of a Windows 2008 R2 ISO I already had on the vSphere hosts or the Hyper-V 2008 ISO I just downloaded. In both cases, I give the thing a 64GB VMDK on the vSphere host I am deploying it to. At some point, the wizard hangs at 'Installing OS...'. If I open a vSphere console window for the newly created VM, it is waiting for input. In the case of Windows 2008 R2 it is asking me which type of server install to do. For Hyper-V, it is sitting at a blue configuration screen and I have no idea what to do. Does this deployment tool have any documentation?
 
I don't know; I have always done a manual installation in the case of StarWind and an OVF deployment in the case of HP VSA, and I never liked their 'automated' tools.

If a VSA does not come in the form of an OVF, it's done wrong :)

I'm somewhat confused. I downloaded the v8 installer and put it on a Windows 2008 R2 VM to work from. I then fired up the 'deploy VSAN' wizard. I have the choice of a Windows 2008 R2 ISO I already had on the vSphere hosts or the Hyper-V 2008 ISO I just downloaded. In both cases, I give the thing a 64GB VMDK on the vSphere host I am deploying it to. At some point, the wizard hangs at 'Installing OS...'. If I open a vSphere console window for the newly created VM, it is waiting for input. In the case of Windows 2008 R2 it is asking me which type of server install to do. For Hyper-V, it is sitting at a blue configuration screen and I have no idea what to do. Does this deployment tool have any documentation?
 