Virtual or bare metal SAN?

lordsegan

Gawd
Joined
Jun 16, 2004
Messages
624
I have a small home lab (two ESXi servers) and I want to create a SAN to store the VMs.

I already have a third computer that can hold up to eight drives, which I plan to use as the SAN. I would like to know the pros and cons of virtualizing the SAN system itself. I am thinking of using OmniOS (ZFS) with COMSTAR for the SAN, and I am also interested in feedback on that OS choice.
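
For reference, here is roughly what I expect the COMSTAR side to look like on OmniOS (a minimal sketch; the pool and volume names are just placeholders, and the LU GUID comes from the create-lu output):

Code:
# enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# create a zvol to back the LUN (pool/volume names are placeholders)
zfs create -V 200G tank/esxi-lun0

# register the zvol as a SCSI logical unit; this prints the LU GUID
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0

# expose the LU to all initiators (use host/target groups later to restrict)
stmfadm add-view <GUID-from-create-lu>

# create an iSCSI target for the ESXi hosts to log in to
itadm create-target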

Thanks!
 
I have no advice on the specific software you mention, but I will touch on virtualizing the SAN itself.

Virtualizing the SAN adds layers of complexity to your design, and it will not directly improve performance.

The real question is how virtualizing the storage would benefit you in the first place. There should be a very good reason that outweighs the additional complexity and the performance degradation you will likely experience.

With the setup you describe, you cannot make the storage highly available. The only other justification is if the server you would use for the SAN is so powerful, and has so much memory, that a large amount of resources would be wasted if it only functioned as a SAN (and you do in fact need those resources for other VMs). Only you can answer that last one.

This all assumes you have confirmed that your SAN hardware can be virtualized in the first place by checking it against VMware's HCL. My advice would be not to do it unless you really could use the "extra" resources the SAN VM would not need if that server were made a host.
 

Hi RabbiX, you are on the right track in terms of my thinking. I don't want extra complexity in my SAN (which is also why I run my router on bare metal).

I was planning on a quad-core Xeon for the SAN box, and that server is a current system whose motherboard can take a second CPU. As a result, it is definitely overkill for the SAN.

My two VM hosts are single-CPU machines, but their CPUs are faster (3.3 GHz) than the 2.4 GHz Xeon in the SAN box.

I will likely just run the SAN on bare metal, knowing that a quad core at 2.4 GHz is overkill, but that's OK.
 
I use about 10 ESXi servers (mostly licensed, but also some free ESXi) at the university where I work.
We have a fast 10 GbE network, but every ESXi server has its own virtualized NFS storage (OmniOS ZFS, napp-in-one).
To decide if you want to go this route, you must know the pros and cons.

Disadvantages of a virtualized SAN

- You need capable, proven hardware that is known to work (e.g. SuperMicro server boards, Xeon, LSI HBA)
- You can virtualize the storage OS itself; there is no more overhead than with any other guest OS.
But you should never virtualize the storage: you should/must pass through the controller and its disks.
- Your server needs extra RAM and CPU (comparable to what a barebone SAN for a single server would need)
- You must take care of the startup/shutdown order (storage OS first on startup, last on shutdown)
- You should use NFS as the datastore, as it auto-reconnects; on startup, ESXi must wait until NFS is up (see the mount sketch after this list)
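
A minimal sketch of mounting the storage VM's NFS export as an ESXi datastore from the ESXi shell (the IP, share path, and datastore name are placeholders for your own values):

Code:
# mount the storage VM's NFS export as a datastore
esxcli storage nfs add -H 192.168.10.2 -s /tank/nfs -v vm_datastore

# verify the mount
esxcli storage nfs list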

Advantages of a virtualized SAN

- Mainly, there is no single point of failure as there is with one SAN serving several ESXi servers.
My napp-in-ones are either working (ESXi + storage) or down. In that case, I can either move the storage to
another napp-in-one, where I can import the pool, or start a VM from a backup server that holds replications.
For planned maintenance, we can do a storage move/VM move within ESXi, or simply copy/move the VM folder offline.
- Performance between ESXi and storage is high, as the traffic is handled in software within the ESXi virtual switch (up to several GB/s),
so you do not need expensive and redundant 10Gb/IB/FC networking (although that is very useful for backup/move/clone); see the vSwitch sketch below.
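
A rough sketch of the storage-only virtual switch idea, from the ESXi shell (the vSwitch and port group names are examples): with no physical uplink attached, ESXi-to-storage traffic stays in software at memory speed.

Code:
# create a vSwitch with no physical uplink for ESXi <-> storage VM traffic
esxcli network vswitch standard add -v vSwitch1

# add a port group for the storage VM's vNIC and the ESXi VMkernel NFS interface
esxcli network vswitch standard portgroup add -v vSwitch1 -p StorageNet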

If you want to try it, you can download my free, ready-to-use ESXi storage VM based on OmniOS from napp-it.org
(download the storage appliance, unzip it, upload it to your local datastore, and run it).
 
In my opinion, it depends on what you're trying to achieve. What is your home lab used for, and at what level of testing? If you're simulating what is common in the enterprise and focusing on all aspects (SAN included), virtualization might not give a good simulation; however, it will give you flexibility if you're testing HA, replication, etc., and you can partially virtualize for those kinds of tests. On the other hand, if you're testing things unrelated to a SAN, then virtualization will bring some benefits.

One of the benefits of virtualising the storage is a 10 Gb network to all VMs on the same host (depending on the vNIC) and processor sharing. Secondly, I would strongly suggest passing through the controller to the VM, which gives it direct access to the disks; however, your processor needs to support VT-d. I don't feel this is overly complex; you just need to be mindful of chicken-and-egg scenarios. Depending on what you do on the OS front, this also gives you the option to migrate the virtualised storage VM to a physical box at a later stage.
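
A quick sketch of checking the passthrough side from the ESXi shell (the "lsi" match is just an example for a common HBA; enabling passthrough itself is done in the vSphere client's passthrough configuration, followed by a host reboot, and the option is greyed out if VT-d is disabled or unsupported):

Code:
# list PCI devices and find the storage controller you want to pass through
esxcli hardware pci list | grep -i lsi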

Personally, I'd probably virtualise the storage box. Lab-testing enterprise setups really calls for enterprise equipment in whatever the lab is simulating... but again, that depends on what you're simulating.
 