Home ESXi and storage

Nicklebon

I've got an old Dell 2900 at home that is currently handling my VM and storage needs. To say it is getting long in the tooth is an understatement. I'm likely going to replace it in the near future with a Supermicro Xeon-D platform; at the moment I have not decided which one. When I put the current box into service I just used the Dell's PERC RAID card, built a 6TB array, and broke it into a few VMFS datastores. While that was easy, it does not seem the most efficient use of space. I run several small Linux VMs for specific needs, but most of the storage is used by a Windows server that acts as central storage and backup for the whole house. The Windows server then encrypts and backs up important data to S3.

How are folks handling storage on self-contained ESXi boxes these days?
 
It really is a mixed bag. Some still use local storage; most use FreeNAS, ZFS, pre-built units like a Synology, etc. It really depends on your budget, the time you want to spend setting it up, and what the storage is for. Are you still wanting to use local storage in your new box?
 
Yes, self-contained. Since this box will be on 24x7, there is zero need or desire to add another box that is on 24x7. I do have an older NetApp FAS2040 that I use exclusively for local backups, and even with 4Gbps FC-AL there is a noticeable hit versus local storage, though at its age it still outperforms most reasonably priced modern consumer NAS units in real-world use cases.

I do expect there may be an advantage to using something along the lines of FreeNAS in a VM, with the local storage passed through and presented back to ESXi as NFS or iSCSI. I was hoping to see someone chime in with experience there.
 
A NAS/SAN with shared NFS or iSCSI storage as an ESXi datastore gives better performance and usability for VM access/clone/backup/move. If you use ZFS, this is paired with higher data security thanks to checksums and the copy-on-write filesystem, an intelligent read cache in RAM or on SSD, secure but fast write behaviour with a ZIL/Slog, and versioning with snaps plus online replication based on snaps.

If your ESXi server has enough power and RAM for the extra load, you can virtualize the storage VM with dedicated access to HBAs and disks via pass-through. As the connectivity between ESXi and storage is in software, you get several Gbit/s of storage performance.

I have been doing this for many years with Solaris-based appliances, as I prefer ZFS on Solaris, where it comes from.
I also offer a ready-to-use storage appliance with OmniOS, a free Solaris fork, and my napp-it as a free download. See my HowTo: http://www.napp-it.org/doc/downloads/napp-in-one.pdf
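
For anyone curious what that looks like from inside the storage VM, here is a minimal sketch of building a mirrored pool from two passed-through disks and sharing it to ESXi over NFS. The pool name, device names and subnet are placeholders, not from any particular setup; the work is done by ordinary zpool/zfs commands, just wrapped in Python so each step carries a comment:

Code:
#!/usr/bin/env python3
"""Sketch only: create a mirrored ZFS pool in the storage VM and share it
over NFS so ESXi can mount it as a datastore. Pool name, device names and
the subnet are made up -- adjust for your own hardware and network."""
import subprocess

def run(cmd):
    # Print and execute one command, aborting on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mirrored pool from two passed-through disks (Solaris-style device names).
run(["zpool", "create", "tank", "mirror", "c2t0d0", "c2t1d0"])

# Dedicated filesystem for VM storage.
run(["zfs", "create", "tank/vmstore"])

# Share it via NFS; ESXi needs root access to the export.
run(["zfs", "set", "sharenfs=rw=@192.168.9.0/24,root=@192.168.9.0/24", "tank/vmstore"])

# With sync left at the default, every ESXi write is a sync write --
# this is where a dedicated Slog device pays off.
run(["zfs", "get", "sync", "tank/vmstore"])

ESXi then mounts the export (by default the filesystem's mountpoint, /tank/vmstore) as an NFS datastore.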
 
Awesome article!! Thanks for sharing.
 
I liked the idea of this, installed it on a test bed, and have run into some issues. It is running on a Dell R710. I am booting off a 32GB USB key and have napp-it running from a laptop drive on internal SATA B. Currently I am using a PERC 6/i and passing the six disks through to napp-it; I know this is not ideal, but for testing it seems okay. I am sharing it back to VMware via NFS with no issues, all under VMware 6.0 U2. All of this appears to work well. What is not working well is the SMB share. I've got a Win2k8 VM on the old 2900 with several TB of data. When I try to copy some of this data to the mapped SMB share on the napp-it box, it drops the share. Sometimes I can get it back for a bit, but eventually the copy just hangs and a reboot is required. Any thoughts at all?
 
What NIC type are you using on the NAS VM?

The napp-it VM has two vNICs, an E1000 and a vmxnet3. In this case the traffic is flowing over the E1000. Please note I have no issues copying large amounts of data to the napp-it SMB share over the same vNIC from other guests on the same host as the Win2k8 VM, all flowing over the same E1000.

I'll also add that during the course of troubleshooting I followed the instructions from here: https://support.microsoft.com/en-us/kb/2696547 to disable SMB2. I also shared the data on the Win2k8 box to a Win2k3 guest on the same host and copied it to the napp-it box from the 2k3 guest. In all cases the share is mapped as \\192.168.9.213\smb-data
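
For reference, the server-side variant of that KB boils down to a single registry value on the Win2k8 box (the KB also describes a client-side method using sc config). A rough sketch, assuming it is run elevated and the Server service is restarted afterwards:

Code:
"""Sketch of the server-side registry change from KB2696547:
disable SMB2/SMB2.1 for inbound connections on the Windows box."""
import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
    # SMB2 = 0 turns SMB2 off on the SMB server side.
    winreg.SetValueEx(k, "SMB2", 0, winreg.REG_DWORD, 0)

print("SMB2 disabled in the registry; restart the Server service to apply.")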
 
I use Solaris with ZFS/COMSTAR for most of my home needs, but I can suggest some great products from EMC which are free for non-production use.

EMC Unity VSA
EMC Unity VSA - EMC Store
The community edition gives 4TB of space, NFS/SMB (including SMB3), and iSCSI.

EMC DataDomain Virtual Edition
Data Domain Virtual Edition Download | EMC
The community edition gives 0.5TB of capacity (actual usable is 357GB) and supports NFS/CIFS and DDBoost.
DD is designed as a dedup backup target.
 

Some hints:
- Use my preconfigured image with OmniOS 151018 and SMB2, VMware Tools, and some basic tunings activated (larger TCP, NFS and vmxnet3 buffers).
- Use vmxnet3 on OmniOS, as it is much faster and has lower CPU load than the e1000; use vmxnet3 on the virtualized guests as well.
- Use the e1000 only for management.
- The SMB version should not be that relevant; I would keep the default SMB 2.1 (only on OSX is there a huge difference, and there SMB 2.1 is mandatory).

- Disable sync on the NFS filesystem for ESXi VMs, then enable sync and compare; add a ZIL/Slog like an Intel S3700 to reduce the performance gap (see the sketch after this list).
- Always disable sync for SMB-shared filesystems.
- Disable interrupt throttling in the Windows NIC driver settings.
- Do not use speed enhancers on Windows like TeraCopy. Sometimes Windows NIC drivers are critical, so try the newest drivers and prefer Intel NICs on the server (ESXi) and clients.
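
The sync comparison above can be scripted; this is only a sketch with a placeholder filesystem name, meant to be run in the storage VM between two benchmark passes from ESXi:

Code:
"""Sketch of the sync experiment: disable sync on the NFS-shared
filesystem, benchmark from ESXi, then re-enable and compare."""
import subprocess

FS = "tank/vmstore"  # placeholder -- use your NFS-shared filesystem

def zfs_set(prop, value):
    # Apply one ZFS property and echo the resulting setting.
    subprocess.run(["zfs", "set", f"{prop}={value}", FS], check=True)
    subprocess.run(["zfs", "get", prop, FS], check=True)

# Pass 1: fast but unsafe on power loss -- run the ESXi benchmark now.
zfs_set("sync", "disabled")

# Pass 2: safe default -- rerun the benchmark; the gap between the two
# runs is what a dedicated Slog (e.g. an Intel S3700) helps to close.
zfs_set("sync", "standard")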
 
I am using the suggested version. At your suggestion I have added an additional vmxnet3 for off-box connections and moved the e1000 to an isolated vSwitch. This appears to have caused havoc: I no longer receive a login prompt from the web UI, and while I can map the SMB share, I cannot actually open it.
 