All-In-One (ESXi with virtualized Solarish based ZFS-SAN in a box)

Hi _Gea,

What kind of speeds are you seeing on your NFS? I'm currently using a separate NAS but speeds are not as good as I was hoping for so I'm debating whether I should move to a virtualized NAS/SAN. Are you using NFS in sync or async mode?

Thanks

It's hard to answer because you have not specified:

- your workload and use case
- the expected/needed performance
- your hardware

For example:
If you have some Macs for video production, it makes no sense to virtualize a NAS. There is also no need to think about sync vs. async use of NFS - it is async per default. That is OK, because if you have a power outage during the update of a multi-GB video file, you must expect that the file is damaged and you need a working snap or backup.

If you use a database with critical data, or ESXi that uses NFS as shared storage, you cannot allow data loss on a power outage. In this case you need sync write - even if it is 100x slower - or a ZeusRAM in the $3k range for best sync performance.

If you want SAN-like features such as ultra fast RAM-based data caching, snaps, fast multiprotocol access to your data, shared storage etc. with ESXi, you need a NAS/SAN.

If your usage has a lab character, or if your traffic is mostly between SAN and ESXi over the ESXi virtual switch, then you may think about all-in-one, because you do not need expensive FC SAN hardware to get multi-GB transfer rates with low latency between SAN and ESXi. From the outside, an all-in-one looks identical to a conventional scenario with ESXi servers and a dedicated shared SAN connected via high-speed hardware.

Not to mention the other advantages like less hardware, less energy and less cabling -
with the minus of a slightly higher complexity, less performance compared to two high-end boxes, and different update strategies.
 
I have improved my all-in-one concept to support mirrored ZFS boot disks.
This gives better uptime and allows you to update ESXi independently from Omni/OI.

basic steps:
- you need two SATA disks for ESXi (optionally an additional USB stick for ESXi)
- install ESXi on the first disk or on a USB stick as usual (or use an extra third small SATA SSD)
- install the vSphere client on a PC
- use vSphere to create a datastore on both disks
- use vSphere to create a virtual disk (20 GB+) on both datastores
- install Omni/OI on the disk on the first datastore (which is first in boot order)
- install napp-it and connect via browser at http://ip:81
- go to menu Disk > Mirror bootdisk and mirror rpool to your second disk (see the sketch below)

optionally
- edit the ESXi VM settings to modify the BIOS of this VM: set up the boot order to boot from both disks, with the second one first

If you need to use another disk with a different ESXi install, this does not affect Omni/OI apart from an optional re-mirror.
If you use a USB stick for ESXi, you have a fully independent mirrored storage VM.
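The mirror step in the menu is roughly equivalent to the following - a sketch only, with hypothetical disk ids; whether installgrub or installboot is needed depends on the Omni/OI release:

Code:
# sketch, hypothetical disk ids: attach the second vdisk to the root pool
zpool attach rpool c2t0d0s0 c2t1d0s0
zpool status rpool                     # wait until the resilver has finished

# make the second disk bootable
# grub-based OI/OmniOS:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
# loader-based OmniOS releases use installboot instead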

see
http://napp-it.org/manuals/all-in-one.html
 
_Gea, do we need 2 identical SATA drives or doesn't that matter?
For example, would it be possible to use an SSD and a HDD, or would the HDD slow the SSD down?
 
It shouldn't matter if they are identical. An SSD will help somewhat over a spinning SATA disk, but sizing won't matter as you are letting ESXi own the disk and just presenting a vmdk disk to the VM from ESXi.
 
Would anyone be able to provide a picture of the ESXi host networking diagram to show how the napp-in-one connects to ESXi 5.x? Should you set up its own VMkernel port that uses a separate IP subnet or VLAN tag to separate inbound traffic from the Management Network VMkernel port? I used a vmxnet3 network config for the napp-in-one NIC. We have an ESXi 5.0 box that I am trying to V2V with vCenter Converter 5.5 to move Linux and Windows hosts to the new ESXi 5.5 napp-in-one system, but this fails. The system is running on a Dell R710 with a Dell ISO ESXi 5.5 build, and I have installed the latest tg3-3.135b.v50.1-1502404 drivers for the Broadcom card. I have also fitted an Intel 82571EB GbE card but still get the same sort of timeouts. The system is there to recover hosts in case of a major failure on our new HP ProLiant DL360p Gen8 system that connects to the disk array over a 10Gb backbone. Lastly, has anyone made a video on how to fully install napp-in-one to show further information on the points above?
 
Last edited:
It is basically quite simple

If you have a basic ESXi setup with one physical nic and one virtual switch, all VMs (napp-it and others) are connected to this v-switch, as is the management network.
If you add vlans, more physical nics or more v-switches, this gets more complicated as you must decide about the internal virtual cabling.

If you want to move VMs to your ESXi server (the napp-it VM or other VMs), this is best done by just copying the VM folders, either to a local datastore (usually the one holding the napp-it virtual SAN appliance) or to a shared NFS datastore (provided by a running napp-it).

You can copy the files either via the ESXi filebrowser (ESXi vSphere configuration > storage > right click on a datastore) or via SMB when using napp-it with NFS + SMB.
Import is just a right click on the .vmx file (use the ESXi filebrowser). No need for a V2V tool.
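If you prefer a shell over the filebrowser, the same import can be done from the ESXi console; a sketch with a hypothetical datastore and VM name:

Code:
# register a copied VM directly from the ESXi shell (hypothetical path)
vim-cmd solo/registervm /vmfs/volumes/nfs-datastore/myvm/myvm.vmx
vim-cmd vmsvc/getallvms      # verify that the VM appears with a new id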
 
I did try to copy the vmdk folder to the NFS storage on napp-in-one, then import with the vmx file, but this failed on boot reporting "Wrong disk type on controller, Disk error 7". I have had this type of problem before; it involves using VMware tools in command-line mode to convert the disk. I tried changing the version number in the vmx file but it still failed to boot. What I have done is increased the napp-in-one RAM from 4 GB to 8 GB, which allows it to pass P2V, then followed http://defaultreasoning.com/2011/04/14/dell-poweredge-r710-bios-settings-for-vmware-vsphere-4-x/ on the ESXi 5.0 for BIOS settings. Using V2V from the ESXi 5.0 to the HP solution works - from ESXi 5.x to ESXi 5.1 vCenter, passing data. There may be slow writes on the Perc 6/i pass-through with SATA drives (not using SAS). I still have another R710 to configure in the same way, with four 1TB SAS drives - that being the existing ESXi 5.0 box.

Call "PropertyCollector.RetrieveContents" for object "ha-property-collector" on ESXi "192.168.41.105" failed.

Time out error as above.
 
Copy + import of VMs to the same or a newer ESXi mostly works without problems with default disk controller settings.
With special settings you may need to select another disk controller and edit the .vmx file, or you may use a V2V tool that takes care of the boot disk settings.

You can google "Wrong disk type on controller + esxi" for some suggestions or read about V2V, e.g. http://blog.pluralsight.com/vmware-v2v-migration.
 
I have both of the VMs - a Zimbra 8.05 network upgrade and a Moodle VLE test lab - ported to napp-in-one, using a combination of direct copy from the datastore to the NFS share on napp-it, with some moved over to the HP ESXi 5.1 vCenter. Another question: if you have a 10 Gb link from your napp-in-one to your core switch and you set the napp-it box with the vmxnet3 setting, do you get the same data transfer rate from ESXi 5.5 to the napp-in-one NFS share? I ask because our HP vCenter shows a 10 Gb link speed when vmxnet3 is used on a Windows 2008 R2 host inside the ESXi 5.1 HP server.
 
On a physical network, 1 GbE or 10 GbE determines the max speed. Within ESXi you have a virtual network in software. Even though the e1000 vnic displays as a 1 Gb nic and the vmxnet3 as a 10 Gb virtual nic, this has nothing to do with the real achievable performance.

Even the e1000 can be faster than 1 Gb on a fast machine, and 10 Gb with vmxnet3 is hard to achieve in software even on a very fast machine. The only thing you can say is that the vmxnet3 vnic needs less CPU power than the older e1000 and is the faster one.

The main reason for the e1000 is that it has been very stable on most hardware (though e1000 may give trouble with ESXi 5.5), whereas the vmxnet3 should be tested for some time under load prior to use.
 
My Dell R710 has 20 GB RAM for ESXi 5.5. I installed ESXi 5.5 on the Dell R710 SATA port B, then used a modded USB-to-SATA adapter to power the 60 GB SSD. The napp-in-one was installed on this SSD in a datastore created from the remainder of the disk space. If the SSD was 120 GB and the napp-in-one used most of the disk space, would this improve speed on the NFS share? How do the cache, ZIL and pool work on partitions of the napp-it appliance? Does adding more memory to the napp-in-one (set at 8 GB) help with transfer speed to/from the NFS share? Thanks for the replies to my posts, they give me ideas for testing. I'm no good with words - YouTube videos explain more to me. The napp-in-one is a great way to build your ESXi solution. I'm building a hardware list for home use.
 
My Dell R710 has 20 GB RAM for ESXi 5.5. I installed ESXi 5.5 on the Dell R710 SATA port B, then used a modded USB-to-SATA adapter to power the 60 GB SSD. The napp-in-one was installed on this SSD in a datastore created from the remainder of the disk space. If the SSD was 120 GB and the napp-in-one used most of the disk space, would this improve speed on the NFS share?

Performance and size of rpool do not affect the performance of the data pool/shares.

How do the cache, ZIL and pool work on partitions of the napp-it appliance?

You can partition a boot SSD and use it as boot + ZIL disk. You should not use the complete space (overprovision for performance). The price is a more complicated setup, which is why it is usually not suggested. For a home server you may instead do backups more often and disable sync.

Does adding more memory to the napp-in-one (set at 8 GB) help with transfer speed to/from the NFS share?

More RAM = more read cache. This improves reads and also writes, due to less read traffic on the disks. If you use the e1000, you may try the vmxnet3 vnic as well.
 
This info may be of use to someone. I understand the old 3ware card is not the right way to go for napp-it, but it makes use of old/mixed hardware.

Set your RAID card to JBOD mode:
Alt-3
Access the settings of the array controller and set it to JBOD.
This will clear out any previous RAID settings - BACK UP YOUR DATA before you carry out this operation.
Then select each drive with Return and create an array in Single mode for each disk.
Install the driver for the 3ware 9650SE:
http://mycusthelp.info/LSI/_cs/Answ...973237NYTELWTINANWPFUXUGKQGJBCZLFJHF&inc=7478

ftp://tsupport:[email protected]/private/3Ware/downloads/9.5.3-Codeset-Complete.iso
Download 9.5.3-Codeset-complete.iso

Here is the add_drv input you need to use.

If you get an issue with the driver, rem_drv tw removes it.

You also need to make sure the driver is in the following location:

amd64: /usr/kernel/drv/amd64

Mount the ISO image with Magic ISO Virtual DVD:
F:\packages\drivers\opensolaris\amd64

Use WinSCP to copy the tw file to /tmp

cp /tmp/tw /usr/kernel/drv/amd64

add_drv -c scsi -i '"pci13c1,1004"' tw
:)
 
Call "HostStorageSystem.RetrieveDiskPartitionInfo" for object "storageSystem"

Any help on this? When I created my napp-in-one I used the Perc 6/i passthrough on the Dell R710, but when I use the vSphere client I get timeout errors as above. I have shut down the napp-it-13b VM and copied it to another datastore, and the machine is working fine. Does ESXi 5.5 have problems reading the NFS datastore provided by napp-it-13b? Another issue with the Perc 6/i passthrough was a configuration problem related to the memory allocated to the napp-it-13b VM; I fixed this by allocating 8 GB to the VM, reserved/locked from other VMs - could this be the problem? Another thing I may have done, thinking back, was increasing the storage array from 3x 1TB to 4x 1TB in total in raidz1 - should I delete this and re-create it? Thanks
 
OmniOS NFS + ESXi 5.5 is fine.
I would first check for a permission problem.
Set the VM folder recursively to everyone@=modify (prefer ACL, optionally set 777).

With pass-through, you must assign fixed RAM to a VM (it cannot be assigned dynamically).

With 4 disks I would prefer a Raid-10 over a Z1, as it offers better read and I/O performance and better reliability: two disks in different mirrors are allowed to fail, whereas a second disk failure on a Z1 always means a total data loss.
 
Thanks for the feedback on this. I'm not sure how to set 777 via ACL in napp-it; I will take a look at this. I understand setting up two sets of mirrors, but do I then pair them as Raid-10 by adding them together in Raid-0? So with four 1 TB drives, two mirrors in Raid-0 give a total of 2 TB storage, and one disk may fail in each mirror. If two disks fail in one mirror, does that then lose the data on the Raid-0 (Raid-10)?
 
Thanks for the feedback on this. I'm not sure how to set 777 via ACL in napp-it; I will take a look at this. I understand setting up two sets of mirrors, but do I then pair them as Raid-10 by adding them together in Raid-0? So with four 1 TB drives, two mirrors in Raid-0 give a total of 2 TB storage, and one disk may fail in each mirror. If two disks fail in one mirror, does that then lose the data on the Raid-0 (Raid-10)?

If you want to build a Raid-10 with napp-it:
- create a new pool from a mirrored vdev of 2 disks (2-way mirror): menu Pools > Create pool
- extend the pool by adding more mirror vdevs: menu Pools > Extend pool

ZFS always stripes data over all vdevs, so if any vdev fails the whole pool is lost.
In the case of a Raid-10 built from 2-way mirrors, one disk in each vdev is allowed to fail (the equivalent zpool commands are sketched below).
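For reference, the underlying zpool commands look roughly like this - a sketch with hypothetical disk names; napp-it does the same via the menus:

Code:
zpool create tank mirror c3t0d0 c3t1d0     # first 2-way mirror vdev
zpool add    tank mirror c3t2d0 c3t3d0     # second mirror vdev, data is striped over both
zpool status tank                          # shows the Raid-10 layout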

If you want to set permissions or ACL to an "everyone is allowed" rule with napp-it:
- open menu ZFS folders and click on the filesystem row under "Folder-ACL"
- click on "reset ACL's" below the ACL listing
- select modify + recursively on files and folders

 
napp-it autosnap: include ESXi hot snaps in ZFS snaps

New function in napp-it dev (0.9f4=unstable 0.9, free download if you want to try)
autosnap: include ESXi hot snaps in ZFS snaps on NFS datastores

The idea (the sequence is sketched below):
- create a hot ESXi snap with memory state (remotely via SSH, datastore on NFS/ZFS)
- create a ZFS snap
- delete the ESXi snap

In case of problems:
- restore the ZFS snap
- restore the hot ESXi snap with vSphere
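Roughly, the sequence behind such a job looks like the following sketch (hypothetical VM id, pool and host names; the vim-cmd arguments after the id are name, description, includeMemory, quiesce):

Code:
# create a hot ESXi snap with memory state via SSH
ssh root@esxi "vim-cmd vmsvc/snapshot.create 9 presnap 'before ZFS snap' 1 0"
# create the ZFS snap that now contains the ESXi snap files
zfs snapshot tank/nfs@autosnap-$(date +%Y%m%d-%H%M)
# remove the ESXi snap again
ssh root@esxi "vim-cmd vmsvc/snapshot.removeall 9"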

see menu Services >> SSH >> SSH keys and menu Jobs >> ESXi hot-snaps [Help] and
http://napp-it.org/downloads/changelog_en.html


Are there any problems to be expected? Like the comment I already got at STH:
- with vCenter you may need to re-register the VM after a restore, as
vCenter was aware that you had deleted the snap.
 
I have uploaded a new preview ova template for ESXi 5.5-6.5
to deploy a virtualized ZFS storage server on ESXi.

Disk size 40 GB / thin provisioned 4.5 GB
e1000 (management) + vmxnet3 (data) vnic
OmniOS 151024 CE stable with NFS, SMB and FC/iSCSI
napp-it 17.06 free/Nov. edition
Open-VM-tools, midnight commander, smartmontools 6.6 etc
TLS mail enabled
Basic tuning

HowTo: http://napp-it.org/doc/downloads/napp-in-one.pdf
Download: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : Downloads
 
The new Intel Optane 900P is a game-changing technology for Slog devices in a barebone ZFS server
and in an AiO setup.

see some performance values

Code:
AiO setup with Slog or Optane as ESXi vdisk

Disk Pool      8k sync/unsync /s   random sync/unsync /s   seq sync/unsync /s   dd sync/unsync /s
no Slog        520K / 1.9M         1.6M / 65.8M            41.8M / 1024M        283M / 939M
Optane Slog    1.6M / 1.9M         39.4M / 68.4M           512M / 1023M         849M / 961M

SSD Pool       8k sync/unsync /s   random sync/unsync /s   seq sync/unsync /s   dd sync/unsync /s
no Slog        1.5M / 1.9M         16M / 50.2M             341M / 1023M         423M / 806M
Optane Slog    1.6M / 1.9M         38.2M / 50.2M           512M / 1023M         731M / 806M

Optane Pool    8k sync/unsync /s   random sync/unsync /s   seq sync/unsync /s   dd sync/unsync /s
one 900P       1.6M / 1.9M         32M / 75M               511M / 1023M         711M / 1.1G


My suggested AiO setup now:

- use a USB stick to boot ESXi
- create a local datastore on an Intel Optane 900P and place the napp-it storage VM onto it
- use an LSI HBA or SATA in pass-through mode for your data disks
- add a 20 GB vdisk on the Optane datastore to the napp-it storage VM and use it as Slog for your data pool (see the sketch below)
- add a vdisk for L2ARC (around 5x and no more than 10x the size of RAM)
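Inside the storage VM, the two Optane-backed vdisks are then attached to the data pool roughly like this - a sketch with hypothetical pool and device names:

Code:
zpool add tank log   c3t1d0    # the 20 GB vdisk on the Optane datastore as Slog
zpool add tank cache c3t2d0    # the larger vdisk as L2ARC (about 5-10x RAM)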

http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
 
ESXi 6.7u3 is available for download (search via Google; the VMware site drives me crazy when searching, even as a paying user): https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-67u3-release-notes.html

Main advantage for All-in-One:
ESXi 6.7u2 came with a bug that hindered the deployment of my ova template. This is fixed now.


# download file from VMware: "update-from-esxi6.7-6.7_update03.zip"

# create datastore1/updates (ESXi filebrowser)
# upload zip

# stop all VMs
# enable ESXi shell and ssh

# switch to maintenance mode, connect via putty
# via putty: esxcli software vib update -d /vmfs/volumes/datastore1/updates/update-from-esxi6.7-6.7_update03.zip

# reboot
# end maintenance mode

see
VMware ESXi updaten – Thomas-Krenn-Wiki
 
Up to now you could build a large ZFS data pool from cheap disks, where ZFS improves performance with its superior RAM-based read and write caches. Or you could build small high-performance data pools from expensive NVMe to guarantee performance even on first access, which a cache cannot offer.

Now ZFS offers allocation classes with special vdevs. This allows you to build ZFS pools from slow and cheap but huge disk-based vdevs and extend them with small high-performance vdevs for metadata, small I/O, or single filesystems that need a higher and guaranteed performance, e.g. a filesystem for VMs.
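As a sketch of how such a pool is set up (hypothetical pool, disk and filesystem names): the special vdev is added as a mirror, and a filesystem can be steered onto it via the special_small_blocks property:

Code:
zpool add tank special mirror c4t0d0 c4t1d0      # small, fast mirror for metadata/small io
zfs set special_small_blocks=64K tank/vms        # blocks <=64K of tank/vms land on the special vdev
# setting special_small_blocks to the recordsize places the whole filesystem on the special vdev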

See benchmarks and use cases
http://napp-it.org/doc/downloads/special-vdev.pdf
 
All-in-One system (ESXi incl. the free version + virtualized OmniOS ZFS SAN appliance): autoboot of the storage VM + other VMs on NFS

When I brought up the AiO idea more than 10 years ago, autoboot of VMs on a delayed NFS datastore was trouble-free: just set a delay for VMs on NFS to allow OmniOS with an NFS share to boot up and provide NFS. With a current ESXi this simple setup no longer works, as it seems that ESXi checks the availability of a VM before the bootup delay.

Workaround:
- create a dummy, empty VM on a local datastore
- autostart this dummy VM after the OmniOS storage VM, with a delay long enough to boot up OmniOS and auto-reconnect NFS on ESXi, e.g. 200s
- autostart the other VMs from NFS on ZFS.
 

Manage ESXi via SOAP, e.g. create/delete ESXi snaps



I came up with the idea of AiO (an ESXi server with virtualized ZFS/NFS storage, VMs on NFS, and pass-through storage hardware) around 2010. This was the first stable ZFS storage solution of this kind, based on (Open)Solaris or a lightweight, minimalistic OmniOS. Others copied the idea based on FreeBSD or Linux.

From the beginning, ZFS snaps offered a huge advantage over ESXi snaps as they can be created/destroyed without delay and without initial space consumption. Even thousands of snaps are possible, while ESXi snaps are limited to a few short-term ones. Combined with ZFS replication, a high-speed backup or copy/move of VMs is ultra easy. That said, there is a problem with ZFS snaps and VMs: the state of a VM in a ZFS snap is like after a sudden power loss. There is no guarantee that a VM in a ZFS snap is not corrupted.

In napp-it I included an ESXi hotsnap function to create a safe ESXi snap prior to the ZFS snap, followed by an ESXi snap destroy. This includes an ESXi snap with hot memory state in every ZFS snap. After a VM restore from a ZFS snap you can go back to the safe ESXi snap. This works perfectly, but the handling is a little complicated as you need SSH access to reach esxcli. Maybe you have asked yourself if there is an easier way - and there is one, via the ESXi SOAP API, similar to what the ESXi web-UI uses.

Thomas just published a small interactive Perl script for easy ESXi web management via SOAP. It even works with ESXi free, see ESX / ESXi - Hilfethread


1. install (missing) Perl modules

perl -MCPAN -e shell
notest install Switch
notest install Net::SSLeay
notest install LWP
notest install LWP::Protocol::https
notest install Data::Dumper
notest install YAML
exit;

complete list of needed modules:
Switch
LWP::UserAgent
HTTP::Request
HTTP::Cookies
Data::Dumper
Term::ANSIColor
YAML
LIBSSL
Net::SSLeay
IO::Socket::SSL
IO::Socket::SSL::Utils
LWP::Protocol::https


Howto:
Update napp-it to the newest 23.dev, where the script is included.

example: list all datastores
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl list_all_datastores --host 192.168.2.48 --user root --password 1234

Attached Datastores "63757dea-d2c65df0-3249-0025905dea0a"
Attached Datastores "192.168.2.203:/nvme/nfs"


example: list VMs:
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl list_attached_vms --host 192.168.2.48 --user root --password 1234 --mountpoint /nvme/nfs --mounthost 192.168.2.203

Attached VM ID "10" = "solaris11.4cbe"
Attached VM ID "11" = "w2019.125"
Attached VM ID "12" = "oi10.2022"
Attached VM ID "14" = "w11"
Attached VM ID "15" = "ventura"
Attached VM ID "16" = "danube"
Attached VM ID "9" = "omnios.dev.117"

example: create snap
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl create_snapshot --host 192.168.2.48 --user root --password 1234 --mountpoint /nvme/nfs --mounthost 192.168.2.203 --vm_id 9 --snapname latest --mem --no-quiesce --snapdesc latest

example: list (latest) snap
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl list_snapshot --host 192.168.2.48 --user root --password 1234 --vm_id 9

I will make the script work together with a normal autosnap job in a future napp-it release. Until then you can create a jobid.pre (e.g. 123456.pre) in /var/web-gui/_log/jobs/ with a script that creates the ESXi snap, and a jobid.post to destroy the ESXi snap after it was included in the ZFS snap (see the sketch below).
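As a sketch, such a pre/post pair could simply call the SOAP script with the parameters from the examples above (hypothetical job id 123456 and VM id 9):

Code:
# /var/web-gui/_log/jobs/123456.pre - create an ESXi hot snap before the ZFS snap
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl create_snapshot \
  --host 192.168.2.48 --user root --password 1234 --mountpoint /nvme/nfs --mounthost 192.168.2.203 \
  --vm_id 9 --snapname presnap --mem --no-quiesce --snapdesc "before ZFS autosnap"

# /var/web-gui/_log/jobs/123456.post - remove the ESXi snap after the ZFS snap is done
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/soap/VMWare_SOAP.pl remove_snapshot \
  --host 192.168.2.48 --user root --password 1234 --vm_id 9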

update
I have added a SOAP menu in the latest napp-it 23.dev

 
Update: ESXi Soap management in current napp-it 23.dev
https://forums.servethehome.com/ind...laris-news-tips-and-tricks.38240/#post-367124

implemented_actions = ('summary','ssh_on','ssh_off','poweron','shutdown', 'reboot','kill','mount','unmount','create_snapshot', 'remove_snapshot','last_snapshot','revert_snapshot','list_attached_vms','list_all_datastores');

It is now possible to manage and automate ESXi (VMs and ESXi snaps) via scripts from napp-it.
 
New feature in napp-it 23.dev (Apr 05):
ZFS autosnaps and ZFS replications of ESXi/NFS filesystems with embedded ESXi hot memory snaps.

If you want to back up running VMs on ESXi, you mostly use commercial tools like VEEAM that support quiesce (freeze the guest filesystem during backup) or can include the ESXi hot memory state.

If you use ZFS to store VMs, you can use ZFS snaps for versioning or to save and restore them, either via a simple SMB/NFS copy, Windows previous versions, or ZFS replication. This works well, but only for VMs that are powered off during backup, as a ZFS snap is like a sudden power-off. There is no guarantee that a running VM does not become corrupted in a ZFS snap. While ESXi can provide safe snaps with quiesce or hot memory state, you cannot use them alone for a restore as they rely on the VM itself. A corrupt VM cannot be restored from ESXi snaps, while you can restore a VM from ZFS snaps. As ESXi snaps are delta files they grow over time, so you should under no circumstances keep more than a few ESXi snaps or keep them longer than a few days.

So why not combine both: unlimited ZFS snaps with the recovery options of ESXi snaps. This can be achieved by creating an ESXi snap prior to the ZFS snap, which then includes the ESXi snap. After the ZFS snap is done, the ESXi snap can be destroyed.

Napp-it 23.dev automates this


Howto setup:

- update napp-it to current 23.dev
- add the needed Perl modules to OmniOS,
see https://forums.servethehome.com/ind...laris-news-tips-and-tricks.38240/#post-367124
- Enter ESXi settings (ip, root, pw and NFS datastores) in napp-it menu System > ESXi > NFS datastore

- list autosnap or replication jobs in napp-it menu Jobs;
click on the jobid to enter the settings and add the IP of the ESXi server
- run the autosnap or replication job.
Each ZFS snap will then include an ESXi snap. As a VM is stopped for a few seconds, run this at low-usage times.
- click on replicate or snap in the line of the job to check log entries

Restore a VM to a running state:
- shut down all VMs
- restore a single VM folder from a ZFS snap, either via SMB/NFS copy, Windows previous versions,
filesystem rollback or replication (sketched below)

ESXi will see the ESXi snaps after a vim-cmd vmsvc/reload vmid (Putty) or a reboot.
- power on the VM and restore the last ESXi snap. The VM is then at the state of backup time, in powered-on state.
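A sketch of the restore step with hypothetical filesystem, snap and VM names: copy the VM folder back from the hidden .zfs snapshot directory on the NFS filesystem, then reload the VM config on ESXi:

Code:
# on the storage VM: restore one VM folder from a ZFS snap (sketch, hypothetical names)
rm -rf /nvme/nfs/myvm                                    # remove the damaged VM folder first
cp -rp /nvme/nfs/.zfs/snapshot/daily-1/myvm /nvme/nfs/   # copy it back from the snapshot
# on ESXi (Putty): make ESXi re-read the .vmx and its snapshot list, then power on
vim-cmd vmsvc/reload 9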


more,
https://www.napp-it.org/doc/downloads/napp-in-one.pdf
https://forums.servethehome.com/ind...news-tips-and-tricks.38240/page-2#post-372432
 
Update: method to include ESXi hot-memory or quiesce snaps in ZFS snaps
As of the newest napp-it 23.dev, SSH and SOAP are supported.

Why:
ESXi snaps are safe. They can include the memory state (restore to online state) or
quiesce, where the guest filesystem is frozen during a snap (requires VMware Tools).
ESXi snaps are limited in number (only a few) and age (only for a few days).
You cannot use ESXi snaps for backups as you cannot roll back when the main VM file is corrupted.

ZFS snaps are not limited in number or age. As a ZFS snap includes all files at snap time,
you can back up and fully restore a VM from a ZFS snap. But as a ZFS snap is like a sudden
power loss, a VM in a ZFS snap is not safe and can be corrupted.

The solution is to include safe ESXi snaps within your ZFS snaps.
A VM restore is then:
- power down the VM
- restore the VM folder via Windows SMB and previous versions, or a ZFS rollback
- reload the VM settings via Putty and vim-cmd vmsvc/reload vmid, or via
napp-it menu System > ESXi > SSH: list snaps
- restore the ESXi snap via the ESXi web management

New in the current napp-it 23.dev from today: ESXi remote management via SOAP or SSH

 
Update
napp-it 23.dev (Apr 30) can include ESXi snaps (quiesce or hot memory) in ZFS snaps (replications or autosnaps) and restore a VM from a ZFS snap, with the option to roll back to the last ESXi snap.


Setup
Use a ZFS filesystem via NFS to store VMs
Update to napp-it 23.dev (napp-it free, use an evalkey from napp-it.org to update to .dev).
Configure SSH (see menu System > ESXi)
Add ip of your ESXi server in autosnap or replication job settings (esxi_ip)
Create ZFS snaps

Restore a VM from a ZFS snap
Use menu System > ESXi > VM restore, select a VM and a snap to restore.



Update:
You can download a ready-to-use ESXi .ova template with the current OmniOS 151046 LTS and napp-it.dev, with the Perl modules for TLS and SOAP installed:
https://www.napp-it.org/downloads/napp-in-one.html
 