File share on ESXi

Modder man

[H]ard|Gawd
Joined
May 13, 2009
Messages
1,770
Is there any safe way that a file share can live on ESXi without direct access to the drives? I would rather not do PCI passthrough or RDM. I am exploring my options because I was really looking forward to putting my FreeNAS onto my ESXi box, but on further research it seems this is a very bad idea.

I'm hoping some of you have a little more experience and can point me in the right direction.
 
Yes, you can run it using normal vmdk virtual drives, but you get none of the benefit of FreeNAS/ZFS.
 
I am using an HP N40L, which does not support passthrough. I had hoped to run more than just my NAS on this box, but it is looking like that will be all I can run on it.
 
Yes, you can run it using normal vmdk virtual drives, but you get none of the benefit of FreeNAS/ZFS.

Not totally true. You can still have multiple vmdks and get the benefits of error detection and redundancy. A PITA if you need to replace a failing 'disk', but better than nothing...
 
Not all of it - obviously not the boot/system disk. My point was that you can get compression, error detection and correction, snapshots, etc., all of that with disks that are virtual. For example: if you had hardware that did not support passthrough, you could connect 4 disks to ESXi, create a separate datastore on each, create a thick, almost-max-size vmdk on each datastore, add each of those 4 vmdks to the virtual NAS, and create a ZFS raid10 on them, and it would work just fine.
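Rough sketch of what I mean (datastore names, sizes, and the da1-da4 device names inside the guest are just examples; yours will differ):

Code:
# on the ESXi host: one thick vmdk per datastore
vmkfstools -c 900g -d eagerzeroedthick /vmfs/volumes/datastore1/nas-disk1.vmdk
vmkfstools -c 900g -d eagerzeroedthick /vmfs/volumes/datastore2/nas-disk2.vmdk
vmkfstools -c 900g -d eagerzeroedthick /vmfs/volumes/datastore3/nas-disk3.vmdk
vmkfstools -c 900g -d eagerzeroedthick /vmfs/volumes/datastore4/nas-disk4.vmdk

# attach all four vmdks to the NAS VM, then from a shell inside FreeNAS/FreeBSD:
zpool create tank mirror da1 da2 mirror da3 da4   # striped mirrors = "raid10"
zpool status tank

Replacing a failed backing disk means creating a new vmdk on a new datastore, attaching it to the VM, and then running zpool replace tank <old_device> <new_device> - workable, just clumsier than swapping a bare drive.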
 
The VMware guy on the virtualization forum (lopoetve?) says that the performance difference between virtualized disks and passed-through ones is not significant. I have not tested it myself, but I have no reason to doubt him.
 
Why use ESXi though?

Why not run FreeNAS on the server and then install the VirtualBox plugin for running other VMs?

Then your storage can be native.

Or do what I do. I have a Debian 7 server where I installed ZFS on Linux, and I use KVM via Proxmox for all my VMs. I also use LXC for my other Linux containers (like Ubuntu), which lets me install and manage newer software than Debian stable would otherwise allow. Plus it's just good to keep the host OS clean and simple.

I've been loving this setup personally. IMO Proxmox is better than ESXi. The problem with free ESXi is you can't do things like back up live VMs. With Proxmox this is not a problem: KVM can take snapshots of VMs that are powered on and fully back them up.
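For what it's worth, the live backup is just vzdump doing a snapshot-mode run under the hood; roughly like this (the VM ID 100 and the storage name "backups" are placeholders for your own):

Code:
# snapshot-mode backup of a running VM, compressed, to a storage called "backups"
vzdump 100 --mode snapshot --compress lzo --storage backups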
 
Really the only reason for me to use ESXi is that I use it daily for work. I am looking to learn more of the in-depth pieces of virtualization while building myself a usable NAS. I am already reasonably familiar with VMware's stuff at a surface level and would like to dig in much deeper. I don't like the thought of just running FreeNAS, or any file server, as the box could do so much more simultaneously. After some further reading it looks like I can give a virtual machine direct access to the SATA drives and bypass ESXi altogether. This seems like a solid way of accomplishing what I am looking to do.

NOTE: The NAS will not be used as a datastore for ESXi so I do not have that headache to deal with. The NAS primarily handles media streaming in the house.
 
I found a thread on doing SATA device passthrough. It's not the same as PCI passthrough, but the guest OS seems to have direct access to the drives. That should solve the problem of running it in a VM, correct?
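For anyone else reading, the SATA "passthrough" those threads describe is usually a physical-mode raw device mapping created from the ESXi shell, something like this (the disk identifier is a placeholder; list yours first, and the target folder must already exist):

Code:
ls -l /vmfs/devices/disks/    # find the physical disk's identifier
vmkfstools -z /vmfs/devices/disks/<disk_identifier> /vmfs/volumes/datastore1/nas/disk1-rdm.vmdk

You then attach the resulting .vmdk pointer to the VM as an existing disk, and the guest talks to the real drive through the mapping rather than through a virtual disk.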
 
PCI passthrough is the only approach I would really trust to work correctly and keep your data reliable.
 
That is what everyone says, but why is that? Nobody really seems to have a solid reason beyond "it is a really bad idea."
 
Because ZFS expects simplicity. It expects to be talking to the disk directly. Using any other mechanism to present the disk to the OS means the disk is running through some other software layer that can be buggy or incompatible, or worse, do some weird caching or weird error handling.

If ZFS gets lied to by some higher-level software, then it can't do its job and you will probably eventually lose data.
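A quick way to see that abstraction layer from inside the guest is to ask the "disk" who it is; a plain vmdk identifies itself as a VMware virtual device with no SMART data to offer (da1 here is just an example device name):

Code:
smartctl -i /dev/da1   # on a vmdk: vendor "VMware", product "Virtual disk", no SMART support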
 
I'm sorry, but this is just silly. So you are telling me that the virtual disk layer in vSphere is buggy and/or incompatible? As long as it presents a virtual disk to the guest that obeys all of the requisite SCSI standards, there shouldn't be an issue. The only reasons I can think of NOT to do this are a (small) performance hit, and it being a PITA to replace a failed disk.
 
I'm just saying I don't recommend it, so don't come complaining to us if things go wrong. I've seen ZFS arrays on vmdks get hosed for inexplicable reasons. You can certainly do what you want, and it will "work" until it suddenly doesn't.

I've seen VM software claim it did something with a disk when it really didn't, and that is very problematic for what ZFS is expecting. The only way around the abstraction layer is PCI passthrough.
 
Well, I was able to get the disks to show up, but they show 0 bytes in FreeNAS. It works fine in all the other OSes I tried. Should I do a Linux-based ZFS setup instead?
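Before giving up on FreeNAS, it might be worth checking what FreeBSD itself reports from a shell inside the VM; if the size is zero there too, the problem is at the RDM/ESXi layer and a Linux-based ZFS box will likely hit the same thing (device names are examples):

Code:
camcontrol devlist     # list the disks the guest actually detects
diskinfo -v /dev/da1   # reported media size in bytes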
 