Best file system for Linux based VM environment

Red Squirrel

[H]F Junkie
Joined
Nov 29, 2009
Messages
9,211
I will be building a VM server, possibly two so I can do clustering. Wondering what the best setup for file storage is? I currently have a Linux server using mdadm raid and NFS, but NFS kinda sucks. I hate how the permissions work and overall it's just a mess to deal with. You have to sync user IDs and group IDs across all machines, root does not work unless you disable root squash (no_root_squash), which is a security issue, etc. Heck, even with regular users it's not secure because there's no actual authentication other than UIDs/GIDs. Yeah, there's Kerberos/LDAP, but that's super complicated to set up.

What I want is a network based file system where the machine needs to authenticate, but once the drive is mapped, then EVERYONE on the system has access to it. KVM/Qemu and most other solutions run as root, so at very least, root needs access.

What about iSCSI? From what I understand though I need a cluster aware file system if I want to map iSCSI luns on more than one system. Is there any such file systems for Linux?

I have not decided yet on a VM solution, but it will probably be Proxmox, or maybe just using Qemu/KVM directly. Playing with it on my current machine, I just realized how much of a mess the whole NFS/permission scheme is. Ideally I probably should use iSCSI anyway, but I want to make sure that will work fine if I introduce multiple machines.
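For reference, this is roughly how a Linux initiator attaches an iSCSI LUN with the standard open-iscsi tools. The portal address and target IQN below are made-up placeholders, not anything from this thread:

```shell
# Discover targets advertised by the storage host (placeholder IP, adjust for your SAN)
iscsiadm -m discovery -t sendtargets -p 192.168.10.5

# Log in to a discovered target; the LUN then shows up as a local block device (e.g. /dev/sdX)
iscsiadm -m node -T iqn.2015-01.loc.borg:vms-lun1 -p 192.168.10.5 --login
```

Keep in mind a plain ext4/xfs filesystem on that LUN is safe to mount on only one host at a time; mounting it on multiple hosts simultaneously corrupts it unless you use a cluster-aware filesystem.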
 
You could setup LDAP+Kerberos to distribute UID and GID across all the servers. Then they'd always be in sync.
 
Bah, tried to edit my post, but the forum keeps redirecting me to a reply window after entering my edit.

Anyway, you could skip the Kerberos. It's not needed for LDAP with NFS. It does improve security though.

That said for clustering, GFS and OCFS2 are popular options. Cluster LVM is also an option if you want to use LVM volumes for each VM.
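As a rough illustration of the cluster-aware route, putting OCFS2 on a shared LUN looks something like this. The device path, mount point, and node count are assumptions, and the o2cb cluster service must already be configured and running on every node:

```shell
# Format once, from any single node; -N 2 reserves journal slots for two cluster nodes
mkfs.ocfs2 -L vm_storage -N 2 /dev/sdb

# Then mount the same shared device on each node; OCFS2 coordinates concurrent access
mount -t ocfs2 /dev/sdb /var/lib/vms
```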
 
The issue with NFS though is that if the VM software is running as root (which it has to, given the nature of it), then by default root can't access the NFS share at all unless you set no_root_squash, which is a security risk.

Guess the other solution is to put storage on a separate VLAN. Treat it more like a SAN. I don't have any threats on my network anyway, but I like to use proper conventions.
 
If you run KVM/Qemu you don't have to run as root. On my Debian VM host, libvirt by default runs VMs as libvirt-qemu; Red Hat doesn't use root either. In fact, you could run the VMs as any user, so long as the user has permission on /dev/kvm.
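You can check (and change) which account libvirt launches QEMU processes as in /etc/libvirt/qemu.conf. The libvirt-qemu value shown is the Debian default mentioned above; other distros use different accounts, so verify locally:

```shell
# Show the effective user/group settings (commented-out lines indicate the built-in default)
grep -E '^#?(user|group) =' /etc/libvirt/qemu.conf

# Example qemu.conf entries (uncomment/edit, then restart libvirtd):
#   user = "libvirt-qemu"
#   group = "libvirt-qemu"
```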
 
Start simpler first - what Hypervisor are you planning on running? That changes a LOT of your options.
 
-mapall=vmstorage

Who says you have to map root to root? Do you actually need distinguished permissions?
 
Start simpler first - what Hypervisor are you planning on running? That changes a LOT of your options.

I'm not sure yet, still looking at options but it will probably be kvm/qemu.

-mapall=vmstorage

Who says you have to map root to root? Do you actually need distinguished permissions?


Wait, is it possible to map all NFS access to a single user/group? Where do I put that option? That would save SO MANY headaches everywhere.
 
I think I figured it out. Here is how the line looks, for future reference (it seems no doc anywhere shows a real example; hate that):

Code:
/volumes/raid1/vms_lun1              borg.loc(rw,all_squash,anonuid=1046,anongid=1046)
/volumes/raid2/vms_lun2              borg.loc(rw,all_squash,anonuid=1046,anongid=1046)
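After editing /etc/exports like the above, the server has to re-read it before the change takes effect. A quick way to apply and double-check the squash options with the standard nfs-kernel-server tooling:

```shell
exportfs -ra    # re-export everything in /etc/exports
exportfs -v     # verbose listing; should show rw,all_squash,anonuid=1046,anongid=1046
```

With all_squash, every client user (root included) is mapped to UID/GID 1046 on the server, so that account needs to own the exported directories.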
 
I just don't understand the NFS issue. Export to specific IPs; who cares if root has full access on those specific IPs, which ideally would be your hypervisor hosts? No one else can access the FS. Unless I am too drunk and not reading your whole post correctly, I am not clear on what your actual goal is.
 
I just don't understand the NFS issue. Export to specific IPs; who cares if root has full access on those specific IPs, which ideally would be your hypervisor hosts? No one else can access the FS. Unless I am too drunk and not reading your whole post correctly, I am not clear on what your actual goal is.

If by chance my network was compromised, then NFS is worthless for security. It's more about using proper security measures. The odds of it happening are extremely slim, and if it did, that would be a huge issue on its own anyway.

But I think what I'll end up doing is just saying screw it and using all_squash for all shares; it makes everything WAY easier. Now that I know about that feature it changes everything. No more screwing around with logging directly into the file server to fix permissions all the time because I can't access stuff that was copied by another user.
 