folding linux VM

FLECOM

Modder(ator) & [H]ardest Folder Evar
Staff member
Joined
Jun 27, 2001
Messages
15,814
has anyone made a barebones folding VM for VMware?

wanted to see if there was anything turnkey out there before trying to make my own
 
can these be converted to VMware?
 
Flecom, if you are looking to use this on your 6100, I think you'll need the VirtualBox version. VMware still only supports 8 cores max per VM. Each of your nodes will have 16 logical cores.
 
I don't think ESXi has that limit?

It let me make a machine with 2 sockets at 8 cores each, 16 cores total?

the 6100 will have 4 ESXi 5.1 installs (one per node)
 
ah it appears that the free ESXi does have an 8 core limit per VM

does it matter?

should I run 2 FAH VMs per node?
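In case it helps with the two-VMs-per-node approach: the FAH v7 client lets you cap an SMP slot's CPU count in its config.xml. A minimal sketch (assumptions: the client reads config.xml from its working directory, and the 8-CPU value just matches the free-ESXi vCPU cap):

```shell
# minimal sketch: write a FAH v7 config.xml capping the SMP slot at 8 CPUs
# (the 8-CPU value is an assumption to match the free-ESXi per-VM limit)
cat > config.xml <<'EOF'
<config>
  <!-- one SMP folding slot, pinned to 8 CPUs to match the 8-vCPU VM -->
  <slot id='0' type='SMP'>
    <cpus v='8'/>
  </slot>
</config>
EOF
```

Each 8-vCPU VM would get its own copy of this, so the two slots together use a node's 16 logical cores.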

anyone use Oracle VM Server for x86? seems like that is another bare-metal hypervisor without CPU limitations?
 
I've looked into this a bit for one of my boxen, and it seems that the best way to go would be to fold on the metal and virtualize on top of it (but this means no ESXi). However, I've been having issues getting ZFS to work well on Ubuntu, so I shall subscribe and observe, as I will likely need to move over to Slowlaris for my metal OS for this box.
 
problem is I need these boxes to do other stuff, these will be servers running other applications... so would be a lot more difficult to do virtualization on top of a host OS
 
I haven't really tried folding, but I do run BOINC on a Linux VM on top of ESXi for my ZFS server. There is a small loss in performance, but I don't have all that many VMs running too many complicated things. The only thing I noticed is that how it handles hyperthreading is a little weird; it doesn't really acknowledge the logical cores too well.
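On the hyperthreading point, it's worth checking what the guest actually reports. A quick sketch (`lscpu` ships with util-linux on most distros):

```shell
# how many logical CPUs the guest sees
nproc
# the sockets/cores/threads topology the hypervisor presented
lscpu | grep -E "^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))"
```

If the hypervisor isn't exposing the host's hyperthreads, it shows up here as "Thread(s) per core: 1".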

As far as licensing goes with ESXi 5.1, you technically have a 60-day trial of all the features, including multiple sockets. I can go into more detail on this type of setup if you would like.
 
Running 2 x 8-vCPU VMs instead of 1 x 16-vCPU VM isn't the end of the world. You will lose some PPD, but your sanity when trying to set everything up to do what you need outside of folding may very well be worth it.

Oracle VM Server does look promising - definitely worth trying out. The supported guest OS list worries me a little, but ESXi probably also has a limited list of "officially" supported guests. 128 vCPUs for a guest will be plenty, though. Let us know how it goes.
 
I'm currently running Ubuntu on an Oracle VirtualBox setup with a pair of L5520s. I could upload a copy if you wanted. It's not completely trimmed down, as I haven't finished the tuning yet, but I'm seeing frame times about a minute longer than when I ran it native.
 

What issues are you having with ZFS on Ubuntu? I have been running ZFS on Ubuntu w/ ZFS on Linux for almost a year now.

I was initially building all the packages myself, but now I just have the custom PPA added and it handles itself beautifully, rebuilds everything when the ZFS/Kernel version changes and seems to do a very good job of it

I am running this setup this way specifically so that I can fold on it. Ubuntu on the bare metal, with ZFS for my storage, and then folding. I do not run additional virtualization on top of that, but I do not have the need, and of course you could if you wanted.
 

Perhaps I need a fresh install/upgrade to my current version. I'm on 10.10 right now with a hand-built ZFS. I'm having a problem where, under heavy load, it will take a dump and tie up a couple of processors, bringing the whole system down - specifically, CPU#0 stuck for 61 seconds due to spl_kmem_cache, and also the perl library. It seems to be a "known bug" for this Ubuntu version from what I can tell. The heavy load it happens under is a VirtualBox WHS2011 server running on it that is backing up PCs around the house - usually about 500GB into the backup it will stain its pants.

What version of Ubuntu and specifically which ZFS installer did you use to get it working?
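In case it helps others chasing the same thing, those messages can be pulled from the kernel log like this (a sketch; unprivileged dmesg may be restricted on some kernels):

```shell
# scan the kernel ring buffer for the soft-lockup / SPL messages described above
dmesg 2>/dev/null | grep -iE "soft lockup|spl_kmem_cache" || echo "no lockups logged"
```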
 
I am on 12.10, but started with 12.04 and I think a v3.2 kernel.

When I was on 12.04 I downloaded the ZFS and SPL tar.gz files from ZFSOnLinux.org and just followed the instructions there to build/install.

Then I upgraded the box to 12.10 and moved to v3.5 kernels, and at some point in time switched over to using the ppa (so apt-get manages my zfs install/upgrade/etc now) which is a lot easier.

I would bet that you are just suffering from using such an old version of Ubuntu / Linux kernel.

Currently I am on the latest version of everything (kernel/ZFS).

I am using the stable ZFS PPA tree (not the daily tree).



Are you familiar with adding PPA's to Ubuntu? I would highly suggest migrating to the Ubuntu PPA method of installing/updating ZFS.

Would you be comfortable upgrading the kernel? How about the whole distribution?

If you are comfortable with doing all of this then I would say:
-Remove ZFS in its current form (you aren't booting from it, are you?)
-Upgrade Ubuntu to v12.10
-Then add the PPA for ZFS and re-install ZFS
-You then should be able to pull in your volume(s)





You can add the PPA by doing this: (see https://launchpad.net/~zfs-native/+archive/stable)

Code:
sudo add-apt-repository ppa:zfs-native/stable

Then ensure that your sources.list contains the following:
Code:
## ZFS For Linux
deb http://ppa.launchpad.net/zfs-native/stable/ubuntu quantal main
deb-src http://ppa.launchpad.net/zfs-native/stable/ubuntu quantal main

NOTE: You will need to replace "quantal" with your specific version.
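To round out the steps above (hedged: ubuntu-zfs was the metapackage the zfs-native PPA shipped at the time, and "tank" is a placeholder pool name - substitute your own):

```shell
# refresh package lists so the newly added PPA is seen
sudo apt-get update
# metapackage from the zfs-native PPA (builds the kernel modules via DKMS)
sudo apt-get install ubuntu-zfs
# pull your existing volume(s) back in ("tank" is a placeholder pool name)
sudo zpool import tank
```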
 
for reference I ended up using Citrix XenServer and installing Ubuntu 10.04 manually using the guides here, working great so far!

looks like the box will be able to squeeze about 100k PPD... will see what happens when I actually load up some VMs on there, I expect to take a decent hit :(

XenServer also lets you assign CPU priority to VMs, which was very important since these boxes will be running some production VMs alongside the F@H VMs using the unused CPU cycles
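For anyone curious, that priority assignment maps to the credit scheduler's per-VM weight, set with the xe CLI. A sketch (the UUID variables are placeholders; 256 is the default weight):

```shell
# list VM UUIDs first with: xe vm-list
# higher weight wins when CPUs are contended; the default is 256
xe vm-param-set uuid=$PROD_VM_UUID VCPUs-params:weight=512
xe vm-param-set uuid=$FAH_VM_UUID  VCPUs-params:weight=64
```

With this split, the F@H VM only gets meaningful CPU time when the production VMs aren't asking for it.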
 
the free version?

I made a VM with 16 cores and when I tried to start it, it would not let me; it said max 8 vCPUs per VM

also you can see here

http://www.vmware.com/files/pdf/vsphere_pricing.pdf

8vCPU/VM

VMware supports more than 8 cores total, but you can only assign a max of 8 vCPUs to any one VM in the free version
 
I've had good results running a 24-vCPU VM with VirtualBox
 