FAQ: Virtualization - What is it?

I haven't messed with virtualization for a year or two. Back then I was running a Linux box with a virtualized XP session. The idea was to game while in Linux so I wouldn't have to boot into Windows. IIRC the problem with this was that a virtualized XP cannot get direct access to the hardware and therefore cannot enable 3d acceleration. Has this been fixed?
 

No, this is not "fixed" in any virtualization solution yet. The DX9.0c hardware support that the other poster is referring to is still based on virtual hardware. There's no way to grant access to hardware directly from a guest VM under any platform. The hardware that gets DX9 and SM2 functionality is still a set of virtual hardware, not related or linked to the underlying physical hardware. Also, it is not yet possible to choose the type of virtualized video adapter on any platform (for the guest OS). Each solution has its own adapter that it gives to VMs.
 
There's no way to grant access to hardware directly from a guest VM under any platform.
Not entirely true: Xen can delegate control of a PCI device entirely to a guest. This doesn't work with video cards, as far as I know, but they're kind of a special case; I don't know of any NICs that refuse to work alongside NICs from other manufacturers, but try to drive 3D with one ATI card and one nVidia card and it's trouble.
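For anyone who wants to play with the PCI delegation, it goes roughly like this (the syntax varies a bit between Xen versions, and 0000:03:00.0 is just a made-up example address):

    # in dom0: detach the device from its current driver and hand it to pciback
    echo -n 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
    echo -n 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo -n 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

    # in the domU config file: pass that device through to the guest
    pci = [ '03:00.0' ]

If pciback is compiled into the dom0 kernel, IIRC you can also do it at boot with pciback.hide=(03:00.0) on the kernel command line instead of the sysfs dance.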
The hardware that gets DX9 and SM2 functionality is still a set of virtual hardware, not related or linked to the underlying physical hardware. Also, it is not yet possible to choose the type of virtualized video adapter on any platform (for the guest OS). Each solution has its own adapter that it gives to VMs.
Sure, but this is not particularly bad or unusual; VMware emulates PIIX IDE controllers, BusLogic or LSI Logic SCSI controllers, and AMD PCnet NICs. I don't have any of those things. Nor do I have a VMware-branded video card, but if it provides the functionality I want, I don't care what Windows thinks it is.
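For what it's worth, those "devices" are literally just lines in the .vmx file - something like this (exact keys and values depend on the product and version):

    scsi0.virtualDev = "lsilogic"
    ethernet0.virtualDev = "e1000"

Swap "lsilogic" for "buslogic", or "e1000" for "vlance" or "vmxnet", and Windows just sees a different card; none of it maps to the physical controller or NIC underneath.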
 
I can tell you right now that ESX no longer has a Linux kernel running under it. The COS (Console OS) is a RHEL3/4-based virtual machine, but the underlying software is now pure vmkernel - the Linux kernel is only used to bootstrap the vmkernel. This will become more apparent with later versions, where you'll actually be able to control the COS VM far more directly.
 
Hey, I didn't know this got a sticky...woot!

Regarding the underlying hardware being exposed to the VM: okay, I will concede your point (unhappy_mage) on Xen being able to delegate a PCI device to a specific VM based on device IDs, but this will likely cause issues for most people moving to virtualization platforms. There are simply some things that should not be virtualized, IMO (and for the most part this is covered in most virtualization "best practices" guides); specifically, most programs that require direct access to certain hardware types are usually not good candidates for virtualization. The biggest argument is that tying a virtual machine to a local resource prevents you from moving that virtual machine to another host (while running, at least), because part of the VM's hardware configuration is tied to the PCI device ID for that hardware, which is only valid on one host.

Regarding your citation (still talking to unhappy_mage) of hardware being presented to the OS - no, most of us won't care. We're not after blazing performance from VMs, and don't really care what the guest thinks it has for hardware (in most cases, that is); that's not really the point of virtualizing (most of the time, anyway - there are exceptions, like virtualizing ESX servers).

Iopetve - I have heard this as well, but have yet to see any official documents from VMware supporting this claim. All I have heard is VMware employees (one after the other) getting pretty visibly upset when you mention the underlying Linux kernel that we are essentially locked out of. There are still some controls available in ESX, and it can be somewhat modified, but for the most part it is locked down (in the sense that you can't just find plugins/apps that work for a 2.x kernel and install them on an ESX host). ESXi's lack of a service console altogether is great for my clients that require a more secure environment, but most who are running it wish they had some level of service console interaction available to them. The line is a bit too hard in ESXi, IMO, but we'll see what VMware does with this down the road.
 
Quick question:

I recently stumbled upon instructions on how to install ESX in VMware Workstation 6. Is the process similar for Workstation 6.5? The reason I ask is that there is no option for Unix OS installs on the new VM creation wizard...

edit: Hurf, he had Linux selected. YouTube needs to up the quality of their videos :)
 
Yes, it is similar. You will need a CPU that supports hardware virtualization (Intel VT or AMD-V), and it will have to be enabled in the BIOS. You also have to set up the VM in Workstation as a Red Hat 2.4 64-bit VM and expose the virtualization flags to the VM (Options tab), or you will get a PSOD on VM boot. The instructions linked above are the ones to use.
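If it helps, the "expose the virtualization flags" step comes down to a couple of lines added to the VM's .vmx file - from memory, so double-check them against the guide:

    monitor.virtual_exec = "hardware"
    monitor_control.restrict_backdoor = "true"

The first forces the VM to use the hardware virtualization extensions; the second is the setting the nested-ESX guides call for (IIRC it restricts the VMware backdoor interface to ring 0 so the nested hypervisor behaves).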
 
Iopetve - I have heard this as well, but have yet to see any official documents from VMware supporting this claim. All I have heard is VMware employees (one after the other) getting pretty visibly upset when you mention the underlying Linux kernel that we are essentially locked out of.

That's because there IS no underlying linux kernel. :) It simply doesn't exist anymore.

An initrd bootstraps the vmkernel, which is a custom kernel and has very little to do with Linux at all - it's totally proprietary and a bare-metal hypervisor. The Linux kernel that then boots is within a VM - if you look at the logs, the vmkernel starts a single, specialized world for the console as soon as it's up, and the init you see is that virtual machine booting. You cannot control that world like a normal VM, since the management agents load and take control of the vmkernel right after it loads.

There is no longer an underlying Linux kernel at all - it's simply not there. 2.5.x still had it, but the entire boot process has been changed since then.

There are several custom IPC calls that are given to our RHEL COS that allow it pseudo-access to the hardware for specialized drivers and passthroughs to the vmkernel, but it's not touching the bare metal and there's no linux underneath anymore.

As the new versions roll out you'll still see a SC for now, but they'll be even more separate from the system than they are now.
 
That's because there IS no underlying linux kernel. :) It simply doesn't exist anymore.

PM me, please. I am interested to know your background, and where you get your information from. It's not relevant to this thread, so let's take it offline. This is good information for the thread, though. Thank you.
 
Very informative, thank you.
I was listening to a network admin at a coffee shop talking about something he had just implemented at a job site: VMware software capable of detecting idle workstations on a network and then utilizing their resources to ease the load on another computer. Does anyone know more about this, and what it's called?
 

sounds like DRS, but it doesn't use workstations... :confused:
 

I would guess DRS as well, but we're hearing about this third-hand, so what you heard may not have been what he said, or he may have been using "unofficial" terminology. DRS-enabled ESX (or ESXi) clusters monitor each host for unbalanced resources and, depending on the aggressiveness of the DRS settings on that cluster, will move VMs from one host to another.

It will not utilize "workstations" per se to do this. DRS - and, for that matter, any VMware ESX function - only operates within the confines of the configured datacenters/clusters/resource pools it has been enabled on.

Think of DRS as CPU and memory load balancing for ESX and ESXi hosts, and you'll have the basic idea. There's more functionality there that I have not covered, but it all attempts to accomplish load balancing of resources.
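If you want to poke at it yourself, DRS is just a per-cluster setting. With the VI Toolkit / PowerCLI it looks something like this (server and cluster names made up):

    Connect-VIServer vcenter.example.com
    Get-Cluster "Production" | Set-Cluster -DrsEnabled $true -DrsAutomationLevel FullyAutomated

The automation level (Manual / PartiallyAutomated / FullyAutomated), together with the migration threshold slider in the VI Client, is the "aggressiveness" I was referring to.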
 
THANK YOU for the idea of running a 32-bit VM for the Cisco VPN client. I have to use it to connect to SciFinder at my university, and I run 64-bit Vista. I had gotten SO tired of borrowing the wifey's laptop just to look up some articles.

Also, under Hypervisor:

Type 1 would be like ESX, right?

Type 2 would be VMware Server?

If so, maybe throw those in there as "real-world examples" to add some clarity to that section.

Lastly, another increasingly common use is running a virtual machine as an internet appliance, for security's sake. If it gets infected, it is easier to wipe clean. I know you can download internet appliances based on various Linux distros, stripped down to basically just the browser, to run in VMware Player.
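If you run the appliance under Workstation rather than Player, you can even script the "wipe it clean" part with vmrun; a rough sketch (the paths are just an example):

    vmrun -T ws snapshot "C:\VMs\BrowserAppliance\appliance.vmx" clean
    vmrun -T ws revertToSnapshot "C:\VMs\BrowserAppliance\appliance.vmx" clean

Take the "clean" snapshot once after you've set the appliance up, then revert to it whenever you think the thing has picked up something nasty.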
 
Thanks for the thread, it answered a few of my totally noob questions :)

Hmm, but a few more to have everything clear:

Is there any difference between AMD Phenom II and Intel Core Duo/Quad CPUs for virtualization?

Is the guest OS fully separate from the host OS, so the host is fully protected against viruses or other malware which the guest might catch during usage?

Would it be possible to set up Win XP 32-bit as a guest OS and use it for playing older games or software which doesn't work too well under Vista (without the problems associated with dual boot)?
 
Yes, yes, and yes. The key CPU difference is the ability of the new Intel procs to do record/replay; I'll explain more later, I'm on my iPhone.
 
Is the guest OS fully separate from the host OS, so the host is fully protected against viruses or other malware which the guest might catch during usage?

No, not technically. There is malware, in the form of rootkits, that can drill through a VM into the host OS, I believe. I will pull the old links from Black Hat.

*Edit*
http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Zovi.pdf
http://www.blackhat.com/presentations/bh-europe-07/Bing/Presentation/bh-eu-07-bing.pdf

This is one of the main reasons for still keeping the most mission-critical servers as standalone machines.
 

Three_rook: those slides refer to rootkits that virtualize an existing environment. They become dom0, i.e. the hypervisor layer, so your old OS runs as a guest, unaware of the malicious environment. AFAIK there is no malware that can get to dom0 from a domU.
 

Those are both old, too, and they relate to Type 2 hypervisors, not Type 1.
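For the curious, the usual counter-example people bring up is that a guest can at least ask the CPU whether a hypervisor is present. A quick sketch of that check (my own illustration, not something from the slides) in C:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 31 is the "hypervisor present" bit. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        if (ecx & (1u << 31)) {
            /* Leaf 0x40000000: EBX/ECX/EDX hold a 12-byte vendor signature,
               e.g. "VMwareVMware", "XenVMMXenVMM", "KVMKVMKVM". */
            char sig[13] = {0};
            __cpuid(0x40000000, eax, ebx, ecx, edx);
            memcpy(sig,     &ebx, 4);
            memcpy(sig + 4, &ecx, 4);
            memcpy(sig + 8, &edx, 4);
            printf("hypervisor bit set, signature: %s\n", sig);
        } else {
            printf("no hypervisor bit - which proves nothing by itself\n");
        }
        return 0;
    }

Of course, the whole point of the Blue Pill argument is that a malicious hypervisor can intercept CPUID and lie about both the bit and the signature, so a clean result doesn't prove much.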
 
Good FAQ - I was trying to explain virtualization to someone today, and got there in the end, but it sure reminded me how bad a teacher I am. I'll point them at this tomorrow :)

One slightly pedantic comment though - virtualization extends way past the server now, into the network and into the storage. Might be worth pointing out that this FAQ is focused on mainstream x86/x64 virtualization, although to be fair, the opening blurb copes quite well.

On the OS side, Solaris has had virtualization for ages (Containers), but then again, I suppose so have mainframes...
 
I don't know if you want to update the opening post, but Workstation 6.5-7.x allows for:

32 GB memory limit
8 processor limit
10 network adapter limit
2 TB disk size limit

Player (free) now allows you to reconfigure the guest and even create VMs, which it previously could not do. It still cannot take snapshots, but it can revert to snapshots created by Workstation.

I've noticed that assigning multiple cores/threads to a guest will actually benefit apps hosted on it that can take advantage of threading (both in ESXi and Workstation). This was not the case in older versions, where you were better off just assigning 1 vCPU.

VMware Converter (free) has come a long way as well. I often convert Acronis backups to VMs, especially in disaster situations where hardware has failed. It can convert a number of different backup types/images.

Also, ESXi (now part of vSphere) is free, and you only have to pay for additional features.
 
Eh, I wouldn't necessarily use Joe Schmoe's random USB drive that you got as a gift for getting your oil changed or something, but as long as they're USB 2.0 and fairly mainstream they should be just fine; just try them out. I bought a couple of 16GB Patriots for mine (mostly for Citrix study later down the line, which wants that size for some reason), but that's about it. Both my servers run just fine on those.
 
How important is the flash drive's speed when running the host off it?

Any major name-brand USB thumbdrive should be fine, but I would highly recommend looking for drives that come close to saturating the link speed. And if it's a production environment, make sure that you have a BACKUP of it on another drive - single point of failure, and all.
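The low-tech way to do that backup is to image the stick from any Linux box with dd - sdX below is just a placeholder for whatever the stick shows up as, so double-check before you overwrite anything:

    dd if=/dev/sdX of=/backup/esxi-usb.img bs=1M
    # and to rebuild a replacement stick later:
    dd if=/backup/esxi-usb.img of=/dev/sdX bs=1M

Cheap insurance against that single point of failure.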
 

Speaking of that, has anyone seen any easy physical backups for the hypervisor storage? Since you can't RAID-1 flash drives, it would be cool to see something like what Dell does.

 
How important is the flash drive's speed when running the host off it?

Not. The only time you'll really notice it is if you pull the VI Client off the host. I have a really slow USB thumb drive in one of my lab boxes and it can take 10 mins to grab the VI client off that one...other than that it's fine.
 

I thought the VI Client was hosted on and downloaded from VMware's website with ESXi, rather than from the host directly? I know I noticed slower boots compared to the SAS drives we had in our servers before the switch to diskless, but it wasn't bad at all.
 

You're right. I'm absolutely losing my mind lately...yeesh.
 