is this really possible, a virtualization epiphany

  • Thread starter Deleted member 82943
I was doing some reading in the file storage forum and stumbled upon a VMware article on understanding virtual switches in ESX: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf. Anyway, it dawned on me that with a powerful enough system I could, in theory, collapse my pfSense firewall and my Samba file server (nothing fancy, just shares for my roommates) into one box.

It hit me that with virtualisation I'd really only need three ethernet ports (the same as I use in real life): one from the cable modem to the box, one from the box to the gigabit switch, and one from the switch back to the box for the file server. All of this would appear separate from the perspective of the virtualised OSes... and then it hit me: this is how datacenters are consolidating their boxes and saving on electricity.

The only hesitation I have: currently, if the computer running pfSense fails, my firewall goes down and I have to tend to that machine and get it working again, but at least my file server and data are safe, and vice versa. Consolidating creates a single point of failure, and even exacerbates it, because should something go wrong on the software side (most probably a fault of my own) I could accidentally remove or delete the VM image, or a hacker could, and I'd lose the data that way. Then again, if I got hacked I could lose the data anyway...

Do you guys do this? Run a firewall/router/DHCP server and a file server all on the same box?

Also, how would one protect the VMware ESX OS from the outside world when the firewall itself is being virtualised?
 
The best way to protect the host machine would be to pass the WAN NIC through to the pfSense VM.

The way enterprises solve the single point of failure is with redundancy. You can have multiple hosts and "hot swap" the VM guests between them.

Enterprise servers will also have hot-swappable PSUs and fans.

As for running the firewall with the rest of the VMs: I don't, for a few reasons. The main one is that my pfSense box is a much lower-power machine that I can keep running for two hours on UPS power along with the cable modem and switch, which is handy for getting online. My VM host can only last about 15 minutes. The other reason is that passing the NIC through would require a separate NIC in my host.
 
I virtualize everything. I don't use dedicated NICs as I see no need. My cable modem and external NIC on my Untangle box are on one VLAN (VLAN 100), internal things are on another (VLAN 5). There is no way to hit my ESXi systems from the outside. Nothing is exposed. I have a 3 node cluster. One node is often powered down by vCenter since it's not needed. If one fails the other live box will boot all my VMs (VMware HA) and I have four NICs in each box for redundancy.
 
I virtualize everything. I don't use dedicated NICs as I see no need. My cable modem and external NIC on my Untangle box are on one VLAN (VLAN 100), internal things are on another (VLAN 5). There is no way to hit my ESXi systems from the outside. Nothing is exposed. I have a 3 node cluster. One node is often powered down by vCenter since it's not needed. If one fails the other live box will boot all my VMs (VMware HA) and I have four NICs in each box for redundancy.

If you do not pass a NIC through to the firewall and you connect your ESX box to a cable modem, then your ESX host is exposed. ESX does, though, have its own firewall capabilities that you can use to protect it.
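To expand on that: the ESXi host firewall can be inspected and tightened from the shell. A minimal sketch, assuming an ESXi 5.x-style esxcli (`sshServer` is one of the standard built-in rulesets, shown here purely as an example):

```shell
# Confirm the host firewall is enabled.
esxcli network firewall get

# List which service rulesets are currently open.
esxcli network firewall ruleset list

# Example: close a service you don't need exposed.
esxcli network firewall ruleset set --ruleset-id=sshServer --enabled=false
```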
 
In my home lab I virtualized everything into one box (the "all-in-one" concept) but had the same concerns as you about it being a single point of failure. So I decided to keep my small Linksys router in place, which ensures I always have a working gateway, DNS, DHCP, and VPN. The virtualized box also has a dedicated management port (IPMI).
 
If you do not pass a NIC through to the firewall and you connect your ESX box to a cable modem, then your ESX host is exposed. ESX does, though, have its own firewall capabilities that you can use to protect it.
Only if you had the management vmkernel on the same VLAN, which he won't. The external VLAN will only connect to the pfSense VM, hence no exposure to the outside world.
 
If you do not pass a NIC through to the firewall and you connect your ESX box to a cable modem, then your ESX host is exposed. ESX does, though, have its own firewall capabilities that you can use to protect it.

No, ESXi is not exposed. I'm sure his management IP is non-routable and resides on a separate VLAN from the cable modem.

I run the same setup at home: one VM port group on VLAN A with no access to the cable modem, which is plugged into my physical switch on VLAN B. The only devices communicating on VLAN B are the WAN vNIC on my router VM and the cable modem.
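For anyone wanting to replicate this, a minimal sketch from the ESXi shell. The VLAN IDs, vSwitch name, and port-group names below are placeholders, not the actual values from any poster's setup:

```shell
# Internal port group: management and LAN-facing vNICs attach here.
esxcli network vswitch standard portgroup add --portgroup-name=Internal --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Internal --vlan-id=5

# External port group: only the router VM's WAN vNIC attaches here.
esxcli network vswitch standard portgroup add --portgroup-name=External --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=External --vlan-id=100
```

With the vmkernel port left off the External group, nothing on the host itself is reachable from the modem side.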
 
haha i stepped into a hot tub and found 2006 :)

I would virtualise this whole thing and even add another VM for my HTPC, so I'd have the firewall/router, HTPC, and NAS all virtualised on the box. It'd be awesome, but then I'd have to have some sort of backup somewhere, which I think would defeat the purpose of at least having the NAS virtualised.

Anyone care to post a diagram/flowchart of their setup, especially with pfSense doing the firewalling/routing?
 
What I got from that reading was that you could lock down the VLANs to make them more secure, but I guess that's not as secure as having separate computers connected by a physical switch?

Though with ESX operating as the switch, or even pfSense doing the routing, the MAC address attack should at least be harder, because the pfSense box has more resources than a single switch does.
 
VLANs are not the same as separate networks. While they add security, they are not equivalent.

http://www.cisco.com/en/US/products/hw/switches/ps708/products_white_paper09186a008013159f.shtml

From your link:

The simple observation that can be made at this point is that if a packet's VLAN identification cannot be altered after transmission from its source and is consistently preserved from end to end, then VLAN-based security is no less reliable than physical security.
 
Is there not an attack that could spoof the VLAN ID?
 
Is there not an attack that could spoof the VLAN ID?

How would someone spoof the VLAN I use for my internal network and get that across the internet and through my router? Even if they could do that, how would they get access to my internal, NON-ROUTABLE network? I can't just ping 192.168.1.1 and hit every Linksys router in the world.
 
I don't know; my knowledge of networking is fleeting, but I guess that doesn't make sense.

I guess if they broke the firewall hosted on the ESX server, then broke into the ESX management software, they could do some things.
 
I also feel that relying on two pieces of software for security is less secure than relying on one. A vulnerability in either pfSense or ESX would cause enough issues, versus just relying on pfSense.
 
That's a good point. I guess there are tradeoffs when virtualizing. In any case, I've decided I'm in the wrong major; this stuff is far more fascinating to me than economics... ugh
 
VLANs are not the same as separate networks. While they add security, they are not equivalent.

http://www.cisco.com/en/US/products/hw/switches/ps708/products_white_paper09186a008013159f.shtml

Pretty much answered by others... but there is no way to do a VLAN spoofing attack over the Internet and into my vSphere setup. Therefore, for all intents and purposes here, different VLANs are as good as separate cables and dedicated NICs.

No... I don't have my management vmkernels (or any vmkernels) on the external VLAN. That'd be pretty damn stupid, wouldn't it?

As for worrying about virtualizing a firewall on vSphere... I'm not. I've never seen an attack that would break VM containment. Plenty of production DMZ hosts out in the world run on vSphere. It's done all the time.
 
If you do not pass a NIC through to the firewall and you connect your ESX box to a cable modem, then your ESX host is exposed. ESX does, though, have its own firewall capabilities that you can use to protect it.

No, it's not. There's nothing listening on those ports except virtual machines, so there's no more way to reach the host than if the VMs were physical machines.

ESX doesn't listen on network ports unless you create a management port on that vSwitch.
 
I also feel that relying on two pieces of software for security is less secure than relying on one. A vulnerability in either pfSense or ESX would cause enough issues, versus just relying on pfSense.

There's effectively no way to get out of the VM onto the host if you're reasonably up to date on patches (someone did find one way, but it's extremely difficult and has, IIRC, been patched). Stop thinking about ESX as an OS; it's a container, nothing more.

Unless you tell the host to listen on a port/network, it won't; there's literally nothing listening or responding on that physical network port unless specified, so it's totally unreachable. Hence the VLANs, or separate vSwitches.
 
Sorry, but the host has drivers for the NIC, which means it is handling the network stack and as such could be vulnerable to attack. That no attacks are currently known does not mean they don't exist, or that a later version couldn't introduce one.
 
Sorry, but the host has drivers for the NIC, which means it is handling the network stack and as such could be vulnerable to attack. That no attacks are currently known does not mean they don't exist, or that a later version couldn't introduce one.

If the drivers in the vSphere stack have exploits, we're all kind of f*cked. Not just here, but in every Linux system out there, and probably most Windows systems and BSD. That goes for pfSense and Untangle, too.

It's a complete non-issue. You're relying on an attack against a network stack or virtual switch, both of which have been in use for many years.
 
If the drivers in the vSphere stack have exploits, we're all kind of f*cked. Not just here, but in every Linux system out there, and probably most Windows systems and BSD. That goes for pfSense and Untangle, too.

It's a complete non-issue. You're relying on an attack against a network stack or virtual switch, both of which have been in use for many years.

That's what I was thinking. But I am by no means an expert in any of this.


If/when that attack happens, all network devices are in trouble.
 
If the drivers in the vSphere stack have exploits, we're all kind of f*cked. Not just here, but in every Linux system out there, and probably most Windows systems and BSD. That goes for pfSense and Untangle, too.

It's a complete non-issue. You're relying on an attack against a network stack or virtual switch, both of which have been in use for many years.

There have been known exploits against network stacks, and the network stack is very much OS-dependent. An attack against a Windows machine will not be the same as one against Linux or BSD.

ESX handles its network stack very differently from BSD. Having your firewall depend on two stacks staying problem-free instead of one is not a good idea.

While yes, for most people this is a non-issue, and it certainly is for a home lab, in an enterprise I would never run the system this way.

FYI - I am referring to the network stack used by the OS, not the OSI model.
 
I think you'll find that driver code and network stack code are the same across a lot of operating systems. Windows used to use an almost entirely BSD network stack. vSphere uses a lot of the Linux network stack. Drivers are very much shared across platforms.

But again... at this point you're playing a huge game of hypotheticals. A lot of firewalls and other edge security devices run on Linux or BSD. Again, if there is a flaw like you describe, we're all in a lot of trouble.

And yes...I've seen bugs in network card drivers and stacks. But at this point they are very rare and often only on new stacks. The last one I saw was on an Emulex CNA driver that was pretty new.

This is a non-issue.
 
And really... if you are THAT concerned, just pass through a NIC via VMDirectPath and call it good. But I have no interest in that, and no one else I work with does it either, even on DMZ hosts exposed to the evil Interweb.
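For the curious, a rough sketch of what that looks like from the ESXi shell. The PCI address below is a placeholder, and the `pcipassthru` esxcli namespace only exists on newer ESXi releases (older versions toggle passthrough in the vSphere Client instead); VT-d/AMD-Vi must be enabled in the BIOS and a host reboot is required afterwards:

```shell
# Find the WAN NIC's PCI address (look for your network controllers).
esxcli hardware pci list

# Mark the device for VMDirectPath passthrough, then reboot the host.
# NOTE: this esxcli namespace is only present on recent ESXi builds.
esxcli hardware pci pcipassthru set --device-id=0000:03:00.0 --enable=true
```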
 
I think you'll find that driver code and network stack code are the same across a lot of operating systems. Windows used to use an almost entirely BSD network stack. vSphere uses a lot of the Linux network stack. Drivers are very much shared across platforms.

But again... at this point you're playing a huge game of hypotheticals. A lot of firewalls and other edge security devices run on Linux or BSD. Again, if there is a flaw like you describe, we're all in a lot of trouble.

And yes...I've seen bugs in network card drivers and stacks. But at this point they are very rare and often only on new stacks. The last one I saw was on an Emulex CNA driver that was pretty new.

This is a non-issue.

Yes, the attack surface is small; however, having one exposed surface versus two is usually better.

And in an enterprise environment it is so easy to protect your VMs with an edge device that the question would be: why would you not do it? For a home lab, sure, consolidate everything onto a single device. I don't do it in my home, not for security, but because I want a low-power edge device that I can keep running through extended power outages to keep my phones and tablets online. It's easier to provide long battery backup for a 30 W device than for a 400-500 W device.
 
And really... if you are THAT concerned, just pass through a NIC via VMDirectPath and call it good.

Yep, this is what I suggested in the first reply :). It does have a hefty cost in how you manage your VMs though, so I can see many not messing with it.
 
Yes, the attack surface is small; however, having one exposed surface versus two is usually better.

And in an enterprise environment it is so easy to protect your VMs with an edge device that the question would be: why would you not do it? For a home lab, sure, consolidate everything onto a single device. I don't do it in my home, not for security, but because I want a low-power edge device that I can keep running through extended power outages to keep my phones and tablets online. It's easier to provide long battery backup for a 30 W device than for a 400-500 W device.

Seeing more and more virtualized security services in the enterprise, and there are several good reasons for it. It's all about your design requirements. No one virtualizes the main edge protection for a large enterprise, but you see a lot of security services using things like vCNS (formerly vShield) because it greatly reduces administrative overhead and provides a simpler architecture. I've done a number of PCI- and HIPAA-compliant virtual environments, both inside and outside (DMZ), using virtual firewalling and fencing services.

As for home... eh. My vSphere lab boxes are 40 W at idle, which is where they sit most of the time. Two are normally powered up, with DPM shutting the third one down. So for 80 W I get a lot of resources and full redundancy/HA. Passing through a NIC defeats all that. The biggest power draw in my lab, by far, is the NAS.
 
Just to throw it out there: you should try an all-in-one box like we've been talking about in the Supermicro thread. You can have a gaming rig, file server, router, and everything else in one box; that's what I've been working toward. I think I've even solved the problem of where to put the GPU (we'll find out after I get a part from China). After I finish my storage system I'm going to buy a Ceton TV tuner card, and then I'll look at acquiring a good wireless USB NIC for my "router." To be honest, most of my friends think my all-in-one box is a disaster waiting to happen (since everything is in one thing), and it is in a sense, but it's so nice. It's using 160 W currently. Can you imagine that? That's 2.6 incandescent 60 W light bulbs, and that's without idling the drives down, though the GPU is idling. It's amazing how powerful virtualization really is. With the drives idle I estimate a power consumption of about 105 W. The heat coming off it is nominal and it's near silent. Not silent, mind you, but near silent.
 
Just to throw it out there: you should try an all-in-one box like we've been talking about in the Supermicro thread. You can have a gaming rig, file server, router, and everything else in one box; that's what I've been working toward. I think I've even solved the problem of where to put the GPU (we'll find out after I get a part from China). After I finish my storage system I'm going to buy a Ceton TV tuner card, and then I'll look at acquiring a good wireless USB NIC for my "router." To be honest, most of my friends think my all-in-one box is a disaster waiting to happen (since everything is in one thing), and it is in a sense, but it's so nice. It's using 160 W currently. Can you imagine that? That's 2.6 incandescent 60 W light bulbs, and that's without idling the drives down, though the GPU is idling. It's amazing how powerful virtualization really is. With the drives idle I estimate a power consumption of about 105 W. The heat coming off it is nominal and it's near silent. Not silent, mind you, but near silent.

Sorry, OP, way off on a tangent. But if you want a tuner for a VM I would recommend an HDHomeRun Prime: three-tuner CableCARD support, and the best part, which puts it over the Ceton card, is that it's a network device, i.e. no need to mess with pass-through for it.

My home lab ESX host is fairly beefy and idles around 400 watts according to my meter :(. It's a dual E5 Xeon with 13 SAS drives and redundant PSUs, so that might have something to do with it. It's not just a lab; it also serves my home, and the ability to play with different configurations and OSes without needing extra hardware is a bonus :).
 
That's a nice lab box. I went thin. 3 quad-core X3400-based nodes. No internal disk. Boot vSphere from USB. All VMs stored on a couple Synology NAS boxes.
 
That's a nice lab box. I went thin. 3 quad-core X3400-based nodes. No internal disk. Boot vSphere from USB. All VMs stored on a couple Synology NAS boxes.

Yeah, my boot device is a small SSD. I've had poor reliability with USB sticks, so my stomach can't trust data on them for longer than it takes me to walk down the hall with one :).

My ESXi setup on the SSD is just ESX and OI+napp-it, with OI doing an NFS share for the datastore the other VMs live on. Your basic all-in-one setup. It works very well; ESXi 5.1 fixes the auto-start issue, so it even boots all of my VMs properly. One day I may build a few lighter-weight boxes for ESX and delegate this one as just the SAN.
 
All of this is fascinating, and with passthrough it really is cool to be able to have a gaming box, a server, etc., all in one.
 
There have been known exploits against network stacks, and the network stack is very much OS-dependent. An attack against a Windows machine will not be the same as one against Linux or BSD.

ESX handles its network stack very differently from BSD. Having your firewall depend on two stacks staying problem-free instead of one is not a good idea.

While yes, for most people this is a non-issue, and it certainly is for a home lab, in an enterprise I would never run the system this way.

FYI - I am referring to the network stack used by the OS, not the OSI model.

Still not accurate. There is no OS listening there. This is not Windows, or Linux, or BSD; it's a hypervisor. Unless you tell it to listen, there is nothing to reach on a NIC plugged into the network. No userworld listens on a port unless you tell it to, so there is nothing to access; you can't get anywhere with it any more than plugging a cable into a powered-on system with no OS gets you somewhere. Your edge device is just as secure as if it were a physical machine.

The entire network stack ~does not exist~ on a port unless you've told it to exist there. ESX is completely, 100% incapable of talking directly to a network card except during boot, in which case it's been handed off to a PXE boot sequence or iBFT; it simply doesn't have the code. It talks to a virtual NIC that reaches the physical card through a whole different system (a set of userworlds and specialized queues). If you don't have a virtual NIC, there is nothing there to talk to at all; the worlds simply don't exist. The drawbridge is up. Hell, there isn't even anything at layer 2 listening; all you've got is the gratuitous ARP that went out when you first booted the box, to let the switches know the MAC address was on that port. Unless you're going to find a way in entirely at layer 1 (I'm not aware of any), there's nothing to do.

The whole point of a type-1 hypervisor is that the entire thing is segregated, and that's what the vendors who write them take the most seriously.
 
Oh, one clarification: before someone mentions something like CDP, those too are configured userworlds and, to my understanding, can easily be disabled in various ways.
 
There have been known exploits against network stacks, and the network stack is very much OS-dependent. An attack against a Windows machine will not be the same as one against Linux or BSD.

ESX handles its network stack very differently from BSD. Having your firewall depend on two stacks staying problem-free instead of one is not a good idea.

While yes, for most people this is a non-issue, and it certainly is for a home lab, in an enterprise I would never run the system this way.

FYI - I am referring to the network stack used by the OS, not the OSI model.

Can you list some examples or white papers on these attacks that can affect equipment that only passes data through it? While there are countless attacks against addressable devices, I've never heard of any attack that could be carried out over the Internet against what is essentially acting as a layer-2 device. All the processing a VM host does is check the destination MAC address against the VM MACs to determine which one to send the frame to. For instance, if you use Hyper-V on a full Windows Server, VM traffic doesn't have to traverse the host's firewall, because the data in those packets isn't processed in any way past the layer-2 headers. To attack a VM host hosting a firewall, you'd have to send some sort of strange, malformed packet that could trigger a response in the host (an unknown stack overflow) and that wasn't bounced, rejected, or fixed by the few dozen routers and switches between the target host and the attacker.

Even if you did do something moronic like binding management to the same VLAN and NIC your modem is on, pretty much every modem will restrict users to a single IP address (since IPv4 addresses are so scarce these days). So if your VM host gets the publicly routable IP address, your firewall won't get one and the internet on your LAN will effectively be down; and if your firewall has the routable IP address, then still nobody can access your VM host. You'd have to let your firewall get a real IP, then try to find another unused IP in the DHCP scope of your provider's router for your area and manually assign it to your VM host (and deal with the inevitable IP conflicts when that IP eventually does get handed out). In other words, you'd have to not only ask for it, you'd have to beg for it on your hands and knees.

I've just never heard of any attack against a device caused by traffic merely passing through it (as in "in one NIC, out another NIC, real or virtual"). An attack like that would be a huge deal, and if it existed it's much more likely a layer-2 attack that wouldn't translate across networks.
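To illustrate the layer-2-only point, here's a conceptual sketch (not VMware's actual code; the MAC addresses and port names are made up) of the forwarding decision a virtual switch makes: it looks up the destination MAC and never parses the payload.

```python
# Conceptual sketch of a vSwitch forwarding decision (not VMware code).
# The frame's payload is never inspected; only the L2 header matters.

def forward(frame: dict, mac_table: dict) -> str:
    """Return the port registered for the destination MAC, else flood."""
    return mac_table.get(frame["dst_mac"], "flood")

# Hypothetical table: the router VM's WAN vNIC is the only entry.
table = {"aa:bb:cc:dd:ee:01": "vnic-router-wan"}

print(forward({"dst_mac": "aa:bb:cc:dd:ee:01", "payload": b"\x00" * 64}, table))
# -> vnic-router-wan
print(forward({"dst_mac": "ff:ff:ff:ff:ff:ff", "payload": b""}, table))
# -> flood
```

A malformed payload never reaches host code on this path, which is why the attack surface is limited to the NIC driver and the L2 lookup itself.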
 
You guys think ESX just passively passes data through; it does not. It must examine the packets and handle them in its network stack. How do you think the vSwitch works? Unless you pass the NIC through to the VM guest, ESX must touch the data.

Oh, and the hypervisor is an OS. A very lightweight one, but still an OS.

Edit: Also, to everyone saying it is not listening on a port: that only removes the attack surface of a running service. You can still attack the network stack itself without any service running. As NetJunkie already stated above, this is a very low-risk issue, but it is still a risk. Security is about understanding where your risks are and whether it is worth protecting against them. This particular risk is low, but the effort to protect against it is very low as well.
 
Yes, but that type of attack is very complex and exceedingly rare... and would require a vulnerability in the network stack itself or the NIC driver. It's just as likely to happen in a dedicated edge box.
 