is this really possible, a virtualization epiphany

Yes, but that type of attack is very complex and exceedingly rare... and it would require a vulnerability in the network stack itself or the NIC driver. It's just as likely to happen in a dedicated edge box.

Yes, I think I stated pretty much that. But having 2x the attack surface doubles your risk. Even though the risk is very low, why double it if you don't have to?
 
Because I'm not convinced it's double the risk. As I've said... how many edge devices are using similar network stacks? Running on Linux or BSD? Everyone using the same Intel/Broadcom NICs and drivers?

Plus, it's not a zero-cost change. Capital costs can be very different, as can operational costs. I'd love to do a perfect design everywhere, but there are always constraints and requirements that shift the architecture of an environment.
 
Because I'm not convinced it's double the risk. As I've said... how many edge devices are using similar network stacks? Running on Linux or BSD? Everyone using the same Intel/Broadcom NICs and drivers?

Plus, it's not a zero-cost change. Capital costs can be very different, as can operational costs. I'd love to do a perfect design everywhere, but there are always constraints and requirements that shift the architecture of an environment.

The drivers and network stack can be very different. Look at Windows: they drastically changed the network stack between XP and Vista, yet I'd bet the NIC drivers did not change that much.

Edit: for reference http://en.wikipedia.org/wiki/Windows_Vista_networking_technologies
 
Sure... but how many Microsofts are out there with the resources to build an entire network stack for a product? There are far bigger security issues to look at in most environments.
 
Sure... but how many Microsofts are out there with the resources to build an entire network stack for a product? There are far bigger security issues to look at in most environments.
See, you need to look at all the issues and weigh them against how hard they are to solve. This one is dead simple and easy to solve with little compromise to your system. So why not?
 
Because virtualized security and network services do things that a separate device/solution can't. That's why. If you're the NSA and absolutely demand an air gap, so be it, but that's not even close to a common requirement. Even PCI and HIPAA environments are fine in mixed-cluster virtual environments.
 
You guys think ESX just passively passes data through; it does not. It must examine the packets and handle them in its network stack. How do you think the vswitch works? Unless you pass the NIC through to the VM guest, ESX must touch the data.

Oh, and the hypervisor is an OS. A very lightweight one, but still an OS.

Edit: Also, to everyone saying it is not listening on a port: that only covers the attack surface of a running service. You can still attack the network stack itself without any service running. As already discussed above with NetJunkie, this is a very low-risk issue, but still a risk. Security is about understanding where your risks are and whether it is worth it to protect against them. This particular risk is low, but the effort to protect against it is very low as well.

I know it does. Everyone else here can explain why. The vswitch works as a layer 2 device, segregated from anything having to do with host management. It acts exactly like a layer 2 device: there's nothing there on layer 3 to talk to, unless you're running a Nexus (which I'm not familiar enough with to know). The one exception is that it adds a VLAN tag to traffic coming through it if you ask it to. Without anything there, it's not looking at or talking to anything.

Again, there's no stack there to talk to, unless you're planning on running an attack against a dumb layer 2 device like the Netgear switch I have on my desk.
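
To make the "dumb layer 2 device" point concrete, here's a rough sketch (hypothetical Python, nothing like actual ESX/vswitch code) of everything such a device conceptually keeps: a MAC-to-port table and a forwarding decision. There's no IP address and no layer 3/4 state anywhere in it, so there's nothing for a remote attacker to open a connection to.

Code:
# Hypothetical illustration only - not real ESX/vswitch code.
mac_table = {}  # MAC address -> port (a VM's virtual NIC or an uplink)

def forward(src_mac, dst_mac, ingress_port, all_ports):
    """Pure layer 2 behavior: learn the source, forward by destination MAC."""
    mac_table[src_mac] = ingress_port
    egress = mac_table.get(dst_mac)
    if egress is None:
        # Unknown destination: flood out every port except the one it came in on.
        return [p for p in all_ports if p != ingress_port]
    return [egress]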
 
The drivers and network stack can be very different. Look at Windows: they drastically changed the network stack between XP and Vista, yet I'd bet the NIC drivers did not change that much.

Edit: for reference http://en.wikipedia.org/wiki/Windows_Vista_networking_technologies

I can tell you right now, ESX uses GPL drivers for almost every bit of the networking stack - you can look at the signing yourself on the modules. But there's still nothing listening there unless you tell ESX to listen - it can't talk to the card itself. You're dealing with a GPLed Linux driver which is talking (or not) to a userworld, based on your choices - the hypervisor itself can't talk directly to the card with any type of TCP data; it has to talk through its own virtual NIC on the layer 2 device.

Even if you whack the driver, all you lose is the networking card itself, which is identical to any edge device using a GPLed driver (most of them afaik). You're not going to get to the hypervisor or the VMs that way.
 
Because virtualized security and network services do things that a separate device/solution can't. That's why. If you're the NSA and absolutely demand an air gap, so be it, but that's not even close to a common requirement. Even PCI and HIPAA environments are fine in mixed-cluster virtual environments.

Hell, at this point we're looking at things like the cache exploit they found for Xen - where a sequence of calculations in ring 0 of the CPU can get you VM info - it's bonkers.

Segregation ftw. We're as likely to see a problem with this as we are to see someone getting data via Van Eck phreaking - it's just not really going to happen.

If it's that important, put it in a Faraday cage and turn it off :p
 
Our edge firewall is virtualized for our production servers. We are using all MS-based products, but it's a fully supported configuration from Microsoft.

Our physical boxes (Hyper-V hosts) have two 10Gb LAN connections, 1 public and 1 private. The firewall array servers are the only ones connected to the public LAN, and they act as a proxy for the other servers. Even the host has the public network adapter disabled. The private LAN is isolated from the internet, so the only way in or out is through the firewall.
 
I know it does. Everyone else here can explain why. The vswitch works as a layer 2 device, segregated from anything having to do with host management. It acts exactly like a layer 2 device: there's nothing there on layer 3 to talk to, unless you're running a Nexus (which I'm not familiar enough with to know). The one exception is that it adds a VLAN tag to traffic coming through it if you ask it to. Without anything there, it's not looking at or talking to anything.

Again, there's no stack there to talk to, unless you're planning on running an attack against a dumb layer 2 device like the Netgear switch I have on my desk.

And this vswitch works in what? Magic fairy dust? There is no layer 2 switch ASIC in my server; not sure about yours. So tell me, how does traffic end up in this magic vswitch? My NIC does not magically act like a layer 2 switch, so something must be doing that: handling the ARP cache, VLAN tags, etc.
 
You guys think ESX just passively passes data through; it does not. It must examine the packets and handle them in its network stack. How do you think the vswitch works? Unless you pass the NIC through to the VM guest, ESX must touch the data.

Oh, and the hypervisor is an OS. A very lightweight one, but still an OS.

Edit: Also, to everyone saying it is not listening on a port: that only covers the attack surface of a running service. You can still attack the network stack itself without any service running. As already discussed above with NetJunkie, this is a very low-risk issue, but still a risk. Security is about understanding where your risks are and whether it is worth it to protect against them. This particular risk is low, but the effort to protect against it is very low as well.

I think the HV loads the packet into memory, then examines octets 9-14 of the packet to see what the destination MAC is, and possibly octets 21-24 for VLAN info if 802.1Q is enabled, then compares all that against its internal MAC table to figure out which VM to send it to. Heck, if you have a system that supports SR-IOV, the NIC can DMA the data directly into the VM's memory space, bypassing the hypervisor entirely, without having to waste an entire NIC for passthrough to a single VM.

I'm quite confident that the non-SR-IOV switches in all current hypervisors are careful to look only at the 10 bytes of data pertinent to where the packet should be forwarded and to completely ignore the payload, precisely to prevent the kind of problem you're speculating about. The only way in would be via malformed packets, and those would likely be very difficult to transmit; if they could exist at all, they would likely require being on the same layer 2 network as the target, and probably specific hardware vulnerabilities as well.

So to the OP: yes, a virtualized router is just as secure as one running on bare metal. But if you're super paranoid about the ultimate attack, the holy grail of network hacks, compromising unaddressable computers/devices, which would likely require the attacker to be in your house anyway and for which no reliable security researcher has even written a whitepaper, then get a system that supports SR-IOV.
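
To illustrate how little of the frame that lookup has to touch, here's a hedged sketch (hypothetical Python, not how any shipping hypervisor is actually written). It reads only the destination MAC and, if present, the 802.1Q tag - the same octets described above, counted here from the start of the frame as software sees it, since the preamble/SFD never reach the hypervisor - and it never looks at the payload.

Code:
import struct

# Illustrative only - not actual hypervisor code.
def classify(frame: bytes):
    """Return (dst_mac, vlan_id) using nothing but the Ethernet header."""
    dst_mac = frame[0:6]                         # destination MAC: first 6 octets in memory
    ethertype = struct.unpack("!H", frame[12:14])[0]
    vlan_id = None
    if ethertype == 0x8100:                      # 802.1Q tag present
        tci = struct.unpack("!H", frame[14:16])[0]
        vlan_id = tci & 0x0FFF                   # low 12 bits = VLAN ID
    # The payload is never inspected; the forwarding key is just (dst_mac, vlan_id).
    return dst_mac, vlan_id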

And this vswitch works in what? Magic fairy dust? There is no layer 2 switch ASIC in my server; not sure about yours. So tell me, how does traffic end up in this magic vswitch? My NIC does not magically act like a layer 2 switch, so something must be doing that: handling the ARP cache, VLAN tags, etc.

And yeah, with SR-IOV the NIC does magically act like a layer 2 switch, handling VLAN tags. (ARPing would still be handled by the individual VMs, just like it would with non-SR-IOV, as the HV doesn't do ARP caching on behalf of the VMs either.)
 
then compares all that against its internal ARP table (of VM MACs, actually I don't think ARP cache is the correct term for this)

Call it a MAC (address) table. That's what the switch guys do and that's what it is.
 
And this vswitch works in what? Magic fairy dust? There is no layer 2 switch ASIC in my server; not sure about yours. So tell me, how does traffic end up in this magic vswitch? My NIC does not magically act like a layer 2 switch, so something must be doing that: handling the ARP cache, VLAN tags, etc.

Correct - it's a userworld, which is incapable of passing information to the hypervisor unless you explicitly tell it to.

That's the point - there's no direct connection there, so unless you've TOLD the hypervisor "create a port on this userworld and listen/send traffic", it cannot.
 
I think the HV loads the packet into memory, then examines octets 9-14 of the packet to see what the destination MAC is, and possibly octets 21-24 for VLAN info if 802.1Q is enabled, then compares all that against its internal ARP table (of VM MACs, actually I don't think ARP cache is the correct term for this) to figure out which VM to send it to. Heck, if you have a system that supports SR-IOV, the NIC can DMA the data directly into the VM's memory space, bypassing the hypervisor entirely, without having to waste an entire NIC for passthrough to a single VM.

I'm quite confident that the non-SR-IOV switches in all current hypervisors are careful to look only at the 10 bytes of data pertinent to where the packet should be forwarded and to completely ignore the payload, precisely to prevent the kind of problem you're speculating about. The only way in would be via malformed packets, and those would likely be very difficult to transmit; if they could exist at all, they would likely require being on the same layer 2 network as the target, and probably specific hardware vulnerabilities as well.

So to the OP: yes, a virtualized router is just as secure as one running on bare metal. But if you're super paranoid about the ultimate attack, the holy grail of network hacks, compromising unaddressable computers/devices, which would likely require the attacker to be in your house anyway and for which no reliable security researcher has even written a whitepaper, then get a system that supports SR-IOV.



And yeah, with SR-IOV the NIC does magically act like a layer 2 switch, handling VLAN tags. (ARPing would still be handled by the individual VMs, just like it would with non-SR-IOV, as the HV doesn't do ARP caching on behalf of the VMs either.)

Swap HV for a specialized world and you've nailed what ESX, at least, does. You can't reach the kernel unless you let something reach the kernel, and even then you'd be isolated to the BusyBox and management worlds, which, admittedly, would make life "annoying". So you don't let the HV listen on that world.

Very well said.
 
For the record, when I said HV I was referring to a generic hypervisor and not Hyper-V, and yeah, MAC table sounds right.
 
Looking up SR-IOV boards now, this thread is awesome.
 
SR-IOV is fairly new and a bit of a PITA: the CPU, motherboard, BIOS, NIC, and VM host all have to support it and work with each other. I haven't been able to get it working properly on my host (seems to be a BIOS issue), but then again I didn't try very hard to get it working from the outset.
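
For anyone curious whether their NIC even exposes SR-IOV before fighting with the BIOS, here's a rough sketch of a check on a Linux host (Python reading standard sysfs attributes; the interface name "eth0" is just an example for illustration):

Code:
from pathlib import Path

# Rough sketch for a Linux host; "eth0" is an example interface name.
dev = Path("/sys/class/net/eth0/device")
total = dev / "sriov_totalvfs"   # how many virtual functions the NIC can expose
numvfs = dev / "sriov_numvfs"    # how many are currently enabled

if total.exists():
    print("VFs supported:", total.read_text().strip())
    print("VFs enabled:  ", numvfs.read_text().strip())
    # Actually enabling VFs also needs root plus BIOS/IOMMU (VT-d / AMD-Vi)
    # support, by writing a count into sriov_numvfs.
else:
    print("No SR-IOV capability exposed for this device")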
 
Call it a MAC (address) table. That's what the switch guys do and that's what it is.

An ARP cache and a CAM/MAC table are completely different. Your comment seems to presume they are interchangeable. They work at different layers.
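
A tiny illustration of the difference, with made-up values: an ARP cache lives on a host or router and maps layer 3 addresses to layer 2 addresses, while a MAC/CAM table lives on a switch and maps layer 2 addresses to ports.

Code:
# ARP cache (host/router): layer 3 -> layer 2
arp_cache = {
    "192.168.1.1": "00:11:22:33:44:55",        # IP address -> MAC address
}

# MAC / CAM table (switch): layer 2 -> port
mac_table = {
    "00:11:22:33:44:55": "GigabitEthernet0/1", # MAC address -> egress port
}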
 