virtual switches

cyr0n_k0r

I have read quite a few articles about virtual networking but I haven't been able to find anything that actually addresses the specific question I have.

My question is specific to Hyper-V but I don't think it matters honestly.

If I have 2 VMs on the same node/host and they are both on the same subnet, and those 2 VMs need to talk to each other, is the hypervisor smart enough to give them a direct connection rather than:

VM1 -> physical nic -> switch -> physical nic -> VM2

If both VMs have virtual 10Gb network cards, can they theoretically communicate with each other at 10Gb speeds, since they don't have to go out the physical 1Gb NIC and come back in?

This question is only about VMs on the same host. I understand that communication between different hosts has to go out the physical NIC.
 
If they are on the same Virtual Network, yes. Will you get 10GbE speeds? Depends on your host.
 
So the virtual switch that all the VMs communicate through maintains its own MAC table?
That way the vSwitch is smart enough to know that traffic can go from one VM to another without having to traverse the physical NIC up to a physical switch... correct?
 
Yes. It works much like the Internal network type, except that an Internal switch has no physical NIC bound to it as an uplink.
 
I can only speak to VMware...but yes, the virtual switch knows the MAC addresses of all VMs on that host and knows to switch them directly without exiting the host. You can exceed 10Gb/s with internal communication, which is why it's common to see some servers using affinity rules to keep VMs on the same host for fast transfers.
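As a rough illustration of the concept (a toy sketch in Python, not how Hyper-V or VMware actually implement their vSwitches), the forwarding decision is just a lookup in a MAC-to-port table that the switch learns as frames arrive:

# Toy model of a learning switch's forwarding decision.
# Illustration only; not the real Hyper-V or VMware implementation.

class VirtualSwitch:
    def __init__(self):
        # MAC address -> port (a VM's vNIC or the physical uplink)
        self.mac_table = {}

    def learn(self, src_mac, ingress_port):
        # Remember which port a source MAC was last seen on.
        self.mac_table[src_mac] = ingress_port

    def forward(self, src_mac, dst_mac, ingress_port):
        self.learn(src_mac, ingress_port)
        if dst_mac in self.mac_table:
            # Known destination: deliver straight to that port.
            # If both VMs live on this host, the frame never
            # touches the physical uplink.
            return self.mac_table[dst_mac]
        # Unknown destination: flood to all other ports,
        # including the uplink to the physical network.
        return "flood"

vswitch = VirtualSwitch()
vswitch.learn("00:15:5d:00:00:01", "vm1-vnic")
vswitch.learn("00:15:5d:00:00:02", "vm2-vnic")
print(vswitch.forward("00:15:5d:00:00:01", "00:15:5d:00:00:02", "vm1-vnic"))
# -> "vm2-vnic": the traffic stays inside the host.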
 
After finally having the time to test this in a lab I have noticed the following behavior.

I am not actually getting anywhere near 10Gb/s because my VMs are hosted on a SAN. It seems that even though the data may be switched in the vSwitch, it still has to traverse the iSCSI links for reads and writes. This seems to indicate that even though you have a virtual 10Gb/s link, your bottleneck is going to be the speed at which your VMs communicate with your SAN storage.
 
This is not exactly true. It is true that if you want to move file A from server A to server B and file A lives on the SAN, the bottleneck is of course the SAN. That's the same as a slow 5400 RPM disk being the bottleneck when trying to saturate a physical 1GbE link.

Now if you were to use something like iPerf, or any other operation where the transfer is memory based, you will not run into that SAN bottleneck.
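If you want something quick to eyeball besides iPerf, here's a rough memory-to-memory throughput test with plain Python sockets. It's only a sketch; the port is a placeholder, and the payload is generated and discarded in RAM, so no disk or SAN I/O is involved.

# Minimal memory-to-memory TCP throughput test (rough sketch).
# Run "python tptest.py server" on VM2, "python tptest.py <vm2-ip>" on VM1.
import socket, sys, time

PORT = 5001                        # placeholder port
CHUNK = b"\x00" * (1024 * 1024)    # 1 MiB buffer sent over and over
DURATION = 10                      # seconds to send

def receiver():
    srv = socket.socket()
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total = 0
    start = time.time()
    while True:
        data = conn.recv(1024 * 1024)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s received")

def sender(host):
    cli = socket.socket()
    cli.connect((host, PORT))
    end = time.time() + DURATION
    while time.time() < end:
        cli.sendall(CHUNK)
    cli.close()

if __name__ == "__main__":
    if sys.argv[1] == "server":
        receiver()
    else:
        sender(sys.argv[1])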
 
As a proof of concept, can you provide some examples of things that would be memory based in the real world? Not just iPerf.
 
From a file read/write perspective, absolutely it'll be limited by your storage. Depending on your other communication, you'll get closer to the 10Gb mark. It's just not going to magically make your storage run faster :D
 
Oracle db cache in an Oracle RAC installation

edit: Probably applies to any other clustered software environment that uses a shared memory model.
 
I'm looking for something easy to stand up in a lab to provide a proof of concept of theoretical 10Gb throughput from real-world applications. I'm just not sure which applications have memory-only throughput that I can set up quickly in a lab.

EDIT: Even running iperf tests I am only able to get about 1.2Gbps between VMs on the same node. Anything I can try? I've tried different TCP window sizes as well as between 5 and 15 parallel threads, but I'm still only pushing between 1.2 and 1.5Gbps.
 
iPerf is the easiest thing to set up to test throughput between VMs.
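If you want to rule out a single-stream limit outside of iPerf, here's a rough sender sketch with several parallel TCP streams and an enlarged send buffer (roughly what iPerf's -P and -w options do). The address is a placeholder, and the receiving end would need to accept multiple connections; the receiver sketch earlier in the thread only accepts one.

# Parallel-stream sender sketch: several TCP streams, larger send buffer.
import socket, threading, time

HOST, PORT = "192.168.1.20", 5001   # placeholder receiver address/port
STREAMS = 8                          # number of parallel streams
DURATION = 10                        # seconds
CHUNK = b"\x00" * (1024 * 1024)

def stream():
    s = socket.socket()
    # Request a 4 MiB send buffer; the OS may still cap or auto-tune it.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    s.connect((HOST, PORT))
    end = time.time() + DURATION
    while time.time() < end:
        s.sendall(CHUNK)
    s.close()

threads = [threading.Thread(target=stream) for _ in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()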

You may be hitting the limits of your host's performance. What is the host machine?
 
Single quad-core HP G5
8GB RAM
OS is on RAID 1 10K RPM disks
VMs are on an EQ SAN
Right now everything is connected with only 1Gb/s links since this is a lab.

Host Hyper-V server and client VMs are all running Windows Server 2012
 