Increase SMB performance

Wixner

Good Morning / Evening / Whatever!

As my new storage hardware is about to arrive, I've decided to take a couple of hours to benchmark my current network, and I'm a bit disappointed with the results.

This is a CrystalDiskMark (CDM) run on the Windows Server 2008 R2 software RAID 5 array
HDD.png



The following benchmarks have been performed virtually between the host and the guest through
the Windows Hyper-V 10 Gbit virtual interface, to eliminate external hardware limitations.


This is a CDM over SMB
Samba.png


This is a CDM over SMB with Jumbo Frames
Samba_JUMBOFRAMES.png


This is a CDM over SMB with Jumbo Frames and QOS disabled
Samba_JUMBOFRAMES_NO_QOS.png
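
For anyone repeating the jumbo frame test: it's worth confirming that the larger MTU actually survives the whole path before trusting the numbers. A rough sketch of that check in Python, wrapping Windows' ping with the don't-fragment flag; the address and the 9000-byte MTU are assumptions:

Code:
# Rough check that jumbo frames survive the whole path: ping with the
# "don't fragment" flag and a payload just under an assumed 9000-byte MTU.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header) bytes.
import subprocess

HOST = "192.168.1.10"   # placeholder: address of the SMB server

result = subprocess.run(
    ["ping", "-n", "4", "-f", "-l", "8972", HOST],
    capture_output=True, text=True,
)
print(result.stdout)
# On Windows, a "needs to be fragmented" reply means jumbo frames are
# not active end to end.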


As you can see from the iperf results below, my network should theoretically be able to reach higher speeds, even with the standard TCP window size.
iperf.png
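
For reference, here is a crude single-stream throughput test similar in spirit to iperf, with the socket buffer playing the role of iperf's -w window setting. The port, buffer size and duration are placeholders to experiment with:

Code:
# Crude single-stream TCP throughput test (very loosely what iperf does).
# Run "server" on one machine and "client <server-ip>" on the other.
import socket, sys, time

PORT = 5001              # same default port iperf uses
WINDOW = 64 * 1024       # socket buffer to experiment with (bytes)
CHUNK = b"\0" * 65536
DURATION = 10            # seconds to transmit

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW)
        s.bind(("", PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            total, start = 0, time.time()
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                total += len(data)
        secs = time.time() - start
        print("Received %.0f MB in %.1f s = %.1f MB/s"
              % (total / 1e6, secs, total / 1e6 / secs))

def client(host):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW)
        s.connect((host, PORT))
        end = time.time() + DURATION
        while time.time() < end:
            s.sendall(CHUNK)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])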


Is it just that SMB is such a bad protocol that it can't manage more than 50 MB/s, or is there something I've missed?
 
Ah well, the server is an Intel Core i7-930 with 24 GB of DDR3 RAM, equipped with an Intel gigabit NIC.

The chipset is an X58, running software RAID 5 on four 7200 RPM Samsung Spinpoints, though that information is mostly irrelevant since you can see the array's performance in the first picture.
 
When you are doing the test, watch your CPU and RAM usage at both ends. Samba tends to suck up a lot during large transfers, and you might be maxing yourself out.
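
If you want hard numbers rather than eyeballing Task Manager, a rough sketch like this (using the third-party psutil module; the one-second sample interval is arbitrary) will log usage while the transfer runs:

Code:
# Rough sketch: log CPU and RAM usage once per second during a transfer.
# Requires the third-party psutil package (pip install psutil).
import psutil

try:
    while True:
        cpu = psutil.cpu_percent(interval=1)   # % over the last second
        mem = psutil.virtual_memory()
        print("CPU %5.1f%%  RAM %5.1f%% (%.0f MB used)"
              % (cpu, mem.percent, mem.used / 1024 / 1024))
except KeyboardInterrupt:
    pass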
 
No oddities regarding the load: the host is using around 12-14% CPU and the guest around 32-36% during the test. Memory usage is below 60% on the host and 818 MB out of 4096 MB on the guest.
 
While Samba/CIFS isn't always amazing, you should be able to get MUCH better performance than that. I'm able to hit the theoretical limits of gigabit Ethernet:

Samba Server: 2008 R2 Standard Edition
This is CDM on the local storage (of the shared folder) on the server (2x 1.5 TB RAID 0)
perflocally.png


This is CDM over SMB on my workstation (over a 1 Gbit wired connection)
perfoversamba.png
 
5311046798_d9bc0a6c97_o.jpg


This is with Solaris 11 serving over CIFS. The random speeds are due to caching, but you should be able to get over 70 MB/s sequential.

Just re-read that this is going from virtual guest - virtual NIC - host share. Most likely it is either a virtualization oddity or something with software RAID.

What new hardware are you getting that might have an impact on this?
 
My new hardware is an Intel RS2BL080 and six 2 TB Seagate Barracuda Greens (the 5900 RPM SATA/600 version).

As soon as I get an opportunity, I'll run CDM the "real way", between the host and a physical computer, to rule out any virtualization limitations.
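
In the meantime, a crude sanity check alongside CDM is just timing a big sequential write and read against the share from a script; the UNC path and file size below are placeholders:

Code:
# Quick-and-dirty sequential write/read timing against an SMB share,
# as a sanity check alongside CrystalDiskMark.
import os, time

SHARE_FILE = r"\\server\share\smbtest.bin"   # placeholder UNC path
SIZE_MB = 1024                               # 1 GB test file
CHUNK = os.urandom(1024 * 1024)              # 1 MB of incompressible data

start = time.time()
with open(SHARE_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
print("Write: %.1f MB/s" % (SIZE_MB / (time.time() - start)))

# Note: the read may be served partly from the server's cache,
# so treat this number as an upper bound.
start = time.time()
with open(SHARE_FILE, "rb") as f:
    while f.read(1024 * 1024):
        pass
print("Read:  %.1f MB/s" % (SIZE_MB / (time.time() - start)))

os.remove(SHARE_FILE)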
 
That would be best. I benchmarked a Win7 guest stored on an iSCSI volume over gigabit, and it averaged 500 MB/s, which is impossible for gigabit. Virtualization and caching can skew the numbers considerably.
 
I just ran the CDM over my physical hardware and received these results:
Samba_PHYSICAL.png


The physical iperf:
iperf_PHYSICAL.png


The iperf performance seems low, but if I change the TCP window size to 16K the throughput effectively doubles, so that part is a TCP protocol limitation. However, it does not explain my low overall SMB performance...
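
That doubling lines up with the usual rule of thumb that a single TCP stream tops out at roughly window size divided by round-trip time. A back-of-the-envelope sketch, where the 0.5 ms RTT is just an assumed LAN latency:

Code:
# Back-of-the-envelope: max throughput of one TCP stream ~= window / RTT.
# The RTT is an assumption for a typical gigabit LAN; adjust to taste.
rtt = 0.0005                      # seconds (0.5 ms assumed round trip)
for window_kb in (8, 16, 64, 256):
    window_bytes = window_kb * 1024
    mb_per_s = window_bytes / rtt / 1e6
    print("%3d KB window -> ~%5.1f MB/s ceiling" % (window_kb, mb_per_s))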
 
Software RAID 5 on a Windows host is going to be slow. Software RAID 5 on Linux is a whole different beast.

For comparison, my PCI-X 3ware 9500-S 8-port controller with 6 drives in RAID 5 gets 280 MB/s reads and 96 MB/s writes.

I'm hoping to upgrade later this year to a 3ware 9650SE-8 PCIe controller. That change alone should get me to 300 MB/s reads and 260 MB/s writes.
 

Yeah, I know that software RAID on a Windows machine is anything but effective, but if you take a look at the first picture I posted you can clearly see that the RAID array isn't the bottleneck, with its 188 MB/s reads and 70 MB/s writes: I should be able to squeeze out higher numbers over the network.

I was hoping to fix the "issue", whatever it is, before I receive my Intel RAID controller and my new hard drives.
 
Is the Intel NIC PCI or PCIe? Is offloading turned on (under the adapter's performance settings)? Sometimes the CPU is faster than the offloading ability of the Ethernet card; in your case it should be.
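
A quick way to see the global TCP offload-related settings (chimney offload, RSS, autotuning and so on) on 2008 R2 is "netsh int tcp show global"; a tiny sketch that just dumps the output:

Code:
# Dump the global TCP settings that Server 2008 R2 exposes through netsh.
import subprocess

out = subprocess.run(
    ["netsh", "int", "tcp", "show", "global"],
    capture_output=True, text=True,
).stdout
print(out)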
 