Slow read performance with SAMBA over Gigabit - any ideas?

This is definitely a long-standing issue with Samba. It has nothing to do with the hardware; it's simply an issue of scaling in software. I've chased this issue many times since the P4 days trying to get gigabit to perform at full speed. Anything Samba/SMB based will not scale well. Take an XP or older client and you will hit a wall around 30% utilization, or ~30 MB/s. Take a Vista, Windows 7, Server 2003 R2, or Server 2008 client and server, and you will be able to hit 100% usage. I have yet to see any Linux client peak a gigabit connection with Samba. I've heard a few people say they could, but no one has been able to provide any details; the default configuration simply won't do it.

I have an Athlon 1800+ with a software RAID 0 card, 2 SATA drives, and a Linksys PCI gigabit card running Server 2008. It will hit 50 MB/s in both directions and could go higher if the CPU weren't at 100%. Take a dual-core box with any flavor of Linux and it won't break that 30-35 MB/s barrier.

I can easily confirm the same numbers everyone else is reporting: 85% utilization when Windows 7 is the host, and 35% when Samba is the host. This is with Windows 7 and Samba 3.4.12 on Gentoo.

FWIW, CIFS (Common Internet File System) is the common name for Server Message Block (SMB), which is what Microsoft calls it. OS X 10.7 Lion now has SMBX, Apple's own implementation of SMB2. From what I've heard it should scale much better than their prior Samba-based client did.

Shame that Samba 3.6 doesn't seem to help for people. I was going to attempt to install it myself, but there are some dependency issues to get around before I can upgrade my box. My guess is that SMB2 isn't completely implemented on the server side yet and only supports faster reads from a Windows host.


If you're running Linux boxes, then NFS might be a better protocol for you. I haven't tested it, but it probably scales better than Samba. If you're primarily Windows, use a Windows box for SMB2 and it will run much better.
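For anyone wanting to try NFS instead, here is a minimal setup sketch (the hostname, network range, and paths below are placeholders, not from this thread):

```shell
# Server side: export a directory by adding a line to /etc/exports, e.g.
#   /srv/media  192.168.1.0/24(ro,async,no_subtree_check)
# then reload the export table:
sudo exportfs -ra

# Client side: mount the export (replace "fileserver" with your server):
sudo mount -t nfs fileserver:/srv/media /mnt/media
```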

EDIT: One more thing that is pretty noteworthy. I know this is true of Windows XP, and it's likely true of Samba: the limit isn't per server, it's per data connection. If you have 2 hard drives in one XP box and 2 hard drives in another XP box, and you run one copy from drive A to drive A and one from B to B at the same time, you will see double the bandwidth used. So if your storage subsystem can feed multiple connections, you can run 2 transfers at once and roughly double the speed.
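The trick above is just running two copies concurrently instead of serially. In this sketch, local temp files stand in for two SMB shares backed by separate disks; the point is only the `&`/`wait` structure:

```shell
# Sketch of the per-connection trick: two transfers run in parallel.
# In the real scenario each cp would read from a different SMB mount
# and write to a different local disk, so the streams don't contend.
src1=$(mktemp); src2=$(mktemp)
dst=$(mktemp -d)
head -c 1048576 /dev/zero > "$src1"   # 1 MiB dummy "large files"
head -c 1048576 /dev/zero > "$src2"
cp "$src1" "$dst/a" &                 # transfer over connection A
cp "$src2" "$dst/b" &                 # transfer over connection B
wait                                  # both copies overlap in time
```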
 
It seems you have other problems with your systems.

I have been getting 80-100+ MB/s with Samba on Linux for a long time now, through various kernel versions and Samba versions. This is with Windows 7 clients (gigabit network, obviously). I did not do anything special to achieve those rates.
 
There seem to be quite a few variables here. One thing that seems consistent is win7 getting sub-par read rates with the server being OpenSolaris using the kernel CIFS driver. Hasn't been a big deal for me, since I use the share mainly for backups...
 
I don't have that problem with the Solaris CIFS server. I consistently get over 100 MB/s write and read from a Windows 7 client.
 
I'm noticing a pattern with the 40 MB/s limiting rate. Either it is due to a very old version of Samba (the smbd daemon), or it is due to the file transfer program being used in Windows.

The two culprits so far are TeraCopy and Total Commander. It really sounds like these either do not play well with SMB/CIFS or, as was stated before, there is a 32/64-bit priority problem limiting transfer speeds.
 
So far, here is what I've found helps Samba (on my Linux system; your results may vary):

Turn off CPU frequency scaling, or set the governor to performance. This increases throughput considerably (20% on both reads and writes here).
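A sketch of the governor change via sysfs (these are the standard cpufreq paths; assumes a kernel with cpufreq support and requires root):

```shell
# Show the current governor for CPU 0:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Set every CPU to the "performance" governor:
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance | sudo tee "$gov" > /dev/null
done
```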

Try changing your adapter's offload settings in Linux (ethtool -k eth0). Your mileage may vary greatly here; setting TX offloading to on gave me a 30% bump on writes but a 15% decrease on reads. This is adapter-dependent, and the adapter may also stop working with certain settings.
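For reference, inspecting and toggling offloads looks like this (eth0 is an example interface name; lowercase -k shows settings, uppercase -K changes them):

```shell
# List current offload settings:
ethtool -k eth0

# Turn TX offloading on (root required); re-test reads AND writes
# after each change, since the effect is adapter-dependent:
sudo ethtool -K eth0 tx on

# To revert:
sudo ethtool -K eth0 tx off
```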

Leave smb.conf mostly alone; it selects rather good defaults anyway. The one thing to try is
oplocks = no
which might help.
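In smb.conf that is a one-line change in the [global] section:

```ini
[global]
    # default is "yes"; disabling opportunistic locks can help some workloads
    oplocks = no
```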

Mostly it is neither Samba nor the disk subsystem that is the bottleneck, but the network adapter (on Windows too); onboard chips in particular often don't perform well.
 
All good ideas, yes. In the case I have experienced, NFS does not have the issue, nor does iperf, so it isn't adapter, kernel, or clock related. Nor does it slow down writes. Odd indeed.
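For anyone else trying to isolate this, the raw-TCP check mentioned above looks like this with iperf (192.168.1.10 is a placeholder server address):

```shell
# On the server:
iperf -s

# On the client, run a 30-second TCP throughput test; if this saturates
# the link but SMB doesn't, the problem is above the network layer:
iperf -c 192.168.1.10 -t 30
```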
 
I tried a test last night: running a VM in VMware on the 64-bit machine. The VM was 32-bit Windows 7, and Total Commander was able to get almost full read speed through the Samba server (around 70 MB/s). If I went back to the host OS and ran TC, copy speed hovered at 35-36 MB/s. Kicking off a copy using Explorer, it was hitting 95 MB/s read speed.

I am going to try packaging Total Commander in VMware ThinApp later to see if that creates a jury rig that works around the issue with Total Commander.

Once I get that out of the way, I will go back to tweaking (and reinstall Samba 3.6.0 for SMB2) and post my pre-tweak vs. post-tweak speeds, my smb.conf, and details on the other changes I made, for anyone that wants them.
 
Not a bad idea; the jerry-rig might work around the 32/64-bit issue.
 
I found our issue was an old version of samba. Due to the production nature of the machine, I'm not sure if we will be upgrading soon either.
 
64 bit beta of Total Commander is out.

After testing, I now get the full expected speed from an SMB/CIFS share down to a workstation.

From workstation to SMB/CIFS share I average around 105 megabytes per second; from SMB/CIFS share down to workstation I average 85 megabytes per second. This is with the 64-bit version of TC. Much better than 35.
 
So there is more than one issue here. A number of folks (like me) see the same behavior when NOT using Samba (e.g., with the built-in CIFS server).
 
I've found that running a newer version of smbd (the Samba daemon) and setting the max protocol to SMB2 will fix the issue on most Linux machines.
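That setting goes in the [global] section of smb.conf (SMB2 needs Samba 3.6 or later):

```ini
[global]
    # advertise SMB2 to clients; older Samba defaults to NT1 (SMB1)
    max protocol = SMB2
```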
 
I just tried the new TC x64 and I also got around 85MB/s down for around the first 5 GiB of a 40GiB Blu-ray file. Then it dropped off to 65MB/s, spiked back up for a bit, and seemed to stabilize in the 70MB/s range. It's definitely a huge improvement over the 32 bit version though. Too bad Windows doesn't have an equivalent to /dev/null for tests like this. It would be nice to take my destination HD out of the equation.
 
It does have a NUL device, available from the command line.

Has everyone running these tests disabled their antivirus programs beforehand? I had slow Win7-to-Win7 copy performance over my gigabit network until I added exceptions for the relevant file extensions and my media drives.
 