Slow read performance with SAMBA over Gigabit - any ideas?

Please help.. pulling my hair out here.

I have a server that I am using for my NAS. It is connected to my local switch by a 4-foot Cat 6 cable, through an Intel Gigabit NIC on the PCI Express bus. It is running CentOS 6.0; I have tried the newest stable SAMBA that comes with CentOS 6.0, and even the new 3.6.0 version with SMB2 enabled.

The NAS is configured with JBOD, no raid going on.

The workstation I am testing from connects to the same switch with an Intel gigabit adapter (on the PCI Express bus) by a 6-foot Cat 6 cable, so there is about 12 feet of cable between the testing workstation and the server, plus a gigabit switch that supports jumbo frames.

Locally on the NAS I can copy 16+ gig files from one drive to another at around 95 Megabytes per second for the entire transfer.

From the workstation I can copy to ANY drive in the NAS over the network at a sustained 100 Megabytes per second; it never drops below 98 Megabytes per second and goes as high as 112.

Reading from the NAS to the workstation, the MAX I can hit is 40 Megabytes per second. The workstation is a Windows 7 64-bit machine. Not even half the performance of writes!

I installed VSFTPD on the NAS and can upload and download from the box (tested with a 20 gig Ghost image) at 100 Megabytes per second BOTH ways for the entire transfer, from any drive on the NAS.

I have tried messing with jumbo frames and window sizes; nothing seems to increase the read performance from the NAS. You might say 40 Megabytes per second is fast, but losing more than half the throughput is significant, especially when copying DVDs and other large media.
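For reference, the window-size tuning I have been playing with is the usual sysctl.conf additions on the Linux side, plus jumbo frames on the NIC. The values below are only illustrative examples, not a recommendation and not necessarily what I ended up with:

# /etc/sysctl.conf additions (apply with sysctl -p)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# jumbo frames on the server NIC (switch and client MTU must match)
ifconfig eth0 mtu 9000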


Here are some of my tests to show performance:

192.168.0.4 is the NAS

C:\Downloads>iperf -c 192.168.0.4
------------------------------------------------------------
Client connecting to 192.168.0.4, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 127.0.0.1 port 49883 connected with 192.168.0.4 port 5001
[ ID] Interval Transfer Bandwidth
[148] 0.0-10.0 sec 1.08 GBytes 930 Mbits/sec

C:\Downloads>iperf -c 192.168.0.4 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.4, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[176] local 192.168.0.182 port 49905 connected with 192.168.0.4 port 5001
[188] local 192.168.0.182 port 5001 connected with 192.168.0.4 port 54135
[ ID] Interval Transfer Bandwidth
[188] 0.0-10.0 sec 903 MBytes 757 Mbits/sec
[176] 0.0-10.0 sec 769 MBytes 644 Mbits/sec

ALL of my drives in the NAS give results like the ones below (they are all the same type of SATA II drive):

[root@misc01 src]# hdparm -Tt /dev/sdc

/dev/sdc:
Timing cached reads: 2416 MB in 2.00 seconds = 1208.19 MB/sec
Timing buffered disk reads: 384 MB in 3.00 seconds = 127.87 MB/sec

Does anyone have a nicely tuned Linux SAN that manages to sustain 100 MByte per second transfers both ways, who can help me out here with some ideas of where to go with this?

I have not run any packet captures yet; that will be my next step, to see if that tells me anything.
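When I do, it will probably be something along these lines on the server (interface, capture file and client IP are just examples), then load the capture into Wireshark to look at window sizes and retransmits:

tcpdump -i eth0 -s 0 -w smbread.pcap host 192.168.0.182 and port 445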
 
Tried some other things: re-compiled SAMBA with AIO and set the proper AIO directives in my smb.conf, but still no difference on reads. My kernel also has AIO built in. There is not much documentation out there on Linux with AIO and SAMBA, so I am not entirely sure I did it right; the only people I could find using AIO with SAMBA were FreeBSD users.
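For reference, the AIO directives I mean are the smb.conf options along these lines in the [global] section; the sizes here are just example values, not necessarily the ones I used:

aio read size = 16384
aio write size = 16384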

So I reverted back to SAMBA without AIO compiled.

After messing with all that I can on the Windows 7 side, I now have my writes over the network up to 110+ Megabytes per second, which is great. The reads still stay between 38 and 42 Megabytes per second.

I tried a different NIC on the Windows 7 box and a different NIC on the server; I am 100% certain this is software at this point.

Does anyone have SAMBA set up in a Linux environment that is getting sustained read performance of over 80 Megabytes per second from their server with a Windows 7 or Vista client?
 
Have you tried NFS? It is far more efficient than Samba for network file transfers.

Also, this thread sounds really similar to another one with the same problem; let me see if I can dig it up.
 
No I had not considered NFS as an option... yet.

I have not even tested NFS for comparison. I would expect to see performance equal to or better than FTP, which I have already confirmed is fast both ways.

Unfortunately, convenience is the reason I am trying to push through and get SMB/CIFS working well both ways. If I used NFS I would need NFS support on any device connected to my network in order to use the SAN.

It's almost a guarantee these days that anything that needs network file system access will support SMB/CIFS out of the box.

I guess I could run both concurrently, use NFS from the computers connected to the network, and get all the speed without the problems and overhead of SMB/CIFS. Then any oddball devices that would not really benefit from the extra speed can use SAMBA.
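If I do go that route, I am assuming the server side is just a one-line export and the Windows side can use the built-in Client for NFS (which I gather only ships with the Ultimate/Enterprise editions of Windows 7). Share path, subnet and drive letter below are made up:

# /etc/exports on the NAS
/data 192.168.0.0/24(rw,async,no_subtree_check)

# on the Windows 7 box, after enabling Client for NFS
mount -o anon \\192.168.0.4\data Z: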

Any recommendations for an NFS client for Windows 7 that is problem free?

Thanks!
 
I've seen the same behavior. No one seemed to have any ideas, but I use CIFS from Win7 mainly for writes (backups). Even so, I know you're not happy about half the speed, but it's still quite respectable, no?
 
SMB2 = CIFS?

Is Samba the best protocol to use?

I still think of CIFS as SMB. I honestly don't know the difference, just that SMB2 is what Vista and Windows 7 use; it is a newer version of SMB with fewer commands and is more efficient. And SAMBA 3.6.0 supports SMB2, short of one feature at the moment, I think.

I am not sure if SMB2 is basically CIFS2, or if it is still just considered CIFS.

And as far as SMB or SMB2 being the best protocol to use, I don't know. For convenience and compatibility, I think yes.. for performance, I am starting to think not..
 
I've seen the same behavior. No one seemed to have any ideas, but I use CIFS from Win7 mainly for writes (backups). Even so, I know you're not happy about half the speed, but it's still quite respectable, no?

Yes, I would agree it is respectable, but when copying 60 gigabytes back down to a workstation, taking double the time makes it worth seeing if there is something I can do to get it performing at its maximum capability.

I was having a conversation yesterday with another person I found through Google who has the same issue, and it seems to be a common problem with Vista and Windows 7, possibly due to changes they made in the TCP/IP stack.
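One thing on the Windows side I still want to rule out is the receive window auto-tuning that was new in Vista/7. Checking and forcing it is just the following, though I have not actually tested whether it matters here:

netsh interface tcp show global
netsh interface tcp set global autotuninglevel=normal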

I have not confirmed this yet, but he said that if I start 2 read copies from the server on my Windows 7 box (assuming the reads are from separate volumes on the SAMBA server), I should see a combined rate closer to my MAX write speed over the network. That would indicate that Windows is throttling the speed of each individual SMB connection, and he said he has never found a way around it.

So it is starting to sound like NFS might be my ONLY option at this point to get maximum speed in both directions from my NAS, since Windows 7 and Vista clients can't manage it over SMB/SMB2.

If that is the case, I wish there was a way to just fool Windows 7 into thinking my 1Gig adapter is a 10Gig; maybe it would let the read rate of a single SMB/SMB2 connection go higher, if there is throttling going on..
 
I have a similar issue at work, where we copy files from a Win7 64 box to a Red Hat Samba system. While sending files from the Win7 box, we get around 40 MB/s. If I go the opposite direction, I get 90+. If I go from a Win7 box to another Win7 box, I get 90+ again. No idea how to fix it, although I'm not a Linux guru, to be honest.
 
I found a site that mentioned win7 throttling when doing multimedia apps, but changing that had no real effect. Still looking...
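For anyone else who wants to try it, the setting usually pointed to is the NetworkThrottlingIndex DWORD under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile; setting it to 0xFFFFFFFF is supposed to disable the throttling, though as I said it made no real difference for me.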
 
I have a similar issue at work, where we copy files from a Win7 64 box to a Red Hat Samba system. While sending files from the Win7 box, we get around 40 MB/s. If I go the opposite direction, I get 90+. If I go from a Win7 box to another Win7 box, I get 90+ again. No idea how to fix it, although I'm not a Linux guru, to be honest.

Interesting.. so Win7 to Win7 is full speed both ways. I would assume Vista also.

Before I have to resort to moving to NFS I will try XP later today or tomorrow at home and see if it is different. If so, maybe this is a SAMBA bug of some sort.
 
It isn't (just) samba. I read from a CIFS share on my OpenIndiana SAN and see this. Reads about 1/2 of writes.
 
Hmmm, doesn't seem (for me at least) to be just Win7. I tried with a Win XP VM on the same all-in-one. I get 61 MB/sec reads, 238 MB/sec writes. Note that the effective network speed is more like 3 Gb/sec, since the OI SAN and the XP box are both VMs and the traffic never leaves the host. It's possible that the XP VM gets 61 MB/sec while my Win7 box only gets 42 MB/sec because the latter is on a real gigabit wired link and the former is virtualized?
 
FWIW, I see similar behavior (100+ when writing from Win7 to the server, and around 40-60 on reads) on my network. My server is running Solaris 11 Express using the built-in CIFS stack, so SAMBA isn't even involved. It could be something on the Windows side, as another poster mentioned.
 
FWIW, I see similar behavior (100+ when writing from Win7 to the server, and around 40-60 on reads) on my network. My server is running Solaris 11 Express using the built-in CIFS stack, so SAMBA isn't even involved. It could be something on the Windows side, as another poster mentioned.

What SMB/CIFS server does Solaris use? Is it not SAMBA?

I have never used Solaris, but looking around I see people using Solaris 11 referring to smb.conf..
 
It isn't (just) samba. I read from a CIFS share on my OpenIndiana SAN and see this. Reads about 1/2 of writes.

What SMB/CIFS server does the OpenIndiana SAN use? When I googled OpenIndiana, it looks like it is based on OpenSolaris, which appears to use Samba. Is that not the case? I also see references to editing smb.conf..
 
Solaris 11 and the OpenSolaris variants like OI have their own native SMB/CIFS server built into the kernel. I think you can still install and use SAMBA if you want but the kernel-level server is supposed to perform much better.
 
If that is the case, I wish there was a way to just fool Windows 7 into thinking my 1Gig adapter is a 10Gig; maybe it would let the read rate of a single SMB/SMB2 connection go higher, if there is throttling going on..

10Gbe won't help - whether it is real 10gig or you fake it out...

I've chased this same problem on my network with no solution. My network is 10Gbe. Speeds over 10Gbe and 1Gbe are about the same for Samba/CIFS/SMB/whatever. Can't get better than 60MB/s or so from SE11-based server. I have no idea what the problem is. I can get near full-speed Win7-Win7 or Win7-Server2008 (which is pretty much just another flavor of Win7).

NFS won't help either - at least not all by itself. MS's NFS client is just crap and can't get any kind of speed either.

The only thing I've found that works is NFS using the OpenText NFS Solo client on the Windows 7 side. With that I can get 600MB/s over a 10Gbe link. Definitely disk limited, as the server-side ZFS shows just over 600MB/s reads natively, so I'm getting almost full disk read speed over the network. Only problem is that NFS Solo is $245/machine (ouch!).
 
10Gbe won't help - whether it is real 10gig or you fake it out...

I've chased this same problem on my network with no solution. My network is 10Gbe. Speeds over 10Gbe and 1Gbe are about the same for Samba/CIFS/SMB/whatever. Can't get better than 60MB/s or so from SE11-based server. I have no idea what the problem is. I can get near full-speed Win7-Win7 or Win7-Server2008 (which is pretty much just another flavor of Win7).

NFS won't help either - at least not all by itself. MS's NFS client is just crap and can't get any kind of speed either.

The only thing I've found that works is NFS using the OpenText NFS Solo client on the Windows 7 side. With that I can get 600MB/s over a 10Gbe link. Definitely disk limited, as the server-side ZFS shows just over 600MB/s reads natively, so I'm getting almost full disk read speed over the network. Only problem is that NFS Solo is $245/machine (ouch!).

That sucks.. so others have hit this wall also with SMB/CIFS, and it has been confirmed to occur with different SMB/CIFS servers (other than Windows 7/Vista/2008). I am surprised this is not talked about much more when I google this. I can find people talking about it, but usually with no explanations from anyone..

I thought only getting 40 out of a possible 100 MB per sec was bad; 60 out of a possible 600 MB per sec is really terrible..

Also really sucks that a decent NFS client costs what it does. Guess I will give up soon..

I did verify that I can get a higher read rate from the server by doing 2 concurrent transfers. Not by much, but it totaled around 60 Megabytes per second combined, vs 40 for a single transfer.

One other odd thing I found: if I boot my Windows 7 workstation, copy a 2 gig file from the server to a local drive (it chugs along between 36-40 Megabytes per second), abort the transfer at around 800 megs, reboot the workstation, and then transfer the same exact 2 gig file from the SAMBA server, it runs at 100+ Megabytes per second right up to the point where I aborted, then slows down to 40 Megabytes per second. Windows was not caching the file (since I rebooted). That makes NO sense to me. SAMBA can pull the file out of the Linux server's in-memory cache and deliver it at network speed, but cannot keep up when it has to pull from disk.. yet FTP (and I assume NFS) can??
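A way I could probably confirm the cache theory on the Linux side, assuming I have this right, would be to prime or flush the page cache between SMB reads and compare, something like (file path is just an example):

# prime the cache for one file
dd if=/data/bigfile.iso of=/dev/null bs=1M

# or flush the page cache completely
sync; echo 3 > /proc/sys/vm/drop_caches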
 
That sucks.. so others have hit this wall also with SMB/CIFS, and it has been confirmed to occur with different SMB/CIFS servers (other than Windows 7/Vista/2008).

It is not a problem for the linux samba servers I have used. I can get 80-100MB/s reads and writes from Win7 clients to linux samba servers.
 
It is not a problem for the linux samba servers I have used. I can get 80-100MB/s reads and writes from Win7 clients to linux samba servers.

Yeah, same here. This is surprising since the OP is using the newest version of smbd 3.6.0.
It shouldn't be capping at 40MB/s with the newer versions.

Can you find the thread where we talked about this before? Maybe something in it can help the OP find a solution.
 
Since you are using Intel NICs, we can rule those out. Are the workstations using Intel NICs as well?

That aside, try different distros; it's easy to throw those up. The differences between SMB versions are meaningful, but so is the kernel version you are using. Try Debian, Ubuntu, Red Hat, Fedora, SUSE, etc. Most if not all of the time there are distinct differences between not only SMB versions but kernel versions as well, so it's worth trying a few distros to see if they are affected too.

BTW FTP and NFS are faster and shouldn't be affected. I'm still thinking it's something in SMB2 that's giving you headaches.
 
It is not a problem for the linux samba servers I have used. I can get 80-100MB/s reads and writes from Win7 clients to linux samba servers.

Is this from memory, or do you have a Linux Samba server on your network in front of you right now from which you are sustaining those rates (not just for a minute or two) on a Windows 7 client?

If so, do you mind posting your smb.conf, kernel rev, and other details like your sysctl.conf if you have any extra customizations? I have tried practically everything. I even tried different 2.6.x kernel revs to see if that would make any difference.

One thing I left out of all the info I have posted: I set up another test box with a different motherboard and Ethernet controller (still on the PCI Express bus, though; a Realtek RTL8111DL vs the Intel 82566DM & 82573L that I have tested with so far), with the same exact results: almost disk-speed writes over Gigabit with SMB, about 60% of that for reads, but FULL speed on reads and writes with FTP.. just like my NAS.

Thanks
 
Since you are using Intel NICs, we can rule those out. Are the workstations using Intel NICs as well?

That aside, try different distros; it's easy to throw those up. The differences between SMB versions are meaningful, but so is the kernel version you are using. Try Debian, Ubuntu, Red Hat, Fedora, SUSE, etc. Most if not all of the time there are distinct differences between not only SMB versions but kernel versions as well, so it's worth trying a few distros to see if they are affected too.

BTW FTP and NFS are faster and shouldn't be affected. I'm still thinking it's something in SMB2 that's giving you headaches.


Yes, my test box (the Windows 7 build) has an Intel NIC that IS on the PCI Express bus. I tried other NICs without any change as well.

On the server I have tested two different types of NICs (Intel 82566DM & 82573L, both on the PCI Express bus).

I have tried different kernels but not a different distro. I am hesitant to do that with others reporting this same type of issue on Solaris. At this point, though, I might try that next to see if I notice any difference at all.
 
With the same exact results: almost disk-speed writes over Gigabit with SMB, about 60% of that for reads, but FULL speed on reads and writes with FTP.. just like my NAS.

Thanks

Hmm, if you tried different NICs (and got the same result.... although that shouldn't technically be possible with different brands) then that screams that it's software. Have you tried different distros or kernel + smb configs?
 
Yes, my test box (the Windows 7 build) has an Intel NIC that IS on the PCI Express bus. I tried other NICs without any change as well.

On the server I have tested two different types of NICs (Intel 82566DM & 82573L, both on the PCI Express bus).

I have tried different kernels but not a different distro. I am hesitant to do that with others reporting this same type of issue on Solaris. At this point, though, I might try that next to see if I notice any difference at all.

OK, I would try different distros... but something tells me SMB2 and Samba's rendition of it (and/or MS's misuse of it) is the culprit. This problem does not happen with WinXP. My workstations are all WinXP and they get the full bandwidth out of GigE, at least as far as XP's network stack can handle.
 
I have the same issue as danswartz with the OI SAN. It caps around 40-45 MB/s reads and 85-90 MB/s writes on Ultimate x64 and Home Premium x64.

With an Ubuntu Server 10.10 box I was maxing out the transfers both ways at around 90-100+ MB/s.
 
I have the same issue as danswartz with the OI SAN. It caps around 40-45 MB/s reads and 85-90 MB/s writes on Ultimate x64 and Home Premium x64.

With an Ubuntu Server 10.10 box I was maxing out the transfers both ways at around 90-100+ MB/s.

Thanks for the details; going to try Ubuntu 10.10 next then, just to see, what the heck.. I have only been using Red Hat-based distros so far.
 
I just ran CDM (CrystalDiskMark) on a Linux 3.0, Samba 3.6 server and got 83 MB/s sequential read and 100+ MB/s sequential write. That server has been through several kernel revisions and Samba versions during 2011 and the speeds have always been about the same. It is running SMB2 now, but the speeds were the same pre-3.6 (which was SMB1).

Here is part of smb.conf (note that there is no security since it is on a home LAN):

[global]
security = SHARE
map to guest = Bad User
null passwords = Yes
guest account = nobody
max protocol = SMB2
dns proxy = No
idmap config * : backend = tdb
 
Thanks for the details; going to try Ubuntu 10.10 next then, just to see, what the heck.. I have only been using Red Hat-based distros so far.

It has nothing to do with the Ubuntu front-end.
What matters is the smbd version, the kernel version to an extent, and the smb.conf settings.

Switching to a different Linux distro is likely to give you the same results, since you already have smbd 3.6.0.

One of your settings has to be off to be causing the 40MB/s transfer cap.
 
I had a similar issue with FreeBSD 8.2. Samba transfers ran at around 80 MB/sec, but only for 2 seconds; then the rate dropped to zero, then another 80 MB/sec spike, and so on. Setting the SMB version to 2 in the config fixed it for me.
 
I figured out my problem!

I have been using Total Commander for my copy tests. I have been using TC for a long time and never had issues with it in the past, so I never suspected it.. After seeing john4200 refer to CDM, I ran the benchmark against the network drives and my sequential read performance was over 80 Megabytes per second. So I switched to the standard Windows file copy, and my read speeds are now nearly identical to my write speeds; I am averaging around 95 Megabytes per second reading from my shares.

So I started doing some digging, and it looks like Total Commander is hindered by 64-bit Windows; something about how 64-bit Windows prioritizes 32-bit processes when copying files. The author cannot make a 64-bit version of TC because, I believe, there is no 64-bit version of Delphi, which is what it is written in.

Here is the German thread on TC
http://www.ghisler.ch/board/viewtopic.php?p=216762#216762

Google Translate works if anyone wants to read it; the author comments on it there.


Thanks for the info, guys. To the others who reported similar issues with SAMBA or other SMB/CIFS servers: make sure you are NOT testing with Total Commander on 64-bit Windows, because it WILL be slower on reads!
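If anyone wants a quick command-line sanity check that avoids third-party copy tools entirely, a plain robocopy of one big file should show the real rate (share name, path and file below are only placeholders):

robocopy \\192.168.0.4\media C:\temp bigfile.iso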
 
I use TC myself; however, I have run the CDM sequential transfer test on my Solaris server and I still only get around 50 MB/s on reads, but 97 MB/s on writes.

I don't think this is a problem with my current setup, but one thing I've had issues with in the past is antivirus software. At one point I was using, I think, Avast!, and when the real-time scanning was enabled, write performance from the network to the HD would drop way off. Turning off the real-time scan would fix things though.

EDIT: I just realized that the Solaris CIFS/SMB service probably does not support SMB2, so that could be the problem in my case.
 
I'm glad you found a solution. I'm thinking of forcing an early update to Samba 3.6. Could you let me know what kind of speed difference you see with/without "max protocol = SMB2" in your smb.conf?
 
I'm glad you found a solution. I'm thinking of forcing an early update to Samba 3.6. Could you let me know what kind of speed difference you see with/without "max protocol = SMB2" in your smb.conf?

It makes no difference at all on my server.
 
SMB2 made no difference for me either on my writes.

I was going to re-test with SMB2 and AIO (asynchronous I/O support) when I get a chance.

AIO is supposed to increase performance, but when I tested before I also noticed no difference.

I am currently back to running 3.5.4 and no SMB2 or AIO.

When I get a chance I am going to compare 3.6.0 with and without SMB2 + AIO.
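For reference, the combination I plan to test is roughly these lines in the [global] section of smb.conf; the AIO sizes are just example values:

max protocol = SMB2
aio read size = 16384
aio write size = 16384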
 
the thread is about slow reads, though, not writes?

Yes, it is. I can only comment on my experience with SMB2 as it relates to writes (since my reads were not at full speed at the time I was testing SMB2, I can't really determine whether it did anything for them..)

I will be able to comment on whether SMB2 impacts my reads when I go back to 3.6.0 and turn SMB2 back on, and test with something other than Total Commander..
 