Slow Performance SMB to CIFS

KapsZ28
2[H]4U
Joined: May 29, 2009 · Messages: 2,114
I've asked about this before, but I really need to get some sort of solution in place. Our client needs to copy data from the West coast to the East coast.

On the West coast they have Openfiler with an iSCSI mount on Windows Server 2003. On the East coast, they are also using Openfiler, but with a CIFS share.

There is a 1 Gbps link between the two sites. The latency is about 76 ms.
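For context, the bandwidth-delay product of this link tells you how much data must be in flight to keep it full, and a single TCP window caps throughput at window/RTT. A quick sketch using the numbers above:

```python
# Bandwidth-delay product (BDP) for the link described in this thread:
# 1 Gbps bandwidth, ~76 ms round-trip time.
link_bps = 1_000_000_000        # 1 Gbps
rtt_s = 0.076                   # 76 ms

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1_000_000:.1f} MB")   # ~9.5 MB must be in flight to fill the pipe

# Conversely, a given TCP window caps throughput at window / RTT.
# 64 KB is the effective ceiling on Windows 2003 when window scaling is off.
window_bytes = 64 * 1024
max_bps = window_bytes * 8 / rtt_s
print(f"Max throughput with a 64 KB window: {max_bps / 1_000_000:.1f} Mbps")
```

In other words, on this link any single connection with a small window is latency-bound, not bandwidth-bound.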

From the Windows Server 2003 on the West coast I am trying to copy data to "\\ip_address\share" and getting about 2 Mbps.

I tried enabling jumbo frames and also increased the TCP window to 4 MB. Since the other end is a CIFS share on Openfiler, I don't think I can adjust any TCP settings there.
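One Server 2003 quirk worth noting here (an aside, not something confirmed in this thread): window sizes above 64 KB only take effect if RFC 1323 window scaling is enabled, and Server 2003 leaves it off by default, so a "4 MB window" may silently cap at 64 KB. A sketch of the relevant registry values, assuming a 4 MB target (a reboot is required after changing them):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Enable RFC 1323 window scaling and timestamps (required for windows > 64 KB)
"Tcp1323Opts"=dword:00000003
; Requested receive window in bytes (example: 4 MB = 0x00400000)
"TcpWindowSize"=dword:00400000
```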

What would you recommend to increase performance other than a WAN Accelerator?
 
I would think Openfiler is the bottleneck.

If it were only Windows servers, it would be 2003 that is your bottleneck. Server 2008 or later, or Windows 7 or later, is what you want in the Microsoft world.
 

Well, I can definitely test 2008 to 2008 and see if there is any difference.
 
No, just about the same performance, if not worse, going from 2008 R2 to 2008 R2.
 
CIFS is not suited for use on a WAN; it does not tolerate latency well. If possible, use something more WAN-friendly.
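To put a rough number on that: SMB1 (what Server 2003 speaks) typically has only one read or write of at most about 60 KB outstanding per round trip, so throughput is capped near block size divided by RTT no matter how fat the pipe is. A back-of-the-envelope sketch; the 60 KB block size is an assumed SMB1 default, not a measured value from this thread:

```python
# Rough ceiling for a chatty request/response protocol over a high-latency link:
# one block in flight per round trip, regardless of link bandwidth.
block_bytes = 60 * 1024   # assumed SMB1 max read/write size (~60 KB)
rtt_s = 0.076             # 76 ms RTT, from the thread

max_bps = block_bytes * 8 / rtt_s
print(f"SMB1-style ceiling: {max_bps / 1_000_000:.1f} Mbps")
```

That lands in the single-digit-Mbps range, which is in the same ballpark as the ~2 Mbps copy speed reported above.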
 
Maybe FTP? I've used NetDrive in the past to make FTP behave similarly to SMB/CIFS if that's required. What does an iperf run look like over the line?
 

Iperf from SAN to SAN was good last time I tested. Will need to test from Windows to SAN and see how it performs.
 
So I started the iperf server running on the Openfiler SAN and initiated a connection from the Windows 2003 server. Without changing the TCP window size, it transfers at 456 Kbps. By changing the window size I get the following results.

4 MB window - 3.51 Mbps
8 MB window - 6.28 Mbps
16 MB window - 10.7 Mbps
24 MB window - 17.3 Mbps

So the larger the window size, the higher the transfer speed. I am not really familiar with how iperf compares to copying via CIFS. On the Windows Server 2003 box I changed the TCP window size to 16 MB, but it didn't increase CIFS performance. With iperf, however, it makes a huge difference. Why?
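One way to read those numbers: measured throughput grows almost in lockstep with the window size, which is the classic signature of a latency-bound (window-limited) transfer. A quick sketch over the measurements above:

```python
# iperf results from the post above: window size (MB) -> measured throughput (Mbps).
results = {4: 3.51, 8: 6.28, 16: 10.7, 24: 17.3}

# If the transfer is window-limited, throughput per MB of window stays roughly constant.
for window_mb, mbps in results.items():
    print(f"{window_mb:2d} MB window: {mbps:5.2f} Mbps "
          f"({mbps / window_mb:.2f} Mbps per MB of window)")
```

The per-MB figure stays in a fairly narrow band, consistent with the window (not the 1 Gbps link) being the limiting factor in each test.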
 

Iperf is not going to take disk performance, storage load, or queuing into account.

Iperf is simply a network throughput test.

Have you tried different copy methods, such as FTP as suggested above, to see what the throughput is?
 

Not yet; that is next. The SAN only has CIFS set up, so I need to enable and configure an NFS share. I am not really interested in FTP, and based on an article I read yesterday, it seems that NFS has the best performance.
 
rsync or FTP will be the fastest and probably the easiest to set up....

Matej
 
Jumbo frames? Are you sure you have that MTU size across that WAN link?

Yes, all the storage switches are set up with the maximum MTU size, and so are the Openfiler SANs. I also specifically enabled jumbo frames on the Intel NIC in the Windows server to test. It didn't make any difference.

As for rsync, I was told it can't be used with the current setup, since on one side the Windows box is using iSCSI and the other side is a CIFS share. Something about an iSCSI LUN only being connectable to one host at a time, so rsync on that SAN can't copy the data. :confused:
 

The SANs may be set for a 9000-byte MTU, but is everything in between? That's unusual on any long-distance WAN. I'd do a ping with a large packet and confirm it. Make sure the don't-fragment flag is set.
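A quick way to run that check from the Windows box: the largest ping payload that fits in a 9000-byte MTU is 8972 bytes (9000 minus 20 for the IP header and 8 for ICMP). A small sketch that computes the payload and prints the Windows ping command with don't-fragment set; the target address is a placeholder:

```python
# Largest ICMP payload that fits a 9000-byte MTU without fragmentation:
# MTU minus 20 bytes of IP header minus 8 bytes of ICMP header.
mtu = 9000
payload = mtu - 20 - 8
print(payload)  # 8972

# Windows ping: -f sets don't-fragment, -l sets payload size.
# If any hop on the path has a smaller MTU, this ping will fail.
print(f"ping -f -l {payload} <east_coast_ip>")
```

If the jumbo-sized ping fails but a 1472-byte payload (standard 1500 MTU) succeeds, the WAN path is not actually carrying jumbo frames end to end.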
 

It is a single hop, and the line has been tested to a full 1 Gbps. It is probably MPLS, VPLS, or a VLL. I try not to get too involved in that part of the networking.

We can do a SAN to SAN sync very quickly. We just can't do SAN to SAN with the iSCSI on one end and CIFS on the other. We are stuck going through Windows.

I want to try NFS or FTP, but it seems like someone did something wrong with the SAN that holds the CIFS share. When I log into the Openfiler GUI, there is no share listed. They may have created it through the CLI, and it is not showing up.

The goal would be to get them off of Openfiler altogether. They also use our NetApp on the West coast, so we were looking to purchase a NetApp for their DR site and set up SnapMirror. That is when all this Nimble talk started, too, which would also be great if we purchased one for each site and replicated that way. As of right now this client must be using 15+ different SANs in three different locations, with 100+ TB of storage in use. And that doesn't include offsite backups.
 