HP Microserver: slow in Linux/BSD, fast in Windows.

CyberPunk_1000

Having some trouble with my HP Microserver. Running Windows 2008 I get 100 MB/s over the wire (copying a file) from my desktop. 2GB RAM installed and a single 2TB disk (not one of the newer 4K types). Running the same machine with Linux or FreeNAS I'm only getting 35-40 MB/s (megabytes, not bits).

Anyone else observed similar, or have any suggestions? I've tried all sorts: the hacked BIOS with AHCI support, multiple OSes, Samba versions and tweaks. I see the same performance on NFS. Local disk-to-disk seems fine. I simply cannot get anywhere near the speed I get under Windows.
 
Linux and FreeNAS (along with all other distros) only support SMB1, which caps at a 40MB/s limit.

Only Windows Vista/7/2008/2008R2 and OS X 10.7 Lion fully support SMB2 (Microsoft's proprietary protocol), which can exceed the 40MB/s limitation.
 
(...) SMB1, which caps at a 40MB/s limit.

Do you have some source for this (the 40MB/sec cap)? I have certainly achieved faster transfers than this to a Solaris box via CIFS (100-110 MB/s on 1GbE and 400ish on 10GbE, limited by local drive).
 
That's with CIFS, not SMB.

SMB1 will cap at 40MB/s; there are threads on here, and you can test it for yourself. It's well documented.

One of the reasons OS X 10.7 is dropping Samba support in favor of Microsoft's proprietary SMB2.
 
Do you have some source for this (the 40MB/sec cap)? I have certainly achieved faster transfers than this to a Solaris box via CIFS (100-110 MB/s on 1GbE and 400ish on 10GbE, limited by local drive).

I run Samba on my linux box, and I regularly achieve 80 - 100 MB/s transfers to my Windows 7 machines.

I just mapped a network drive and ran a CDM (CrystalDiskMark) sequential test with a 4000MB file size. Normally I would use AS-SSD, but that does not recognize mapped drives for some reason.

Anyway, the speeds over a gigabit network to a partially-filled Hitachi 2TB HDS5C302 formatted with XFS (with some other network activity also going on) were:

80.3 MB/s read
101.9 MB/s write

This was with Samba version 3.5.9 on a linux 2.6.39 kernel connected to a Windows 7 HP SP1 machine. Both server and client were running Intel NICs.
 
This was with Samba version 3.5.9 on a linux 2.6.39 kernel connected to a Windows 7 HP SP1 machine. Both server and client were running Intel NICs.

That's great, apparently I've been reading old news. I just read up on Samba on Linux, apparently SMB2 is now integrated, and Samba4 has since been released to Linux distros.

OP, make sure to fully update Samba and the Kernel on your Linux OS so you can have full support of SMB2.

I've been waiting for this for a while now, it's very exciting news.

http://www.zdnet.com/blog/open-sour...2011-smb2-and-smbcifs-protocol-docs-done/6810
 
That's great, apparently I've been reading old news. I just read up on Samba on Linux, apparently SMB2 is now integrated, and Samba4 has since been released to Linux distros.

OP, make sure to fully update Samba on your Linux OS so you can have full support of SMB2.

I've been waiting for this for a while now, it's very exciting news.

http://www.zdnet.com/blog/open-sour...2011-smb2-and-smbcifs-protocol-docs-done/6810

I don't think the speeds I am getting are due to SMB2 -- SMB2 is not used by default in current, stable Samba releases, and I certainly have not enabled SMB2. I have been using Samba for quite a while (more than a year), and I have been getting 80 - 100 MB/s speeds during all that time.
 
Perhaps there was an update to the SMB1 protocol allowing faster speeds?
 
So some more info on this one.

I've got the newer-revision HP Microserver as well as an older one. The older one has some bizarre problem with FreeNAS, just dropping off the network, but that's another issue in itself, as it runs VMware fine.

To give a bit more background, here's what I've done:

I've tried two different disks: a 2TB 5900 RPM Seagate, and the 250GB 7200 RPM Seagate that ships with the unit. Both with the same result.

I've tried various operating systems and distributions, all at the latest version and up to date with patches, as follows:

Windows 2008 r2
Copies at 100 MB/s from my desktop (Windows XP) to the server and vice versa (also checked network utilisation, and the link seems saturated).

I've also tried Ubuntu Server 10.04.2 LTS (Samba 3.4.7) x64 and i386,
Openfiler,
and FreeNAS.

All with (at best) 40 MB/s transfer rates.

I've also tried copying from virtual machines running Ubuntu 10.04, and I get slower results on both Samba and NFS. Local disk-to-disk copying seems fine. I use dd in various ways, as follows:


time dd if=/dev/zero bs=1024 count=1000000 of=/mnt/u01/1Gb.file
time dd if=/mnt/u01/1Gb.file bs=64k | dd of=/dev/null
time dd if=/tmp/1Gb.file of=/mnt/u01/1Gb.file
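One variable worth controlling in dd runs like these is the block size and the page cache: bs=1024 writes are tiny, and without a flush the reported rate can reflect RAM rather than disk. A sketch of a write test that avoids both (GNU dd assumed; the /tmp path is just a placeholder for the mount under test):

```shell
# Write a 256 MiB file with 1 MiB blocks; conv=fdatasync forces the data to
# disk before dd reports the rate, so the page cache can't inflate the number.
dd if=/dev/zero of=/tmp/dd-test.file bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/dd-test.file
```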

I've fully updated the BIOS to the latest version, using both IDE and AHCI modes.
I've tried the 'hacked' AHCI version of the BIOS.
I've also tried an IBM ServeRAID BR10 (a rebranded LSI card) and another network card, all to the same effect. It's like the box cannot push more than that data rate.

I've also tried setting up an old MSI Platinum board with 1.5GB RAM running FreeNAS: 100 MB/s rock solid to the 2TB Seagate disk, no problem.

It's worth noting that I'm trying NFS as well as Samba, and I've tried iSCSI on occasion too.
My clients testing against this box are a Dell PowerEdge 1800 running Ubuntu 8.04 LTS, a Windows XP machine (Intel E8600), another Microserver running VMware, and a whitebox MSI 3000+ running Windows Server 2003. The PowerEdge 1800 also shows low speeds on Samba, which I hadn't noticed until this round of testing.

So, any comments welcome; I think I've tried nearly everything. Has anyone else got benchmarks of their systems, or a similar setup to compare with?
 
Samba 3.4.7 is from March 2010. I posted my benchmark of version 3.5.9. I cannot remember what version I was using when I first set up my current server, but it may have been newer than 3.4.7.

Red Falcon's idea that something changed in the newer versions of Samba that improved performance seems the most likely explanation. I'm not familiar with the details of Samba server -- I just set it up with a basic share setting on my LAN, it worked, and I did not put any more thought into it. I suppose one other possibility might have to do with security share vs. user (I assume you are set at user).

You could try the 3.5.9 version of Samba to see if that makes a difference. Or you could go through all the changelogs between 3.5.9 and 3.4.7 to see if anything looks relevant:

http://www.samba.org/samba/history/
 
Installed Ubuntu 11.04 x64, which came with Samba 3.5.8. Tried this: about 50 MB/s, so I built 3.5.9 from source. Same thing.
 
Installed Ubuntu 11.04 x64, which came with Samba 3.5.8. Tried this: about 50 MB/s, so I built 3.5.9 from source. Same thing.

So you are saying 3.5.8 and 3.5.9 are faster than 3.4.7?

What exactly is this 50 MB/s that you are quoting?

If you want useful help, you need to be more specific and more systematic in your tests and troubleshooting.
 
john4200, are you running Ubuntu 11.04?

It appears that certain versions (kernels) of Ubuntu Linux only ship with Samba up to a certain version.

Ubuntu : Samba : Kernel
10.04.2 : 3.4.7 : 2.6.32
10.10 : 3.5.4 : 2.6.35
11.04 : 3.5.9 : 2.6.39

Unless there is a way to update Samba specifically, I'm not sure these releases support anything higher, 11.04 aside.

I definitely think there was a serious revision to Samba with the SMB1 protocol, which is allowing transfers faster than 40MB/s.
The reason for the cap with SMB1 was due to overhead, over-coding, and outdated standards.

SMB2 was a huge overhaul of the original code, which Microsoft fixed up for its newer OSes.

However, SMB2 does not include UNIX extensions, hence the ongoing development for UNIX and UNIX-like OSes.
This was one of the major reasons OS X 10.7 decided to dump Samba and go with "Microsoft Network Support", aka SMB2.

If the SMB1 protocol had an update or new code which allowed the 40MB/s cap to be lifted, that would explain the faster transfer rates with newer versions of Samba and the Linux kernel.
 
Ah, that might explain the higher transfer rates.

On the server in my sig, Samba 3.4.7 with SMB1 is maxed out around 39-40MB/s.

However, on the same server (same RAID array), I have an NFS share set up, and that nets me transfers in the 50-60MB/s area.

This proves it is definitely not hardware or driver related, but a protocol limitation.

I'll have to look through that link you posted, john4200, and see where the critical update was applied, and what exactly the update was that 'fixed' the issue.
 
This proves it is definitely not hardware or driver related, but a protocol limitation.

This is the part I was disagreeing with. I don't believe it's an inherent protocol limitation, as Solaris CIFS only implements the SMB1 level, not SMB2, and I can get full wirespeed on gigabit, and 30-50% on 10GbE.
 
This is the part I was disagreeing with. I don't believe it's an inherent protocol limitation, as Solaris CIFS only implements the SMB1 level, not SMB2, and I can get full wirespeed on gigabit, and 30-50% on 10GbE.

If you haven't, you need to read some of my above posts stating the version of Samba and Linux Kernels.

How is it then that with my version of Samba I'm capped at 39-40MB/s, yet with NFS I can achieve 50-60MB/s?

I truly believe the newer Kernels support an updated version of Samba, using SMB1, which has alleviated the data transfer cap.

How else could this be explained?
 
Couldn't the slow transfers just be a bug in older versions of Samba? When I refer to Samba, I am talking about the smbd daemon.

Although in the case of the OP, it seems like the situation may be more complicated, since he went from 35-40 MB/s to 50 MB/s, but not 80 - 100 MB/s.
 
The version numbers I stated were for smbd.

It could very well be a bug, but I remember a lot of people having this issue with Windows XP and OS X 10.4-10.6.

They showed their network usage and it literally capped and stayed almost a solid line at 40MB/s.

Though that is odd that he jumped to 50MB/s. Maybe the newer version allowed faster transfer rates, but not to the level that you get since it is a slightly older version. (3.5.8 vs 3.5.9)

OP, are you using encryption of any kind on your Linux distro?
 
I thought the OP said that he tried 3.5.8 and 3.5.9, and got 50 MB/s on both. But his comments are vague. If he really wants help, he needs to be more specific on what exactly he is measuring and how.
 
Installed Ubuntu 11.04 x64, which came with Samba 3.5.8. Tried this: about 50 MB/s, so I built 3.5.9 from source. Same thing.

You're right, he did state that.

Yes, more info is needed at this point if he is achieving the same with both.

Heavy encryption on a software RAID array can degrade data transfer rates as it requires a decent CPU to decrypt the data and send the information. I'm thinking the OP is using encryption, but again I'm not really sure at this point.

If it isn't encryption and/or software RAID, we will have to dig deeper.


Here is another person having a similar problem: http://hardforum.com/showthread.php?t=1622060
 
Thanks for the feedback so far; I've posted my setup and testing methods above.

To add to this I also tried FTP but I'll come to that shortly.

So here are the various things I've been doing, I'll try and make this post as detailed as I can but if there is more information that would be useful then let me know.

So, at the core of my network I have an HP 1810G-24 switch, with no VLANs or anything fancy configured. I've also tried a Cisco 24-port (plus 2 gigabit ports) switch, wondering if it was something odd, but to no avail.

I'm using my desktop as my primary source to copy files from for testing. This is an Intel E8600 with 2x Seagate 500GB 7200RPM drives, running Windows XP. When I have been referring to transfer rates I have mainly meant the megabyte-per-second value. Observing this in Task Manager under Windows, my NIC usage rises to near 500 Mb/s (megabits); I realise there is a factor-of-8 conversion between the two.

I also have a whitebox MSI system running Windows Server 2003. I can reliably transfer files between these two at 100% of my network bandwidth, i.e. in Task Manager the network appears saturated.

I also have a v.1 Microserver (the one that shipped with a 160GB hard disk) running VMware ESXi. On this I have various distributions and installs of Windows. The performance on this also seems low, but I have been using it as a test bed to compare results. I have also been using it as a client to the Microserver in question, running FTP transfers, smbmount and nfsmount from the other system to see what sort of results I can get with things like dd.

I also have a Dell PowerEdge 1800 with an identical 5900 RPM 2TB Seagate hard disk that I've tried smbmount and nfsmount from. I have also tried copying files from this to my desktop and see similar results to the Microserver, only managing 25-30 MB/s.

The Microserver that I'm working on is a v.2 (the one that shipped with a 250GB HD); I've got two gigabytes of RAM installed.

Having installed Windows Server 2008 R2 on the Microserver, I can reliably transfer files to and from both my Windows 2003 and XP boxes at 100 MB/s in the Windows file-copy dialogue: once again, link saturation.

Now, removing this and installing FreeNAS 7, 8 or any of the 8.0.1 betas, Openfiler, or Ubuntu Server, I see a drop to around 300-500 Mb/s transmission rate to and from, with a variance of ~10% between reads and writes.

So I've tried various flavours and revisions of Ubuntu. Certainly 11.04 is the best-performing so far.

So what I've been doing is copying files to and from the Microserver, using my desktop and whitebox as my 'controlled' samples, as I know these are both 'good', i.e. they seem to go at full speed.

I have also tried setting up an FTP daemon and FTPing files from my desktop to the Microserver. Once again the same sort of result was observed (comparable to Samba, at 50 MB/s). Similar again for NFS.

From any of my virtual machines, or from my PowerEdge, I have smbmounted or nfsmounted shares from the Microserver and done the following:

Create a local 1GB file and read this back to test local speed:
time dd if=/tmp/1Gb.file bs=64k | dd of=/dev/null
time dd if=/dev/zero bs=1024 count=1000000 of=/tmp/1Gb.file

15625+0 records in
15625+0 records out
1024000000 bytes (1.0 GB) copied, 9.85151 s, 104 MB/s
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 9.85408 s, 104 MB/s


Try the same on the remote mount, repeated for both Samba and NFS
time dd if=/mnt/1Gb.file bs=64k | dd of=/dev/null
time dd if=/dev/zero bs=1024 count=1000000 of=/mnt/1Gb.file

Try just flat copying the locally generated file:
time dd if=/tmp/1Gb.file of=/mnt/1Gb.file
And in reverse:
time dd if=/mnt/1Gb.file of=/tmp/1Gb.file

I could post all the results for these, but they all end up roughly as follows:
1024000000 bytes (1.0 GB) copied, 29.8817 s, 34.3 MB/s
(Yes, this is slower than my desktop transfer rate by ~10 MB/s.)

The transfer rates between the Linux boxes are a concern, but at the minute I'm just interested in the initial question: why does Windows 2008 R2 fly along while Ubuntu doesn't, assuming my desktop is the only client?
 
Also, I am not using any RAID: this is a single 2TB 5900 RPM Seagate green disk. There is no encryption. The filesystem in use is the Ubuntu default, ext4.
 
Okay, those are a lot of details, but they are not the right details, they are not specific and systematic, and you never really answered my question about where exactly that 50 MB/s number came from.

When troubleshooting computer problems, you need to be systematic and specific. What you have done is given a lot of random details about your hardware and software with extremely vague descriptions of the tests that you have run.

With troubleshooting, the best thing you can do is to narrow things down by systematically doing EXACTLY the same test while varying as few other parameters as possible, ideally just one thing changing with an A / B test.

I suggest you pick one of your clients, whichever is the easiest to work with, and stick with that. Then pick a method of measuring the throughput. I suggest either mapping a network drive and using CDM sequential, or alternatively getting a large file (should take at least a minute to transfer) and a stopwatch and copying the file up and down while timing it to get the throughput.

Once you have decided on that, do a systematic test. Think of things that you can change, and change them one at a time, keeping everything else the same. For example, you could do the throughput test to a Linux server and to a Windows server. You could do the test with Samba 3.4.7 and 3.5.9. You could try configuring Samba differently, say, security = user or security = share. And any other things you can think of to test.

After you have done the tests, you can post what you found: describe specifically what you have done, and include the exact throughput you measured in each test.
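The stopwatch method can be scripted so that every run is identical; a minimal sketch (the paths are placeholders, so point the destination at the network mount being tested):

```shell
# Copy a fixed-size file and print the throughput in MB/s.
SRC=/tmp/throughput-src.file
DST=/tmp/throughput-dst.file       # placeholder: use the mounted share here
dd if=/dev/zero of="$SRC" bs=1M count=100 2>/dev/null   # 100 MB test file
start=$(date +%s.%N)
cp "$SRC" "$DST" && sync           # sync so buffered writes count toward the time
end=$(date +%s.%N)
awk -v s="$start" -v e="$end" 'BEGIN { printf "%.1f MB/s\n", 100 / (e - s) }'
rm -f "$SRC" "$DST"
```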
 
Hi, I feared the data might be a bit all over the place. I'll take some time to structure it and repeat all my tests as you have described, to try to get a consistent format to work with.

The 50 MB/s figure comes from the dialogue window that Windows 2008 R2 or Windows 7 shows when you copy a file. It displays a speed below the progress bar, and that's the number I've been using. It isn't wonderfully accurate, but it's a good guideline, as the network utilisation just about matches it (the network meter in Task Manager seems to respond faster when sampling than this reported figure does). Is this clear?
 
The 50 MB/s figure comes from the dialogue window that Windows 2008 R2 or Windows 7 shows when you copy a file. It displays a speed below the progress bar, and that's the number I've been using. It isn't wonderfully accurate, but it's a good guideline, as the network utilisation just about matches it (the network meter in Task Manager seems to respond faster when sampling than this reported figure does). Is this clear?

Yes, I think I follow. But that method is not very repeatable. I suggest you use one of the two methods I mentioned in my previous post, when you do more tests.
 
What are you using to copy the files? Windows explorer or something else?

That's a very good point. Teracopy can sometimes hold the transfer rates back if using Samba or another SMB variant.

If it's just explorer though, it should be fine.
 
Well, there seems to be a bit of confusion and some superficial knowledge here, so I'll post some benchmarks with a bit of explanation (the dd-type test is rather good because you can tune it a bit, so more on that later):

The server is an AMD Athlon 64 X2 dual-core machine with 1GB RAM (yes, only), a Realtek onboard gigabit Ethernet controller, and a rather fast RAID system for disks. It is running Ubuntu 10.04 with Samba 3.4.7.

I do the benchmarks using 2GB files (to avoid caching problems on the server). Here is the local server benchmark:

Code:
time dd if=/dev/zero bs=1024 count=2000000 of=2GB
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB) copied, 10.5756 s, 194 MB/s

real    0m10.601s
user    0m0.512s
sys     0m9.333s

OK, the disk system can do gigabit transfers, so now let's go over the wire. The client is an AMD Phenom X4 machine running Windows XP, and I am using Cygwin to do the test. The NIC is the same Realtek as in the server.

First, the "standard test" as above; I just use a 100MB file to keep the time somewhat reasonable:

Code:
$ time dd if=/dev/zero bs=1024 count=100000 of=100MB
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 37.4077 s, 2.7 MB/s

real    0m37.544s
user    0m0.406s
sys     0m4.046s

Rather slow, isn't it?

So how can I get that to do 75MB/s?

Very simple:

Code:
$ time dd if=/dev/zero bs=102400 count=20000 of=2GB
20000+0 records in
20000+0 records out
2048000000 bytes (2.0 GB) copied, 27.2277 s, 75.2 MB/s

real    0m27.339s
user    0m0.468s
sys     0m6.749s

Whoa! There is no SMB2 or other freaky stuff involved; it is just the block size of the transfers.

I have the program Total Commander, a Norton Commander-style file manager, where you can set the block size when copying. There are no exact numbers here, just readings from Task Manager's network utilization:

When copying using Windows Explorer: 26% (approx. 260 Mbit/s)
When copying with Total Commander's standard routines: 27%
When copying with 100kB blocks: 46%

Although somewhat slower than dd, it is a rather big improvement.

When tuning Samba (setting socket options TCP_NODELAY and oplocks = no) I get a decent speedup of 5%.
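For anyone wanting to try the same tweaks, they live in the [global] section of smb.conf; a sketch (the speedup quoted above is from this particular setup, so mileage will vary):

```ini
[global]
   socket options = TCP_NODELAY
   oplocks = no
```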

Edit: When copying from 2 systems at once, I get:

Code:
dd if=/dev/zero bs=102400 count=20000 of=2GB
20000+0 records in
20000+0 records out
2048000000 bytes (2.0 GB) copied, 32.9023 s, 62.2 MB/s

dd if=/dev/zero of=1GB bs=102400 count=20000
20000+0 records in
20000+0 records out
2048000000 bytes (2.0 GB) copied, 43.5086 s, 47.1 MB/s

which is really about wirespeed.
 
I'm running ESXi 4.1 and a virtual Ubuntu 11.04 installation on my Microserver. If I do a normal copy over SMB from my Windows 7 workstation I get about 75-80MB/s, so I don't think there's an SMB limit. During that copy the Samba service on the virtual machine uses around 50% (one core), so it seems to be CPU limited.

However, I have an HP P410i RAID controller and an Intel 1000GT NIC in the Microserver as well...
 
I'm running ESXi 4.1 and a virtual Ubuntu 11.04 installation on my Microserver. If I do a normal copy over SMB from my Windows 7 workstation I get about 75-80MB/s, so I don't think there's an SMB limit. During that copy the Samba service on the virtual machine uses around 50% (one core), so it seems to be CPU limited.

However, I have an HP P410i RAID controller and an Intel 1000GT NIC in the Microserver as well...

It's because you're on 11.04, which has a newer kernel and smbd daemon.

10.04 and 10.10 have older kernels and thus (to my knowledge) don't have support for the newer smbd daemons.


I'm a bit skeptical about webstoney's transfer rates as well. How is it that with a 100MB file he only gets 2.7MB/s, yet with a 2GB file he gets 75.2MB/s? That's a huge increase in speed, and even on my worst day with Samba, a 100MB file still copies at over 30MB/s.

He may be right about the block sizes, but going from 2.7MB/s to 75MB/s is a huge jump; something doesn't add up.
 
I'm a bit skeptical about webstoney's transfer rates as well. How is it that with a 100MB file he only gets 2.7MB/s, yet with a 2GB file he gets 75.2MB/s? That's a huge increase in speed, and even on my worst day with Samba, a 100MB file still copies at over 30MB/s.

He may be right about the block sizes, but going from 2.7MB/s to 75MB/s is a huge jump; something doesn't add up.

I guess his data is accurate, but his test is just not realistic. Note that his first test used bs=1024, which is 1 KiB. Who does 1 KiB block transfers? Nobody sane. But I am not surprised that he can only get a few MB/s when doing 1 KiB IOs over Samba.
 
I'm running ESXi on a Microserver, with a file server VM using Ubuntu and ZFS-on-Linux (RAIDZ). Transfer performance to my Server 2008 file server was fairly disappointing using Samba 3.5.8 (~60MB/s sustained), but I've just dropped Samba 3.6.0 rc2 on there (add 'deb http://ftp.debian.org/debian experimental main' to your sources.list), enabled SMB2 ('max protocol = SMB2' in the global section of smb.conf), and performance has much improved.
Reads @101MB/s from RAIDZ

Writes are similarly high, at least to (a virtual drive mapped to a datastore kept on) the Seagate 160GB. Writing to the ZFS pool I have a curious issue whereby the first few GB go at around 100MB/s, but after that performance drops off to under 50MB/s. I don't think it's a RAM thing, because performance doesn't recover if I leave it idle for a while, and it's not a CPU/parity-calc thing, because I can replicate the slow results with dd, without Samba/network overheads and thus fairly low CPU usage.

I might also be able to improve network speeds a bit (to get SMB into the 110MB/s range that I see with my Win7 clients). I'm currently running the Microserver with both the onboard NIC and an Intel in a team, though iperf shows better results with just the Intel.

I'm going to try updating my zfs-on-linux version to see if that helps, but for now it's 5am locally and I need to stop playing around!
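For reference, the SMB2 switch described above is a single smb.conf line (on Samba 3.6.x, where SMB2 support was still new), followed by restarting smbd:

```ini
[global]
   max protocol = SMB2
```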
 
Nice, I will have to change the config file and see where I can get using the older kernels, thanks for the info.
 
It's because you're on 11.04, which has a newer kernel and smbd daemon.
10.04 and 10.10 have older kernels and thus (to my knowledge) don't have support for the newer smbd daemons.

This is totally independent of the kernel; just the version of smbd is relevant.
I'm a bit skeptical about webstoney's transfer rates as well. How is it that with a 100MB file he only gets 2.7MB/s, yet with a 2GB file he gets 75.2MB/s? That's a huge increase in speed, and even on my worst day with Samba, a 100MB file still copies at over 30MB/s.
He may be right about the block sizes, but going from 2.7MB/s to 75MB/s is a huge jump; something doesn't add up.

This absolutely adds up; it's the latency of the ACK packets, something that is addressed in SMB2. This especially shows up in high-speed WiFi environments, where SMB2 gets a huge win over SMB1. Note that in my test both client and server (XP / Samba 3.4) were not SMB2-capable, so the SMB1 barrier you predicted is not visible.
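A rough sanity check of the latency argument: if each block incurs one network round trip, throughput is about block size divided by round-trip time. The RTT figures below are assumed purely for illustration, not measured from any setup in this thread:

```shell
# throughput ~ block_size / round_trip_time (assumed RTTs, illustration only)
awk 'BEGIN {
    printf "bs=1 KiB,   rtt=0.35 ms: %.1f MB/s\n", 1024   / 0.00035 / 1e6
    printf "bs=100 KiB, rtt=1.35 ms: %.1f MB/s\n", 102400 / 0.00135 / 1e6
}'
```

With 1 KiB blocks even a sub-millisecond round trip caps the rate at a few MB/s, in the same ballpark as the 2.7 MB/s measured earlier; 100 KiB blocks amortize the same latency across a hundred times more data.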
 
This is totally independent of the kernel; just the version of smbd is relevant.

So an older kernel will support a newer version of the smbd daemon?
 