HP ProLiant MicroServer owners' thread

Ah, okay, I assumed that because you had a 9211-8i you had SATA3 disks, and I wondered why you had it. Do you think that a SAS3081E-R differs that much from a 9211-8i when driving 8x disks, given it's only SATA2?

What models of Samsung disks are you using?

What OS? (FreeNAS ZFS?) I'd be interested in your bonnie speeds.
 
Ah, okay, I assumed that because you had a 9211-8i you had SATA3 disks, and I wondered why you had it

Merely future proofing - whether I actually bother with SATA3 disks in the future depends on my bank balance at the time. :)

do you think that a SAS3081E-R differs that much from a 9211-8i when driving 8x disks, given it's only SATA2?

The 9211-8i is PCIe v2 x8, which means it can sustain more than adequate throughput for 8 disks (500MB/sec per lane), whether SATA2 or SATA3.

The SAS3081E-R is PCIe v1 x8 and can therefore sustain only half the throughput of v2, but that should still be more than sufficient for SATA2.
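As a rough back-of-envelope check of those figures (per-lane rates are nominal 8b/10b-encoded numbers, and the 150MB/s per-disk figure is a generous assumption for a SATA2 spindle):

```python
# Back-of-envelope PCIe vs. disk bandwidth check. Per-lane figures are
# nominal; the per-disk sequential rate is an assumed, generous value
# for a SATA2 spindle. Protocol overhead is ignored.
PCIE_V1_LANE = 250   # MB/s per PCIe 1.x lane
PCIE_V2_LANE = 500   # MB/s per PCIe 2.x lane
LANES = 8
DISKS = 8
DISK_SEQ = 150       # MB/s sustained per disk (assumed)

v1_total = PCIE_V1_LANE * LANES   # SAS3081E-R slot bandwidth
v2_total = PCIE_V2_LANE * LANES   # 9211-8i slot bandwidth
needed = DISK_SEQ * DISKS         # 8 disks flat out

print(v1_total, v2_total, needed)
assert needed < v1_total < v2_total   # even PCIe v1 x8 has headroom
```

So eight spindles can't saturate even the v1 x8 slot; the extra v2 bandwidth only matters if you move to faster disks.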

What models of Samsung disks are you using?

What OS? (FreeNAS ZFS?) I'd be interested in your bonnie speeds.

HD204UI (4x 2TB, 3.5") and HM500JI (4x 500GB, 2.5"), all configured for 4K sectors (even though the HM500JI are not advanced format disks). At some point in the future I intend to replace the 500GB disks with larger capacities - ideally 2TB - as/when they become available.
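For anyone wondering how disks get "configured for 4K sectors" under FreeBSD/FreeNAS of that era: the usual approach was the gnop shim, which forces the pool to be created with ashift=12. A sketch with illustrative device and pool names (not the poster's actual layout):

```shell
# Sketch: the gnop trick for forcing ashift=12 (4K sectors) on
# FreeBSD/FreeNAS 8. Device and pool names are illustrative only.
gnop create -S 4096 /dev/ada0          # shim that reports 4K sectors
zpool create share raidz1 ada0.nop ada1 ada2 ada3
zpool export share
gnop destroy /dev/ada0.nop             # drop the shim; ashift is permanent
zpool import share
zdb -C share | grep ashift             # should report ashift: 12
```

Only one device in the vdev needs the shim; ZFS applies the largest sector size to the whole vdev, and the setting survives the export/import.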

I haven't bothered testing with bonnie, so I can only give you the results of simple dd write and read transfers:

Code:
freenas# dd if=/dev/zero of=/mnt/share/tmp.000 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 505.039819 secs (212605380 bytes/sec)

freenas# dd if=/mnt/share/tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 373.910194 secs (287165700 bytes/sec)
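For anyone converting dd's bytes/sec figures, those two runs work out to roughly 203MiB/s sequential write and 274MiB/s sequential read:

```python
# Convert dd's reported transfer rates into MiB/s,
# using the byte counts and timings from the runs above.
write_bps = 107374182400 / 505.039819   # ~212.6 million bytes/sec
read_bps  = 107374182400 / 373.910194   # ~287.2 million bytes/sec

write_mib = write_bps / 2**20
read_mib  = read_bps / 2**20
print(f"write ~{write_mib:.0f} MiB/s, read ~{read_mib:.0f} MiB/s")
```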

The OS is FreeNAS 8.0.1-BETA4 with 4GB RAM (I may upgrade to 8GB but for now it's fine). vfs.zfs.prefetch is enabled.
 
interesting choice of 5400 rpm drives?

The 5400RPM 2TB disks are nice and quiet, cheap, and do the job! :)

The 500GB drives just happened to be what I had lying around...

I also configured another N36L for my brother with just the standard onboard SATA2 and another Intel CT NIC (the onboard Broadcom is totally busted in FreeBSD :(). His unit uses 4x 1TB Samsung HD103UI disks spinning at 7200 RPM and it suffers from far more vibration and resonant noise than my setup.

I doubt there's any noticeable/meaningful difference in performance despite spinning the disks a lot faster.
 
so I'm assuming one vdev is the 4x 2TB 3.5" disks, one vdev is the 4x 500GB 2.5" disks, and the zpool is striped across both vdevs?
 
Vibration can affect I/O performance in a poor storage chassis, especially when it's not that heavy. So maybe it's off to Scan for some cheap Samsungs to test!
 
so I'm assuming one vdev is the 4x 2TB 3.5" disks, one vdev is the 4x 500GB 2.5" disks, and the zpool is striped across both vdevs?

Exactly, both vdevs are RAIDZ1, meaning I can lose up to 1 disk from each vdev before losing the whole array/zpool - this is adequate redundancy for me as I also have backups on a separate server.
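For reference, a pool laid out like that is just two raidz1 vdevs in a single zpool; ZFS stripes writes across vdevs automatically. A sketch with illustrative device names:

```shell
# Two raidz1 vdevs in one pool; ZFS stripes across both vdevs
# automatically, and each vdev tolerates one failed disk.
# Device names are illustrative only.
zpool create share \
    raidz1 ada0 ada1 ada2 ada3 \
    raidz1 ada4 ada5 ada6 ada7

zpool status share    # both raidz1 vdevs should appear under "share"
```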

Vibration can affect I/O performance in a poor storage chassis, especially when it's not that heavy. So maybe it's off to Scan for some cheap Samsungs to test!

Unless you really need the performance, I think there's a lot to commend 5400RPM drives.
 
Hi all,

I've installed an LSI SAS 3081E-R RAID card into my Microserver, however I'm unable to access the configuration utility.

During the boot process I'm prompted to press CTRL+C to access the config utility, however the card never drops into it; instead the machine continues with the normal boot order.

While the card displays details on how to access the config utility, it also states the card has been "disabled by the user". I'm assuming once I can get into the utility I can enable it and configure as required.

I'm unsure if the "disabled by the user" information is linked to the following, but the card does not appear as a boot option in the HP BIOS.

Has anyone got any suggestions?
 
I've also noticed this weirdness with not being able to get into the LSI 3081E-R BIOS by pressing CTRL-C, and I've tried with two controllers. There's a similar issue with HP's own Smart Array P410, so I suspect the MicroServer BIOS.

I've just flashed the latest IR BIOS on the LSI but haven't checked, because I'm not using any RAID function, just JBOD, so it's not a problem for me. If you're using ZFS, don't bother with hardware RAID.
 
I don't think the BIOS in the HP Microserver leaves enough "room" for the LSI BIOS to load - it's a common problem. I also get the same problem with an LSI 9211-8i in the N36L, but fortunately having flashed the 9211-8i with IT (Initiator Target) firmware I've got no reason to access the LSI BIOS config as it's now just a dumb HBA.

I've just flashed the latest IR BIOS on the LSI but haven't checked, because I'm not using any RAID function, just JBOD, so it's not a problem for me. If you're using ZFS, don't bother with hardware RAID.

You don't want IR (Integrated RAID) firmware on your LSI card if you only want JBOD/dumb HBA functionality, which is the best option when using software RAID (eg. ZFS etc.). For JBOD functionality you want IT firmware.
 
I've also noticed this weirdness with not being able to get into the LSI 3081E-R BIOS by pressing CTRL-C, and I've tried with two controllers. There's a similar issue with HP's own Smart Array P410, so I suspect the MicroServer BIOS.

I've just flashed the latest IR BIOS on the LSI but haven't checked, because I'm not using any RAID function, just JBOD, so it's not a problem for me. If you're using ZFS, don't bother with hardware RAID.

Thanks for the prompt reply.

I shall flash the latest IR BIOS and check. I purchased the card in order to use a RAID setup supported by VMware ESXi, so hopefully I can get it working.

I have the latest HP BIOS applied to the MicroServer, so if the latest LSI BIOS doesn't resolve the issue, my method of last resort is to place the card in my i5 desktop and attempt to configure it with the drives attached while they're powered via the MicroServer!

I don't think the BIOS in the HP Microserver leaves enough "room" for the LSI BIOS to load - it's a common problem. I also get the same problem with an LSI 9211-8i in the N36L, but fortunately having flashed the 9211-8i with IT firmware I've got no reason to access the LSI BIOS config as it's now just a dumb HBA.

Any chance a custom HP BIOS may resolve the issue?
 
Any chance a custom HP BIOS may resolve the issue?

I suppose you could try the latest BIOS released Aug 2011, I'm going to try it now with the LSI 9211.

Edit: Didn't work for me - no change.
 
I shall go ahead and try setting up the card in my desktop with the drives attached and powered via the MicroServer. I'll report back tomorrow.
 
Hi all,

I managed to get into the configuration utility while the card was in another computer. I hooked up the drives, powered via the MicroServer, set up the RAID and enabled the adapter.

Swapped the card back into the MicroServer, booted up my ESXi 5 USB stick, and the hypervisor can see the RAID volume. Success!
 
Didn't see this in here, but it looks like the MicroServer has had an update: a slightly faster processor and a bit more memory (now 2GB).
 
HP Microserver N40L, with AMD Turion II Model Neo CPU and 2GB RAM for $20/£10 more than the N36L.

It seems the new CPU might be clocked at 1.5GHz rather than the 1.3GHz of the older AMD Athlon Neo in the N36L. Not a major improvement, but for only $20/£10 the extra speed might come in handy. I'm not convinced the additional 1GB is that appealing, though (particularly if it's supplied as a second 1GB stick, as that would make upgrading the memory impossible without throwing out the HP-supplied RAM).

No sign of HP offering £100 cashback on the N40L, however, in which case the N36L plus cashback will still be the more sensible choice (until stocks of the N36L disappear, which is likely to happen sooner rather than later with the introduction of the N40L).

Edit: Standard memory in the N40L is 1x2GB stick, leaving the second slot free. :)
 
HP Microserver N40L, with AMD Turion II Model Neo CPU and 2GB RAM for $20/£10 more than the N36L.

It seems the new CPU might be clocked at 1.5GHz rather than the 1.3GHz of the older AMD Athlon Neo in the N36L. Not a major improvement, but for only $20/£10 the extra speed might come in handy. I'm not convinced the additional 1GB is that appealing, though (particularly if it's supplied as a second 1GB stick, as that would make upgrading the memory impossible without throwing out the HP-supplied RAM).

No sign of HP offering £100 cashback on the N40L, however, in which case the N36L plus cashback will still be the more sensible choice (until stocks of the N36L disappear, which is likely to happen sooner rather than later with the introduction of the N40L).

Edit: Standard memory in the N40L is 1x2GB stick, leaving the second slot free. :)

Too bad that isn't a US discount :(
 
I've got one of these serving as my Amahi server. It's great.

My setup:
4GB non-ECC RAM
30GB Vertex boot drive
4x2TB Samsung F4 HD201U (w/ firmware update)

Running Fedora 14 and Amahi 6.

Right now the drives are a JBOD, but I want to RAID 5 them with mdadm.

I've seen people reporting mixed results with these drives and this setup, so I'd like to format the drives properly and run mdadm correctly so I don't need to do it again.

Anyone got any advice to share on this?

thanks,
gofasterplease
 
So I aligned the drives and formatted them with fdisk, then created the RAID 5 array. It seemed to work, but sdb1 fell out of the array... I see no problems, so I'm recreating the RAID.

update----

I created the RAID with mdadm, but by default mdadm builds a new RAID 5 degraded, with the last disk syncing in as a hot spare. When I used --force, it created the RAID array properly.

The array is working just fine... transfer rates are still about the same as when I was using one drive, around ~60MB/s read/write. Not sure why my throughput is slow, but it's nice to have one big drive.
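The steps described above, roughly as commands (a sketch: device names, chunk size and filesystem settings are illustrative, not the poster's exact invocation):

```shell
# Sketch of the sequence described above; device names, chunk size and
# filesystem settings are illustrative, not the exact commands used.

# 1. Partition each disk with the first sector at 2048 (1MiB) so the
#    partition is aligned for 4K-sector drives.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 2048s 100%

# 2. Create the RAID 5 array. Without --force, mdadm creates the array
#    degraded and syncs the last member in as though it were a spare;
#    --force creates it with all four members active immediately.
mdadm --create /dev/md0 --level=5 --raid-devices=4 --force \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# 3. Make the filesystem with stride/stripe-width matched to the array
#    (assuming mdadm's 512K default chunk and 4K blocks: stride = 128,
#    stripe-width = stride * 3 data disks = 384).
mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0
```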
 
So this is a massive thread, and I've read through some of it to try to answer my questions, but I think I may have to admit defeat and just ask.

I'm thinking about a NAS box - maybe the QNAP, Synology or the HP Proliant.

Do I need a RAID card if I put 6 drives in?
What's the process for fitting 5/6 drives? I see people saying they had to do one thing for 5, another for 6, etc.
Any idea if the LCD screen will work under WHS for SABnzbd?
 
No need for a RAID card. Ubuntu Server works fine in well under 1GB of memory and can do software RAID using mdadm. On gigabit LAN you'll still get 90MB/s transfer rates, so the CPU isn't stretched at all. You can use Webmin to administer the server too.

There are 6 SATA ports you can use, although one is external. There are 4 internal bays and one 5.25" bay. So the procedure is:
- Fill the 4 internal bays with drives.
- Fit an internal caddy in the 5.25" bay to hold a 3.5" drive. You'll also need an adapter cable to convert the optical-drive power plug to the SATA type.
- Use an external SATA drive on the eSATA socket. (Although I did read that someone squeezed a 2.5" drive inside and routed a cable from the external eSATA socket back to it.)
 
I'm having issues installing Windows 2003 Server x64 on the server. If I load the optimal BIOS defaults, which enable AHCI, the OS doesn't recognise the HDD during installation. If I disable AHCI, the computer freezes, or takes more than 2 hours to reboot after the blue-screen stages. If I force a restart, it hangs at 19 minutes during the installation, takes over 5 hours to complete those remaining 19 minutes, and also shows some drive write-access failures.

I managed to install Win 2003 x86 with no problem; it's just the x64 I'm having trouble with.

I have currently;

- Ultra micro tower
- AMD Athlon II Neo N36L / 1.3GHz
- 8 GB DDR3 SDRAM
- HDD: 1x 250GB, 1x 500GB, 1x 160GB, all SATA drives
 
Hi Everyone. Been reading this thread for months now and finally purchased an HP Microserver N36L. This will be my first server so I'm really excited to set it up. After reading up on all the Microserver threads here and elsewhere I'm trying to decide between WHSv1 or WHS 2011. Initially I'll have 2 x 1TB and 2 x 2TB setup with the 160GB in the ODD location. The primary purpose of the server will be to download torrents and store media (mostly not duplicated) and documents/pictures (duplicated).

I'm leaning toward WHSv1 since it has DE and seems to have the best flexibility for my needs. I'm new to WHS in general, but I'm not seeing a lot of benefit to WHS 2011, especially sans DE. With the recent announcement of DE-like functionality in Windows 8 Server, I'm thinking of skipping WHS 2011 altogether. My only concern with WHSv1 is that one of my 2TB drives is a Samsung HD204UI, which it seems some people have had issues with.

Looking for any recommendations or advice you guys may have. I also don't have an optical drive and could use a tutorial on how to install WHSv1 with the necessary drivers via USB stick. TIA
 
I love this little box

Especially since there is a nice cash-back offer in the UK (The N36L though, with 1.3Ghz and 250GB HDD).

I was looking for a small NAS with 'potential', so that was the best choice really. My main concern was a remote management card, so I had planned a system built around SuperMicro so I could add an IPMI card.

When a colleague read out loud that this thing supports IPMI 2.0 - I was flicking out my CC immediately.

Now the UK offer comes with 1.3Ghz (N36L), 1GB of RAM and 250GB SATA.

Seeing that the thing has an internal USB port, the main question was: shall I install ESXi? Well, OBVIOUSLY... however, fake RAID isn't recognised as such, and as a result the RAID 1 is seen as two single disks.

Didn't want to hack myself some RDMs and run software RAID, so 2008R2 it was (with the free iSCSI target). Needless to say, I got myself some RAM and upgraded it to its maximum of 8GB...

Then I thought: having a 24/7 NAS, I might as well use it for stuff like ripping DVDs, a media server and the like. The CPU isn't the strongest, but let's face it, the thing has plenty of time to churn through those rips while I'm at work or during the night...

Unfortunately some programs refused to install on Server (DVD Shrink), but once the installer (and later the executable) was added to the DEP exclusion list (DEP enabled), it all works like a charm...

(I tried Windows 7 under Hyper-V first but discovered it doesn't support USB passthrough - WTF - so scrapped it.)

I still installed ESXi on a USB stick which I can boot from when I want to test something for study purposes (simply because I bought the stick and it is too small to use for proper stuff, so might as well leave it in there).

For this one I used the built-in 250GB as the system disk. Then I added a spare 750GB enterprise-model SATA from Seagate for non-essential stuff, and two 2TB spindles (Seagate Green 5900RPM drives), mirrored, for "stuff to keep" such as iTunes.

A DVD burner is a no brainer :)

Now the fun bit: I read a few posts on other forums where people tried to put in 8 disks and a RAID controller and started modifying the case with a Dremel...

Just in case there are people here trying to accomplish the same, here is how I will "modify" my second one (on order), giving you 16TB of raw storage...

1. Get a SuperMicro mobile rack which fits 4x 2.5" disks, installed in the ODD bay
2. Get disks for the above (4x ST91000640NS) = 4TB
3. Get disks for the 4 internal bays (4x ST33000650NS) = 12TB

Total: 16TB raw storage

RAID card... onboard obviously won't cut it, but the 8-channel cards from Adaptec, 3ware and Areca come with low-profile brackets, so they fit in the PCIe slot.

The SuperMicro mobile rack has one power connector (already in place for the ODD drive) and 4 SATA connectors. The 4 internal bays are connected via a backplane, but the backplane itself is connected by a multi-lane cable with a 4x SATA fan-out going straight to the motherboard.

The Adaptec 5805, for example, has 2x multi-lane fan-outs, so you can connect all 8 disks just fine.

This box would be even better with 16GB, but as mass storage with a small footprint: brilliant :)
 
I picked one up a week ago after a Synology DS108j could no longer serve my needs.

Booting FreeNAS 8 from an internal 8GB USB stick, stuffed in 2x 4GB sticks of RAM that fit perfectly without removing the heatspreaders, and 4x 2TB Hitachi 7200RPM drives using raidz.

Without doing any tuning and simply using an arbitrary Windows Explorer file copy as the measure, I'm seeing 100MB/sec reads and 70MB/sec writes over gigabit ethernet, using a Win7 laptop as the CIFS client.

There is no need, in my mind, for 7200RPM disks in this; gigabit will be the limiting factor unless you're recovering from a bad disk etc. (knock on wood).
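The gigabit-is-the-bottleneck point is easy to sanity-check (a sketch; the ~6% framing overhead figure is an assumption for TCP/IP over Ethernet):

```python
# Back-of-envelope: why gigabit ethernet caps file-transfer speed.
GIGE_BITS_PER_SEC = 1_000_000_000

raw_MBps = GIGE_BITS_PER_SEC / 8 / 1e6   # 125 MB/s on the wire
# Ethernet/IP/TCP framing eats roughly 5-6% (the exact figure is an
# assumption), leaving a practical ceiling around 110-118 MB/s,
# below what 4 striped spindles of either speed can deliver locally.
usable_MBps = raw_MBps * 0.94

print(f"raw {raw_MBps:.0f} MB/s, usable ~{usable_MBps:.0f} MB/s")
```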

Extremely happy with this little box so far. It should last a couple of years without needing to expand to something larger, and if it does, I can pick up an external enclosure and a simple JBOD controller and double the storage pool capacity pretty easily... or just pick up another one.

Anybody using external enclosures with it? Would love to see pictures if you are.
 
I am looking to buy a hardware RAID controller to add more disks. I just experienced an issue with an LSI SAS3081E controller (which I pulled from another system) where I couldn't enter its BIOS. Is this always an issue with hardware RAID controllers in this box, or just a one-off? Can someone provide some info on a card they've used where they were able to configure the RAID functionality using its BIOS?
 
I am looking to buy a hardware RAID controller to add more disks. I just experienced an issue with an LSI SAS3081E controller (which I pulled from another system) where I couldn't enter its BIOS. Is this always an issue with hardware RAID controllers in this box, or just a one-off? Can someone provide some info on a card they've used where they were able to configure the RAID functionality using its BIOS?

The Adaptec 2405 / 5805 / 3805 are perfect and come with a low-profile bracket as well. Areca also have a low-profile model. As the server has a multi-lane connector to the backplane, they all work just fine. If you intend to use only 4 disks then the 2405 is the best value, tbh.
 
The Adaptec 2405 / 5805 / 3805 are perfect and come with a low-profile bracket as well. Areca also have a low-profile model. As the server has a multi-lane connector to the backplane, they all work just fine. If you intend to use only 4 disks then the 2405 is the best value, tbh.

Gomjaba, thanks for the quick reply. Have you used any of those cards in the HP MicroServer, and if so, were you able to configure the RAID functionality using the controller BIOS? I'm asking because I can't even access the LSI SAS3081E BIOS, and I read earlier in this thread that it's an issue with the MicroServer not having enough room to load the controller BIOS.
 
Gomjaba, thanks for the quick reply. Have you used any of those cards in the HP MicroServer, and if so, were you able to configure the RAID functionality using the controller BIOS? I'm asking because I can't even access the LSI SAS3081E BIOS, and I read earlier in this thread that it's an issue with the MicroServer not having enough room to load the controller BIOS.

I've worked with the RAID controller before, obviously, but not (yet) in the MicroServer.

Give me half an hour; I'll try that for you...
 
Yes, it does work.

Here it is, inserted in the PCIe slot:
2w49t7s.jpg


Here it's back in the case with the fan-out connected:
t6rw49.jpg


11b5kxt.jpg


And yes, you can get into its BIOS:

348irmb.jpg


Granted, the card used here is its bigger brother - but the 2405 works too :)
 
Yes, it does work.
[images snipped]
Granted, the card used here is its bigger brother, but the 2405 works too :)

Awesome!!! thanks mate ordering that card now. Thanks a ton
 
So re-use the fan-out from the backplane, add one of these into the ODD bay

z3law.jpg


And you've got yourself a 16TB NAS :)
 
Here is my MicroServer with an Edge10 8-bay external DAS.

119csn5.jpg


At the moment it has a cheap eSATA card installed that lets me connect to the DAS using 2 eSATA cables. I was thinking of getting an Adaptec 1045, which supports 4 external ports, so in theory I could add another 8-bay DAS in the future.
 
Hi all, I seem to be getting really slow network transfer speeds on my server (about 6MB/s over a gigabit network). The setup:

OpenIndiana oi_151a installed on the 250GB HDD
4x 2TB Samsung HD204UI drives in the zpool
napp-it to manage
8GB RAM

test env:
desktop running Win7 with gigabit ethernet
ReadyNAS with gigabit ethernet
brand-new Netgear 16-port gigabit switch
new Cat 5e cabling throughout the network

Some internal dd tests:

WRITE
dd if=/dev/zero of=/datapool/dd.tst bs=1024000 count=10000
10000+0 records in
10000+0 records out
10240000000 bytes (10 GB) copied, 43.7487 s, 234 MB/s

READ
dd if=/datapool/dd.tst of=/dev/null bs=1024000
10000+0 records in
10000+0 records out
10240000000 bytes (10 GB) copied, 28.479 s, 360 MB/s

os Drive write:
dd if=/dev/zero of=test/dd.tst bs=1024000 count=10000
10000+0 records in
10000+0 records out
10240000000 bytes (10 GB) copied, 137.347 s, 74.6 MB/s

os Drive READ:
dd if=test/dd.tst of=/dev/null bs=1024000
10000+0 records in
10000+0 records out
10240000000 bytes (10 GB) copied, 93.259 s, 110 MB/s

The internal speeds seem good, but the speed over the network is appalling! It took me just over an hour to do a dd test with the 10GB file (over NFS, and the same over CIFS), and speeds were only about 4.8MB/s to the desktop and about the same to the ReadyNAS.

Transfers from the desktop to the ReadyNAS go well above that, and all other transfers on the network work fine. I've narrowed the problem down to the network card on the MicroServer. I've tried running Knoppix 6.0, the latest Ubuntu live CD and a Debian install, all with the same issue. Are there any BIOS settings I may have missed, or something firmware-wise that could cause this? Anyone else having this problem?
 
Here is my MicroServer with an Edge10 8-bay external DAS.

At the moment it has a cheap eSATA card installed that lets me connect to the DAS using 2 eSATA cables. I was thinking of getting an Adaptec 1045, which supports 4 external ports, so in theory I could add another 8-bay DAS in the future.

What card (the cheap one you mention) are you using?
 
Hi all, I seem to be getting really slow network transfer speeds on my server (about 6MB/s over a gigabit network)...

The internal speeds seem good, but the speed over the network is appalling! I've narrowed the problem down to the network card on the MicroServer.

Have you tried a simple iperf test to rule out if it's the network at fault or the protocol layer on top?
 
Have you tried a simple iperf test to rule out if it's the network at fault or the protocol layer on top?

results below :-

Microserver :-
iperf -c 192.xxx.xxx.xxx -t 60
------------------------------------------------------------
Client connecting to 192.xxx.xxx.xxx, TCP port 5001
TCP window size: 48.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.xxx.xxx.xxx port 39518 connected with 192.xxx.xxx.xxx port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 136 MBytes 19.1 Mbits/sec

Readynas :-
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (default)
------------------------------------------------------------
[ 6] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 39518
[ 6] 0.0-60.0 sec 136 MBytes 19.1 Mbits/sec


10 multiple connections results

Microserver :-
iperf -c 192.xxx.xxx.xxx -P 10
------------------------------------------------------------
Client connecting to 192.xxx.xxx.xxx, TCP port 5001
TCP window size: 48.0 KByte (default)
------------------------------------------------------------
[ 12] local 192.xxx.xxx.xxx port 57797 connected with 192.xxx.xxx.xxx port 5001
[ 3] local 192.xxx.xxx.xxx port 48766 connected with 192.xxx.xxx.xxx port 5001
[ 4] local 192.xxx.xxx.xxx port 34250 connected with 192.xxx.xxx.xxx port 5001
[ 6] local 192.xxx.xxx.xxx port 62511 connected with 192.xxx.xxx.xxx port 5001
[ 5] local 192.xxx.xxx.xxx port 33149 connected with 192.xxx.xxx.xxx port 5001
[ 7] local 192.xxx.xxx.xxx port 38077 connected with 192.xxx.xxx.xxx port 5001
[ 9] local 192.xxx.xxx.xxx port 61893 connected with 192.xxx.xxx.xxx port 5001
[ 8] local 192.xxx.xxx.xxx port 59938 connected with 192.xxx.xxx.xxx port 5001
[ 10] local 192.xxx.xxx.xxx port 37415 connected with 192.xxx.xxx.xxx port 5001
[ 11] local 192.xxx.xxx.xxx port 55142 connected with 192.xxx.xxx.xxx port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 7.41 MBytes 6.21 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 8] 0.0-10.0 sec 4.99 MBytes 4.18 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 5.17 MBytes 4.33 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 4.41 MBytes 3.68 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 9] 0.0-10.1 sec 5.05 MBytes 4.21 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 12] 0.0-10.1 sec 5.70 MBytes 4.75 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.1 sec 3.72 MBytes 3.10 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 7] 0.0-10.1 sec 4.30 MBytes 3.57 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 11] 0.0-10.1 sec 4.81 MBytes 3.99 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 10] 0.0-10.2 sec 4.16 MBytes 3.44 Mbits/sec
[SUM] 0.0-10.2 sec 49.7 MBytes 41.0 Mbits/sec

Readynas :-
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (default)
------------------------------------------------------------
[ 6] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 34250
[ 12] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 59938
[ 13] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 55142
[ 14] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 37415
[ 15] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 57797
[ 7] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 48766
[ 8] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 62511
[ 9] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 33149
[ 10] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 38077
[ 11] local 192.xxx.xxx.xxx port 5001 connected with 192.xxx.xxx.xxx port 61893
[ 11] 0.0-10.0 sec 5.05 MBytes 4.23 Mbits/sec
[ 6] 0.0-10.0 sec 7.41 MBytes 6.19 Mbits/sec
[ 12] 0.0-10.1 sec 4.99 MBytes 4.17 Mbits/sec
[ 13] 0.0-10.1 sec 4.81 MBytes 4.01 Mbits/sec
[ 15] 0.0-10.1 sec 5.70 MBytes 4.75 Mbits/sec
[ 14] 0.0-10.1 sec 4.16 MBytes 3.47 Mbits/sec
[ 9] 0.0-10.1 sec 5.17 MBytes 4.30 Mbits/sec
[ 8] 0.0-10.1 sec 3.72 MBytes 3.09 Mbits/sec
[ 7] 0.0-10.1 sec 4.41 MBytes 3.66 Mbits/sec
[ 10] 0.0-10.1 sec 4.30 MBytes 3.57 Mbits/sec
[SUM] 0.0-10.1 sec 49.7 MBytes 41.3 Mbits/sec

Does that mean the issue is protocol level?
 
results below :-

Microserver :-
iperf -c 192.xxx.xxx.xxx -t 60
[ 3] 0.0-60.0 sec 136 MBytes 19.1 Mbits/sec

10 parallel connections:
[SUM] 0.0-10.2 sec 49.7 MBytes 41.0 Mbits/sec

Does that mean the issue is protocol level?

Where are you doing the iperf from? The Windows machine?

Windows --> Switch --> Microserver
Windows --> Switch --> ReadyNAS

If you're seeing the same issue from the Windows machine to both devices, I would suspect the Windows box or the switch.

What happens if you do the same test between the MicroServer and the ReadyNAS?

The outputs you've posted look very slow for a GigE network.
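For that test, something like the following (addresses are placeholders):

```shell
# Run iperf directly between the two NAS boxes, taking the Windows
# client and its NIC out of the equation. Addresses are placeholders.

# On the ReadyNAS (server side):
iperf -s

# On the MicroServer (client side):
iperf -c 192.168.1.20 -t 30

# A healthy GigE path reports roughly 900+ Mbits/sec. If this pair also
# shows ~20 Mbits/sec, suspect the MicroServer's NIC, driver, cable or
# switch port; check with ifconfig/dmesg that the link negotiated
# 1000baseT full-duplex rather than falling back to 100baseTX.
```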
 