10 Gb Connection

N Bates

I have purchased two Intel X540-T2 NICs and connected my Windows 10 PC and the OmniOS 101526 NAS. I have updated the drivers on the Windows PC; I left the driver in the NAS as-is, as I don't know how to update it.

I have given both static addresses and enabled Jumbo frames. I have connected both with an Ethernet cable, and both link lights are showing green.

I have added a network connection on the Windows machine for both NICs; however, when I copy files between the two machines it still looks like the 1Gb connection is being used. Is there any way I can force the transfers to use the 10GbE link?
 
You probably put them in the same network range your PC was already using, which is the default.

Either change the static IPs on both boxes, or add a static route to the NAS IP in Windows to force traffic to go over the 10Gb interface.
 
I have put both NICs outside the network range: the network is on 192.168.1.x, and the NICs are at 10.10.10.10 and 10.10.10.20.

The NAS has already got a static IP, and I have added the NAS IP to the Windows \etc\hosts file. Is this what you mean by forcing it to go over the 10Gb interface?
 
Go into the adapter properties and make sure the speed is set to 10G; those cards support 1G as well. I'm assuming you're using Cat 6 cables? Cat 5 does not support 10G.
 
The adapter properties in Windows show 10G, and I am using Cat 6A cable.

On the NAS side, I can see in Napp-it that the adapter there is at 10G also.
 
You’ll also need to change the network connection priority in Windows to move the 10Gb connection to primary.
 
The 10Gb doesn't need to be primary; if he's hitting something on 10.x.x.x and he has a NIC in that range, it will go out that NIC instead of the default one.

In the etc hosts file, did you add the NAS as the 10.10.10.20 IP? Or was it in there before, from the original IP on the 192 subnet?

From your Windows PC, can you ping/traceroute the NAS's 10.x IP address?
 
In etc\hosts on the Windows machine I have the entries below, which are the NAS IPs:

10.10.10.10 NAS
192.168.1.x NAS
10.10.10.15 NAS

The 10.10.10.15 is the second port on the Intel NIC, but it is not currently used. Should I remove the 192.168.1.x IP of the NAS and leave only the ones referring to the Intel X540 NIC?

I have been reading a bit, and I think I may be using the Windows machine's \etc\hosts file wrongly; the IP addresses in there should only be for the Windows machine and not the NAS. Not 100% sure though; I will keep trying and report back.
 
The way Windows works, it looks at the hosts file before doing a DNS lookup. If you are mounting it as 'NAS', then when Windows tries to associate that name with an IP, it looks at the hosts file; if there is no entry, it does a DNS lookup. So the IP addresses in the hosts file are for other machines, not for the Windows PC itself.
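That lookup order can be sketched in a few lines. This is a hypothetical, simplified parser (real Windows resolution also involves caching and NetBIOS, and 192.168.1.50 here is a made-up stand-in for the 192 subnet entry), shown only to illustrate that the first matching hosts entry wins:

```python
def resolve(name, hosts_text):
    """Return the first IP mapped to `name` in hosts-file text, or None.

    Mirrors the order described above: hosts file first,
    DNS lookup only on a miss.
    """
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip
    return None  # caller would fall back to a DNS query here

hosts = """\
10.10.10.10 NAS
192.168.1.50 NAS
"""
print(resolve("NAS", hosts))  # 10.10.10.10 - the first entry wins
```

With two entries for the same name, whichever appears first is the one every connection to "NAS" will use, which is why a stale 192.168.1.x line can silently pin traffic to the 1G adapter.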

What IP gets resolved when you ping the NAS by name from the Windows PC?
 
This is really strange. I could access the NAS from the Windows PC before when typing the NAS IP, but now when I ping the NAS I get "request timed out", and when I ping the Windows PC from the NAS I get "no answer". I might have to change the cable and see if that makes a difference.

EDIT:

Do I have to create new shares for the 10Gb NICs to use, instead of the existing share?
 
I would never use two NICs with IPs from the same subnet.
This can result in traffic that goes in through one NIC and tries to return through the other, never finding its way back.

If connecting via IP works, set only that entry in the hosts file.

For initial tests, remove Jumbo frames, as they can be good for performance or a source of trouble.

On Windows I would disable interrupt throttling in the driver settings for the NIC.
Increasing TCP buffers on OmniOS can also improve 10G performance.
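The TCP-buffer advice can be sanity-checked with the bandwidth-delay product: a connection can never have more unacknowledged data in flight than its window, so throughput is capped at window/RTT. A back-of-the-envelope sketch (the RTT values are assumptions for a direct LAN link):

```python
def min_tcp_buffer(bandwidth_bits_per_s, rtt_s):
    """Bandwidth-delay product: the smallest TCP window (in bytes)
    that can keep a link of the given speed busy at the given RTT."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# A 10 Gbit/s link with a 1 ms round-trip time needs ~1.25 MB in flight,
# which is why small send/receive buffers can throttle a 10G transfer:
print(min_tcp_buffer(10e9, 0.001))  # 1250000 bytes
print(min_tcp_buffer(1e9, 0.001))   # 125000 bytes - 1G is far less demanding
```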

btw
You cannot bind a share to a NIC, as the SMB server listens on all NICs.
 
I have purchased two Intel X540-T2 NICs and connected my Windows 10 PC and the OmniOS 101526 NAS. I have updated the drivers on the Windows PC; I left the driver in the NAS as-is, as I don't know how to update it.

I have given both static addresses and enabled Jumbo frames. I have connected both with an Ethernet cable, and both link lights are showing green.

I have added a network connection on the Windows machine for both NICs; however, when I copy files between the two machines it still looks like the 1Gb connection is being used. Is there any way I can force the transfers to use the 10GbE link?

Seems everyone forgot that you can only go as fast as the HD can read and write. Do your NAS and PC have SSDs? And if so, are you copying between the SSDs? Regular spinning disks will never be able to exceed what a 1Gb connection can do. You need SSDs for that.
 
So, should the static IPs of the dual-port NIC be like:
10.10.10.10
10.10.11.15

and the second dual-port NIC be:
10.10.12.20
10.10.13.25

Would the above be better? Is that what you mean by two NICs not being in the same subnet?

This is what I have in the hosts file of the Windows machine:
10.10.10.10 NAS

And in the hosts file on OmniOS:
10.10.10.20 windows PC host name

Is the above incorrect? It is strange that I could access the NAS share from the Windows PC for about a day, and then it stopped working.

I will disable jumbo frames on both the Windows PC and the OmniOS NAS and try and see if it makes a difference.

Also, interrupt throttling is disabled on the Windows PC.

@sniper, I can SSH into the NAS; however, I am not sure how to update the driver on OmniOS.
 
You're trying to connect BOTH 10G ports of each host to each other? You can't really do that without some kind of link aggregation, and even then it won't be much use for file transfers.

Just use one connection from each NIC; each side has to be in the same subnet (not your main one), so Windows PC at 10.10.10.10 (with a mask of 255.255.255.0) and a NAS IP of 10.10.10.20 (with a mask of 255.255.255.0).
 
Sorry, forgot to say that just one port of each NIC is connected currently, the 10.10.10.10 and the 10.10.10.20; the other two ports are spares, ready for when I get the backup server built.
 
Do what Eickst suggests (connect both NICs with IPs from the same subnet). If you want to connect the second port from each to a backup server later, add IPs for them then, from a different subnet.

About updates on OmniOS:
OmniOS stable is a freeze of Illumos development with a dedicated repository per release where only critical bugs are fixed. As the ixgbe driver is maintained at Illumos (the common development project for Nexenta, OmniOS, OpenIndiana, SmartOS and others), a "pkg update" only delivers a new driver release for critical fixes. For non-critical updates, the next stable (every 6 months) or a switch to the bloody release (newest beta) brings new features.

For the current ixgbe driver I only know of one bug (SFP+ problems with some fiber transceivers). For open issues, see https://www.illumos.org/issues
 
Thank you guys for all your help, I will do what you suggest and report back.

EDIT:
Looks like disabling the static IPs on the unused second ports of both NICs made it work; now I can access the NAS from the Windows PC. However, the file transfer speed is still slower than 1G.

I can usually transfer files at between 60 and 70 MB/s on the 1G, and now it is between 50 and 60 MB/s on the 10G NICs. To confirm that I am on the 10G network, I have disabled the 1G NIC.

Hopefully this is just a settings issue that can be fixed to make the X540s work.
 
So, should the static IPs of the dual-port NIC be like:
10.10.10.10
10.10.11.15

and the second dual-port NIC be:
10.10.12.20
10.10.13.25

Would the above be better? Is that what you mean by two NICs not being in the same subnet?

This is what I have in the hosts file of the Windows machine:
10.10.10.10 NAS

And in the hosts file on OmniOS:
10.10.10.20 windows PC host name

Is the above incorrect? It is strange that I could access the NAS share from the Windows PC for about a day, and then it stopped working.

I will disable jumbo frames on both the Windows PC and the OmniOS NAS and try and see if it makes a difference.

Also, interrupt throttling is disabled on the Windows PC.

@sniper, I can SSH into the NAS; however, I am not sure how to update the driver on OmniOS.

If your netmask on both is a /23, then sure...
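That remark can be checked with Python's standard ipaddress module: under a /23 mask (255.255.254.0) the 10.10.10.x and 10.10.11.x addresses do land in one subnet, while a /24 would split them:

```python
import ipaddress

net23 = ipaddress.ip_network("10.10.10.0/23")  # 10.10.10.0 - 10.10.11.255
net24 = ipaddress.ip_network("10.10.10.0/24")  # 10.10.10.0 - 10.10.10.255

print(ipaddress.ip_address("10.10.10.10") in net23)  # True
print(ipaddress.ip_address("10.10.11.15") in net23)  # True
print(ipaddress.ip_address("10.10.11.15") in net24)  # False
```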
 
Thank you guys for all your help, I will do what you suggest and report back.

EDIT:
Looks like disabling the static IPs on the unused second ports of both NICs made it work; now I can access the NAS from the Windows PC. However, the file transfer speed is still slower than 1G.

I can usually transfer files at between 60 and 70 MB/s on the 1G, and now it is between 50 and 60 MB/s on the 10G NICs. To confirm that I am on the 10G network, I have disabled the 1G NIC.

Hopefully this is just a settings issue that can be fixed to make the X540s work.

Can you run iperf across the interfaces to see what the actual network transfer rate is, and take file transfers out of the equation?

Also, there are tons of guides out there for tuning adapters for performance.
 
I think I see two problems causing your issue.
1 - You have multiple entries for your NAS in the hosts file, including one that matches your main adapter's subnet. If Windows grabs that IP during a lookup, it will ALWAYS use the wrong adapter. Have only a single entry pointing at the static IP of the NAS's 10G LAN.
2 - The 10G adapters are on different subnets. With that setup, Windows will NEVER use your 10G adapter to route traffic to your NAS. If your 10G adapter is configured as 10.0.0.1/24, Windows will ONLY use that adapter to route traffic to IPs in the range 10.0.0.0 - 10.0.0.255. If your NAS is configured as 10.0.1.5/24, that subnet does not match your 10G adapter in Windows, so it will use the default route (the 1G adapter). This is most likely the cause of your issue.


Here is what I would do to try and diagnose.

- Disable the second port on the X540-T2 that you are not using on both client/server, so only one is enabled
- Connect client/server directly with the Cat 6a cable on the enabled 10G port
- Set static IPs for both client/server in the same subnet (10.0.0.1 and 10.0.0.2, or something similar), but a different subnet than your main 1G adapter.
---- On that adapter, make sure GW/DNS are not configured, just IP and subnet (/24 or smaller). You can enable jumbo frames on both sides as well.
- Update your hosts file so the only entry for your NAS server is its static IP on the 10G LAN


This will force all traffic to the NAS over the 10G LAN, and all other traffic will go out the default gateway configured on the other adapter. Do a file transfer with Resource Monitor open to confirm the 10G adapter is being used. If you are still getting a 1G limit, it sounds like a config setting on the NAS side. I'm not familiar with that product, so I can't help you there.
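The subnet match described in point 2 can be reproduced with Python's ipaddress module; `on_link` is a hypothetical helper mirroring the test Windows applies before falling back to the default route:

```python
import ipaddress

def on_link(adapter_cidr, destination):
    """True if `destination` is inside the adapter's subnet, i.e. the
    adapter is chosen; otherwise the default route (1G here) is used."""
    network = ipaddress.ip_network(adapter_cidr, strict=False)
    return ipaddress.ip_address(destination) in network

print(on_link("10.0.0.1/24", "10.0.0.5"))  # True  - routed via the 10G NIC
print(on_link("10.0.0.1/24", "10.0.1.5"))  # False - falls to the 1G default
```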
 
Chiming in here, I recommend you do an iperf test.

I have a 10GbE lab at work.

10GbE Workstation
10GbE Switch
10GbE QNAP NAS

9.54 Gb/s

It's lovely!
 
That depends on whether you look at

- sequential performance
(one user writes/reads a large file from an empty array)

In this case a single hard disk gives you around 200 MB/s (twice 1G).
An array of 4 disks in Raid-0 is capable of delivering full 10G performance.

see chapter 2.3 in http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf


- random performance
(small files, many users, or high fragmentation)
This is where disks are bad. Expect around 40-80 MB/s from a disk under average random load.
With many very small files and multi-user access this can go down to a few MB/s.

This is where SSDs or NVMe are much better, and Intel Optane is the best. A system like ZFS with enough RAM for read/write caching can help.

See chapter 2.3 in http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
and compare allcache vs nocache for the effects of RAM (randomrw 1.2 MB/s vs 271 MB/s on a 4-disk Raid-0 array)


- sync write performance
This is a very special workload, for when you need crash-safe write behaviour.
With a disk-based pool without an Slog you get a few MB/s; an SSD pool can go up to a few hundred MB/s with the best SSDs.
But as sync write performance is mainly limited by the Slog (in the case of ZFS), you can get 10G sync write performance sequentially (400 MB/s - 1000 MB/s) if you use Intel Optane, or add an Intel Optane 800P/900P as Slog to a larger SSD or even a disk pool.
https://forums.servethehome.com/ind...s-pools-and-slog-on-solaris-and-omnios.19810/
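As a rough cross-check of the sequential numbers above (assuming ideal linear RAID-0 scaling and the ~200 MB/s per-disk figure, both of which are optimistic round numbers):

```python
def raid0_seq_mb_s(disks, per_disk_mb_s):
    """Idealized sequential RAID-0 throughput: linear scaling,
    ignoring controller, filesystem and protocol overhead."""
    return disks * per_disk_mb_s

ten_gbe_line_rate_mb_s = 10e9 / 8 / 1e6  # 10 Gbit/s = 1250 MB/s

print(raid0_seq_mb_s(1, 200))  # 200 MB/s - one disk, about twice 1 GbE
print(raid0_seq_mb_s(4, 200))  # 800 MB/s - four disks, well past 1 GbE
print(ten_gbe_line_rate_mb_s)  # 1250.0 MB/s ceiling of the 10G link
```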
 
Seems everyone forgot that you can only go as fast as the HD can read and write. Do your NAS and PC have SSDs? And if so, are you copying between the SSDs? Regular spinning disks will never be able to exceed what a 1Gb connection can do. You need SSDs for that.

How is that? My WD Gold 10TB hits 230-260 MB/s during backups, and it's the same read/write. I am sure there are faster spinners than this, but maybe not by much.
 
I just need to figure out how to enable iperf via Napp-it; is this already included with OmniOS or Napp-it, or do I need to download and install it?
 
iperf3 is included in napp-it (server and client).

iperf3 server: see menu Services > iperf server
iperf3 client: see menu System > Network Eth > iperf client

If you use OmniOS as the server, start the iperf3 client on another machine;
otherwise start the server there and use the client on OmniOS.
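If iperf3 refuses to start on one side, a crude stand-in can be improvised with Python's socket module. This is a hypothetical sketch, not a substitute for iperf3: it measures raw TCP throughput over loopback as written; in practice you would run the server half bound to the 10G address (e.g. 10.10.10.10) and connect from the other machine.

```python
import socket
import threading
import time

PAYLOAD = b"\0" * 65536          # 64 KiB chunks
TOTAL = 64 * 1024 * 1024         # send 64 MiB per run

def drain(listener):
    """Accept one connection and read until the peer closes it."""
    conn, _ = listener.accept()
    with conn:
        while conn.recv(65536):
            pass

def run(host="127.0.0.1"):
    listener = socket.socket()
    listener.bind((host, 0))     # port 0 = pick any free port
    listener.listen(1)
    port = listener.getsockname()[1]
    t = threading.Thread(target=drain, args=(listener,))
    t.start()
    start = time.monotonic()
    sent = 0
    with socket.create_connection((host, port)) as c:
        while sent < TOTAL:
            c.sendall(PAYLOAD)
            sent += len(PAYLOAD)
    t.join()                     # wait until the receiver has drained
    listener.close()
    elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e9   # Gbit/s

print(f"{run():.2f} Gbit/s over loopback")
```

A single stream like this is closer to `iperf3` without `-P`; real NICs, drivers and firewalls will dominate the result far more than the script does.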
 
How do I enable the server? It says that it is disabled:

Iperf3 Server

If you want to run a network bench, start Iperf3 server on one machine and then start client in
System > Network Eth on the other client or appliance to run a benchmark.

Current state Iperf3 server: disabled


EDIT:
With the Windows firewall off:
Running the OmniOS NAS as the server and Windows (PowerShell) as the client, I get the below:
./iperf3.exe -c 10.10.10.10
iperf3: error - unable to connect to server: Connection refused

And the other way around, with the NAS as the client and the Windows machine as the server, it just keeps running without results.
 
I think I see two problems causing your issue.
1 - You have multiple entries for your NAS in the hosts file, including one that matches your main adapter's subnet. If Windows grabs that IP during a lookup, it will ALWAYS use the wrong adapter. Have only a single entry pointing at the static IP of the NAS's 10G LAN.
2 - The 10G adapters are on different subnets. With that setup, Windows will NEVER use your 10G adapter to route traffic to your NAS. If your 10G adapter is configured as 10.0.0.1/24, Windows will ONLY use that adapter to route traffic to IPs in the range 10.0.0.0 - 10.0.0.255. If your NAS is configured as 10.0.1.5/24, that subnet does not match your 10G adapter in Windows, so it will use the default route (the 1G adapter). This is most likely the cause of your issue.


Here is what I would do to try and diagnose.

- Disable the second port on the X540-T2 that you are not using on both client/server, so only one is enabled
- Connect client/server directly with the Cat 6a cable on the enabled 10G port
- Set static IPs for both client/server in the same subnet (10.0.0.1 and 10.0.0.2, or something similar), but a different subnet than your main 1G adapter.
---- On that adapter, make sure GW/DNS are not configured, just IP and subnet (/24 or smaller). You can enable jumbo frames on both sides as well.
- Update your hosts file so the only entry for your NAS server is its static IP on the 10G LAN


This will force all traffic to the NAS over the 10G LAN, and all other traffic will go out the default gateway configured on the other adapter. Do a file transfer with Resource Monitor open to confirm the 10G adapter is being used. If you are still getting a 1G limit, it sounds like a config setting on the NAS side. I'm not familiar with that product, so I can't help you there.

I have got the settings as per the above, and the Windows hosts file entry is the NAS 10G entry only. What should the NAS-side hosts file on OmniOS be? I have got one entry in there for the 10G Windows IP and hostname; is this correct?
 
Napp-it comes with iperf3 binaries. They are installed if /usr/bin/iperf3 is missing.
Depending on OS and setup it may be that a different version is installed or libs are missing.

Then execute (console or napp-it cmd field) the following command:
rm /usr/bin/iperf3

If you then try to start the iperf server, it will re-install the included iperf.


About the hosts file:
You do not need to add anything if you use an IP to connect.
The hosts file is there to use/translate hostnames to IPs in case of a missing DNS server or DNS entry.

Wrong entries may result in problems, so for first tests do not add anything and use IP addresses.
 
How is that? My WD Gold 10TB hits 230-260 MB/s during backups, and it's the same read/write. I am sure there are faster spinners than this, but maybe not by much.

Umm, MB/s doesn't equal Gb/s. Read the link I posted.
 
The below is what I'm getting after executing 'rm /usr/bin/iperf3'; this is with the Windows firewall off:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\N> ./iperf3.exe -c 10.10.10.10
Connecting to host 10.10.10.10, port 5201
[ 4] local 10.10.10.20 port 62655 connected to 10.10.10.10 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 360 MBytes 3.02 Gbits/sec
[ 4] 1.00-2.00 sec 324 MBytes 2.72 Gbits/sec
[ 4] 2.00-3.00 sec 384 MBytes 3.22 Gbits/sec
[ 4] 3.00-4.00 sec 364 MBytes 3.06 Gbits/sec
[ 4] 4.00-5.00 sec 386 MBytes 3.23 Gbits/sec
[ 4] 5.00-6.00 sec 388 MBytes 3.24 Gbits/sec
[ 4] 6.00-7.00 sec 207 MBytes 1.74 Gbits/sec
[ 4] 7.00-8.00 sec 226 MBytes 1.90 Gbits/sec
[ 4] 8.00-9.00 sec 268 MBytes 2.26 Gbits/sec
[ 4] 9.00-10.00 sec 374 MBytes 3.14 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 3.21 GBytes 2.75 Gbits/sec sender
[ 4] 0.00-10.00 sec 3.21 GBytes 2.75 Gbits/sec receiver

iperf Done.

When using the NAS as the client and the Windows machine as the server, the test runs for longer than 10 s without results, with a "connection timed out" message in Napp-it:

PS C:\Users\N> ./iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

On Napp-it
For results, please wait 10s..
iperf3: error - unable to connect to server: Connection timed out

number of streams: 4


EDIT:

The below is with the TCP settings changed to the following:

ndd -set /dev/ip ip_lso_outbound 0
ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p recv_buf=1048576 tcp
ipadm set-prop -p send_buf=1048576 tcp


Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\N> ./iperf3.exe -c 10.10.10.10
Connecting to host 10.10.10.10, port 5201
[ 4] local 10.10.10.20 port 64653 connected to 10.10.10.10 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 355 MBytes 2.98 Gbits/sec
[ 4] 1.00-2.00 sec 404 MBytes 3.39 Gbits/sec
[ 4] 2.00-3.00 sec 400 MBytes 3.36 Gbits/sec
[ 4] 3.00-4.00 sec 360 MBytes 3.02 Gbits/sec
[ 4] 4.00-5.00 sec 198 MBytes 1.66 Gbits/sec
[ 4] 5.00-6.00 sec 349 MBytes 2.93 Gbits/sec
[ 4] 6.00-7.00 sec 401 MBytes 3.36 Gbits/sec
[ 4] 7.00-8.00 sec 314 MBytes 2.63 Gbits/sec
[ 4] 8.00-9.00 sec 383 MBytes 3.21 Gbits/sec
[ 4] 9.00-10.00 sec 398 MBytes 3.34 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 3.48 GBytes 2.99 Gbits/sec sender
[ 4] 0.00-10.00 sec 3.48 GBytes 2.99 Gbits/sec receiver

iperf Done.
PS C:\Users\N> ./iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Still getting the below when using the NAS as the client:

For results, please wait 10s..
iperf3: error - unable to connect to server: Connection timed out

number of streams: 4
 
Anyone know how to disable intr_throttling in OmniOS or Napp-it? The below does not seem to work:

ndd -set /dev/ixgbe0 intr_throttling 1
 
This is an entry in /kernel/drv/ixgbe.conf:
intr_throttling = 0;

see also napp-it menu System > Tuning

But unlike Windows, which is pre-optimized for foreground/desktop performance, I would not expect a relevant difference there.
On Windows this setting is important - together with the newest drivers, e.g. from Intel, to go beyond 300 MB/s.
 
Those stats confirm it's at least using the 10G adapter now. Are you sure you're not hitting limitations from the disks in the NAS? Have you checked their disk usage during the transfers?
 
Thank you _Gea, I will disable intr_throttling and see whether I get any difference. I don't understand though why I am not able to use the NAS as the client with iperf3; it always times out. Could this be a permissions issue?

Those stats confirm it's at least using the 10G adapter now. Are you sure you're not hitting limitations from the disks in the NAS? Have you checked their disk usage during the transfers?

How do I check whether the drives are limiting the speed? I was hoping that I could see more than 60 to 70 MB/s with the X540 though; I am sure there is something more than just a drive limitation. I will keep tuning and testing, and hopefully I will get to the bottom of the problem.


EDIT:

At last, I managed to run iperf3 with the NAS as the client. It was not running earlier because when I stopped Windows Defender, I had only switched off the domain side of the firewall; with it all off, I get the below:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\N> ./iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.10.10, port 33711
[ 5] local 10.10.10.20 port 5201 connected to 10.10.10.10 port 59895
[ 7] local 10.10.10.20 port 5201 connected to 10.10.10.10 port 65009
[ 9] local 10.10.10.20 port 5201 connected to 10.10.10.10 port 42019
[ 11] local 10.10.10.20 port 5201 connected to 10.10.10.10 port 51522
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 50.7 MBytes 425 Mbits/sec
[ 7] 0.00-1.00 sec 49.7 MBytes 417 Mbits/sec
[ 9] 0.00-1.00 sec 50.6 MBytes 424 Mbits/sec
[ 11] 0.00-1.00 sec 51.3 MBytes 430 Mbits/sec
[SUM] 0.00-1.00 sec 202 MBytes 1.70 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 60.3 MBytes 506 Mbits/sec
[ 7] 1.00-2.00 sec 60.1 MBytes 504 Mbits/sec
[ 9] 1.00-2.00 sec 55.6 MBytes 466 Mbits/sec
[ 11] 1.00-2.00 sec 69.4 MBytes 582 Mbits/sec
[SUM] 1.00-2.00 sec 245 MBytes 2.06 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 2.00-3.00 sec 62.1 MBytes 521 Mbits/sec
[ 7] 2.00-3.00 sec 60.9 MBytes 511 Mbits/sec
[ 9] 2.00-3.00 sec 57.4 MBytes 481 Mbits/sec
[ 11] 2.00-3.00 sec 63.2 MBytes 530 Mbits/sec
[SUM] 2.00-3.00 sec 244 MBytes 2.04 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 3.00-4.00 sec 58.3 MBytes 489 Mbits/sec
[ 7] 3.00-4.00 sec 55.4 MBytes 465 Mbits/sec
[ 9] 3.00-4.00 sec 53.4 MBytes 448 Mbits/sec
[ 11] 3.00-4.00 sec 55.2 MBytes 463 Mbits/sec
[SUM] 3.00-4.00 sec 222 MBytes 1.86 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 4.00-5.00 sec 60.4 MBytes 507 Mbits/sec
[ 7] 4.00-5.00 sec 58.6 MBytes 491 Mbits/sec
[ 9] 4.00-5.00 sec 57.0 MBytes 478 Mbits/sec
[ 11] 4.00-5.00 sec 57.7 MBytes 484 Mbits/sec
[SUM] 4.00-5.00 sec 234 MBytes 1.96 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 5.00-6.00 sec 50.8 MBytes 426 Mbits/sec
[ 7] 5.00-6.00 sec 51.9 MBytes 436 Mbits/sec
[ 9] 5.00-6.00 sec 48.1 MBytes 404 Mbits/sec
[ 11] 5.00-6.00 sec 55.5 MBytes 466 Mbits/sec
[SUM] 5.00-6.00 sec 206 MBytes 1.73 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 6.00-7.00 sec 53.8 MBytes 451 Mbits/sec
[ 7] 6.00-7.00 sec 52.8 MBytes 442 Mbits/sec
[ 9] 6.00-7.00 sec 50.9 MBytes 427 Mbits/sec
[ 11] 6.00-7.00 sec 52.8 MBytes 442 Mbits/sec
[SUM] 6.00-7.00 sec 210 MBytes 1.76 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 7.00-8.01 sec 49.0 MBytes 408 Mbits/sec
[ 7] 7.00-8.01 sec 48.2 MBytes 401 Mbits/sec
[ 9] 7.00-8.01 sec 47.0 MBytes 391 Mbits/sec
[ 11] 7.00-8.01 sec 53.6 MBytes 446 Mbits/sec
[SUM] 7.00-8.01 sec 198 MBytes 1.65 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 8.01-9.00 sec 53.5 MBytes 452 Mbits/sec
[ 7] 8.01-9.00 sec 49.5 MBytes 419 Mbits/sec
[ 9] 8.01-9.00 sec 47.6 MBytes 403 Mbits/sec
[ 11] 8.01-9.00 sec 53.7 MBytes 455 Mbits/sec
[SUM] 8.01-9.00 sec 204 MBytes 1.73 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 9.00-10.00 sec 54.6 MBytes 458 Mbits/sec
[ 7] 9.00-10.00 sec 53.6 MBytes 449 Mbits/sec
[ 9] 9.00-10.00 sec 53.2 MBytes 447 Mbits/sec
[ 11] 9.00-10.00 sec 58.8 MBytes 494 Mbits/sec
[SUM] 9.00-10.00 sec 220 MBytes 1.85 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 10.00-10.01 sec 416 KBytes 517 Mbits/sec
[ 7] 10.00-10.01 sec 416 KBytes 518 Mbits/sec
[ 9] 10.00-10.01 sec 416 KBytes 518 Mbits/sec
[ 11] 10.00-10.01 sec 398 KBytes 496 Mbits/sec
[SUM] 10.00-10.01 sec 1.61 MBytes 2.05 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.01 sec 554 MBytes 464 Mbits/sec receiver
[ 7] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[ 7] 0.00-10.01 sec 541 MBytes 454 Mbits/sec receiver
[ 9] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[ 9] 0.00-10.01 sec 521 MBytes 437 Mbits/sec receiver
[ 11] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[ 11] 0.00-10.01 sec 572 MBytes 479 Mbits/sec receiver
[SUM] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[SUM] 0.00-10.01 sec 2.14 GBytes 1.83 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
 
Little b vs big B: bits vs. bytes. De facto standard: network speed is measured in b (bits), disk read/write speed (NOT interface speed) is measured in B (bytes).

Forgot to add: 1 B = 8 b



N Bates You will have to check your NAS for some kind of performance monitor and check the activity time % on each of the disks. If they are all at 100% when doing the transfers, assuming the disks are in a RAID array, then you're being limited by disk speed.
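The bits-versus-bytes conversion the thread keeps circling around, written down once:

```python
def gbit_s_to_mb_s(gbit_s):
    """Network rates are quoted in bits/s, disk rates in bytes/s;
    1 byte = 8 bits, so divide by 8 (SI prefixes throughout)."""
    return gbit_s * 1000 / 8

print(gbit_s_to_mb_s(2.75))  # 343.75 MB/s - the earlier iperf3 result
print(gbit_s_to_mb_s(1.0))   # 125.0 MB/s - the ceiling of a 1 Gb/s link
```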
 
Little b vs big B: bits vs. bytes. De facto standard: network speed is measured in b (bits), disk read/write speed (NOT interface speed) is measured in B (bytes).

Yawn, I am a storage admin for an enterprise company; you don't need to tell me about disk limitations when I work on EMC storage arrays all day long.
 
Forgot to add: 1 B = 8 b



N Bates You will have to check your NAS for some kind of performance monitor and check the activity time % on each of the disks. If they are all at 100% when doing the transfers, assuming the disks are in a RAID array, then you're being limited by disk speed.

Glad someone here has a brain! :p
 