10Gb Network Speed Tweaking

N Bates
Limp Gawd · Joined Jul 15, 2017 · Messages: 174
Can anyone suggest 10Gb tweaks for the best media file transfer speeds between a Windows 10 workstation and an OmniOS/Napp-it server?

I have the following:

Client
Windows 10 Pro for Workstations version 20H2, build 19042.746 (ASRock Phantom Gaming X, Ryzen 9 3900X, 32GB RAM)
3 x 14TB WD HDDs in a single-parity Storage Spaces volume, ReFS file system (backup)
1 x PCIe Intel X540-T2 with driver version 4.1.197.0

Server
OmniOS/Napp-it r151036 (ASRock Rack EPYCD8, EPYC 7251 8-core processor, 32GB Crucial ECC RAM)
6 x 14TB WD in RAIDZ2
1 x PCIe Intel X540-T2 (not sure of the driver version on OmniOS).

Both machines are connected via a Netgear XS708T that is not attached to the rest of the network.

I know that Windows Storage Spaces parity is slow. I have been trying to re-create the parity space via PowerShell instead of the automated Windows GUI, because the GUI sets a default number of columns and a 256KB interleave, and I would like to change the interleave to 32KB. I haven't managed it yet, and I am not sure whether this is even possible with Windows 10 Pro for Workstations or only works on a Windows Server OS.
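In case anyone can spot what I am doing wrong, this is roughly what I have been trying from an elevated PowerShell (the pool and disk friendly names are just examples):

Code:
# Pool the three backup disks (names here are examples)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "BackupPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# 3 columns = 2 data + 1 parity per stripe, so a 32KB interleave
# gives a 64KB full stripe that matches a 64KB allocation unit size
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupParity" -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 32KB -ProvisioningType Fixed -UseMaximumSize

# Initialise and format with ReFS at a 64KB allocation unit size
Get-VirtualDisk -FriendlyName "BackupParity" | Get-Disk | Initialize-Disk -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem ReFS -AllocationUnitSize 65536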

Currently I haven't changed any of the Intel X540-T2's settings; it looks easy on Windows, not so much on OmniOS.


Edit:

Forgot to say: I am getting 64MB/s transferring files from the OmniOS server to the Windows backup machine and 240MB/s the other way.
 
What are your iperf scores? (both directions)

What are your local performance numbers? (no network, just reading and writing files locally)

How are you sharing files? (NFS, CIFS, etc.)
 
What are your iperf scores? (both directions)

What are your local performance numbers? (no network, just reading and writing files locally)

How are you sharing files? (NFS, CIFS, etc.)

I haven't done an iperf test yet. I am sharing via SMB.
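When I do, I will run something like the below (assuming iperf3 is installed on both machines; the server address is just an example):

Code:
# On the OmniOS server
iperf3 -s

# On the Windows client: client -> server, then reversed
iperf3 -c 192.168.1.10
iperf3 -c 192.168.1.10 -R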

Copying a 7GB media file in Windows 10 from a 1TB Corsair MP510 NVMe drive to the Windows single-parity volume is slow: it starts very quick, up to 1.25GB/s for the first half, and then slows down to an average of 30MB/s. The other way it's pretty fast: it starts at 1.34GB/s and ends at 875MB/s.

Copying within the OmniOS NAS from one folder to another on the same pool runs at 450MB/s.

Copying from Windows 10 to the NAS runs at 150MB/s.

I know the Windows 10 copy dialog is not an accurate metric, but as a guide it shows that something is limiting the full speed.
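For a more repeatable local number than the copy dialog, I could run something like Microsoft's diskspd against the parity volume (the path and sizes are just examples; -Sh bypasses the caches and -w100 makes it 100% writes):

Code:
diskspd -c20G -d30 -b1M -o4 -t1 -w100 -Sh D:\sptest.dat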

Checking the SMB version in Windows 10 under "Turn Windows features on or off", I have SMB version 1 enabled. I think I enabled it in the past (even though SMB1 has significant security vulnerabilities) because my Nvidia Shield TVs, if I am not mistaken, only support SMB v1; I haven't checked for a while whether recent updates have changed that.
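To see which dialect is actually negotiated, I can check from an elevated PowerShell while a transfer to the NAS is open:

Code:
# Dialect of the active SMB connections (e.g. 2.1, 3.1.1)
Get-SmbConnection | Select-Object ServerName, Dialect

# Whether this machine's SMB server still has SMB1 enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol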
 
So you have slow local disk performance and you expect it to somehow be faster over the network? If you're slowing down to 30MB/s on write, then something is horribly wrong, and a network transfer isn't going to just make it faster.

Definitely need a lot more information, specifically source and destination speeds. When you say "from Windows 10 to the NAS", which disk in Windows?

What's the goal of two NASes?
 
So you have slow local disk performance and you expect it to somehow be faster over the network? If you're slowing down to 30MB/s on write, then something is horribly wrong, and a network transfer isn't going to just make it faster.

Definitely need a lot more information, specifically source and destination speeds. When you say "from Windows 10 to the NAS", which disk in Windows?

What's the goal of two NASes?
The HDDs in both machines are the WD 14TB storage drives; one machine is the NAS, and the Windows machine is a backup of the NAS.
 
Checking the SMB version in Windows 10 under "Turn Windows features on or off", I have SMB version 1 enabled. I think I enabled it in the past (even though SMB1 has significant security vulnerabilities) because my Nvidia Shield TVs, if I am not mistaken, only support SMB v1; I haven't checked for a while whether recent updates have changed that.

WHAT’S NEW IN 8.2?

  • Adds SMBv3 support for a faster, more secure connection to SHIELD over a local network
SHIELD Software & Firmware Upgrade V8.2.2 | NVIDIA

There are so many things to configure for 10Gb, and to complicate matters you have two OSes with different ways of doing it. The Windows system is mostly handled via Intel's PROSet Adapter Configuration Utility. Some things can be set via Device Manager, but not all, such as DMA coalescing (I advise leaving it off). Some options live in the Utility or Device Manager, but you may need to go to the console to check the RSS profile (Get-NetAdapterRss). On the switch you may want Flow Control set to listen only on receive frames, or turned off entirely. Flow Control can be off on the host systems, but you may need to test to see whether the switch is sending control frames when it shouldn't be. Max the descriptors for RX and TX. For RSS queues, have at least 4, and possibly 8 on the server. As for OmniOS, you will have to do that the *nix way.

I mainly wanted to mention the SMB upgrade the Shield got an update ago, and some minor networking configuration.
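From an elevated PowerShell the checks look something like this ("Ethernet 2" stands in for your X540 port, and the advanced-property display names can vary between driver versions):

Code:
# Inspect the RSS profile and queue/processor layout
Get-NetAdapterRss -Name "Ethernet 2"

# Max the descriptors and turn Flow Control off on the host
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Receive Buffers" -DisplayValue 4096
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue 16384
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Disabled"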
 

WHAT’S NEW IN 8.2?

  • Adds SMBv3 support for a faster, more secure connection to SHIELD over a local network
SHIELD Software & Firmware Upgrade V8.2.2 | NVIDIA

There are so many things to configure for 10Gb, and to complicate matters you have two OSes with different ways of doing it. The Windows system is mostly handled via Intel's PROSet Adapter Configuration Utility. Some things can be set via Device Manager, but not all, such as DMA coalescing (I advise leaving it off). Some options live in the Utility or Device Manager, but you may need to go to the console to check the RSS profile (Get-NetAdapterRss). On the switch you may want Flow Control set to listen only on receive frames, or turned off entirely. Flow Control can be off on the host systems, but you may need to test to see whether the switch is sending control frames when it shouldn't be. Max the descriptors for RX and TX. For RSS queues, have at least 4, and possibly 8 on the server. As for OmniOS, you will have to do that the *nix way.

I mainly wanted to mention the SMB upgrade the Shield got an update ago, and some minor networking configuration.
Thanks for the info. So far, on the Windows machine I have set the interrupt moderation rate to off in Intel PROSet, increased the receive buffers to 4096 and the transmit buffers to 16384, and left the number of RSS queues at the default, which was already 8.

I haven't yet changed any settings on the OmniOS/Napp-it NAS.
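From what I can find, the ixgbe link properties on OmniOS are set with dladm, something like the below (ixgbe0 is my guess at the interface name, so I still need to verify it):

Code:
# List the current link properties for the X540 port
dladm show-linkprop ixgbe0

# Example: turn flow control off (values: no, tx, rx, bi)
dladm set-linkprop -p flowctrl=no ixgbe0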

I will test a little later to see whether this has made any difference. I will also upgrade the Nvidia Shield to the 8.2.2 software, switch off Windows SMB v1, and see whether that helps too.
 
When hosting files and some services, it can actually be better to have moderation on. Intel's default driver configuration is very good at distributing load across the queues and cores. Disabling interrupt moderation only decreases latency a bit, and I have not found it beneficial to disable; in fact, I found it better to enable. Technically speaking, with 10Gb you should have moderation on. The driver will watch the flow and adapt dynamically anyway.

Eight queues means all eight cores (and even the SMT threads) will be tapped. Sometimes you have to test it out to see whether 4 or 8 is better.
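If you want to flip moderation back on without opening PROSet, something like this works from an elevated PowerShell (the display names again vary by driver version):

Code:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation" -DisplayValue "Enabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation Rate" -DisplayValue "Adaptive"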
 
I mean... magnetic drives tend to be around 1Gb/s in terms of speed, so I don't think you really have to worry about tweaking the network; it isn't your bottleneck. Remember that 10Gb is actually faster than a SATA 3 connection: a single SATA SSD can't even max it.

While I can't speak to your setup, at work I easily get 1.5-2 gigabytes per second copying from server to server over 2 x 10Gb links bonded together using SMB3. These are multi-SSD arrays, though, so the disks/RAID controller can handle the throughput.
 
I mean... magnetic drives tend to be around 1Gb/s in terms of speed, so I don't think you really have to worry about tweaking the network; it isn't your bottleneck. Remember that 10Gb is actually faster than a SATA 3 connection: a single SATA SSD can't even max it.

While I can't speak to your setup, at work I easily get 1.5-2 gigabytes per second copying from server to server over 2 x 10Gb links bonded together using SMB3. These are multi-SSD arrays, though, so the disks/RAID controller can handle the throughput.
Yup. It's obvious, at least to me, that the issue isn't the network here, which is why I said what I did earlier :) Without more relevant information from the OP, there's not much that can be done to help guide things along, either.
 
My Shields have worked fine since the 8.2 update. I was having to use NFS for a while after Synology got rid of SMB1 support.
 
I can get 250-300MB/s on my all-in-one ESXi host using 5 x 4TB drives for media and a SATA SSD for the VMs (Windows and the FreeNAS OS). Something has to be wrong with the local disk.
 