"System Interrupts" high CPU usage when copying over network

DragonQ
Limp Gawd · Joined Mar 3, 2007 · Messages: 351
I have a Windows Server 2012 machine that hosts all of my files. When I copy files from one of its shares to my Windows 8.1 desktop (via gigabit Ethernet), the transfer starts fine (50+ MB/s) then quickly drops to 7-8 MB/s. The server's Task Manager shows CPU usage jumping to 80+%, with ~50% used by System Interrupts and ~30% by System.

SMB 3.0 is presumably being used, but encryption is disabled, so it's not that. According to Resource Monitor, nothing else is using the disk being copied from. The strange thing is that sometimes it works fine, but as soon as I notice the speed drop, it stays dropped (probably until I reboot the server). Copying files locally on the server is absolutely fine (100+ MB/s). This also happened when my desktop was running Windows 8, so it's not specific to Windows 8.1.
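One way to rule SMB in or out (a sketch of my own, not something from this thread) is to measure raw TCP throughput between the two machines with a short Python script: if a plain socket transfer also collapses to single-digit MB/s, the problem is below SMB (driver/NIC), not the protocol. The loopback address and thread here are just so the example is self-contained; in practice you would run the receiver on the server bound to its LAN address and the sender on the desktop.

```python
import socket
import threading
import time

CHUNK = 1 << 16           # 64 KiB per send
TOTAL = 64 * 1024 * 1024  # 64 MiB test payload

def receiver(sock, result):
    """Accept one connection and drain it, counting bytes."""
    conn, _ = sock.accept()
    received = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    result["bytes"] = received

# Listen on an ephemeral loopback port (on a real test, bind the server's
# LAN address here and run the sending loop from the other machine).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

result = {}
t = threading.Thread(target=receiver, args=(srv, result))
t.start()

cli = socket.create_connection(("127.0.0.1", port))
payload = b"\0" * CHUNK
start = time.monotonic()
sent = 0
while sent < TOTAL:
    cli.sendall(payload)
    sent += len(payload)
cli.close()
t.join()
elapsed = time.monotonic() - start

print(f"transferred {result['bytes'] / 1e6:.0f} MB in {elapsed:.2f} s "
      f"({result['bytes'] / 1e6 / elapsed:.0f} MB/s)")
```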

My server hardware is listed here:

  • MSI H77MA-G43
  • Intel Celeron G1610
  • OCZ Vertex 4
  • 10 other HDDs
  • Realtek 8111E Ethernet (driver v8.20.815.2013)

The first thing I suspected was the Ethernet driver settings, but I can't see anything dodgy:

  • ARP Offload: ENABLED
  • Auto Disable Gigabit: DISABLED
  • Auto Disable PCIe: DISABLED
  • Energy Efficient Ethernet: ENABLED
  • Flow Control: Rx & Tx ENABLED
  • Green Ethernet: ENABLED
  • Interrupt Moderation: ENABLED
  • IPv4 Checksum Offload: ENABLED
  • Jumbo Frame: DISABLED
  • Large Send Offload v2 (IPv4): ENABLED
  • Large Send Offload v2 (IPv6): ENABLED
  • Maximum Number of RSS Queues: 4
  • NS Offload: ENABLED
  • Priority & VLAN: ENABLED
  • Receive Buffers: 512
  • Receive Side Scaling: ENABLED
  • Shutdown Wake-On-LAN: ENABLED
  • Speed & Duplex: AUTO NEGOTIATION
  • TCP Checksum Offload (IPv4): Rx & Tx ENABLED
  • TCP Checksum Offload (IPv6): Rx & Tx ENABLED
  • Transmit Buffers: 128
  • UDP Checksum Offload (IPv4): Rx & Tx ENABLED
  • UDP Checksum Offload (IPv6): Rx & Tx ENABLED
  • Wake on Magic Packet: ENABLED
  • Wake on Pattern Match: ENABLED
  • WOL & Shutdown Link Speed: 10 Mbps FIRST

Has anyone experienced this before? Any ideas??
 
I removed the latest drivers and, after rebooting, reverted to the standard ones Windows installs. At first I thought the problem was fixed, since the first two files copied at proper speeds and System Interrupts was only using ~20% CPU. Then, halfway through the third file copy, it started again: System Interrupts shot to 50% and the transfer dropped to 7-8 MB/s. :(

I tried turning on Jumbo Frames and it may have somewhat fixed the issue. Transfers now start anywhere from 60-110 MB/s then slow to ~40 MB/s, with no crazy CPU usage. Will report back if I see the problem again though!
 
Interrupts are going to happen, and happen frequently, with networking: each packet can trigger an interrupt, which is what interrupt moderation is meant to change. But moderation trades latency for fewer interrupts, and how well it works depends on how the driver handles it. Combined with RSS (Receive Side Scaling), you can still end up with a tremendous number of networking interrupts allocated across the system's cores, even with modern techniques that are supposed to reduce them.
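As a back-of-envelope illustration of the moderation point above (the packet rate and batch sizes are my own illustrative numbers, not measurements from this thread), here is a naive model of interrupt coalescing:

```python
def interrupts_needed(packets, coalesce):
    """Naive model: one interrupt per batch of up to `coalesce` packets.
    coalesce=1 models moderation off, i.e. an interrupt per packet."""
    return -(-packets // coalesce)  # ceiling division

# ~81k packets/s is roughly 1 Gb/s of 1500-byte frames.
pps = 81_000
for batch in (1, 8, 32):
    print(f"coalescing {batch:>2} pkt/interrupt -> "
          f"{interrupts_needed(pps, batch):>6} interrupts/s")
```

Real hardware moderation is adaptive and time-bounded rather than a fixed batch size, but the scaling is the point: batching even modestly cuts the interrupt rate by an order of magnitude.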

The Realtek controller defaults to 4 RSS queues, which maximizes core and interrupt activity. Lower this to two queues and you should see the reduction you want. Of course, it could also be the driver and how it handles all of this, or an issue elsewhere, but try the two-queue change first.

I have the same network controller at work and it doesn't exhibit this at all: Windows 8 and a similar setup, but a better dual-core processor, with the setting at 4 queues.
 
Thanks, setting RSS Queues to 2 does seem to prevent the high CPU usage, even with Jumbo Frame off. I know my CPU is weak but it was fantastic value and (up to now) does everything I need on what is a pretty basic file server.
 
Jumbo frames are largely obsolete because of TSO (TCP Segmentation Offload), which your card calls Large Send Offload. You gain very little from jumbo frames, because the networking stack no longer processes segmentation the way it used to. Note also that some features are disabled when jumbo frames are enabled. So it's better not to use jumbo frames at all.
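The arithmetic behind that claim can be sketched as follows (my own illustrative numbers; header sizes assume IPv4 + TCP with no options). Jumbo frames cut the number of frames on the wire, but with LSO the stack hands the NIC chunks of up to 64 KiB regardless of MTU, so the per-chunk CPU work barely changes:

```python
DATA = 1 << 30               # move 1 GiB of payload
for mtu in (1500, 9000):
    payload = mtu - 40       # minus IPv4 (20) + TCP (20) headers
    frames = -(-DATA // payload)  # ceiling division
    print(f"MTU {mtu}: {frames:,} frames on the wire")

lso_chunk = 64 * 1024 - 40   # LSO hands the NIC up to ~64 KiB at a time
cpu_segments = -(-DATA // lso_chunk)
print(f"With LSO, the stack hands off ~{cpu_segments:,} chunks either way")
```

The NIC does the 1500-vs-9000 segmentation in hardware in both cases, which is why the host-side saving from jumbo frames is so small.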
 
Thanks for the advice, I'm currently running with 2 RSS queues and normal sized frames. No issues so far. :)

Why is the RSS Queue setting at maximum by default anyway? Is it just because I'm running a server OS? I assume it makes total sense for "proper" large servers with lots of CPU cores.
 
It defaults to that setting no matter what. Yes, having more queues and more cores utilizes the cores more; that was the whole point of RSS and the other changes in the networking stack.
 
I'm having the same problem with a new MSI gaming board with the Killer NIC E2200, and changing the RSS queues didn't change anything =(
 
You're on a Realtek NIC with a slow CPU; what do you expect to happen when you put the machine under load?
//Danne
 