Super slow SMB and slow NFS but iperf is ok

Discussion in 'Networking & Security' started by freeski, Oct 10, 2018.

  1. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    I have big issues with my home network. I'm running an ESXi all-in-one server with pfSense, Solaris, Linux and some other VMs. My issue is that SMB performance fluctuates between 0-150 Mbps, while NFS performance is fairly stable at 150-200 Mbps to my workstation.

    The setup is an ESXi all-in-one server with a trunk to my Cisco SF300-24P switch and then cables to my wireless access points, TVs, HTPC and workstation. When I test with iperf I get 950 Mbps every time, between all servers, the HTPC and the workstation.

    Any ideas what could be the reason? Running NFS between the VMs is really fast (no numbers, unfortunately). iperf between the VMs averages 15 Gbps. The speed problem to my workstation is the same regardless of which VM I test against.

    Running ESXi 5.1, since the hardware in the server isn't supported in the newer versions.
     
  2. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,670
    Joined:
    Aug 24, 2005
    Did you test iperf in both directions? Did you try UDP mode to check for packet loss? Packet loss will absolutely kill SMB transfers.
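    For reference, a sketch of the kind of iperf (v2, matching the output format shown later in this thread) runs being suggested here. The IP address is a placeholder; substitute the real server address:

    ```shell
    # On the receiving machine, start a listener:
    iperf -s          # TCP listener
    iperf -s -u       # UDP listener (use for the UDP run instead)

    # From the sending machine, TCP in both directions
    # (-r runs client->server, then server->client sequentially):
    iperf -c 192.168.1.10 -r

    # UDP at roughly line rate; the server report shows loss and jitter:
    iperf -c 192.168.1.10 -u -b 1000M
    ```

    The UDP server report is the interesting part: even a fraction of a percent of sustained loss will cripple SMB, which is very latency- and retransmit-sensitive.
    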
     
  3. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Yes, I’ve tested both directions and I get the same result, 950 Mbps or more. I'll test UDP mode later today.
     
  4. -zax-

    -zax- [H]ard|Gawd

    Messages:
    1,939
    Joined:
    Jul 21, 2004
    Are you using jumbo frames? If so, try using 1500 byte frames to see if that resolves anything.

    I had a weird issue on my NAS where 9000-byte frames were severely throttling my network throughput, even though all devices were set to jumbo frames and supported 9k frames...
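    On a Linux box, temporarily dropping back to standard frames is a quick test; a minimal sketch, assuming the interface is `eth0` (substitute your real interface name). The change is not persistent across reboots, which makes it safe for testing:

    ```shell
    # Temporarily fall back to standard 1500-byte frames:
    ip link set dev eth0 mtu 1500

    # Verify the change took effect:
    ip link show dev eth0 | grep -o 'mtu [0-9]*'
    ```

    If SMB throughput recovers at MTU 1500, some hop in the path is mishandling jumbo frames.
    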
     
  5. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Yes, I’m using jumbo frames. It gives a huge improvement between the VMs, but I will redesign the setup to have a separate vSwitch with jumbo frames between the VMs and then use the standard 1500 MTU on the rest of the network.
     
  6. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Just tested iperf in UDP mode with a few different options, and this is one of the results:

    [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
    [ 3] 0.0-10.0 sec 1.07 GBytes 923 Mbits/sec 0.173 ms 763/785900 (0.097%)


    But which options should I set?
     
  7. -zax-

    -zax- [H]ard|Gawd

    Messages:
    1,939
    Joined:
    Jul 21, 2004
    Set your window in iperf to something larger. I'm guessing you're using the stock window size, which I believe is 64k.

    Add option: -w 256000
    to see if that changes the throughput.

    You can also add some parallel streams: -P 5
    would run 5 simultaneous streams. With -P 5, you should get something close to five 200 Mbps streams adding up to 1 Gbps.
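    Put together, the suggested tweaks look something like this (IP address is a placeholder; the listener is started on the other end with `iperf -s` as before):

    ```shell
    # TCP with a 256 KB window instead of the default:
    iperf -c 192.168.1.10 -w 256000

    # Five parallel TCP streams, summed at the end of the run:
    iperf -c 192.168.1.10 -P 5

    # Or both at once:
    iperf -c 192.168.1.10 -w 256000 -P 5
    ```
    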
     
  8. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Now I've done some more tests with iperf. The TCP test is fine and shows 950 Mbps, but UDP? The test result below is with the command: iperf -c "my ip" -u -w 256000 -l 64000 -P 5 -b 1000M.
    Packet loss?

    [ ID] Interval Transfer Bandwidth
    [ 7] 0.0-10.0 sec 230 MBytes 193 Mbits/sec
    [ 7] Sent 3761 datagrams
    [ 4] 0.0-10.0 sec 229 MBytes 192 Mbits/sec
    [ 4] Sent 3759 datagrams
    [ 3] 0.0-10.0 sec 230 MBytes 193 Mbits/sec
    [ 3] Sent 3761 datagrams
    [ 5] 0.0-10.0 sec 230 MBytes 193 Mbits/sec
    [ 5] Sent 3776 datagrams
    [ 6] 0.0-10.0 sec 375 KBytes 307 Kbits/sec
    [ 6] Sent 6 datagrams
    [SUM] 0.0-10.0 sec 919 MBytes 771 Mbits/sec
    [SUM] Sent 15063 datagrams
    [ 4] Server Report:
    [ 4] 0.0-10.0 sec 179 MBytes 150 Mbits/sec 4.808 ms 823/ 3759 (22%)
    [ 4] 0.00-10.01 sec 12 datagrams received out-of-order
    [ 3] Server Report:
    [ 3] 0.0-10.0 sec 180 MBytes 151 Mbits/sec 4.628 ms 818/ 3761 (22%)
    [ 3] 0.00-10.01 sec 7 datagrams received out-of-order
    [ 7] Server Report:
    [ 7] 0.0-10.0 sec 179 MBytes 150 Mbits/sec 4.478 ms 825/ 3761 (22%)
    [ 7] 0.00-10.01 sec 12 datagrams received out-of-order
    [ 5] Server Report:
    [ 5] 0.0-10.0 sec 179 MBytes 150 Mbits/sec 5.105 ms 840/ 3776 (22%)
    [ 5] 0.00-10.01 sec 4 datagrams received out-of-order
    [ 6] Server Report:
    [ 6] 0.0-10.0 sec 180 MBytes 150 Mbits/sec 4.609 ms 824/ 3765 (22%)
    [ 6] 0.00-10.01 sec 9 datagrams received out-of-order
     
  9. -zax-

    -zax- [H]ard|Gawd

    Messages:
    1,939
    Joined:
    Jul 21, 2004
    You don't have an MTU mismatch somewhere in the middle, do you?

    Something like the sending end using a 9216-byte MTU, a device somewhere in the middle of your network set to 9000, and the receiving end at 9216...
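    One way to hunt for a mismatch like that is a don't-fragment ping at jumbo size; a sketch for Linux `ping` (on Windows the equivalent would be `ping -f -l 8972`), with the target address as a placeholder:

    ```shell
    # Probe whether a full 9000-byte frame survives the whole path.
    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
    # -M do forbids fragmentation, so an undersized hop fails loudly:
    ping -M do -s 8972 192.168.1.10

    # Sanity check: a standard 1500-byte frame should always get through:
    ping -M do -s 1472 192.168.1.10
    ```

    If the large ping fails (or reports "message too long") while the standard one succeeds, some hop in the path has a smaller MTU than the endpoints.
    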
     
  10. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,670
    Joined:
    Aug 24, 2005
    I would disable jumbo frames. It's not worth it. Maybe if you had 10 gig and were using iSCSI, then MAYBE it would be worth it.
     
  11. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    OK. I will use jumbo frames on a new separate vSwitch just for the VMs and disable them on the current vSwitch and the rest of the network. My plan is to upgrade my setup with a 10 gig switch and a NIC in the ESXi server, since I currently have a 10 gig NIC in my workstation. So it would be good to solve this.
     
  12. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Now it seems to work. I discovered that one of the VLAN interfaces in pfSense had its MTU changed to 1500. Fixed that, and now I get 90-100 MB/s between my workstation and my server. I will do further tests later this week and post the results here.
     
  13. -zax-

    -zax- [H]ard|Gawd

    Messages:
    1,939
    Joined:
    Jul 21, 2004
    Good to hear.

    That should have also fixed the out-of-order packets in iperf, I'd imagine...
     
  14. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,670
    Joined:
    Aug 24, 2005
    Glad it's working. This is why we don't use jumbo frames at work; it's just one more thing to troubleshoot when stuff doesn't work the way it should. We have 10 Gbps networks and still don't turn them on.
     
  15. freeski

    freeski n00bie

    Messages:
    11
    Joined:
    Mar 6, 2014
    Now I've done some more testing with iperf and everything seems to be working as it should.
     
  16. -zax-

    -zax- [H]ard|Gawd

    Messages:
    1,939
    Joined:
    Jul 21, 2004
    Glad to hear that it's been resolved.

    Thanks for the update.