We need better site-to-site VPN routers/gateways

RavinDJ

We have two Zyxel VPN100 units:
https://www.zyxel.com/us/en/products_services/VPN-Firewall-ZyWALL-VPN100

One in each office. The offices are about 4 miles apart, and each has a 400 down / 100 up connection from its ISP.

I set up an IPsec site-to-site VPN between the two offices, but the throughput is HORRENDOUS. During a 130MB file transfer, the reported rate bounces anywhere from 600kbps to 1.38Mbps to 355kbps to 0 bytes/sec (sometimes it just drops to zero), with split-second spikes to 2.81Mbps. The whole 130MB transfer takes 1 minute and 48 seconds.

Is this normal??? The specs claim 500Mbps:
https://www.zyxel.com/us/en/products_services/VPN-Firewall-ZyWALL-VPN100/specifications
I understand that's theoretical, and I understand their spec is measured with UDP. But should we be getting speeds this slow?

I am open to purchasing new units. We have two small offices and only 2-4 users connecting via VPN to the office network from the outside. Are these Zyxel units just that bad, or is it a settings issue?

Would we be able to purchase better units within, say, $2000 budget ($1000/unit)?

Thanks, guys! Any help will be greatly appreciated.
 
Have you tried the iperf network testing tool?

It'd be a better gauge than a file transfer between sites for narrowing down issues.

I'd also monitor the VPN endpoints' CPU and memory during the test.
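
A minimal run would look something like this (the IP is just a placeholder for a test box on the main-office LAN):

# On a machine in the main office (server side):
iperf3 -s

# On a machine in the satellite office (client side), 30 seconds, 1-second intervals:
iperf3 -c 192.168.10.x -t 30 -i 1

# Same test with the direction reversed (server sends, client receives):
iperf3 -c 192.168.10.x -t 30 -i 1 -R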
 
Good call. Will try iperf (y)
 
iperf in main office:
iperf3 -s

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.124, port 61194
[ 5] local 192.168.10.135 port 5201 connected to 192.168.1.124 port 61195
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 177 KBytes 1.45 Mbits/sec
[ 5] 1.00-2.00 sec 1.03 MBytes 8.66 Mbits/sec
[ 5] 2.00-3.00 sec 628 KBytes 5.15 Mbits/sec
[ 5] 3.00-4.00 sec 704 KBytes 5.74 Mbits/sec
[ 5] 4.00-5.01 sec 200 KBytes 1.63 Mbits/sec
[ 5] 5.01-6.00 sec 215 KBytes 1.77 Mbits/sec
[ 5] 6.00-7.00 sec 552 KBytes 4.52 Mbits/sec
[ 5] 7.00-8.00 sec 646 KBytes 5.30 Mbits/sec
[ 5] 8.00-9.00 sec 757 KBytes 6.19 Mbits/sec
[ 5] 9.00-10.02 sec 539 KBytes 4.36 Mbits/sec
[ 5] 10.02-11.00 sec 405 KBytes 3.35 Mbits/sec
[ 5] 11.00-12.00 sec 227 KBytes 1.87 Mbits/sec
[ 5] 12.00-13.00 sec 589 KBytes 4.81 Mbits/sec
[ 5] 13.00-14.01 sec 676 KBytes 5.54 Mbits/sec
[ 5] 14.01-15.01 sec 624 KBytes 5.07 Mbits/sec
[ 5] 15.01-16.00 sec 602 KBytes 4.99 Mbits/sec
[ 5] 16.00-17.00 sec 509 KBytes 4.16 Mbits/sec
[ 5] 17.00-18.00 sec 560 KBytes 4.59 Mbits/sec
[ 5] 18.00-19.00 sec 740 KBytes 6.09 Mbits/sec
[ 5] 19.00-20.01 sec 621 KBytes 5.06 Mbits/sec
[ 5] 20.01-21.00 sec 826 KBytes 6.81 Mbits/sec
[ 5] 21.00-22.01 sec 689 KBytes 5.58 Mbits/sec
[ 5] 22.01-23.00 sec 689 KBytes 5.69 Mbits/sec
[ 5] 23.00-24.01 sec 655 KBytes 5.36 Mbits/sec
[ 5] 24.01-25.00 sec 652 KBytes 5.37 Mbits/sec
[ 5] 25.00-26.01 sec 647 KBytes 5.27 Mbits/sec
[ 5] 26.01-27.00 sec 675 KBytes 5.56 Mbits/sec
[ 5] 27.00-28.00 sec 754 KBytes 6.19 Mbits/sec
[ 5] 28.00-29.00 sec 838 KBytes 6.83 Mbits/sec
[ 5] 29.00-30.00 sec 582 KBytes 4.79 Mbits/sec
[ 5] 30.00-30.89 sec 656 KBytes 6.06 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-30.89 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-30.89 sec 18.3 MBytes 4.96 Mbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------





iperf in satellite office:
iperf3 -c 192.168.10.135 -w -2M -t 30s -i 1s

Connecting to host 192.168.10.135, port 5201
[ 4] local 192.168.1.124 port 61195 connected to 192.168.10.135 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.33 GBytes 11.4 Gbits/sec
[ 4] 1.00-2.00 sec 1.16 GBytes 9.98 Gbits/sec
[ 4] 2.00-3.00 sec 1.28 GBytes 11.0 Gbits/sec
[ 4] 3.00-4.00 sec 229 MBytes 1.92 Gbits/sec
[ 4] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec
[ 4] 5.00-6.01 sec 512 KBytes 4.14 Mbits/sec
[ 4] 6.01-7.01 sec 640 KBytes 5.24 Mbits/sec
[ 4] 7.01-8.01 sec 896 KBytes 7.37 Mbits/sec
[ 4] 8.01-9.01 sec 512 KBytes 4.21 Mbits/sec
[ 4] 9.01-10.00 sec 384 KBytes 3.16 Mbits/sec
[ 4] 10.00-11.01 sec 256 KBytes 2.07 Mbits/sec
[ 4] 11.01-12.01 sec 512 KBytes 4.20 Mbits/sec
[ 4] 12.01-13.00 sec 640 KBytes 5.29 Mbits/sec
[ 4] 13.00-14.00 sec 640 KBytes 5.25 Mbits/sec
[ 4] 14.00-15.00 sec 640 KBytes 5.26 Mbits/sec
[ 4] 15.00-16.01 sec 512 KBytes 4.15 Mbits/sec
[ 4] 16.01-17.01 sec 512 KBytes 4.20 Mbits/sec
[ 4] 17.01-18.01 sec 768 KBytes 6.32 Mbits/sec
[ 4] 18.01-19.01 sec 640 KBytes 5.23 Mbits/sec
[ 4] 19.01-20.01 sec 768 KBytes 6.31 Mbits/sec
[ 4] 20.01-21.01 sec 768 KBytes 6.25 Mbits/sec
[ 4] 21.01-22.01 sec 640 KBytes 5.24 Mbits/sec
[ 4] 22.01-23.01 sec 640 KBytes 5.27 Mbits/sec
[ 4] 23.01-24.01 sec 640 KBytes 5.26 Mbits/sec
[ 4] 24.01-25.00 sec 640 KBytes 5.25 Mbits/sec
[ 4] 25.00-26.01 sec 768 KBytes 6.24 Mbits/sec
[ 4] 26.01-27.00 sec 640 KBytes 5.30 Mbits/sec
[ 4] 27.00-28.00 sec 896 KBytes 7.34 Mbits/sec
[ 4] 28.00-29.01 sec 640 KBytes 5.19 Mbits/sec
[ 4] 29.01-30.01 sec 640 KBytes 5.24 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-30.01 sec 4.02 GBytes 1.15 Gbits/sec sender
[ 4] 0.00-30.01 sec 18.3 MBytes 5.10 Mbits/sec receiver

iperf Done.

What do you think? It's my first time ever using iperf.
 
Looks as slow as your file transfer showed. Was there heavy Internet usage during the test?

Were you able to see CPU performance on the endpoints during the test?
 
What was the encryption algorithm used for the IPsec connection? Also, you're not going to get beyond 80-90Mbps since your uplink speed is limited to 100Mbps -- not that you're anywhere close to that yet.
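
Rough math, assuming IPsec plus TCP/IP overhead eats ~10-15% of the raw line rate: 100Mbps x 0.85-0.90 = roughly 85-90Mbps usable inside the tunnel, best case.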
 
Use the -P switch, like -P3 or -P5. It took -P20 for me to top out our 400/100 tunnel to our 500/50 site, and the max was only 60Mbps+ using WatchGuard M200s that were basically at zero CPU stress.
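
For example, something like this (IP is a placeholder for your far-end test box):

# 5 parallel TCP streams for 30 seconds:
iperf3 -c 192.168.10.x -P 5 -t 30 -i 1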
 
Doesn't address the iperf issues, but what protocol are you using for the file transfer? CIFS generally sucks over a WAN link due to latency. Same carrier at each site? What MTU are you using? Packet fragmentation will destroy throughput. You can use -M on the iperf CLI to play around with the numbers without adjusting your MTU.
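
Something like this to sweep MSS values without touching your MTU (IP is a placeholder again):

# Cap the TCP MSS at 1360, then try smaller values:
iperf3 -c 192.168.10.x -M 1360 -t 30 -i 1
iperf3 -c 192.168.10.x -M 1300 -t 30 -i 1

If a smaller MSS suddenly gives you stable throughput, fragmentation is your problem.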
 
Latency shouldn't be bad if the sites are just 4 miles from each other. I have that setup between two of my sites, and the ping is under 20ms, most of the time <15--which is better than any congested Wi-Fi. The problem is CIFS is really chatty, so it's not that fast on a single stream, and you need to multi-stream to max out the bandwidth. FTP, on the other hand, uses all the bandwidth quite easily, so protocol makes a huge difference.

OP, what are you typically using the tunnel for? Then we can help you tune it.
 
How are you even getting 60Mbps? Upload from the 400/100 site? The lowest common denominator in this scenario is the 50Mbps. You shouldn't be able to download more than 50Mbps from the 400/100 site, and shouldn't be able to upload more than 100Mbps, since those are your bandwidth restrictions. Add IPsec overhead and the numbers go down further yet, such that 60Mbps sounds reasonable on the 100Mbps upload side.
 
That's probably the direction it was. I was running the -r or -d switch too (can't remember which), so I had both 60Mbps+ and 40Mbps+ and just posted the larger number. But you're right, that was probably from the 100 side, since it would be impossible from the other side except maybe in a burst for a few seconds.
 
Thanks for the input guys...

Here's some screen shots:

(Screenshots attached: zyxel1.JPG, zyxel2.JPG, zyxel3.JPG)
 
Literally, the ONLY thing we do is RDP from the satellite office into the two servers in our main office. I mean, how much traffic does RDP really take? But a 1Mbps connection between the two offices? 😢
 
That's all I really use my tunnels for too, lol. RDP doesn't take much bandwidth if you get rid of things like audio and reduce colors to 16-bit, but it's also dependent on the source and client systems' Windows version and speed. In my experience, Win10 handles choppy or low-bandwidth connections better than previous versions.

Your tunnel configurations look good to me. The only thing you could really do to see if it's the routers is to reduce the encryption to DES for a test and see if anything changes. Personally, I was looking at the lower version of these Zyxels (the 50) when looking for upgrades to our gear, so I think they should be more than capable.

If you don't see a change in the speeds after switching to DES, I would suspect it's actually an issue with the ISP, the biggest suspect being packet loss. I would run a simple ping -t between the sites overnight and see if you have any measurable packet loss. I would also run a ping from each site to something like the ISP gateway to see if there's any loss on those hops as well.
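
On Windows that could be as simple as the following (IP is a placeholder for the far-end tunnel endpoint):

# ~8 hours at the default one ping per second; the summary at the end of the file shows % loss:
ping -n 28800 192.168.10.1 > overnight_ping.txt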

Packet loss is the biggest problem that I run into, as the cable modem lines we use at some sites will have packet loss for days (looking at you, Wow o_O), and that can quite easily kill bandwidth in the tunnel and make file transfers (even though we rarely do them) pretty spotty about working at all.

Hope this helps!
 
I'd still have a hard look at MTU. IPsec is notorious for throughput problems due to incorrect MTU settings. IPsec VPNs were my segue from server admin to security, and they're still very near and dear to my heart.
 
Does this VPN tunnel carry all traffic or are you doing split tunneling? I was wondering if you could peg the line with a speedtest.net test and note the CPU utilization during that test. If you can't do that, can you do some type of sustained file transfer and get the CPU utilization while it's going?
Also check the MSS adjustment on the VPN connection rule. It should be in the range of 1360-1387; if it's higher, that could be a problem. Try the lower number of 1360 and see if that makes any difference in your throughput numbers. Make sure that "crypto boost-tcp" is enabled on your router so that you are using all of the processing power. Do you have any UTM features enabled? If so, that's probably why your throughput is low.
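
As a quick sanity check on fragmentation, you can probe the tunnel with Windows ping and the don't-fragment flag (IP is a placeholder for a host on the far side):

# -f sets don't-fragment, -l sets the payload size; 1372 + 28 bytes of IP/ICMP headers = a 1400-byte packet:
ping -f -l 1372 192.168.10.1

If that gets through cleanly, an MSS of 1360 (1400 minus 40 bytes of IP/TCP headers) should be safe.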
 
From my understanding, this is why you don't disable ping response on the two tunnel endpoints: they're able to auto-adjust MTU with it on.

Realistically, how much of the raw bandwidth should one see inside an ipsec tunnel anyways? I've always wondered that.
 
60% should be a realistic expectation.
 
PMTU involves more than just the endpoints. The entire path must be configured to support it, and FYI most firewalls will not participate, so.... At about 64 bytes of overhead with good encryption (a little less for not-so-good encryption), IPsec encap has about a 5% penalty on a 1400-byte packet. Throughput depends on your packet size distribution. If you're doing file transfers, which tend to use larger packets, you'll see 90%+. If your traffic is all 64-byte packets, you'll see 50%. Most patterns will fall in between those extremes. All of this assumes your hardware isn't the bottleneck.
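
Putting numbers to that (assuming ~64 bytes of encap overhead per packet):

1400-byte payload: 1400 / (1400 + 64) = ~0.956, i.e. ~4-5% penalty
64-byte payload: 64 / (64 + 64) = 0.50, i.e. ~50% penalty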
 
Thank you for the additional details. (y) Always great to learn more about the inner workings of all these things.
 
What kind of budget do you have? Have you looked into dedicated fiber between the two locations? Depending on location it may be cheaper than you expect.
 
In my experience, the words 'fibre' and 'cheap' are generally mutually exclusive. :D

The one time I looked into a fibre run between our two locations that are just a few miles apart, the monthly cost would have been triple--and that was just the setup and equipment.

IPsec VPNs are the right tool for what the OP is trying to do, but the underlying ISPs have to provide good service or the tunnel isn't going to be any better than they are.
 
The specs on these Zyxel boxes are incredibly vague about how they achieve "500Mbps VPN throughput" -- they don't indicate whether that's L2L traffic via IPsec or remote clients. The only way to know for sure whether they're full of shit is to directly connect the two devices on the same layer 2 segment and run the same tests.

For connecting SMB branches, you're better off looking at an SD-WAN type setup with FortiGate. It will be easier to manage and, quite frankly, better supported than anything from Zyxel. If budget is still an issue with Fortinet, I'd still go with pfSense over Zyxel.
 
I've used Dell SonicWALLs and UniFi gateways for IPsec and gotten much higher throughput, though I'm not familiar with the VPN100.
 
I looked at pfSense when looking at the Zyxel--pfSense had its own set of problems, so I avoided it, and I'm glad I did. That was a few years ago, and now the Firewalla and Netgate products make pfSense a little better because it's closer to a regular router, but at that price you could just get a regular router.

What is the advantage of SD-WAN versus an IPsec tunnel? I could do SD-WAN at a couple of sites but never looked into it, as it seemed 'too enterprise' for our needs.
 

This could be an all day conversation and honestly requires adult beverages. :)

If you have more than a few sites AND want site-to-site tunnels (full mesh) vs. just hub-and-spoke, then what Fortinet calls ADVPN simplifies things. If you have multiple access connections, then you layer in SD-WAN. If you're not doing full mesh, then ADVPN just complicates things. That said, combining multiple connections into the SD-WAN interface makes a lot of things easier policy-wise even without an overlay network. The SD-WAN rules/policy are basically just the same policy-based routing that Fortinet has always had, with quality measurement as a decision point. FWIW, I use the SD-WAN interface on a 101F with fiber, DOCSIS, and LTE connections here at home for load balancing and application steering.


PS: At the end of the day, SD-WAN is nothing more than using additional layer 3-7 data for routing vs. just layer 3. Anyone saying it's more complicated than that is selling an SD-WAN solution. :) Don't get me wrong, it can get extremely complicated, but it doesn't have to be.
 
1) I am not familiar with these firewalls, but can you set up a temporary NAT and port forward 3389 to one RDP server in question? Then RDP to that server on the public IP? Set up the FW rule to only allow your other office's IP, of course, and only for short testing. This removes IPsec from the equation.

2) Set the computers on both sides to an MTU of 1400 (how depends on the OS; see the example after this post), then reboot and RDP again. This tests for MTU/fragmentation issues through the VPN.

Neither addresses a CIFS transfer per se, but you can test throughput by transferring a file through RDP. Compare before and after.
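
For step 2 on Windows, assuming the NIC is named "Ethernet" (check the actual name with the show command first), it would be something like:

# List interfaces and their current MTUs:
netsh interface ipv4 show subinterfaces
# Set the MTU to 1400 and make it survive reboots:
netsh interface ipv4 set subinterface "Ethernet" mtu=1400 store=persistent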
 
Just read up on SD-WAN on the WatchGuard site, and it's slick! I definitely need to mess with it since I do have redundant links at one site, and it will help when one of the WANs decides to start having random packet loss. (y)
 
This would be some good testing to get a baseline of performance. I usually try to do this when building out a network where I don't know what the performance inside the tunnel should be.
 
If it helps... can I just keep that port forward in place, or is it unsafe even when it only allows that one IP?
 
This is considered unsafe. IPs can be spoofed relatively easily, and RDP is vulnerable to MITM attacks. That's a big reason VPNs are commonly used in RDP scenarios rather than just opening a port for RDP and restricting access to specific IPs.

(That said, I have seen several businesses that operate this way without issue. It appears to be secure enough to deter automated bot attacks but wouldn't hold up to a targeted attack.)

I'm not sure what connecting this way would prove if you aren't having connectivity issues/disconnects. It doesn't do anything to isolate the VPN speed issue.
 
Are you open to using WireGuard instead of IPsec? I've had better experience and performance with WireGuard site-to-site on OPNsense/pfSense.
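
For a flavor of it, a minimal site-to-site config on one side looks roughly like this (keys, endpoint, and subnets are all placeholders):

[Interface]
PrivateKey = <main-office-private-key>
Address = 10.9.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <satellite-office-public-key>
Endpoint = satellite.example.com:51820
AllowedIPs = 10.9.0.2/32, 192.168.1.0/24
PersistentKeepalive = 25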
 
It eliminates the VPN. If there are still performance issues then we've narrowed it down to device or network problems.
 
How does WireGuard work versus normal IPsec?
 
I would not keep RDP open indefinitely, but it's worthwhile just to check whether the IPsec implementation is fubared. If you want a remote access protocol to that server, there are other options besides RDP, and 2FA would be a good idea too.
 

Maybe I'm misunderstanding, but it sounds like the OP's concern is the throughput of the VPN.

Access to RDP without the VPN doesn't do anything to narrow down the throughput issue. If the issue were RDP disconnects, then I could certainly see this test helping. Did I miss the part where the OP said users were getting dropped? (That sounds like something I would do :ROFLMAO:.)

Edit: I guess this might make sense if the speed tests are being run from within the RDP session (although that would be a way weirder problem). I made the assumption the OP is just testing on a local PC connected to the VPN, and RDP was just the VPN's purpose but not the concern.
 
Thank you for all the input, info, and knowledge. Yes, the issue is RDP dropping and disconnecting. Hence, my first thought was: VPN throughput.
 
One question was asked and never answered. I know the site locations are reasonably close to each other but are they on the same ISP?
 