Migration from pfSense to OPNSense: Thoughts?

Zarathustra[H]
I'd appreciate any thoughts from those of you who have attempted this.

My pfSense router/firewall based on an Asrock board and consumer hardware went down the other day. I got it up again after some troubleshooting, but having become used to enterprise features like BMC/IPMI in the meantime, I decided it was time to actually use Enterprise-level hardware in this role.

So I am now doing an open bench test on a Supermicro X12STL-F with a Rocket Lake generation Xeon E-2314 (4C/4T, 2.8 GHz base, 4.5 GHz turbo) and 16 GB of DDR4-3200.

Total overkill for a router, I know, but I got some good deals, and I do lean heavily on my router when it comes to OpenVPN at gigabit speeds, which goes a little heavy on the CPU. (Turns out you can't buy DDR4-3200 ECC modules smaller than 8GB, at least not that I could find, and I'm not one to run a dual channel system in single channel mode, no matter how light the load.)

Anyway, once my testing regimen is done, it will be time to install the router software. The easy way would be to just export my pfSense config, install pfSense on the new machine, import the config, edit the NIC names, and be done with it.

That said, after all the shit I've heard about pfSense and how they behaved when m0n0wall shut down and OPNSense forked from them, I'm not sure I want to have anything to do with pfSense anymore.

Politics, drama & bad blood between the two projects

This has nothing to do with technology, so if you don't enjoy reading about flame wars, move on to the technical section.


It's impossible to not mention the elephant in the room - the bad blood between pfSense and OPNSense. For those who are unaware, it got pretty ugly and very unprofessional on the part of the pfSense team.

In 2016, one of the Netgate/pfSense developers took over the opnsense.com domain to insult the competing project. For a time, opnsense.com displayed a "satirical" video based on the movie Downfall (depicting the final days of Adolf Hitler and Nazi Germany), and the site was full of vulgar language describing the OPNsense software. This ended after a WIPO ruling:


In November 2017, a World Intellectual Property Organization panel found that Netgate, the copyright holder of pfSense, had been using the domain opnsense.com in bad faith to discredit OPNsense, a competing open source firewall forked from pfSense. It compelled Netgate to transfer the domain to Deciso, the developer of OPNsense.

More information about this incident, including the website snapshot, can be found on https://opnsense.org/opnsense-com/


Before writing this article I spent an entire day reading various forum threads, and it's clear that there's no love lost between the two projects. There's plenty of antagonizing of OPNsense on the pfSense forum as well as on various subreddits.


I have not seen the same behaviour from the other direction - if you have, let me know so I can add it here for some balance.


The pfSense folks also took over /r/opnsense and refused to give it back to the OPNsense project, and created /r/OPNScammed to badmouth the competition. This drama never ends; if you are interested, start reading here

Of course there is more than just my feelings about the pfSense project at play here. There have been some recent security issues that are difficult to ignore, and OPNSense ships security updates more frequently.

OPNSense has also been better about implementing stable versions of newer services like WireGuard.

On the flip side, pfBlockerNG is a nice pfSense feature which is absent from OPNsense. I use it, but I go back and forth on keeping it, because it does occasionally break some websites, and some folks in the household have less tolerance for that than I do.

I also like dealing with a project that is not directly tied to a business model of selling hardware.

I guess the thing is this:

While my system is a home system, it is a somewhat complicated one, with 10 VLANs, rules blocking and allowing routing between them for very specific things, routing through VPN for the entire network, etc., plus custom scripts triggered by cron that change settings and bring networks up and down on a set schedule. Migration is going to be a bitch.

So my question to anyone who has migrated is this:

1.) Did you find it to be difficult?
2.) Was it worth it? Why / why not?
3.) Which would you go with?

I'm also open to any other thoughts that might be interesting or relevant, things I didn't ask about but might want to know about living with OPNSense or the migration process.

Essentially, I appreciate any input.
--Z
 
I moved from pfSense to OPNsense a number of months ago. I also decided to move from bare metal to a virtualized instance which added some complexity.

While my network isn't quite as complex as yours (only 5 or 6 VLANs), I do the same thing as far as firewall rules and only allowing certain hosts to communicate. I wanted to make sure I redid everything by hand when I moved over, since the interface is not exactly the same and I wanted to be sure I understood how everything worked as I set it up. It is similar enough that I didn't find anything too troublesome during the move. So to answer your first question, I would say no. I did not attempt to import a configuration, so I can't speak to how that would go.

One thing that is in OPNsense that I don't recall being in pfSense is that you can toggle logging for individual firewall rules whether the rule is a pass or block. It makes finding issues in your rules a lot easier and you can enable / disable them whenever. I was also changing IPs on my clients when I was doing this so it made it very simple to see where I forgot to update an IP address.

As far as was it worth it, I guess? It functions exactly the same as pfSense did. I have had zero issues with it. Tying into question number three, I moved right around the time that Netgate was messing with the pfSense Plus / Community Edition and licensing. I swapped for similar reasons as I had when I dropped ESXi for Proxmox (which turned out to be the right move) when Broadcom bought VMware. I could see Netgate potentially shafting the homelab users in the future and if they did then I would be moving to OPNsense later anyhow. I'm not at all saying that this is going to happen, but the fact that I could imagine it happening makes me uneasy.

I don't see the past drama as a showstopper for using pfSense. My understanding (from limited research) is that it was one or a couple of bad eggs. Sure it's shitty, but I don't think the entire company should be discounted because of a few douchebags. Even the security issues I don't consider to be a big deal. Security issues are going to pop up everywhere; what matters is how quickly the company acts on them. Just skimming what you linked, it looks like those needed authenticated access to be exploited, and that they were patched within a few months and before the CVEs were published. It looks like they acknowledged the reports were legit shortly after they were informed about them.

My honest opinion on the two is that pfSense feels like a more mature product. I'm fairly certain there are a lot more enterprises running pfSense than there are running OPNsense, and you wouldn't be hosting an enterprise behind it if it wasn't a good product. I've had zero issues with OPNsense though and have no reason not to recommend it. Yes, OPNsense pushes out updates more frequently; however, that also means more chances to break things. One thing I want in my router / firewall is stability. I haven't had an issue yet, but admittedly I've only updated two or three times. Now that I have OPNsense virtualized I take a snapshot before updating, so if there are any issues I can just roll back and be stable again. I am less concerned because of that, but it might be something to think about.
 
Thanks for your input. I definitely appreciate it!

I moved from pfSense to OPNsense a number of months ago. I also decided to move from bare metal to a virtualized instance which added some complexity.

I used to do this with pfSense back before 2016. I was often tinkering with my ESXi (later Proxmox) host at the time though, and it came to be a real pain to have it be virtualized, as when the host went down, the house was without internet...

So I moved it to standalone hardware to make my life easier.

Now that I have OPNsense virtualized I take a snapshot before updating, so if there are any issues I can just roll back and be stable again. I am less concerned because of that, but it might be something to think about.

That is definitely a benefit of virtualization, and I do just that with almost all of my VMs.

My plan is to install whichever I choose (pfSense or OPNSense) so that it boots from a ZFS mirror of two small spare NVMe drives I had kicking around from when I recently upgraded the main Proxmox host. I'm thinking if I do this, I will still be able to snapshot it before upgrades by SSHing in and running a "zfs snapshot" command.

Not sure if it works that way on a root pool (rpool) though, or if that can break booting if it doesn't find the kernel it expects.
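
Something like this is what I have in mind, as a rough sketch (the dataset names are assumptions; FreeBSD-based installs typically use zroot rather than rpool, so adjust accordingly):

  # recursive snapshot of the whole boot pool before an upgrade
  zfs snapshot -r rpool@pre-upgrade
  zfs list -t snapshot

  # roll back the root dataset if the upgrade goes sideways
  # (-r destroys any snapshots newer than the rollback target)
  zfs rollback -r rpool/ROOT/default@pre-upgrade

From what I've read, on FreeBSD-based systems like OPNsense the cleaner approach is boot environments (bectl create / bectl activate), since rolling back a live root filesystem can get messy.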
 
pfBlockerNG's functionality can be replicated with built-in OPNsense features:
https://www.comparitech.com/blog/vpn-privacy/pfblockerng-opnsense/
https://www.zenarmor.com/docs/network-security-tutorials/pfblockerng-alternatives-on-opnsense

As for migrating the config, take a look here:
https://forum.opnsense.org/index.php?topic=36683.0
https://github.com/CitraIT/migrate_pfsense

To be fair, the mentioned pfSense security vulnerabilities are overblown, as is OPNsense's slower OpenSSL 1.1.1 -> 3.0 transition. Neither has real-life security impact, and people would be well served by both products. I personally chose to go with OPNsense because of Netgate's shitty past and current behavior, and the schizophrenic handling of pfSense licenses and features.
 
The last time I looked at OPNSense, I didn't like their logging page compared to pfSense, and that is a very important feature to me. Honestly if you have pfSense and pfBlockerNG working well, I'm not sure I would change to OPNSense, but that's my 2 cents.

I use a Palo Alto PA-440 today and keep a pfSense box as a spare.
 
The last time I looked at OPNSense, I didn't like their logging page compared to pfSense, and that is a very important feature to me. Honestly if you have pfSense and pfBlockerNG working well, I'm not sure I would change to OPNSense, but that's my 2 cents.

I use a Palo Alto PA-440 today and keep a pfSense box as a spare.

It's interesting. I've never actually used the log feature in the web interface in pfSense.

If I need to look at the logs, I usually just ssh in and grep through /var/log

I actually set up a separate container on my main server just to run rsyslogd, so it can capture the logs from pfSense, and give me essentially unlimited log space instead of filling up the little SSD on the pfSense box.

I do need to remember to enable logrotate on that box though. I keep forgetting.

The logs are getting large. filterlog.log is now 35GB. Not a problem from a space perspective, but damn, it takes a while to grep through :p
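
When I finally get around to it, something like this in /etc/logrotate.d/ on the rsyslog container should do it (the path and retention policy here are just guesses for my setup):

  /var/log/remote/pfsense/*.log {
      weekly
      rotate 8
      compress
      delaycompress
      missingok
      notifempty
      sharedscripts
      postrotate
          systemctl kill -s HUP rsyslog.service
      endscript
  }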
 
The last time I looked at OPNSense, I didn't like their logging page compared to pfSense, and that is a very important feature to me.
Strange, I've found OPNsense's customizable log Live View to be one of its best features.
 
Just a note on their bug last December...

For this exploit to work, the threat actor needs access to an account with interface editing permissions, hence the need to chain the flaws together for a powerful attack.

So, 2 things...

1. The threat actor needs to already know the credentials to such an account.
2. The threat actor has to have access to get into pfSense in some form. Your HTTPS and SSH interfaces should not be publicly accessible on the WAN anyway, so if someone was compromised via this bug, they had already failed security 101.


One thing that is in OPNsense that I don't recall being in pfSense is that you can toggle logging for individual firewall rules whether the rule is a pass or block. It makes finding issues in your rules a lot easier and you can enable / disable them whenever. I was also changing IPs on my clients when I was doing this so it made it very simple to see where I forgot to update an IP address.
Cypher-
Yes, pfSense has this, and always has had it.


Personally, drama aside, pfSense has always just worked for me, from my online poker days when it was our primary firewall on a Dell R610 with dual Xeon X5660s and 48GB of RAM :D, to now running on a little HP SFF 6th-gen i5 with 8GB of RAM for my 1Gb fiber at home. If it works, is solid, and you know how it works and how to configure it, why switch to something new and have to work through its nuances?
 
If it works, is solid, and you know how it works and how to configure it, why switch to something new and have to work through its nuances?
Pretty much this.

Since OPNSense forked from pfSense, it hasn't been shown (yet) that their devs are more competent than the pfSense devs. Maybe they are; maybe they aren't. I don't recall any glaring differences in capabilities outside of the UI and package support. Until then, unless the OP wants a feature only available in OPNSense, I'd stick with pfSense.
 
Yeah, you guys make good points.

I always found it disconcerting that the web interface runs as root on pfSense, but I understand that avoiding this is complicated, as so many of the settings it needs to edit require root permissions.
 
Pretty much this.

Since OPNSense forked from pfSense, it hasn't been shown (yet) that their devs are more competent than the pfSense devs. Maybe they are; maybe they aren't. I don't recall any glaring differences in capabilities outside of the UI and package support. Until then, unless the OP wants a feature only available in OPNSense, I'd stick with pfSense.
You're kidding right? OPNsense was forked from pfSense in 2015. This isn't a project that's unproven and only been around for a few months. Occasionally I've seen a release need a hotfix for a reported issue, but I can't recall any updates breaking major things in the ~4 years or so I've used it or having major vulnerabilities. On the other hand, you had Netgate/pfSense literally fund a developer to make a terribly insecure kernel wireguard module that was pushed out in pfSense 2.5.0 and almost got admitted into the FreeBSD kernel. The fact that Netgate actually pushed this code out in a release instantly confirms the pfSense devs are less competent IMHO.

I'm also open to any other thoughts that might be interesting or relevant, things I didn't ask about but might want to know about living with OPNSense or the migration process.
There is a Python migration script available that takes a pfSense config and converts it to OPNsense. You can read the GitHub README to see how to use it and its limitations. It could potentially save you time but may still require re-creating some things depending on your environment. Once you have OPNsense up and running, migrating configs between devices is very simple: basically just change the interface names in the exported config. I've migrated my config 3 times (baremetal -> baremetal on different hardware -> virtualized) and the process was very easy. Between that script, the OPNsense docs and this guide it shouldn't be too bad.
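
If you would rather do the interface rename offline before importing, a quick sed over the exported XML works. The device names here are just placeholders for whatever your old and new NICs are actually called:

  sed -e 's|<if>igb0</if>|<if>ix0</if>|g' \
      -e 's|<if>igb1</if>|<if>ix1</if>|g' \
      config.xml > config-new.xml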

Running it baremetal with an E-2314 Xeon / 16GB RAM is a little questionable. I would highly advise getting your feet wet with Proxmox and running a virtualized instance (you can either do IOMMU passthrough on NICs or create Linux bridges for your interfaces). OPNsense shouldn't need more than 8GB of RAM (realistically even 6 or 4GB) assigned to it unless you're running something absolutely crazy, and that Rocket Lake Xeon should be able to route at line speed easily, probably with Suricata on too if you were going that route. Look at current RAM usage on your pfSense instance to get an idea of what you'll be using on OPNsense. You can try it with 2 or 3 CPU cores assigned to the VM or even do all 4. Now you've got a virtualized firewall, which is amazing for snapshots and upgrades. And you can run a few things as LXCs or VMs in Proxmox: a network controller (e.g. Unifi Controller or TP-Link Omada Controller), a Pi-hole/AdGuard Home DNS adblocker, Home Assistant, NUT network UPS monitoring, or mess around with Docker in a Linux VM. The sky's the limit. Eventually you might want to up your RAM if you're going crazy, but even 4 cores on Rocket Lake should handle quite a bit.

If you're running OPNsense baremetal I would install it using ZFS partitioning, if virtualized then UFS. ZFS will consume some more RAM but in the end if you are using a firewall with 16GB of RAM that's plenty of overhead.

The one thing I frequently see people miss when switching from pfSense to OPNsense is pfBlockerNG. It will take more time to set up initially, but you can use ZenArmor and some additional blocking rules to get even more features. I can't answer your 3 questions since I never migrated from pfSense, but as someone who has used OPNsense for quite a few years I would recommend it. I'm not sure what you want to know about living with it, so feel free to ask any specific questions.
 
You're kidding right? OPNsense was forked from pfSense in 2015. This isn't a project that's unproven and only been around for a few months. Occasionally I've seen a release need a hotfix for a reported issue, but I can't recall any updates breaking major things in the ~4 years or so I've used it or having major vulnerabilities. On the other hand, you had Netgate/pfSense literally fund a developer to make a terribly insecure kernel wireguard module that was pushed out in pfSense 2.5.0 and almost got admitted into the FreeBSD kernel. The fact that Netgate actually pushed this code out in a release instantly confirms the pfSense devs are less competent IMHO.

I think you misinterpreted my post. Those same devs also added all that functionality over and above what Manuel did with m0n0wall. And that's a lot. OPNSense does have Zenarmor and some other things (like pfSense has pfBlockerNG), but I haven't seen features that have far and away set it way above pfSense in those extra 9 years. With 9 years, pfSense was light years ahead of m0n0wall.

Although to be fair, perhaps those devs are no longer associated with Netgate.
 
I think you misinterpreted my post. Those same devs also added all that functionality over and above what Manuel did with m0n0wall. And that's a lot. OPNSense does have Zenarmor and some other things (like pfSense has pfBlockerNG), but I haven't seen features that have far and away set it way above pfSense in those extra 9 years. With 9 years, pfSense was light years ahead of m0n0wall.

One thing I hear folks mention is OPNsense having a slightly more user-friendly interface, being able to run the web interface as non-root, and earlier/better support for WireGuard, but that's about it.

Most of the motivation for switching seems to surround the license, pfSense's ownership by Netgate, and Netgate's licensing and hardware-backed business model.

These are important things, but not necessarily things people who aren't huge open source fans think about a lot.

The drama and some of the shit people on the pfSense team did really rubbed me the wrong way though, but I guess Netgate doesn't exactly benefit from me using the free version, so....
 
There is a Python migration script available that takes a pfSense config and converts it to OPNsense. You can read the GitHub README to see how to use it and its limitations. It could potentially save you time but may still require re-creating some things depending on your environment. Once you have OPNsense up and running, migrating configs between devices is very simple: basically just change the interface names in the exported config. I've migrated my config 3 times (baremetal -> baremetal on different hardware -> virtualized) and the process was very easy. Between that script, the OPNsense docs and this guide it shouldn't be too bad.

That's pretty cool! I didn't know that existed. I had read that early releases could directly use pfSense config files but that they had diverged too much since then, and this was no longer possible.

That said, unless it is 100%, I think I'd lean towards doing it manually. Experience has taught me that doing something from scratch is often easier than fixing a failed or incomplete mess. Besides, it would give me a chance to familiarize myself with the interface.


Running it baremetal with an E-2314 Xeon / 16GB RAM is a little questionable. I would highly advise getting your feet wet with Proxmox and running a virtualized instance (you can either do IOMMU passthrough on NICs or create Linux bridges for your interfaces). OPNsense shouldn't need more than 8GB of RAM (realistically even 6 or 4GB) assigned to it unless you're running something absolutely crazy, and that Rocket Lake Xeon should be able to route at line speed easily, probably with Suricata on too if you were going that route. Look at current RAM usage on your pfSense instance to get an idea of what you'll be using on OPNsense. You can try it with 2 or 3 CPU cores assigned to the VM or even do all 4. Now you've got a virtualized firewall, which is amazing for snapshots and upgrades. And you can run a few things as LXCs or VMs in Proxmox: a network controller (e.g. Unifi Controller or TP-Link Omada Controller), a Pi-hole/AdGuard Home DNS adblocker, Home Assistant, NUT network UPS monitoring, or mess around with Docker in a Linux VM. The sky's the limit. Eventually you might want to up your RAM if you're going crazy, but even 4 cores on Rocket Lake should handle quite a bit.


Oh, I know it is severely overkill. The i3-7100 I used before it was severely overkill as well (but it did come in handy for routing my entire network through OpenVPN.)

I went with Rocket Lake as I didn't want to go too old and get something less efficient or obsolete, but I really wanted something with IPMI/BMC to make my life easier.

Rocket lake just turned out to be the best balance of age and affordability for me, so I went with it. The E-2314 is simply the cheapest lowest end Rocket Lake Xeon, so that's what I wound up with. I know everything about this system is stupid overkill, but I am OK with that. Better have too much and room to grow than too little :p

Several years ago I did run pfSense as a VM in ESXi with passed through NIC's, but I migrated to bare metal as it was a huge pain in the ass to lose internet every time I took the ESXi box down for maintenance. I really don't want to go back to that.

I have since migrated my main server to a beefy all-in-one Proxmox box.

I'd install proxmox on the E-2314 as well, but I wonder what else I would actually run on it, given all the spare capacity I have on the main box, a beefy 32C/64T EPYC-7543 box with 512GB RAM. It seems silly to install Proxmox (or another hypervisor) just to run a single guest...

Maybe I could just do it to benefit from experimenting with a cluster. I've never done that before. I guess it couldn't hurt....

Decisions decisions...


If you're running OPNsense baremetal I would install it using ZFS partitioning, if virtualized then UFS. ZFS will consume some more RAM but in the end if you are using a firewall with 16GB of RAM that's plenty of overhead.

That's exactly the plan. I have like eight 256GB Inland Premium NVMe drives left over from upgrading the main server, so the plan is to just mirror two of them in ZFS and have a little redundancy (and the ability to easily snapshot!)

The one thing I frequently see people miss when switching from pfSense to OPNsense is pfBlockerNG. It will take more time to set up initially, but you can use ZenArmor and some additional blocking rules to get even more features. I can't answer your 3 questions since I never migrated from pfSense, but as someone who has used OPNsense for quite a few years I would recommend it. I'm not sure what you want to know about living with it, so feel free to ask any specific questions.

I use pfBlockerNG right now. It is certainly a cool project, and it blocks a lot of trash, but I am also leaning towards no longer using it, as it breaks a lot of webpages, and that gets really annoying over time.

Worst comes to worst, maybe I could virtualize a pihole install on the Proxmox box? Is there even an x86 version?

You have given me things to think about though. I appreciate it.
 
Worst comes to worst, maybe I could virtualize a pihome install on the Proxmox box? Is there even an x86 version?
You mean pihole right? And yes they support a wide variety of architectures. https://docs.pi-hole.net/main/prerequisites/



And for something like pihole, an LXC container created on Proxmox using your preferred distro (e.g. Ubuntu/Debian/Fedora/CentOS) is less resource overhead than making a standalone VM. On Proxmox I tend to try to use LXCs when possible.
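
Creating one is only a couple of commands; a rough sketch with made-up VMID, template version, storage, and addresses:

  pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname pihole --cores 1 --memory 1024 --swap 512 \
      --net0 name=eth0,bridge=vmbr0,ip=192.168.1.53/24,gw=192.168.1.1 \
      --rootfs local-zfs:8 --unprivileged 1
  pct start 110
  # then inside the container, run the official installer:
  # curl -sSL https://install.pi-hole.net | bash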

edit: also there is a community repository for OPNsense which has AdGuardHome if you wanted to try to run it on the same device as the firewall. Sadly it listens only on port 3000, which prevents you from running ntopng (included in the default repo) or Grafana (from the community repo).
 
You mean pihole right? And yes they support a wide variety of architectures. https://docs.pi-hole.net/main/prerequisites/


And for something like pihole, an LXC container created on Proxmox using your preferred distro (e.g. Ubuntu/Debian/Fedora/CentOS) is less resource overhead than making a standalone VM. On Proxmox I tend to try to use LXCs when possible.

edit: also there is a community repository for OPNsense which has AdGuardHome if you wanted to try to run it on the same device as the firewall. Sadly it listens only on port 3000, which prevents you from running ntopng (included in the default repo) or Grafana (from the community repo).


Yep, that was a typo and/or autocorrect. Edited above
 
My honest opinion on the two is that pfSense feels like a more mature product. I'm fairly certain there are a lot more enterprises running pfSense than there are running OPNsense, and you wouldn't be hosting an enterprise behind it if it wasn't a good product.
Back when I was researching a new platform for us to use for our enterprise routing and VPN tunnels, pfsense wasn't even on the map due to its lack of industry standard IPsec VPN tunnels like everyone else. I'm sure they do that now (dunno, haven't looked), but where enterprise firewalls are today, I could hardly see any real business using pfsense outside of a netgate hardware product. Everyone is on fortinet, juniper, palo alto, etc.
 
I have been running pfsense, truenas/freenas/nas4free, and now proxmox on used consumer gear, starting with pfsense about 15 years ago. I am diligent however to replace them about every 5 years with newer hardware. The only stuff I typically buy new are power supplies and hard drives/SSDs for the nas. Everything else used.

I've never had anything fail on me, but that's my experience. Maybe I'm rolling the dice, only time will tell.

The reason I stick with pfSense is that every time I have an odd networking need, somewhere, someone has already figured it out, and I can just follow a guide to get what I need going.
 
Back when I was researching a new platform for us to use for our enterprise routing and VPN tunnels, pfsense wasn't even on the map due to its lack of industry standard IPsec VPN tunnels like everyone else. I'm sure they do that now (dunno, haven't looked), but where enterprise firewalls are today, I could hardly see any real business using pfsense outside of a netgate hardware product. Everyone is on fortinet, juniper, palo alto, etc.
They also feature enterprise standard backdoors and security vulnerabilities.
 
Having given it some thought, I think I might just put Proxmox on the new router box and install OPNSense as a VM with a passed-through NIC.

Then that will leave me some spare resources to run a small VM. With a 4C4T CPU it won't be much, but I'm thinking maybe an x86 pi-hole as an LXC container.

I might just grab another 16GB of RAM. 32GB total should be OK for the small ZFS pool, OPNSense VM and pi-hole VM.

I haven't done PCIe passthrough in a really long time though. Last time was probably on ESXi 5.5 back in late 2015 / early 2016.

It looks like it is a little more complicated in KVM/Proxmox than it was with ESXi, at least from my recollection.

The most promising guide I have found thus far is this one on ServeTheHome. It suggests I need to add intel_iommu=on to the options passed to the kernel (and optionally iommu=pt, though it is unclear to me what that does right now). But I am pretty certain Proxmox replaces GRUB with a different boot loader when installed on ZFS, to avoid the whole "rpool feature upgrade resulting in an unbootable system" problem, so I wonder if GRUB is still the right place to make these changes.

I mean, whatever they replaced it with (I will have to google and see if I can remember) must have a way to pass options to the kernel; I just don't know if it is in the same place as in this guide.
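
From what I can tell so far, on a UEFI install with ZFS root, Proxmox uses systemd-boot, and the kernel options live on the single line in /etc/kernel/cmdline rather than in /etc/default/grub, with proxmox-boot-tool syncing changes out to the boot partitions. So something like this, appending to the existing line rather than replacing it:

  # /etc/kernel/cmdline (everything on one line), e.g.:
  # root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
  proxmox-boot-tool refresh   # writes the change to the boot partition(s)
  reboot
  # afterwards, check that the IOMMU actually came up:
  dmesg | grep -e DMAR -e IOMMU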

I also can't remember if it is possible to pass through the two ports on a NIC independently of each other, or if they go together since they are on the same physical NIC.
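
I guess the thing to check once it's assembled is how the IOMMU groups fall out, since a group is the smallest unit you can pass through. Something like this should show whether the two X520 ports land in separate groups:

  for d in /sys/kernel/iommu_groups/*/devices/*; do
      g=${d#*/iommu_groups/}; g=${g%%/*}
      printf 'group %s: ' "$g"; lspci -nns "${d##*/}"
  done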

I'm going to pop one of my spare X520 dual-port 10gig SFP+ adapters in it. If it is possible to pass through just one of the ports for dedicated use by OPNSense, that would be ideal: I'd use one of those ports as the LAN port (so I can route some additional traffic through there should I need it, without impacting gigabit internet speeds) and a passed-through onboard Intel i210 gigabit port as the WAN port. But this may not be possible, so it may be both or nothing.

I absolutely want the WAN port to be passed through one way or another, so I don't expose the Proxmox network bonds to the WAN, but on the LAN side I could probably live with a virtual NIC in a pinch. Not sure how much that would impact performance / CPU load at high throughput though.

Cypher- Dopamin3 SamirD You guys seem to have some recent experience with this. Any thoughts that might be helpful? I'd appreciate any info you think is relevant or helpful here.
 
If you're going to run it bare metal, a lot of the guys on STH have used this board:
https://www.ebay.com/itm/Supermicro...-ST031-6x-10GBE/133362966075?_trkparms=ispr=1

Wow! That looks like a neat little board at a surprisingly affordable cost (presumably because it is an X10 generation board). I kind of wish I had been aware of it when I was shopping, as I would likely have gone for it.

I bet it uses way more power than my little Rocket Lake though. All of those 10gig NICs produce a lot of heat, even at idle!

I've already bought the hardware, so no going back now. I'm going to use what I have. At some point when Rocket Lake Xeons get older and cheaper, maybe I'll pick up the top of the line 8C16T 2388G which will allow for more virtualization.


EDIT:

This X10SLH-N6-ST031 is an interesting board, in that I cannot find it on the Supermicro webpage, even among their EOL list. There is an X10SLH-F, which looks very similar (but without all the beefy network ports) but this particular one is nowhere to be found. That might explain why it never showed up in any of my research. I went through pretty much every low end Xeon board from Supermicro one by one before I decided on the X12STL-F...
 
I'm going to pop one of my spare X520 dual-port 10gig SFP+ adapters in it. If it is possible to pass through just one of the ports for dedicated use by OPNSense, that would be ideal: I'd use one of those ports as the LAN port (so I can route some additional traffic through there should I need it, without impacting gigabit internet speeds) and a passed-through onboard Intel i210 gigabit port as the WAN port. But this may not be possible, so it may be both or nothing.

I absolutely want the WAN port to be passed through one way or another, so I don't expose the Proxmox network bonds to the WAN, but on the LAN side I could probably live with a virtual NIC in a pinch. Not sure how much that would impact performance / CPU load at high throughput though.

I realized the above wasn't really clear, so here is what I am thinking.

Onboard I have two Intel i210 gigabit ports. Those are the standard, old-reliable Intel gigabit Ethernet chips.

In one of the PCIe slots, I'll be sticking an X520 dual-port 10Gig SFP+ adapter.

Option1: (preferred, if possible)
  • i210 Gigabit NIC1 - Passed through OPNSense WAN port
  • i210 Gigabit NIC2 - Unused
  • x520 10Gig NIC1 - Proxmox management interface NIC / other VM traffic
  • x520 10Gig NIC2 - Passed Through OPNSense LAN port
This would allow for full hardware offloads of main internet traffic, and does not expose the Proxmox vmbr0 or vmbr1 to the WAN.
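
For reference, here is roughly what I picture the Proxmox side of Option 1 looking like in /etc/network/interfaces (device names are guesses until the hardware is in front of me; the two passed-through ports never get a bridge at all):

  auto enp2s0f0               # X520 port 1, stays with the host
  iface enp2s0f0 inet manual

  auto vmbr0                  # management / VM traffic
  iface vmbr0 inet static
      address 192.168.1.2/24
      gateway 192.168.1.1
      bridge-ports enp2s0f0
      bridge-stp off
      bridge-fd 0

  # the i210 WAN port and X520 port 2 get bound to vfio-pci and attached
  # to the OPNSense VM as hostpci devices, so they have no stanza here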



Option2: (x520 ports cannot be separated option, uses virtual NIC/bond for LAN)
  • i210 Gigabit NIC1 - Passed through OPNSense WAN port
  • i210 Gigabit NIC2 - Unused
  • x520 10Gig NIC1 - Proxmox management interface NIC / all VM traffic, including OPNSense LAN
  • x520 10Gig NIC2 - Unused (or maybe link aggregated with above, to switch)
In this config, OPNSense won't see full hardware offloads on the LAN side, but we still maintain VMBR network isolation from the WAN, which is important



Option3: (x520 ports cannot be separated option, passes through both x520 ports to OPNSense)
  • i210 Gigabit NIC1 - Used for Proxmox management interface / other VM traffic
  • i210 Gigabit NIC2 - Possibly link aggregated with above.
  • x520 10Gig NIC1 - Passed through OPNSense WAN port (will require use of copper ethernet adapter in SFP+ port)
  • x520 10Gig NIC2 - Passed through OPNSense LAN port (to switch)
In this config, OPNSense will still have full hardware acceleration for internet traffic, but Proxmox will not have 10gig speeds for the VM traffic.


Appreciate any other suggestions.
 
I might just grab another 16GB of RAM.
Honestly for just OPNsense and pihole, 16GB is probably fine. OPNsense @ 4 - 6GB RAM and pihole @ 1GB RAM still gives you a good bit for ZFS cache and the other services. By default Proxmox on ZFS will consume up to 50% of RAM as ARC cache, but you can change that:
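
Capping it is quick, e.g. limiting the ARC to 4 GiB (the value is in bytes):

  echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
  update-initramfs -u -k all    # so the limit applies from boot
  # or apply it immediately without a reboot:
  echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max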

The most promising guide I have found thus far is this one on ServeTheHome. It suggests I need to add intel_iommu=on to the options passed to the kernel (and optionally iommu=pt, though it is unclear to me what that does right now). But I am pretty certain Proxmox replaces GRUB with a different boot loader when installed on ZFS, to avoid the whole "rpool feature upgrade resulting in an unbootable system" problem, so I wonder if GRUB is still the right place to make these changes.
PT mode is fine to leave on. It allows the device to skip DMA translation to memory and can improve performance. You can tell which steps you need to take; read Step 2 / Step 3a. I think most Proxmox installs did get updated to replace GRUB with systemd-boot, but I started with 8.0 so never had GRUB in the first place.

Once you make those changes you should be able to add Hardware -> PCI Device and choose Raw Device to pass it through to your VM. I followed that STH guide on Alder Lake and it worked great.
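
If you'd rather script it, the CLI equivalent is qm set (the VMID and PCI address here are made up; find the real address with lspci):

  qm set 100 -hostpci0 0000:01:00.0,pcie=1
  qm set 100 -machine q35    # pcie=1 requires the q35 machine type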




Option1: (preferred, if possible)
  • i210 Gigabit NIC1 - Passed through OPNSense WAN port
  • i210 Gigabit NIC2 - Unused
  • x520 10Gig NIC1 - Proxmox management interface NIC / other VM traffic
  • x520 10Gig NIC2 - Passed Through OPNSense LAN port

This would work well for you.
 
Wow! That looks like a neat little board at a surprisingly affordable cost (presumably because it is an X10 generation board). I kind of wish I had been aware of it when I was shopping, as I would likely have gone for it.

I bet it uses way more power than my little Rocket Lake though. All of those 10gig NICs produce a lot of heat, even at idle!

I've already bought the hardware, so no going back now. I'm going to use what I have. At some point when Rocket Lake Xeons get older and cheaper, maybe I'll pick up the top of the line 8C16T 2388G which will allow for more virtualization.


EDIT:

This X10SLH-N6-ST031 is an interesting board, in that I cannot find it on the Supermicro webpage, even among their EOL list. There is an X10SLH-F, which looks very similar (but without all the beefy network ports) but this particular one is nowhere to be found. That might explain why it never showed up in any of my research. I went through pretty much every low end Xeon board from Supermicro one by one before I decided on the X12STL-F...

Did some more googling. As cool as that board is, it looks like while it was indeed made by Supermicro, it was a custom OEM product for one of their customers, and they positively will not support it, no matter what. At this point it is so far out of warranty that this doesn't necessarily matter, but you will have a difficult time finding a manual, BIOS, or other firmware for it.

If you google it, there are a few Reddit complaints from people who bought it, couldn't find BIOS/firmware, and reached out to Supermicro, who flat out refused to have anything to do with it. (Though I have seen some references to custom hacked BIOSes that include newer microcode and NIC firmware and even add NVMe support, but that is a wee bit on the sketchy side for me.)

I wonder who the OEM is and if they still have hosted firmware.
 
Perhaps what I did was wrong so you may be teaching me here. I'm using a Dell R730 as the host. The R730 has 4x Ethernet ports and I have a ConnectX-3 installed. What I did was:

ConnectX-3 -> bridged to vmbr0. This is connected to my switch and tagged with all of the VLANs for my internal network.
eno4 -> Bridged to vmbr1. This is my WAN connection going into the 4th Ethernet port on my server.

The Ethernet cable coming from my ONT goes into a switch configured with an access port for VLAN 2. The cable running to Proxmox (eno4) is also configured as an access port for VLAN 2. As of right now those are the only two ports on my switch on VLAN 2.

All virtual machines have an interface using vmbr0. Only my OPNsense VM has an interface using vmbr1. I'm not passing through the NIC to OPNsense. My understanding is that if I ever wanted a second Proxmox host and wanted to migrate OPNsense from one host to another all I had to do was setup the ports in the exact same way and have a third connection to my switch on VLAN 2. Granted I never got around to trying (and don't know if I ever will) but from what I was reading this should work and allow me to keep OPNsense online with no downtime when I need to update Proxmox.

What is the issue with exposing vmbr1 to the WAN? If my server is down nothing can answer. If the server is up OPNsense grabs my WAN IP and acts as the firewall. Proxmox itself isn't going to grab my WAN address and somehow start talking to the internet. Am I overlooking something here? As far as what you are considering though I don't see an issue with option #1.

As for Pi-hole, I actually run Pi-hole on a Pi as my primary DNS and have a VM as my secondary. They are kept in sync with Gravity Sync and Keepalived. Works well for me.
 
Perhaps what I did was wrong so you may be teaching me here. I'm using a Dell R730 as the host. The R730 has 4x Ethernet ports and I have a ConnectX-3 installed. What I did was:

ConnectX-3 -> bridged to vmbr0. This is connected to my switch and tagged with all of the VLANs for my internal network.
eno4 -> Bridged to vmbr1. This is my WAN connection going into the 4th Ethernet port on my server.

The Ethernet cable coming from my ONT goes into a switch configured with an access port for VLAN 2. The cable running to Proxmox (eno4) is also configured as an access port for VLAN 2. As of right now those are the only two ports on my switch on VLAN 2.

All virtual machines have an interface using vmbr0. Only my OPNsense VM has an interface using vmbr1. I'm not passing through the NIC to OPNsense. My understanding is that if I ever wanted a second Proxmox host and wanted to migrate OPNsense from one host to another all I had to do was setup the ports in the exact same way and have a third connection to my switch on VLAN 2. Granted I never got around to trying (and don't know if I ever will) but from what I was reading this should work and allow me to keep OPNsense online with no downtime when I need to update Proxmox.

What is the issue with exposing vmbr1 to the WAN? If my server is down nothing can answer. If the server is up OPNsense grabs my WAN IP and acts as the firewall. Proxmox itself isn't going to grab my WAN address and somehow start talking to the internet. Am I overlooking something here? As far as what you are considering though I don't see an issue with option #1.

As for Pi-hole, I actually run Pi-hole on a Pi as my primary DNS and have a VM as my secondary. They are kept in sync with Gravity Sync and Keepalived. Works well for me.

I'm not an expert either, but bridging your WAN to vmbr1 is the part that would make me a little nervous, because now you are exposing the inner workings of ethernet bridges and Proxmox to the public internet.

I'm not saying it can't be safe, but it certainly opens up more opportunities for there to be an exploitable vulnerability than directly interacting with the NIC inside of pfSense as you would if you used direct I/O forwarding.

How large of a risk that truly is though, I'll let someone else comment on. Maybe I am just paranoid, but I just prefer to not do it.

I used to not worry about stuff like this, until I actually dove into the logs of WAN connection attempts. Man, they are persistent: port scans, SSH login attempts, and some other things I don't even know what they are. All the time. Which is why these days I take as many safeguards there as I can to minimize exposure.

I don't think I am a big enough target to warrant anyone with real skill trying to break into my network, but for the run of the mill automated script attacks, you bet. All the time. Constant.
 
So, my migration has been working for a while now, but I have noted an ABSOLUTELY MASSIVE increase in CPU use at max WAN load over VPN in going from pfSense/OpenVPN/AES to OPNSense/WireGuard/ChaCha20-Poly1305.

I wrote that up in this other thread (I forgot I had started this one for a related subject)

More details over here.
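
If anyone wants to sanity check the cipher gap on their own hardware, OpenSSL's benchmark shows it pretty starkly, since AES-GCM rides the CPU's AES-NI instructions while ChaCha20-Poly1305 runs in (vectorized) software on x86:

  openssl speed -evp aes-256-gcm
  openssl speed -evp chacha20-poly1305

(WireGuard's crypto is in-kernel rather than OpenSSL, but the relative throughput of the two ciphers tells the same story.)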
 
They also feature enterprise standard backdoors and security vulnerabilities.
Which any company worth their salt patches immediately. If they didn't, I'm sure the Fortune 500 wouldn't be paying big bucks to run their hardware.
 
Cypher- Dopamin3 SamirD You guys seem to have some recent experience with this. Any thoughts that might be helpful? I'd appreciate any info you think is relevant or helpful here.
Unfortunately, you're above my pay grade at this point, as all I've done is read theory. No practical experience with actually doing it yet. :(
 
Why not run plain FreeBSD and forget about that GUI nonsense?
 
Which any company worth their salt patches immediately. If they didn't, I'm sure the Fortune 500 wouldn't be paying big bucks to run their hardware.

https://youtu.be/7sEI89FAD3c

https://www.zdnet.com/article/some-fortinet-products-shipped-with-hardcoded-encryption-keys/
Fortinet, a vendor of cyber-security products, took between 10 and 18 months to remove a hardcoded encryption key from three products that were exposing customer data to passive interception.

Etc.
 

https://youtu.be/7sEI89FAD3c

https://www.zdnet.com/article/some-fortinet-products-shipped-with-hardcoded-encryption-keys/
Fortinet, a vendor of cyber-security products, took between 10 and 18 months to remove a hardcoded encryption key from three products that were exposing customer data to passive interception.

Etc.

Yep, and they've taken a big hit in the reputation department for doing so. I know a lot of people moved to Palo Alto because of that fiasco.
 
Yep, and they've taken a big hit in the reputation department for doing so. I know a lot of people moved to Palo Alto because of that fiasco.
Their PA 400 series price/performance surely helped a lot there compared to their older models lol.
 

https://youtu.be/7sEI89FAD3c

https://www.zdnet.com/article/some-fortinet-products-shipped-with-hardcoded-encryption-keys/
Fortinet, a vendor of cyber-security products, took between 10 and 18 months to remove a hardcoded encryption key from three products that were exposing customer data to passive interception.

Etc.


This is why I almost dogmatically insist on open source products for this type of thing.

You won't find me ever using any kind of "system in a box" from any vendor, be it a firewall, a router, a SAN, you name it. Absolutely everything gets custom built from enterprise server hardware and runs open source stuff.
 