Security risks? Allowing guest network access to internal DNS (PiHole)

Discussion in 'Networking & Security' started by iroc409, Oct 14, 2018.

  1. iroc409

    iroc409 [H]ard|Gawd

    Messages:
    1,231
    Joined:
    Jun 17, 2006
    I currently have a PiHole running on a small Debian virtual machine in Proxmox. I had it running on an RPi until an issue cropped up with Raspbian and PiHole, and honestly it runs faster now. The VM runs PiHole and is my Unifi controller. The only other thing that *might* end up on it is UNMS. I don't really have iptables running at the moment but am working on it (I'm used to pf), and would be willing to install something like Fail2Ban if necessary (though I was under the impression that's mostly for SSH).

    I have a couple of separate VLANs on the network: one for internal systems, and one guest network that has my tablet, cell phones, guests, work laptop, etc. I would like the guest network to also have access to a PiHole, and I see three options:

    1. Spin up another Debian instance with PiHole, or create a standalone container on the guest VLAN
    2. Allow access from the Guest VLAN to the internal network PiHole
    3. Separate the Unifi/PiHole and put the PiHole on its own VLAN that both networks can access (blech)

    #1 is probably the more secure option, but #2 is attractive as it's less stuff to deal with, less configuration, and all the stats can be seen in one place. #3 is a pain in the rear, I think. If I'm only allowing DNS through the firewall to the internal Debian machine (no SSH or anything like that), it doesn't seem like there's a substantial security risk with option #2. Am I missing something? What else should I consider?
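
    For reference, a rough sketch of what the DNS-only firewall rules for option #2 could look like on an EdgeOS router like the ER-L. The addresses, interface/VIF, and ruleset name here are made up, so adapt to taste:

```shell
# Hedged sketch: allow only DNS from the guest VLAN to an internal PiHole
# on EdgeOS. IPs, interface names, and the ruleset name are assumptions.
configure

# Allow DNS (UDP and TCP port 53) from guest to the PiHole only
set firewall name GUEST_IN rule 10 action accept
set firewall name GUEST_IN rule 10 description 'Guest -> PiHole DNS'
set firewall name GUEST_IN rule 10 destination address 192.168.1.53
set firewall name GUEST_IN rule 10 destination port 53
set firewall name GUEST_IN rule 10 protocol tcp_udp

# Drop everything else headed for the internal VLAN
set firewall name GUEST_IN rule 20 action drop
set firewall name GUEST_IN rule 20 destination address 192.168.1.0/24

# Apply inbound on the guest VLAN interface
set interfaces ethernet eth1 vif 20 firewall in name GUEST_IN

commit ; save
```

    With a drop rule like rule 20 scoped to the internal subnet, guest clients can still reach the internet normally but only hit the PiHole on port 53.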
     
  2. BlueLineSwinger

    BlueLineSwinger Gawd

    Messages:
    612
    Joined:
    Dec 1, 2011
    It's just DNS queries. Having a node on the guest network use a DNS resolver on your main network isn't that big a deal. Just set up your router's ACLs properly.

    Split Pi-hole and Unifi into separate VMs, or even better containers (this should be done regardless).
     
    IdiotInCharge likes this.
  3. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,731
    Joined:
    Jun 13, 2003
    I will say that I rolled a test Ubuntu 18.04.1 LTS VM that has pi-hole, Unifi Controller, and UNMS all running and available. Remember to wrangle the UNMS ports around!

    As far as containerization goes, I'm still working on that... I want to run Proxmox or another hypervisor solution on my appliance (a J3160 fanless box) and put pi-hole, Unifi, and UNMS on that, but I'll need to learn how to do minimal installs for that stuff.
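
    A minimal install on Proxmox can be as little as one `pct create` per service. A rough sketch, where the VMID, template name, storage, and bridge are all assumptions to swap for your own:

```shell
# Hedged sketch: minimal unprivileged Debian LXC for pi-hole on Proxmox.
# VMID 110, the template filename, local-lvm, and vmbr0 are assumptions.
pct create 110 local:vztmpl/debian-9.0-standard_9.5-1_amd64.tar.gz \
    --hostname pihole \
    --memory 512 --cores 1 --rootfs local-lvm:4 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1
pct start 110

# Then install pi-hole inside the container, e.g. via its official installer
pct exec 110 -- bash -c "apt-get update && curl -sSL https://install.pi-hole.net | bash"
```

    Same idea for a Unifi container; UNMS is the odd one out since it wants Docker rather than LXC.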
     
  4. iroc409

    iroc409 [H]ard|Gawd

    Messages:
    1,231
    Joined:
    Jun 17, 2006
    I didn't really see an issue, but I didn't know if there was something I wasn't considering or had missed.

    I suppose I should be containerizing these functions instead of using VMs. I considered it when I created them, but I'd read discussions about the security differences and so forth that suggested standalone VMs. Maybe I'll install UNMS in one and see how it goes, then convert the rest.
     
  5. Spartacus09

    Spartacus09 Gawd

    Messages:
    848
    Joined:
    Apr 21, 2018
    I'm running pihole and unifi in Docker on 16.04 with no issues or complaints so far (haven't tried UNMS yet).
    It's pretty easy to back up the Unifi controller and restore it to the appliance in the Docker container.
    Pi-hole is pretty set-and-forget in the container. I created mine from scratch, so I'm unsure how easily it migrates, but I imagine there'd be little to no issue backing it up and restoring it to a container.
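
    As a starting point, something along these lines works for the pair. Image names, ports, timezone, and volume paths here are just assumptions to adapt; note the Pi-hole web UI is remapped so it doesn't collide with the Unifi inform port on 8080:

```yaml
# Hedged sketch of a docker-compose file for pihole + unifi.
version: "2"
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/udp"
      - "53:53/tcp"
      - "8081:80/tcp"     # web admin, remapped to avoid clashing with Unifi
    environment:
      TZ: "America/Chicago"
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
      - ./pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped

  unifi:
    image: jacobalberty/unifi:latest
    ports:
      - "8443:8443"       # controller web UI
      - "8080:8080"       # device inform
      - "3478:3478/udp"   # STUN
    volumes:
      - ./unifi:/unifi
    restart: unless-stopped
```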
     
    IdiotInCharge likes this.
  6. iroc409

    iroc409 [H]ard|Gawd

    Messages:
    1,231
    Joined:
    Jun 17, 2006
    It looks like UNMS won't run in an LXC container and only runs on Docker. All I'm currently running is an ER-L, so now I'm not sure if it's worth running UNMS or just a good logging system that can handle everything off the network. I guess it would keep me more informed of updates for the router, but the wireless is Unifi (obviously) and the only managed switch I currently have is HP.

    When I migrated my Unifi controller from the RPi to the Debian VM on Proxmox it was pretty simple. Unifi has a migration wizard of sorts, so it was just fire up the new machine, hop through a few steps, adopt the access point, and away you go. Pretty simple, and the RPi is still my NUT server for the time being, so it just kept the same IP.

    Since my Debian system with PiHole and Unifi is running well, and it ran for quite some time on the rPi with no issues, what is the compelling reason to convert them to separate containers?
     
  7. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,731
    Joined:
    Jun 13, 2003
    Security is one reason, reliability is another; if the base OS gets hosed, you don't lose it all.

    Less important for UNMS and Unifi, but Pi-hole you'd want to be separate.
     
  8. iroc409

    iroc409 [H]ard|Gawd

    Messages:
    1,231
    Joined:
    Jun 17, 2006
    Well, I got everything changed over: two PiHole instances and one Unifi controller in separate containers. It wasn't too much of a hassle. Adding a second PiHole for DNS was just a clone of the first, and the resource usage is minimal.

    I did have a head-scratcher for a couple of days, though. DNS was working fine on my internal network and with a Windows VM in a third VLAN (for an eventual camera setup), but my guest network wasn't working. I could see the DNS requests showing up in PiHole, but the devices couldn't get a connection. Then I realized I needed an allow established/related rule on my internal network like I have on all the others. The Win VM worked because I have a rule allowing connections from internal to that machine for RDP (etc). Duh.
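
    For anyone hitting the same thing, the established/related rule looks roughly like this on EdgeOS (ruleset name and rule number here are placeholders):

```shell
# Hedged sketch: allow return traffic for established/related connections
# on the internal VLAN's inbound ruleset. Name and number are assumptions.
set firewall name LAN_IN rule 1 action accept
set firewall name LAN_IN rule 1 description 'Allow established/related'
set firewall name LAN_IN rule 1 state established enable
set firewall name LAN_IN rule 1 state related enable
```

    Without it, the guest clients' queries reach the PiHole but the replies get dropped on the way back, which matches the "requests show up but no connection" symptom exactly.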

    I don't think I have the Proxmox firewall for the containers configured correctly, though. It's allowing ICMP, which is probably a best practice, but the firewall rules I set up to get started didn't specifically allow it. Overall fairly painless, and the containers use noticeably fewer resources than the Debian VM did. Such is the process.
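
    For comparison, a per-container Proxmox firewall file looks something like the below. The CTID, subnet, and exact rules are assumptions; with the firewall enabled, anything not matched falls through to the input policy:

```
# /etc/pve/firewall/<CTID>.fw  -- hedged sketch, values are assumptions
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
IN ACCEPT -p udp -dport 53    # DNS from any VLAN routed to the container
IN ACCEPT -p tcp -dport 53
# No explicit ICMP rule here; whether ping still works depends on the
# datacenter- and host-level firewall options, which is worth checking
```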
     
    Farva and FNtastic like this.
  9. FNtastic

    FNtastic [H]ard|Gawd

    Messages:
    1,317
    Joined:
    Jul 6, 2013
    Good stuff. I had a NAT rule kick my ass before. Just remember that if you don't use automatic NAT rule creation, you need to go in and create one for each subnet that you want to be able to access the internet. Manual NAT rule creation is not a common option, so most won't ever run into it. That learning experience has saved my ass many times too.
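
    On EdgeOS that's one masquerade rule per subnet, something like this (WAN interface, subnet, and rule number are placeholders; source NAT rules need numbers of 5000 or above):

```shell
# Hedged sketch: manual source NAT (masquerade) for one subnet on EdgeOS.
# eth0 as WAN and the guest subnet are assumptions; repeat per subnet.
set service nat rule 5010 type masquerade
set service nat rule 5010 outbound-interface eth0
set service nat rule 5010 source address 192.168.20.0/24
set service nat rule 5010 description 'Guest VLAN to WAN'
```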