FreeBSD Jails vs Virtualization/ESXi

iroc409 · [H]ard|Gawd · Joined: Jun 17, 2006 · Messages: 1,385
I have a Sophos UTM and a FreeBSD file server running at home that are pretty lightly loaded most of the time, to the extent that it almost seems silly to have that much stuff running all the time. The server is only under load when it's doing a ZFS scrub or (md5) backup, and the Sophos box is only stressed if I am home downloading a bunch of stuff.

I have been looking into an ESXi all-in-one, but have also been reading a bit about FreeBSD jails. I have been thinking about getting an EdgeRouter Lite and replacing the UTM by installing Suricata and creating a transparent bridge behind the firewall to do traffic monitoring from the file server. I've looked at maybe running Plex for media streaming. I would be interested in running some sort of HTTP server and OpenVPN, but I know I can run VPN on the router. I could maybe set up a reverse proxy on the router to the server; I'm not fully sure how much security that adds yet, but I am still looking into it.

Has anyone done something like this, and is it a good idea? It sounds like a bit of work setting everything up on the front end, but I think it would be an interesting project. What are the big concerns from a security perspective on this?

My biggest priority would be keeping the file server secure and making sure our personal files don't leave the server towards Internet Land. Most of the junk wouldn't matter much, but I do have a small amount of stuff I would consider sensitive.
 
I have been running an AIO ESXi box (9450 w/ 10GB RAM) with Astaro for years. I've had very few problems with it (outside of Realtek NICs). If your current hardware has the power/compatibility, you can convert your current Sophos box to ESXi, then download the virtual appliance and use your existing config file. This is just to say it's a very doable thing. I'll let the FreeNAS/FreeBSD folks speak to your actual questions about doing the AIO on the FreeBSD side. :)
 
The file server isn't really suitable for an AIO itself, and doesn't have passthrough. It's only a dual core with 8GB RAM. The firewall is over-powered for most of its use, and under-powered for significant file transfers (I lose probably 25% of throughput to IPS & antivirus). They are both about the same age, over 5 years old, and could probably use replacement.

I know that some people on the FreeNAS forums crusade against virtualizing it, but here people seem to have a lot of success with the AIO. I guess it's just a philosophical question. We used to pack everything on a server at once, but the push to virtualization has caused us to compartmentalize every small function of a server. The overlap of system resources seems in some instances rather inefficient, but I guess we mostly have the horsepower for it (save for RAM).
 
If you need fancy networking and proper virtualization of the network stack, jails aren't for you. Treat jails as glorified chroots. There was some effort to bring network virtualization to jails (VNET, VIMAGE), but last time I checked it's highly experimental and not really fit for serious use.

Also, jails all run under the same kernel and since FreeBSD doesn't have real resource management, your only tool to manage CPU usage is nice(1).
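For what it's worth, in practice that means something like this (jail name and workload are made up, just to show the combination):

Code:
# about the extent of per-jail CPU control: renice a jail's heavy job from the host
# (jail "buildjail" and the make run are hypothetical examples)
jexec buildjail nice -n 15 make -j4 buildworld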
 
and even the RAM is handled transparently pretty well too.

FreeBSD folks tend to be ol' school - they're not seeing the benefits yet of virtualization like most other OSes have embraced. Is what it is.
 
If you need fancy networking and proper virtualization of the network stack, jails aren't for you. Treat jails as glorified chroots. There was some effort to bring network virtualization to jails (VNET, VIMAGE), but last time I checked it's highly experimental and not really fit for serious use.

Also, jails all run under the same kernel and since FreeBSD doesn't have real resource management, your only tool to manage CPU usage is nice(1).


Some of the stuff I've read suggests "glorified chroots" is an oversimplification and that jails provide much more segregation, but you can't believe everything you read.

Interestingly, if you don't filter lo0 through pf (most internet examples skip lo0), the jails can talk to each other through the network.
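Something like this pf.conf fragment seems to be what's needed (just a sketch--the 10.0.0.0/24 jail range and em0 are placeholders, not my actual setup):

Code:
# /etc/pf.conf fragment - block jail-to-jail chatter that loops back over lo0
jail_net = "10.0.0.0/24"          # hypothetical range the jail aliases live in
block drop quick on lo0 inet from $jail_net to $jail_net
# jails can still reach the outside world through the real NIC
pass out on em0 inet from $jail_net to any keep state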

I guess the question is whether I need virtualization at all. I've decided I don't really plan to run http or a VPN endpoint on my file server after the SSL bug that basically left my firewall's cheeks flapping in the breeze. I'll put HTTP or VPN on a separate device if I get rid of the UTM--I have a netbook or I can get a Raspberry Pi.

Basically, to go to a single machine, my options are:

Buy a new server, install ESXi and run:
-Sophos UTM
-FreeBSD file server (with a pass-through USB controller and drive controller)
-Whatever else I want to play with - http server, most likely, maybe a little monitoring software (Sophos has built-in VPN)

-OR-

Install Suricata & PF, and _maybe_ Squid on the file server, and run a transparent bridge between my switch and a lightweight router (ERL/Mikrotik); there's a rough sketch of the bridge setup below. I'd buy access to the Snort VRT & ETPro rule sets for blacklisting & malware detection.

Even with the ESXi, I'd still probably run a lightweight perimeter router.

I need to upgrade the server eventually, and definitely the drive array, as I think I have one drive dying. But I can probably get away with a smaller server (especially RAM) if I don't go the ESXi route. My current server should have enough horsepower to run basic IPS and file serving on my network, as the load is fairly low.
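For the transparent-bridge option, I'm picturing something along these lines on the FreeBSD box (rough sketch only--em0/em1 are stand-ins for whatever NICs end up facing the switch and the router):

Code:
# create a transparent bridge between the LAN-side and router-side NICs
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm em1 up
ifconfig em0 up
ifconfig em1 up
# run Suricata in IDS mode against one of the bridge members
suricata -D -c /usr/local/etc/suricata/suricata.yaml -i em0

I'd still have to sort out the net.link.bridge pfil sysctls so pf and the bridge play nicely, but that's the gist.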


and even the RAM is handled transparently pretty well too.

FreeBSD folks tend to be ol' school - they're not seeing the benefits yet of virtualization like most other OSes have embraced. Is what it is.

bhyve was released in version 10. I don't know much about it, other than that it's a native hypervisor. I don't know that it's so much that FreeBSD hasn't embraced virtualization as that the primary hypervisors just don't support it. Everyone uses VMware, with a smattering of Xen and Hyper-V--none of which support FreeBSD as a host to my knowledge. Apparently VirtualBox works well with FreeBSD, but I've never used it.
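From what little I've read, booting a guest looks roughly like this with the sample script that ships in 10 (disk image and VM name here are made up, and it assumes vmm.ko and a tap interface are already set up):

Code:
# boot a FreeBSD guest with the example wrapper script that ships in base
# (guest.img and "guestvm" are placeholders; vmm must be loaded and tap0 created)
sh /usr/share/examples/bhyve/vmrun.sh -c 2 -m 1024M -t tap0 -d guest.img guestvm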

There are certainly new technologies, but a lot of stuff out there is just a new way of interacting with old-school systems. I read an interesting editorial about that recently.

I'm probably just chasing my tail and making things harder than they need to be, but I am extremely tired of the excess heat and noise from two "mostly" low-power machines running all the time in my office.
 
They don't support it as a host because VMware ESXi / Hyper-V / Xen are all type-1 hypervisors that run bare-metal. There's no such thing as a host OS in those; they ARE the host. VMware has had it as a supported guest OS for a very long time, and I've got 3-4 copies of it running right now (and am, in fact, setting up a new 10.0 version as I type, or would be if a package wasn't missing off of distfiles...) on ESXi.

bhyve is a type-2 hypervisor like KVM, VMware workstation/player, virtualbox, etc (you can argue that KVM is a 1.5, but whatever). FreeBSD is just too small a market to be a host for most type-2 hypervisors, as the required plumbing to access the needed CPU features at ring-0 for effective virtualization requires significant work either directly in the host kernel, or access through it, and it's just not worth the time commitment for such a small market (since that's home / small scale especially - enterprise is type-1, and you can run it as a guest there, which is why it's supported in that fashion).
 
Right, I should have been more specific. I meant VMware Server/Workstation/XenServer/etc.--the type-2 hypervisors from those brands. I've only fired up a FreeBSD VM once in the last few years, but I'm sure they have improved with FreeBSD as a guest. I had one in Player that I was using for some programming nonsense.

I'd agree it's market share, and it doesn't hurt that you can run several operating systems on top of Linux versus only a couple with the *BSDs. It also helps a lot that there are several corporations behind the major Linux distributions, like openSUSE, Ubuntu, Red Hat, etc. I don't know that any corporation really carries the torch for *BSD, unless you count Apple--which really isn't driving "*BSD adoption" per se.

It wasn't too long ago FreeBSD had a pretty good market share on internet servers, but I have no idea where that is now. I've heard rumors OpenBSD has been having some issues lately, which won't help the cause either. I use OpenSUSE on my laptops. I tried PC-BSD once. It worked fine, but I wasn't overly thrilled with it.
 
Ironically, OpenBSD is the best BSD guest for VMware, with in-kernel devices for network and disk and a time sensor for ntpd that doesn't require network-level NTP.

FreeBSD still requires VMware Tools installation and kernel modules for disk/network.
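On the FreeBSD side that part is at least just a package these days, if I remember right (package name from the ports tree; the port prints the rc.conf knobs it wants at install time):

Code:
# install the open-source VMware tools on a FreeBSD guest (console-only flavor)
pkg install open-vm-tools-nox11
# the port's pkg-message lists vmware_guestd_enable and the related rc.conf knobs to set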
 
I love BSD. I'm an old school Sys-V guy myself, as well as AIX/HP-UX...

But even I have trouble justifying using it. Linux adapts faster to new tech, has more options for the bleeding edge (where I live, given where I work), has more support from vendors and software, and has more complete documentation for many of the things I do. And they're faster too...
 
My first *NIX was FreeBSD, 2.2.7 I think. We only had two floppy disks available, so we used those two to install the whole system on an IBM laptop with like a 60MB hard drive. It was awesome.

I hadn't really used it terribly much over the years until I decided to try out ZFS, naturally on FreeBSD. The Solaris forks seemed a bit premature (and they've changed a bit), otherwise I probably wouldn't have started using it again. It's been running fine for a couple years now, but desperately needs to be updated.

Before FreeBSD, my file server was actually running Win Server 2008 with a Hyper-V instance of WHSv1 for network backups. It worked great.

Linux does have its advantages, that's for sure.
 
Solaris premature? :confused: You do know it was developed by Sun Microsystems, and is still a Sun technology, right? The BSD implementation is more mature than Linux's for sure, but that's the one thing that Sun nailed - Solaris has the best ZFS system in the world.
 
I run Xen, and there is a FreeBSD version of Xen also. I love Xen; it works great and lets you assign PCI cards.
ESXi 5.5 seems to be great now, since they removed the restrictions on the free version such as RAM and number of CPUs.
 
Solaris premature? :confused: You do know it was developed by Sun Microsystems, and is still a Sun technology, right? The BSD implementation is more mature than Linux's for sure, but that's the one thing that Sun nailed - Solaris has the best ZFS system in the world.

Oh no, the forks, not Solaris--it's ancient (and solid). OmniOS, illumos, OpenIndiana, etc. I tried OpenIndiana a few years ago and installed Gea's excellent napp-it. It worked well, but the user documentation was really incomplete, and it was a brand new OS from what I recall.

It seems like several months (a year?) later, most people jumped ship from OpenIndiana for OmniOS.

I didn't really want to put data I consider critical on a new system. I don't know why everyone migrated, but I don't follow Solaris and its forks very well. I've always wanted to get into it, but never really had a reason to. 7-8 years ago I even got the official Solaris DVD set.
 
Makes total sense - and is precisely why I haven't gone there either (it's also a dead OS, let's be honest). Didn't make that jump on your first statement :)
 
Solaris 11.2 is scheduled for mid-2014. Just sayin'.

Solaris 11 Zones have the CrossBow network stack, wherein it's trivial to create VNICs in or out of band; they have a 1-click SAMP stack and fine-grained resource allocation, are trivial to mount on or move to whatever sort of zpool and vdev you might desire, weigh about 200MB of RAM, can boot a 64-bit VBox VM (or 30) at zone boot, are trivial to clone, can run just fine in an OVM or ESXi instance of Solaris 11.1, are independently rebootable, and do a bunch more tricks. Solaris 11.1 has about 14,000 pages of documentation, too.

http://docs.oracle.com/cd/E26502_01/html/E29024/zones.intro-9.html#scrolltoc

Code:
Listing 1: Creating a Zone 
root@global:~# zonecfg -z testzone
testzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:testzone> create
create: Using system default template 'SYSdefault'
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> set bootargs="-m verbose"
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit

Listing 2: Installing a Zone 
root@global:~# zoneadm -z testzone install
 A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20111016T114436Z.testzone.install
       Image: Preparing at /zones/testzone/root.

 Install Log: /system/volatile/install.6677/install_log
 AI Manifest: /tmp/manifest.xml.zVaybn
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: testzone
Installation: Starting ...

              Creating IPS image
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                              167/167 32062/32062  175.8/175.8

PHASE                                        ACTIONS
Install Phase                            44311/44311 

PHASE                                          ITEMS
Package State Update Phase                   167/167 
Image State Update Phase                         2/2 
Installation: Succeeded


        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 110.519 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/testzone/root/var/log/zones/zoneadm.20111016T114436Z.testzone.install

root@global:~#

http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-092-s11-zones-intro-524494.html
 
And it's still a dead OS. So is HP-UX, but that doesn't keep HP from releasing and patching it (AIX is still holding on by some miracle).

Tell me one thing that Solaris does that Linux or BSD doesn't also do, and arguably better, other than ZFS (and BSD is "close enough" on that front at this point).
 
1. Zones.

Sigh.

BSD has jails, but even then, it's a dead feature. Virtualize the thing on a good hypervisor and run unique instances per install anyway, since there's effectively no notable penalty--especially since, if you're moving towards the bleeding edge, you're doing on-demand provisioning and scale-out through an automation suite, which keeps them separate so each can be managed on its own lifecycle.

In other words - why would I choose to saddle multiple applications to a non-HAed kernel space when I can distribute them in an HA-capable scenario at no cost?
 
OP was considering approaches for his home NAS, and wanted to silo things like an httpd or two, maybe a VPN appliance in his DMZ. Solaris Cluster and RAC seem like different topics at this point.

And BSD jails aren't nearly as isolated or feature-rich as Zones, much less as easy to use. :)
 
Most of the discussion had circled around ESXi at that point.

RAC is a completely different discussion. Aside from licensing limitations (Oracle GRID and ASM aren't exactly what one would call cheap), it's also not for just ~any~ app, but for things that are part of the Oracle suite or closely related.
 
OP would have a lot of fun, and could do a lot worse than a full desktop Solaris 11.1.
I run several S11.1 Zones on 8 cores and 32GB:

BIND
OpenVPN (IPMP)
SIP proxy (IPMP)
SAMP (runs an NTRIP caster)
11g
Calibre (old Solaris version)
+ 5xVBox zones - one running PFsense in CARP to my edge, another running TurnKey appliances, others run ManjaroBox, CentOS, Mint. Headless, most nics bridged to CrossBow Vnics.

It does mostly CIFS, but also NFS and some iSCSI. I enjoy it and think it's a nice little ZFS NAS with a nice retro Gnome2 desktop sitting on top, like a saddle on a bull.
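The VNIC plumbing really is about as trivial as it sounds--roughly this, with made-up link and zone names:

Code:
# carve a VNIC out of the physical link and hand it to an exclusive-IP zone
dladm create-vnic -l net0 vnic0
zonecfg -z myzone 'add net; set physical=vnic0; end'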
 
Well, if I put the effort into a new OS at this point, it's probably going to be Linux-based. I don't know Linux nearly well enough. I do use Oracle's documentation from time to time for ZFS, as it is very thorough. I did drive by an Oracle building yesterday, so there's that. :)

I kind of almost think I should just keep the storage on a totally separate, dedicated device--especially since a lot of stuff on it is basically irreplaceable. I've thought about just picking up a pair of cheap T20's or one of those dirt cheap new Sempron/Athlons. It seems like a waste to have two systems idling most of their time.

If I can get email reporting working the way I want on the network, I don't really need an externally facing HTTP server. It would be nice to have VPN, but ultimately I suppose it's not necessary. At work our open wireless uses WEP, and it would be nice to be a little more secure than that. It would also be nice to have access on the rare occasion that I need to do something to the server, but I've only really used it once recently. I could always open the server's SSH to the world.

Ultimately what I need is file serving and some network monitoring & sterilization. The other stuff is just nice to have.

I bought an EdgeRouter Lite to play with, and it's currently doing the routing/firewall for my network. I had both my UTM and server turned off for a couple days, and it was so quiet and not so hot in my office... pure bliss!
 
Sorry.. was just sorta going by the thread title, the OP, and the "Virtualization" part of the forum name when I got into the woolybushes about the best available OS-level virtualization out there, and showed some examples of web-facing VMs I run in clonable Zones at bare-metal speed, before the scope of the OP somehow got reduced. :D

A Solaris-based OS will provide a better ZFS implementation than any Linux (by law*). You might like a dedicated Napp-it box on a minimalist OmniOS. rpool mirrors rule.
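(Mirroring rpool is basically one command, by the way--device names below are placeholders:)

Code:
# attach a second disk to the root pool to turn it into a mirror (device names are examples)
zpool attach rpool c0t0d0s0 c0t1d0s0
zpool status rpool        # watch the resilver; boot blocks on the new disk are a separate step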

Why the need for a UTM? As a VPN appliance? A work requirement for application-level (L7) monitoring?
IPtables can hold its own in a pinch... ;)




* License incompatibility meant it took over a decade for a ground-up re-engineered ZoL to arrive at its current level of craptasticality compared to ZFS on Solaris-based OSs.
 
Zones aren't virtualization :p They're just isolated systems sharing a given OS's kernel space. If I shoot the one kernel, the rest go down too.
 
Zones aren't virtualization :p They're just isolated systems sharing a given OS's kernel space. If I shoot the one [particular] kernel [the Global Zone], the rest go down too.


Yeah bro. Take out ESX, everything mounted on it stops too. :D


Zones are virtualization (isolated instances with an abstracted hardware layer and kernel space), so is CrossBow. In userspace, a Zone is at least as isolated as your root privs, and 1-click apps run at bare metal speeds. Everything good in BSD came from Solaris, including PF, so it's no surprise Solaris has the keener and more polished implementation of OS-level virtualization, great monitoring and allocation tools, and the next gen network stack. A cloud in a box.
(OpenStack+ a half page of boltons is still years behind Solaris)
 
Except you have no access to the ESX kernel from the guest, but you DO have access to the core host kernel in a zone, because that's all there is to execute against. Protect it all you want, it's still the Solaris/Linux/BSD kernel there that your userworld and kernel binaries are talking to, which means that each can potentially cause problems for the others by disrupting that single point of failure. And should you just pull the power from the host, because each virtual machine is a separate instance, they get restarted on other hosts via an HA process (same for Hyper-V/Xen/etc). Since each is entirely isolated in a complete sense, any violation of any particular guest kernel only affects that guest instead of the entire system - they each operate 100% separately; there is no sharing of any code or guest memory at all between them.

Isolated as root privs? Try isolated at the hardware layer in a true Type-1 or Type-2 hypervisor. I'll take that (or, at worst, an LPAR) over any form of software partitioning within a single guest. Eliminate the single point of failure.

Virtualization implies some form of actual representation of virtual hardware - Jails/Zones simply isolate processes within a single operating system, but still allow access to the underlying hardware through the original OS kernel. That is an inherent flaw that limits them, projects like Docker included. That's why they never took off - software packages to do that have been around for decades.

This is also why with a true hypervisor you can migrate machines between hosts, storage, even processor types completely non-disruptively - the hardware layer is completely virtualized and utilizing specific registers to gain near bare-metal performance. Try doing that with a zone.

I'd argue everything good in BSD AND Solaris came from Sys V Unix. Both are descendants of the original Bell Labs code.

Using KVM or Xen for OpenStack gives you FAR more flexibility than zones ever could, never mind any of the other hypervisors they support. I'd rather not have my guests doing anything significant on the networking side personally, because I'd be running some form of network virtualization outside of there (take your pick between Nicira STT, VXLAN, NVGRE, OpenFlow - doesn't matter) to handle that, rather than tasking a guest with it. That level of flexibility blows away anything a single guest can handle, and performance (Especially with dedicated ASICs) will be far better as well (especially if you have some form of logical routing within the internal L3 space). I can understand not having that though, but even then, I'd rather my guests just be sending traffic and not trying to balance out themselves - just concentrate on running the one application, and I'll do the rest from outside of there at the hypervisor.

Run Solaris as a guest if you really want its style of managing hardware, memory, and networking - but other than the Oracle stack, I suspect you'll struggle to find much significant modern support in the enterprise world for it, because it seriously is a dead OS from that standpoint. No one wants to care about old-school Unix anymore if they can avoid it.

I have to ask - how much experience with enterprise-level x86 Hypervisors do you have? I had much the same opinions as you when I started out and was coming from a mostly SunOS/Solaris/AIX background, and I'm curious if you just don't have full exposure to what all we can do these days.
 
It's theoretically possible to attack a hypervisor through a VM too, lopoetve. Maybe you can link me to some of the Solaris Zone-to-Global vulnerabilities the script kiddies are running these days to kype stuff from the banks and hospitals and government agencies running trusted apps in Solaris Zones?

As dead as Solaris is, it's amazing how many suckers like VMware, Wells Fargo, and CERN still run hundreds and hundreds of cores of that shit. Boy, they sure got hosed, huh?

Now I have to ask - how much experience do you have scripting Solaris in an enterprise or big data environment, or even putzing about on it?

[image: slide-12-638.jpg]



Zone migration? Clone and ZFS send. Or ZFS move. Problem?
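For the curious, the cold-move dance is more or less this (zone, pool, and host names are made up; the point is it's just detach, ZFS send, attach):

Code:
# on the old box: detach the zone and ship its dataset
zoneadm -z myzone detach
zfs snapshot -r rpool/zones/myzone@move
zfs send -R rpool/zones/myzone@move | ssh newbox zfs receive -F rpool/zones/myzone
# on the new box: recreate the config from the detached zonepath and attach
zonecfg -z myzone create -a /zones/myzone
zoneadm -z myzone attach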
 
Why the need for a UTM? As a VPN appliance? A work requirement for application-level (L7) monitoring?
IPtables can hold its own in a pinch... ;)

Hehe, that's funny--I think I saw that ages ago. I've run a UTM for years, and it makes me feel warm and fuzzy. I started with Endian, then Untangle for probably 6-7 years, and the last several months on Sophos. I probably don't really need one at home, as the wife and I try to practice safe web usage, but it's nice to have. I can do VPN from the firewall/router, but I'm not sure how well it performs.

Everything good in BSD came from Solaris, including PF,

As I understand it, PF was written in 2000 (or so) by Daniel Hartmeier for OpenBSD to replace the previous firewall due to licensing issues. Maybe it borrows something from Solaris?
 
Hehe, that's funny--I think I saw that ages ago. I've run a UTM for years, and it makes me feel warm and fuzzy. I started with Endian, then Untangle for probably 6-7 years, and the last several months on Sophos. I probably don't really need one at home, as the wife and I try to practice safe web usage, but it's nice to have. I can do VPN from the firewall/router, but I'm not sure how well it performs.

It's a given that ciphers increase processor time...

I ran Snort for a while on pfSense, but it wasn't really worth the time for my home/office. I now use that RAM for a transparent Squid reverse proxy.



As I understand it, PF was written in 2000 (or so) by Daniel Hartmeier for OpenBSD to replace the previous firewall due to licensing issues. Maybe it borrows something from Solaris?

wiki said:
PF was originally designed as replacement for Darren Reed's IPFilter, from which it derives much of its rule syntax.

Darren Reed was with Sun when he wrote IPF...
 
It's a given that ciphers increase processor time...

I ran Snort for a while on pfSense, but it wasn't really worth the time for my home/office. I now use that RAM for a transparent Squid reverse proxy.

I've looked into some people who use PF and Emerging Threats blacklists, which probably provides some of the same protection, as a lot of people use ET for their IPS rules. People have been doing some of it with the EdgeRouter, but it requires some scripting to get it running, so that might be a saner option for a home user. Or nothing, because most people get by fine without it.

Darren Reed was with Sun when he wrote IPF...

Ah, that's right; I forgot some of it came from IPF. Didn't realize they came from Sun.
 
It's theoretically possible to attack a hypervisor through a VM too, lopoetve. Maybe you can link me to some of the Solaris Zone-to-Global vulnerabilities the script kiddies are running these days to kype stuff from the banks and hospitals and government agencies running trusted apps in Solaris Zones?

As dead as Solaris is, it's amazing how many suckers like VMware, Wells Fargo, and CERN still run hundreds and hundreds of cores of that shit. Boy, they sure got hosed, huh?

Now I have to ask - how much experience do you have scripting Solaris in an enterprise or big data environment, or even putzing about on it?

[image: slide-12-638.jpg]



Zone migration? Clone and ZFS send. Or ZFS move. Problem?

Tons, both in the enterprise/commercial space as well as in big data/MPI computing, and I work for one of the companies named--and we don't use much Solaris at all ;) All of the others were migrating off relatively rapidly as well, to customized Linux instances in various forms, other than the Oracle stack bits (where it still is the best, especially on SPARC).

Your zone migration is effectively offline. Why would I do that when I can migrate a VM live between hosts? What's the point when I can completely isolate them 100% with no effective cost?

I'm pretty much asking what's the point, when I can get that with 100% independent kernel space as well, with almost no penalty, and better isolation, control, and the same ease of maintenance?

edit: And so far, of the only two type-1 hypervisor guest-to-hypervisor root-privs assaults I'm aware of, one was limited to a specific HV (Xen) and hardware (specific Intel CPUs) and was rapidly patched; the other effectively required root privileges (or unsanitized imports) to start with, which is a hole you can simply close (it's like leaving ssh open on 22 to the internet... probably not thinking that one through). Do you know of others (honestly curious here)?

edit2: You also didn't provide a way to fail over zones between hosts in case of failure. While that does exist, I'd argue that the process is far from as simple as it is in most type-1 hypervisors (VMware, for instance, is as simple as having shared storage (which requires no unique configuration of note) and enabling HA on the cluster with a checkbox).
 
lopoetve, in the context of the OP, insisting on talking about deep enterprise sysadmin shit like failover Zone clusters and "migration" in the absence of backups to use is... :rolleyes:
A home user can shut his NAS down without getting fired; an enterprise sysadmin with uptime requirements is going to migrate massive 12c DBs from backups and clones. If I want to put my BIND Zone on a new home system, I'm just going to create a Zone, install BIND, and copy a whole 11 MB of configs and records over from the old system. I might do it with a script if I feel like a crafty bastid. I might just C/P across a Samba share if I feel like it... :eek:


Still waiting, though, for those CVEs pertaining to all those trivial Zone->Global kernel exploits you're so excited about. All I have from you is some confirmation that it is, in fact, beyond theoretically possible to exploit a Type 1 hypervisor like Xen or VMWare through a guest.
Just looking for some confirmation of the theory that the security of kernel space separation provided by a T1 hypervisor is just so much better than the incredible horrible dangerous danger of kernel space separation provided by a virtualized OS with a built-in hypervisor. Especially for a home NAS. ;)
...

T1 virtualizers do have overhead, even with "passthrough", BTW. Zones are weightless - NFS writes, anyone?... Mount Solaris on VMware and pass through enough shit and you can run a 64-bit VM in a Zone with a (virtualized COMSTAR) iSCSI target in another Zone or two to "quadruple-hull" your lolcat folders and DVD rips if you want. But please, for the love of God, run Solaris and ZFS on its own hardware.

..

The delivered performance of these [VMs] is critical. In general, we use fast server hardware, 10 GbE networks, ZFS for all file systems, DTrace for performance analysis, and Zones wherever possible. We also performed our own port of KVM to illumos, and run KVM instances inside Zones, providing additional resource controls than can be applied, and improved security (“double-hulled virtualization”).

http://dtrace.org/blogs/brendan/2013/01/11/virtualization-performance-zones-kvm-xen/

(Brendan's not with Joyent anymore, since he became Netflix's head systems optimizer...)
...

Failover/HA? WTF needs critical app failover who doesn't use a legitimate failover solution FFS? Zone Clusters, Solaris Clusters, Hadoop clusters (free), in-Zone Apache Clusters/kernel servers, STB OpenStack clusters... And if I seriously need to cluster, then SPARC. Just like you and Verizon do.

...

I noticed Docker is now in the Ubuntu 14.04 repos as Ubuntu's standard Linux container manager. Don't you just hate to see a proud Linux outfit like Canonical waste resources on such a dead tech as containers? :p
 
Zones are virtualization (isolated instances with an abstracted hardware layer and kernel space), so is CrossBow. In userspace, a Zone is at least as isolated as your root privs, and 1-click apps run at bare metal speeds. Everything good in BSD came from Solaris, including PF, so it's no surprise Solaris has the keener and more polished implementation of OS-level virtualization, great monitoring and allocation tools, and the next gen network stack. A cloud in a box.
(OpenStack+ a half page of boltons is still years behind Solaris)

PF the firewall? Because PF was developed by Daniel Hartmeier for OpenBSD. PF derived its syntax from IP Filter, and IP Filter was ported to Solaris, among other operating systems.

FreeBSD Jail first appeared in March 2000 with the release of FreeBSD 4.0. Solaris Zones appeared in January 2005 with Solaris 10.


lopoetve said:
I'd argue everything good in BSD AND Solaris came from Sys V Unix.

Actually, SysV was released in 1983, while BSD Unix was released in 1977. The two major UNIX versions of the 1980s were SysV and BSD.

Taken from Wikipedia, "System V also included features such as the vi editor and curses from release 4.1 of the Berkeley Software Distribution of UNIX developed at the University of California, Berkeley (UCB)..."


Sources:
https://en.wikipedia.org/wiki/PF_(firewall)
http://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE
https://en.wikipedia.org/wiki/Solaris_(operating_system)

https://en.wikipedia.org/wiki/UNIX_System_V
https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
 
PF the firewall? Because PF was developed by Daniel Hartmeier for OpenBSD. PF derived its syntax from IP Filter, and IP Filter was ported to Solaris, among other operating systems.

If you're going to be petty, at least be correct.

I guess you didn't read the part of the Wiki about the licensing issues which forced Daniel Hartmeier to port Darren Reed's IPF into BSD as PF?
For the second time, Darren Reed worked at Sun when he wrote IPF for Solaris. Still does, only now it's called "Oracle".




FreeBSD Jail first appeared in March 2000 with the release of FreeBSD 4.0. Solaris Zones appeared in January 2005 with Solaris 10.

They're not the same thing, Bro. Never really were.
Zones are much more advanced. I doubt Zones and Jails have shared much code for a few years now.




Actually, SysV was released in 1983, while BSD Unix was released in 1977. The two major UNIX versions of the 1980s were SysV and BSD.

Taken from Wikipedia, "System V also included features such as the vi editor and curses from release 4.1 of the Berkeley Software Distribution of UNIX developed at the University of California, Berkeley (UCB)..."


Sources:
https://en.wikipedia.org/wiki/PF_(firewall)
http://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE
https://en.wikipedia.org/wiki/Solaris_(operating_system)

https://en.wikipedia.org/wiki/UNIX_System_V
https://en.wikipedia.org/wiki/Berkeley_Software_Distribution



So? BSD got ZFS, every good improvement in Jails, PF, CIFS, NFS, DTrace , and a whole bunch of stat tools from Solaris, and could learn a shit ton about usability from Solaris. ;)




Don't know why you found it so necessary to come all the way down here to make a post for the specific purpose of correcting my history on a tertiary point and lecturing us from the lofty lectern of Wikipedia, but you did. Thank you for the opportunity to reinforce my points. Solaris has the best ZFS, the best OS virtualization, the best system and services monitoring, the best virtualized networking, the best package server, and the best documentation out there. It's also friendly and easy to learn, and less buggy, more polished, and more advanced than anything comparable. A base Zone will run a 64-bit VBoxHeadless VM with HWvirtex and HPET, and can be plumbed with however many zero-overhead VNICs you want to create and bridge to, for free. No network or CPU overhead. You can even register a VBox instance as a service in the Zone, integrated with svcadm (Solaris SMF), for higher availability:

http://www.youtube.com/watch?v=eEGBRQDNGO4
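The headless part itself is nothing exotic--inside the zone it's basically this (VM name is just an example), which you then wrap in an SMF manifest:

Code:
# start an existing VirtualBox guest headless inside the zone, with remote display enabled
VBoxManage modifyvm kali64 --vrde on
VBoxHeadless --startvm kali64 &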


Solaris is badass. More people should try it.
 
Nevermind - I'm just walking away from this. We're talking very different things and very different use cases. I'm done here.


Oh, and we haven't had a single core of solaris in 6 years. Just saying. Got tired of re-writing the networking stack all the time.
 
Nevermind - I'm just walking away from this. We're talking very different things and very different use cases. I'm done here.

The OP was about an entry-level home microcloud. You brought up migration and HA, and how Solaris Zones aren't "True virtualization". I pointed out several times that your strenuous objections to Solaris were generally clustered well beyond the scope of most home file servers. I'm glad we finally see eye to eye, and can restrict further discussion to the domain of the OP. :D

So if the subject is a home microcloud, especially for someone who can appreciate a minimal OS or siloed dev or server environments with uber features, VMware is not the only 'free' option. A Solaris mothership is going to do more and be easier to install and use than FreeBSD, and have higher function with much less work. For everything except good graphics, most home users can be well served with VBox, either headless in a Solaris Zone or on the global-zone desktop. And remember, everything's ultimately running on ZFS (and napp-it runs a treat on Solaris 11.1, desktop or server versions). :D

Also consider that a VBox instance of Kali Linux 64-bit, or a Solaris Apache/MySQL/PHP stack instance serving Sparky's MailBomber to your friends, can be run as user "whatever", which can be different from the Zone root, which was created by Role root, which is not User root on Solaris. Your lolcat folders can also reside in a UUID-keyed .vdi in a TrueCrypted Linux file system, which is virtualized on a bare-metal-encrypted ZFS file system, so someone would have to beat you with a wrench, or at least trick you out of 3 different userid/password combos--which is about the same as ESXi.



Oh, and we haven't had a single core of solaris in 6 years. Just saying. Got tired of re-writing the networking stack all the time.



I happen to know that VMware happily pays licensing for dozens of cores of Solaris 10 and 11 for its own corporate DB uses, not to mention Solaris development platforms, which I'm sure Oracle provides gratis and likely on-demand.
Moreover, VMware's parent company is a Solaris ISV and Solaris 11 Partner:

“EMC and Oracle have collaborated for more than 15 years and we’re happy to see continued investment in Oracle Solaris,” said Brian Jackson, senior director, technology alliances at EMC Corporation. “We expect the advanced features of Unified Archives to enable our joint customers to simplify their IT environments without compromising performance. EMC plans to leverage Oracle Solaris 11.2 functionality to achieve optimal integration with our storage.”

:rolleyes:


VMware could take a few lessons from CrossBow, BTW. The Solaris network stack is extremely well engineered and standards compliant, in the long Sun tradition. :cool:

Code:
$ man flowadm
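For example, capping a flow is a one-liner (link name and numbers are arbitrary):

Code:
# pin HTTP traffic on vnic0 to 100Mbps without touching the app or the zone
flowadm add-flow -l vnic0 -a transport=tcp,local_port=80 -p maxbw=100M httpflow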




tl;dr: If you want a ZFS microcloud or home virt lab, you might want to try the free download of the world's best cloud operating system if you get a chance. :)


 

I did read about the licensing issue: it meant that derivative or modified works were not permitted without the author's prior consent, and because the OpenBSD team had tightly coupled IPFilter to their network stack, they had to modify each IPFilter release. The licensing issue meant they couldn't use IPFilter at all anymore, and thus PF was born. The OpenBSD guys didn't want anything to do with IPFilter, so why would they let Daniel port it and call it a different name--which I doubt would have gotten Darren's consent anyway.

Sources:
http://lwn.net/2001/0524/
http://www.benzedrine.cx/pf-paper.html


The only sources I can find say IPFilter was ported to Solaris--not that I care much, because I was only saying the similarity between IPFilter and PF is the syntax. The IPFilter 4.1 README says, "This package has been tested on all versions of SunOS 4.1 and Solaris 2.4/2.5, running on Sparcs," and "It has also been tested successfully on all of the modern free BSDs as well as BSDI, and SGI's IRIX 6.2." There is no date in the file, but it seems old, and I'm not really sure why he would need to say it was tested on Solaris if it was developed for it. I also see in the HISTORY file that version 1.0 was released April 22, 1993; and taken from http://www.osdevcon.org/2008/program_detail.html, "Darren Reed has been working at Sun since early 2005 in Solaris Networking. For over 10 years prior to this, he has been worked in various roles, including systems integrator, systems administrator, consultant and programmer, all the while hacking away on his open source project, IP Filter." It is certainly possible he was working at Sun at the time, since it says over 10 years, but working at Sun doesn't mean it was developed for Solaris--and again, all the sources I can find say it was ported to Solaris. Where can I find this information?

Sources:
http://sourceforge.net/projects/ipfilter/files/ipfilter/


I never said FreeBSD Jails and Solaris Zones were the same thing. You said, "Everything good in BSD came from Solaris, including PF, so it's no surprise Solaris has the keener and more polished implementation of OS-level virtualization, ...", as if FreeBSD Jails were built with knowledge of Solaris Zones--which, if anything, was the other way around. Actually, rereading it, I don't understand what you're saying: everything good in BSD came from Solaris, so are FreeBSD Jails bad, or are they good because they came from Solaris Zones? I don't get it.


The SysV and BSD discussion was directed at lopoetve.


And I don't know, it just seems hard to believe that everything good in BSD came from Solaris, since BSD Unix (I don't know which BSD--or maybe all of them?--you are referring to when you say "BSD") came out in 1977 and Solaris in 1991. I think the TCP/IP stack seems good.
 