Untangle hardware recommendation for 50Mb

shinygecko

Weaksauce
Joined
Nov 23, 2007
Messages
114
I currently have a 50Mb cable connection and I am looking to install an Untangle rackmount box in bridge mode. Originally I was looking at the Atom 330 barebone kits from Supermicro. However, after more reading I am beginning to understand that this will not nearly be enough for my connection.

Slowdowns are not an option.

Do I really need something along the lines of a Xeon or i-series?

I found this too; however, the power consumption would be outrageous.

http://cgi.ebay.com/Rackable-Systems-Dual-Xeon-3-2GHz-1U-Server-4GB-RAM-/300494680300?pt=COMP_EN_Servers&hash=item45f6e0f0ec
 
I use Dell R210s. One has ESXi on it with Untangle virtualized, and the other runs Untangle natively.

Curious, why in bridge mode?
 
I currently have a 50Mb cable connection and I am looking to install an Untangle rackmount box in bridge mode. Originally I was looking at the Atom 330 barebone kits from Supermicro. However, after more reading I am beginning to understand that this will not nearly be enough for my connection.

Slowdowns are not an option.

Do I really need something along the lines of a Xeon or i-series?

I found this too; however, the power consumption would be outrageous.

http://cgi.ebay.com/Rackable-Systems-Dual-Xeon-3-2GHz-1U-Server-4GB-RAM-/300494680300?pt=COMP_EN_Servers&hash=item45f6e0f0ec

This box is LOUD! VERY LOUD! I'd do what Scotty suggested. I think you would be fine with a dual-core Atom, BUT it also depends on how many connections and people are behind it. I would also run all of the traffic through it.

When buying, don't skimp with some off-the-shelf no-name brand like the one you posted.

It's the first layer of your security; buy decent, have decent, never worry :)
 
About 25 high-bandwidth users, and bridge mode is because I would like to leave the routing to my Cisco 3725, which also has some VPNs set up on it.


I would love to go the rackmount Atom route; however, will I get any slowdowns?
I will just be using the standard feature set.
 
Yes, you want a high-end dual core or a quad. With 25 high-bandwidth users, toss in an AV scanner and such, and you will need some power. Also, don't forget to use Intel NICs, not some cheap ones or built-in ones.
 
I have another server that is fairly strong that I might just load Untangle on in VMware. It is currently running some other roles but never goes above 25% CPU usage.

I might just be better off loading VMware on that and doing it that way. Anyone have any problems running this in a VM?

The server is a Q6600 with 4GB DDR2 and gigabit Intel NICs.

One NIC is on the standard PCI bus, but that should not make a difference for this.
 
All you need is a dual-core Pentium and 2GB of RAM. A 50Mb connection isn't that taxing.
 
DEDICATE a single machine to your Untangle unit. You won't regret it. Using Intel NICs will lower CPU usage, too.

j'
 
Hmm, that server by itself would be fine to run Untangle. Not sure I would run it in a VM unless you added some more RAM. Also, I would dedicate NICs to Untangle only, not share them with your VMs.
 
Planned on using dedicated NICs, as I have a total of 3 NICs in the system. Would you recommend 8GB of RAM then?
 
Don't get hung up on thinking you need dual quad-core Xeons and go all overboard.
I've had first-generation Pentium 4 HyperThreading workstations with one gig of RAM running networks of 50 or 75 users on 20 meg pipes, and I have a client with an Intel Dual Core (that's before the Core 2 Duo) with 2 gigs that ran a large network with 30 remote VPN users on a 50 meg pipe without even breaking a sweat.

To look at it this way: pretty much any full-size Intel processor you can purchase today will be more than enough (save for Celerons).

The key is getting good hardware-controller-based NICs. Intel, or even the Broadcom server-grade NICs, work very well. Stay away from the Realsucks... I mean... Realteks... and other software-controller-based desktop NICs.

I'd stick with 32-bit Untangle with a minimum of 2 gigs of RAM.
 
Don't get hung up on thinking you need dual quad-core Xeons and go all overboard.
I've had first-generation Pentium 4 HyperThreading workstations with one gig of RAM running networks of 50 or 75 users on 20 meg pipes, and I have a client with an Intel Dual Core (that's before the Core 2 Duo) with 2 gigs that ran a large network with 30 remote VPN users on a 50 meg pipe without even breaking a sweat.

To look at it this way: pretty much any full-size Intel processor you can purchase today will be more than enough (save for Celerons).

The key is getting good hardware-controller-based NICs. Intel, or even the Broadcom server-grade NICs, work very well. Stay away from the Realsucks... I mean... Realteks... and other software-controller-based desktop NICs.

I'd stick with 32-bit Untangle with a minimum of 2 gigs of RAM.


I'm still running my Core 2 Duo, 2 gigs of RAM, and the 2 PCI NICs that you suggested when I first built my box; it's still running to this day. STRONG, and purrs like a kitten. Having the Intel NICs was worth the money.

Personally, it's an overbuilt box for what I'm using it for. I hope in the new year I can afford the Supermicro dual-core Atom 1.6 with 2 gigs of RAM in the front-facing 1U Supermicro case.

If I could sell what I have now, I'd be set :)

j'
 
I think I have run into the limit of the hardware I run pfSense on. I have 2 WANs (each 30x1), and I have been unable to max them simultaneously. I believe it is some old 667-800MHz budget chip (Celeron/Duron).

Going to go to a fast single core or dual core and get some Intel 10/100 NICs...
 

That's an NG 25 on a Comcast 25x3 cable connection.
Load during the test barely topped 0.12. I have yet to push the appliance over a 1.0 load, and that was while running over 1000 concurrent connections through it.
 
DEDICATE a single machine to your Untangle unit. You won't regret it. Using Intel NICs will lower CPU usage, too.

j'

I virtualize my UT box at home and I do not regret doing so.


That's an NG 25 on a Comcast 25x3 cable connection.
Load during the test barely topped 0.12. I have yet to push the appliance over a 1.0 load, and that was while running over 1000 concurrent connections through it.

This is on my box at the house; Untangle is on an ESXi server. The server is an E6600 C2D with 6GB of RAM.
[attached screenshot: utsession.PNG]


I'd like to upgrade my ESXi box and add more CPU and faster disks.
 
With a C2D your processor is still underutilized. You have an effective load of 50% with a CPU load of 2.2.
 
With a C2D your processor is still underutilized. You have an effective load of 50% with a CPU load of 2.2.

My CPU also runs 3 other virtual machines besides the Untangle guest. Logging into vSphere confirms I need more CPU; however, there isn't anything else I can upgrade this server to, as it is past end of life :(
 
Well, your hypervisor may, but according to your screenshot Untangle is only utilizing its virtual processor at 50%.
 
My understanding is that the problem with running any kind of routing software under virtualization is that it leads to very uneven network load. The hypervisor keeps switching between OSes to run, so there is a lot of latency as each OS comes back into execution. Therefore, under heavy network load, you'd have a buffer of packets building up, so network utilization would look something like a square wave.

I would imagine that a multicore CPU would be excellent for UTM use. You have the advantage that there aren't many routing tasks being done (just bridging the network interfaces, not doing any kind of NAT). I'm not sure whether the Linux kernel is capable of processing network data across multiple CPUs. I can only imagine that multithreaded connection management would be a synchronization nightmare.
 
My understanding is that the problem with running any kind of routing software under virtualization is that it leads to very uneven network load. The hypervisor keeps switching between OSes to run, so there is a lot of latency as each OS comes back into execution. Therefore, under heavy network load, you'd have a buffer of packets building up, so network utilization would look something like a square wave.

I would imagine that a multicore CPU would be excellent for UTM use. You have the advantage that there aren't many routing tasks being done (just bridging the network interfaces, not doing any kind of NAT). I'm not sure whether the Linux kernel is capable of processing network data across multiple CPUs. I can only imagine that multithreaded connection management would be a synchronization nightmare.

That is why my Untangle is NOT in ESXi or any other virtualized environment; it is its own box.

Running stably and perfectly happy on its own.

j'
 
I've been virtualizing UT for over a year. With QoS set up, downloading items through UT while gaming at the same time has been very smooth and consistent. I would like to see proof of your statement; if the hypervisor switching between virtual machines were that costly, I don't think very many people would run multiple machines on one host.
 
The hypervisor _has_ to switch between virtual machines. If I have 3 VMs running on a dual-CPU system, there is absolutely no way all 3 VMs can run at the same time. It's the same thing for desktop applications. I only have a dual-core CPU; therefore, I can only have two concurrent processes running.

In a small environment with low network utilization, and if no latency-sensitive applications are being used (VoIP, and potentially gaming and media streaming), then having a virtualized router is not going to be a catastrophe. There's not much opportunity cost in virtualizing a router in a low-use environment. Although there is a slight loss of maximum performance, the savings in TCO and hardware are more than likely worth it.

Chances are that your VMs are not loaded very heavily; therefore, ESXi does not spend a lot of time stuck executing one VM. Alternatively, if you have multiple CPU cores, then ESXi is being very efficient at scheduling the Untangle VM so that it is almost always actively executing. The latter would make sense, because switching VMs is a very computationally expensive task, same as context-switching multiple processes.
 
The hypervisor _has_ to switch between virtual machines. If I have 3 VMs running on a dual-CPU system, there is absolutely no way all 3 VMs can run at the same time. It's the same thing for desktop applications. I only have a dual-core CPU; therefore, I can only have two concurrent processes running.

UM, this is not true. I have 5 VMs running right now on my dual-core 3.0 with 4 gigs of RAM scrap machine; it runs FINE with Win2008R2 and 5 VMs... so yeah.
 
Dual-core CPU. That means it only supports the execution of two simultaneous tasks. Let's say that Firefox is rendering a webpage on core 1. You're also running a mail client (among many other processes). The operating system will then assign the mail client's process to core 2. No matter what, there is absolutely zero way that a single CPU core can execute two processes concurrently: it only has enough instruction decoders, ALUs, and so on to execute one instruction stream at a time, and each process breaks down into a series of many instructions. Furthermore, the operating system has to reserve time for itself; it needs execution time to schedule processes, manage memory, and handle network requests. Time scheduling at the OS/CPU level is basically time-division multiplexing: there are many different processes that need to execute, but a finite number of execution resources, so the operating system has to divide up time on the CPU.

Right now, I have 54 processes running, and only a dual-core processor. My operating system needs to be able to run all 54 processes, or I won't be able to post to this thread or log in to my computer. But there's an advantage: I'm not always hammering on the keyboard, and I'm not always trying to load files. In fact, most of the time these programs (Firefox, OpenOffice, etc.) are just waiting for me to do something. I have 54 running processes right now, but only about 10% CPU usage. My operating system takes advantage of this. If I'm not loading any webpages and I'm not typing, there's really no reason to give Firefox CPU time, so Windows will let another process, for instance putty.exe, run. If I load a new tab, then PuTTY has to wait until Firefox is done.

Think of it like an intersection on a roadway. There could be 1000 cars backed up, but only a few cars can go through at a time. The traffic light (the operating system) has to schedule which car (task) can go through the intersection (execute on the CPU core).

Virtualization of operating systems is the same concept. Dashpuppy has 5 instances of an operating system (actually 6; the hypervisor needs to run too). However, his computer only has 2 CPU cores, so it only has the resources to execute 2 programs at any instant in time.

Virtualization works because most systems are not running at 100% all the time. Most servers sit at a fairly low load, spending their time waiting for disk I/O, waiting on the network, or waiting for the user to click the next link; most of the time they are running 'System Idle Process'. Virtualization works because most applications are not demanding 100% CPU usage. Look at the specs for a standard VM host: usually a maximum of 8 or so cores per system, but 64-128GB of RAM. As long as nothing is sucking up 100% of the CPU time, it all works fine.

However, under heavy load, virtualization has a few problems. Switching between processes takes a long time: the entire 'context' (the environment the process runs in) has to be saved to memory and replaced with the context of the other process. As I recall, this can take several hundred CPU clock cycles. So if a heavily loaded VM host has to keep switching between operating systems (which is even more expensive), and those operating systems have to keep context-switching processes, there will be an additional performance hit because of the virtualization.
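To put a rough number on the switching cost (a back-of-the-envelope sketch in Python; the clock speed, per-switch cycle cost, and switch rate are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope cost of context switching.
# CYCLES_PER_SWITCH reflects the "several hundred cycles" direct cost
# mentioned above; real cost is higher once cache/TLB refills are counted.

CPU_HZ = 3_000_000_000        # assumed 3 GHz core
CYCLES_PER_SWITCH = 500       # assumed direct cost per switch
SWITCHES_PER_SEC = 100_000    # hypothetical rate on a busy VM host

overhead = CYCLES_PER_SWITCH * SWITCHES_PER_SEC / CPU_HZ
print(f"CPU time lost to raw switching: {overhead:.1%}")  # ~1.7%
```

The direct cost is the small part; the cache and TLB pollution on every switch, plus the latency a VM eats while waiting for its turn, is where the real damage is done.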

UM, this is not true. I have 5 VMs running right now on my dual-core 3.0 with 4 gigs of RAM scrap machine; it runs FINE with Win2008R2 and 5 VMs... so yeah.

Believe it or not, I am not a moron. I have had ESXi installs with 10 VMs configured and running. However, the server only had 8 CPU cores; therefore, only 8 VMs could actually be executing at any given clock cycle.

Back to my original point, which is regarding the virtualization of UTMs:

- There are several reasons why this is potentially a bad thing.
- The first concern is security. If ESXi (or Hyper-V, or Xen) were to have a security hole, then an attacker who compromised the hypervisor would be able to access the internal memory and state of every OS instance running on the VM host. The attacker could pull data such as private keys right out of main memory.
- The second concern is performance. If Untangle is running in a VM environment, there is no way to guarantee that Untangle will be ready to execute on every single cycle. Network traffic would have to queue up until the Untangle instance was ready to process it. This increases latency, as traffic is not processed until the Untangle instance is ready to execute (meaning that a. the Untangle instance has been scheduled, b. its context has been swapped in, and c. it isn't busy executing a report generator or something).
 
Untangle and other routing products present a very unusual load. Most applications, for instance Apache, run as a single monolithic binary image that calls in external libraries (.dll or .so) as required.

Linux does all routing in the kernel. This includes NAT, which is really great at sucking up CPU time. However, Untangle (and any other UTM) runs a lot of different inspections on every single packet. That means every single packet has to be checked by several different processes, and then the kernel has to perform the routing on that packet.

To put that another way: several processes have to run to check every single packet. Remember what I said about context switches earlier? Yep, they still take a long time. The problem with Untangle (and any other UTM) is that it has to do a _LOT_ of context switching between all those different processes (and the Linux kernel) in the datapath.

Intel NICs are the best because the drivers are quality. They don't suck up a lot of CPU cycles (and thus don't impose a lot of context switches). This is good.

Another thing: when computing hardware requirements, packets per second is the real challenge. Really huge packets take time to kick around in memory, but packets per second is the real limiting factor. If you look at Cisco's router performance comparisons, they are listed in packets per second.
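To make the packets-per-second point concrete, here's a quick sketch (Python; assumes a fully saturated link and ignores Ethernet framing overhead):

```python
# Packets per second a UTM must inspect on a 50 Mbit/s link,
# as a function of packet size.

LINK_BPS = 50_000_000  # the OP's 50Mb cable connection

for pkt_bytes in (64, 512, 1500):
    pps = (LINK_BPS / 8) / pkt_bytes
    print(f"{pkt_bytes:>4}-byte packets: {pps:>9,.0f} pps")

# 64-byte packets:   ~97,656 pps (worst case: VoIP, gaming, ACK streams)
# 1500-byte packets:  ~4,167 pps (bulk downloads are far gentler)
```

That spread is why the traffic mix matters as much as the raw pipe size.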
 
It also comes down to: what apps do you plan to run?

You said high-bandwidth-usage users... high usage on what? Email? The spam filter can eat up resources easily, as can other modules in UT; that's why I said at least a quad or something myself.
 
Believe it or not, I am not a moron. I have had ESXi installs with 10 VMs configured and running. However, the server only had 8 CPU cores; therefore, only 8 VMs could actually be executing at any given clock cycle.

Wasn't saying you were a moron. I was stating that I have a DUAL-core Hyper-V server running 5 VMs, all running perfectly without ANY problems at all, not sluggish or slow either.

j'
 
Wasn't saying you were a moron. I was stating that I have a DUAL-core Hyper-V server running 5 VMs, all running perfectly without ANY problems at all, not sluggish or slow either.

j'

And what I am saying, if you read my post, is that even though you can run 5 VMs on a dual-core machine, it only has the execution resources (CPU cores) to execute 2 of those VMs (or 1 VM and the hypervisor code) concurrently. Oversubscribing the VM host works fine, since most of the time the load is minimal.
 
I could run 50 VMs on my 6-core AMD, but then it comes down to what you actually use them for. I am sure you don't have all 5 VMs open and doing something in all 5 at the same time, or they aren't all doing something CPU-intensive at the same time.
 
Well, I have looked at both sides, and I figured I would give it a shot in VMware; if I have to get a system, I will get a system.

I loaded it up in VMware Workstation 7 and it does not even appear to be functioning. I have set it up as a transparent bridge with 2 NICs bridged to vmnet0 and vmnet2. I set its static IP to 10.0.0.2; my actual gateway is 10.0.0.1. I also set my DHCP to hand out 10.0.0.2 as the default gateway, and sure enough everything is using the Untangle as its gateway and getting out from there. I have verified this by shutting down the Untangle box, and sure enough the internet dies on all systems with that default gateway.

The problem is that it is not using any of the modules, and there are no sessions either. It is just relaying the packets; no filtering is being done. Is this an Untangle bug anyone is aware of?

External network:
IP: 10.0.0.2
Netmask: 255.0.0.0
Gateway: 10.0.0.1
Primary DNS
Secondary DNS

Internal network:
Bridge to external


Any of you VMware/Untangle experts have any ideas? Or should I just say screw the attempt and get another system?


It also comes down to: what apps do you plan to run?

You said high-bandwidth-usage users... high usage on what? Email? The spam filter can eat up resources easily, as can other modules in UT; that's why I said at least a quad or something myself.

VoIP, torrents, gaming, large downloads, email
 
Well, I have looked at both sides, and I figured I would give it a shot in VMware; if I have to get a system, I will get a system.

I loaded it up in VMware Workstation 7 and it does not even appear to be functioning. I have set it up as a transparent bridge with 2 NICs bridged to vmnet0 and vmnet2. I set its static IP to 10.0.0.2; my actual gateway is 10.0.0.1. I also set my DHCP to hand out 10.0.0.2 as the default gateway, and sure enough everything is using the Untangle as its gateway and getting out from there. I have verified this by shutting down the Untangle box, and sure enough the internet dies on all systems with that default gateway.

The problem is that it is not using any of the modules, and there are no sessions either. It is just relaying the packets; no filtering is being done. Is this an Untangle bug anyone is aware of?

External network:
IP: 10.0.0.2
Netmask: 255.0.0.0
Gateway: 10.0.0.1
Primary DNS
Secondary DNS

Internal network:
Bridge to external


Any of you VMware/Untangle experts have any ideas? Or should I just say screw the attempt and get another system?

VoIP, torrents, gaming, large downloads, email

You need to put it behind your router and have your traffic go through it.

Internet -> router -> Untangle -> everything else.
 
Any of you VMware/Untangle experts have any ideas? Or should I just say screw the attempt and get another system?

VoIP, torrents, gaming, large downloads, email

Get a small barebones system, put 2 Intel NICs inside, and call it a day :)
 
Sorry to hijack the thread, but it's somewhat related. At what point/ROI would you recommend getting rid of old systems and getting one of those new D510 Pinetrail SM motherboards instead for Untangle duties?

In NY, it costs $0.206 per kWh. I transplanted the same components (2x512MB DDR2, Intel E6300, one 7200rpm SATA HD, two 120mm fans, dual-port Intel GigE card) into two different S775 motherboards to compare power usage. The SM PDSME+ (full ATX, Intel 3100 chipset) board consumes 70W idle, roughly $126.40 a year. An Asus P5G41-M LX2 (microATX, G41 chipset) consumes 52W idle, costing at least $93.90 a year.
If I purchase the X7SPA, which consumes <30W idle, I would recoup the costs at worst between 2.5 and 5 years out, depending on which system I'm comparing it to. If I include the money I'd get from selling a motherboard/CPU when getting the new board, the recoup point moves up to 1.5-3 years.

Or should we wait until Q2 2011 for Cedar Trail, which supposedly has even lower power draw?
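For anyone who wants to redo that math with their own electricity rate, here is a quick sketch (Python; the board price is a hypothetical placeholder, not a quote):

```python
# Idle power cost and payback period, using the figures from this post.

RATE = 0.206      # $/kWh in NY
HOURS = 24 * 365  # hours per year

def annual_cost(watts):
    """Yearly electricity cost of a box idling at `watts`."""
    return watts / 1000 * HOURS * RATE

for name, watts in [("PDSME+", 70), ("P5G41-M LX2", 52), ("X7SPA", 30)]:
    print(f"{name:<12} {watts:>3}W idle -> ${annual_cost(watts):.2f}/yr")

NEW_BOARD = 180   # hypothetical X7SPA price; plug in the real number
savings = annual_cost(70) - annual_cost(30)
print(f"Payback vs. the PDSME+: {NEW_BOARD / savings:.1f} years")
```

Whatever you'd get for selling the old board just comes off NEW_BOARD, which is how the recoup point slides from ~2.5 years down to the 1.5-3 year range.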
 
If you're trying to run Untangle on VMware Workstation, the interface configuration will be complicated if you want the host OS to have network connectivity.

What will the production system be running as a hypervisor?
What else is going to be running on the hypervisor? If you're talking about users doing file transfers, that is ideal, since file sharing should consist of a stream of larger packets, which means a lower rate (packets per second), which is good for performance. The VoIP is a potential concern; however, the data rate should be relatively low. If latency is an issue, just crank up the priority of the Untangle VM and, if possible, experiment with increasing the rate of the kernel timer.

To elaborate on what scotty8 posted:

You would set it up like this:

(Internet) -> WAN Ethernet (c3725) LAN Ethernet -> vmnic0 (Untangle VM) vmnic1 -> private subnet

So any server VMs running on the VM host would be using vmnic2 and vmnic3. You have 3 NICs in the box; are you using one for management, or is the management in-band? Ideally vmnic0 and vmnic1 would be dedicated physical network interfaces.

Internal network:
Bridge to external

What do you mean by this?
This is saying that Untangle should act as a transparent bridge, correct? Traffic flows in one interface, passes through whatever software is put in the data path, and then goes out the other, correct?
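If so, the concept is easy to sketch (a toy illustration using scapy's bridge_and_sniff, with hypothetical interface names; this shows the idea of a filtering bridge, not how Untangle is actually implemented):

```python
# Toy transparent bridge: forward frames between two NICs, with a hook
# in the data path where a UTM would run its inspections.
from scapy.all import bridge_and_sniff

def inspect(pkt):
    # Return True to forward the frame unchanged, False/None to drop it.
    # Untangle's virus/spam/protocol modules conceptually sit here.
    return True

# "eth0"/"eth1" are hypothetical; this needs root privileges to run.
bridge_and_sniff(if1="eth0", if2="eth1", xfrm12=inspect, xfrm21=inspect)
```

Nothing in that path does NAT or routing; packets enter one interface and leave the other.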

If you're having issues with modules not loading, that is a software issue. Throwing hardware at the problem will not necessarily fix it.
 