Pix 515e "Memory Allocation Errors"

Uncut_Diamond

Looking for some advice from any PIX experts out there. I have a pair of PIX 515e firewalls that are driving me nuts. At random intervals the memory usage will spike and keep climbing until the box completely runs out of memory (128MB total) and starts throwing "memory allocation errors". I originally noticed this when our DNS server went crazy because BIND wasn't running and it flooded us with ICMP traffic. Cisco confirmed that this could cause a crash, but this morning we had the same thing happen again, only without any evidence of the DNS server problems.

Specs:
Cisco PIX 515E UR
P2 433MHz
128MB RAM
PIX OS 7.2.2
ASDM 5.2.2

Normal Usage:
CPU usage 10-20%
Memory usage 100MB-110MB (128MB max).

I would also like to know if anyone has a method for monitoring memory and sending an alert when it reaches a certain threshold, such as 120MB of the 128MB. We have WhatsUp Gold v11, and so far I haven't been able to figure out how to do it with that. I also dump syslog data to our Kiwi syslog server, which can send e-mail alerts.
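For reference, the PIX-side config that sends syslog to Kiwi is only a few lines on 7.x; the interface name, server address, and severity level below are placeholders rather than our actual values:
Code:
logging enable
logging timestamp
logging trap warnings
logging host inside 192.168.1.60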

Thanks!
Uncut_Diamond
 
What is the specific error the PIX sends back?

As far as monitoring the memory, you can get the current value via an SNMP poll. I forget what the specific OID is, but I do know that it exists. I used to graph the memory usage on my PIX firewalls with MRTG. I'm sure you could write a simple script to alert you if the memory value went above 120MB.
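If memory serves, the relevant values live in CISCO-MEMORY-POOL-MIB (ciscoMemoryPoolUsed is 1.3.6.1.4.1.9.9.48.1.1.1.5 and ciscoMemoryPoolFree is 1.3.6.1.4.1.9.9.48.1.1.1.6), but double-check against a MIB browser. On the PIX side you just need to allow the poller read access, something along these lines with the community string and management station address swapped for your own:
Code:
snmp-server community public
snmp-server host inside 192.168.1.50 poll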
 
What is the specific error the PIX sends back?

As far as monitoring the memory, you can get the current value via an SNMP poll. I forget what the specific OID is, but I do know that it exists. I used to graph the memory usage on my PIX firewalls with MRTG. I'm sure you could write a simple script to alert you if the memory value went above 120MB.

Thanks for the reply. The specific error is "Memory Allocation Error". Basically it runs out of memory, can't build any new connections, and crashes [H]ard.

I got the OID and am now successfully monitoring the memory stats with a threshold alert. I just got done testing it and that is working. Knowing is half the battle; the other half is solving the problem. :(
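For a quick manual check from the console, the standard show commands give the same picture (nothing here is specific to our config):
Code:
show memory
show blocks
show conn count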
 
What version of the software are you running? When did you first notice the problem? Have you enabled debug logging?
 
What version of the software are you running? When did you first notice the problem? Have you enabled debug logging?

Yes, 7.2.2. The problem started after a power outage we experienced, and thinking that the hardware memory might have been hosed, I had it replaced. The problem still exists.

The config hadn't changed significantly between the 7.2.2 upgrade back in November and when this started.

Thanks.
 
Yes, 7.2.2. The problem started after a power outage we experienced, and thinking that the hardware memory might have been hosed, I had it replaced.

Were they on a UPS with AVR? Something else could have been damaged by the power spike that usually accompanies restoration of power.
 
I get these from time to time on my PIXes running 6.3(3). A reboot solves it every time, though about a month or so later they start appearing again.

The thing is, traffic still passes through the PIX, so I really don't know what this is from or what effect it's having on the network.
 
I get these from time to time on my PIXes running 6.3(3). A reboot solves it every time, though about a month or so later they start appearing again.

The thing is, traffic still passes through the PIX, so I really don't know what this is from or what effect it's having on the network.

After working with Cisco level 2 engineers, we were able to determine that we have an excessive number of embryonic (half-open) connections. Even when idle, these connections each use a small amount of memory. We changed the timeout settings to clear idle connections and it seems to be running much more smoothly right now.
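Something along these lines, with the values tuned to your own traffic (the numbers below are only an example, not the values we actually used):
Code:
timeout conn 0:30:00 half-closed 0:05:00
timeout xlate 1:00:00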

We can't pinpoint a single cause for the massive number of embryonic connections, but it's likely some odd server behavior or general spam/DoS-type activity from the outside. The servers most affected are the ones in the DMZ.
 
You can also limit the number of embryonic connections to a specific machine via the static statement.
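For example, on 7.x a static with a TCP embryonic limit looks something like this; the addresses and interface names are placeholders, the 0 means unlimited established connections, and the 100 is the embryonic cap:
Code:
static (dmz,outside) 203.0.113.25 172.16.1.25 netmask 255.255.255.255 tcp 0 100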
 
I get these from time to time on my PIXes running 6.3(3). A reboot solves it every time, though about a month or so later they start appearing again.

The thing is, traffic still passes through the PIX, so I really don't know what this is from or what effect it's having on the network.


That isn't right. You shouldn't be seeing those at all. Have you opened a TAC case?
 
Just an update in case someone else runs across this problem. The timeout settings and the number of embryonic connections have a serious impact on PIX memory usage. If you are receiving memory allocation errors, check both.

To check the number of embryonic connections:
Code:
show local-host | include host|count/limit

The default timeout settings:
Code:
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout uauth 0:05:00 absolute
 