How often do you shutdown your servers and for what reasons?

iamkion132 (n00b, joined Feb 27, 2007, 57 messages)
I'm in a single-server environment, and from what I can tell the server has been up for 140 days. I'm wondering if I ever need to shut it down, even for a couple of hours over a weekend. The server provides email, is our file server, and is our DNS server. It's pretty important to our organization during the week, and there are people who work from home over the weekend who use email, but I'm thinking of shutting it down from about 11pm Saturday to 4am Sunday.
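For what it's worth, I'm reading that uptime figure off the server itself with something like this (assuming a Windows box; the exact output wording varies by version):

    rem the "Statistics since <date>" line shows when the server service last started
    net statistics server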
 
Why would you shut it down just to shut it down?

Most of my clients' servers have been up for quite a long time. They only ever get restarted for updates (software or OS), and sometimes if a weird issue comes up. That usually equates to once every few months.
 
If for whatever reason it seems sluggish with no apparent cause, I'd recommend a reboot.

Other than that, the only reason I could see for shutting it down would be cleaning it physically if it's exposed to a dusty environment. Servers can usually go until hardware fails, but I have had to do fewer Linux reboots than Windows reboots for sluggish response.
 
I never shut down a server just to shut it down. We get very little to almost no dust in our server room, so that's also not an issue for my company.

Only time is to reboot for updates
 
Like others said, I only reboot for kernel upgrades. Usually, if something's getting out of hand memory-wise, I'll restart the service, but I haven't had to do that in months.
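Roughly what that looks like on a Linux box of that vintage (the service name here is just an example):

    # see which processes are eating the most memory
    ps aux --sort=-rss | head
    # bounce just the offending service instead of rebooting the whole box
    /etc/init.d/apache2 restart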
 
I might shut down if I smell something burning. Maybe... depends how bad it is.
 
I also don't understand why you'd just shut it off for the heck of it... Rebooting for the heck of it is at least more understandable (preventive-maintenance type of thing)...

Only times I reboot:
A) New App
B) Windows Updates
C) New Hardware

And D kind of varies...
D) Hung threads

A lot of times simply restarting the service(s) will fix D, but sometimes a good old-fashioned reboot is really the only way to go.
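A minimal sketch of handling D without a full reboot (the Print Spooler is just a stand-in for whatever service is hung):

    rem check the service's state, then bounce it
    sc query spooler
    net stop "Print Spooler"
    net start "Print Spooler"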
 
I don't mean to look stupid here. I am still thinking in terms of desktop operations and am still getting used to server-side operations. I wasn't sure whether a server actually ever needed to be shut down as a routine operation every once in a while.
 
If anything, just a quick restart at 9pm on a Sunday should do it. No reason to have it down for more than a few minutes.
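Something like this is all it takes (a sketch; the server name and warning text are made up):

    rem reboot with a 5-minute warning for anyone still logged on
    shutdown /r /t 300 /c "Sunday maintenance reboot"
    rem or kick it off remotely:
    shutdown /r /m \\mailserver /t 300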
 
I don't mean to look stupid here. I am still thinking in terms of desktop operations and am still getting used to server-side operations. I wasn't sure whether a server actually ever needed to be shut down as a routine operation every once in a while.

Shutting down a server is always risky. I've seen more failures due to server shutdown than anything else (disks don't wake up again).

I only shut down a server if I really have to. Updates might require it, but most decent disk controllers can keep the disks spinning during a simple restart. The last time we shut down our servers was when we moved to this new location about two years ago. (If you have to move a server, make sure the disks have cooled down before moving the server chassis.)
 
Shutting down a server is always risky. I've seen more failures due to server shutdown than anything else (disks don't wake up again).

Agreed. I can remember quite a few instances where, due to reasons such as moving offices, re-organizing an office and moving the servers, or some part replacement... when you have a server that's been running for years on end, and you power it down and let her cool... you go to power it up and you'll find some RAID error because a drive went offline. Thermal changes can create issues like this.

A server that doesn't need reboots for performance/stability is a good thing. Means she's running lean, mean, and... well, it's doing good.

Some applications hosted on a server can get sluggish, and they need a reboot to help bring performance back. So be it.

Since you mention AD and e-mail on the same box, is this Small Business Server? It runs many things, and scheduled reboots can be beneficial. I have a few clients that really push their SBS hard, and I have a reboot scheduled weekly on some of them, like Sunday morning at 3am.
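A rough sketch of that kind of schedule using the built-in task scheduler (the task name and warning time are just examples, and older versions of schtasks may want the time as HH:MM:SS):

    rem reboot every Sunday at 3:00 AM, forcing apps closed after a 5-minute warning
    schtasks /create /tn "WeeklyReboot" /tr "shutdown /r /f /t 300" /sc weekly /d SUN /st 03:00 /ru SYSTEM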
 
Since you mention AD and e-mail on the same box, is this Small Business Server? It runs many things, and scheduled reboots can be beneficial. I have a few clients that really push their SBS hard, and I have a reboot scheduled weekly on some of them, like Sunday morning at 3am.

I have one client I have to do that with.
Never could figure out why it gets bogged down.
What's weird is that it isn't the most heavily used server, either.

Otherwise the only time they get rebooted is for Windows updates or a kernel update on the Linux servers.
Speaking of kernel updates: never do that unless you absolutely have to, and do it by hand, not with the update software.
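By hand, on a Debian/Ubuntu box, that amounts to something like this (the kernel package name is just an example for that era):

    # install one specific kernel image explicitly instead of whatever the updater picks
    apt-get install linux-image-2.6.24-19-server
    # then hold it so the update software doesn't swap kernels behind your back
    echo "linux-image-2.6.24-19-server hold" | dpkg --set-selections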
 
Shutting down a server is always risky. I've seen more failures due to server shutdown than anything else (disks don't wake up again).
It's for this reason that I make a point of rebooting all my servers about once a month. I would rather be in control of when it doesn't come back online than have it be a surprise (like, say, after a power outage).

And yes, I've had servers not come back up. And I've spent the night getting the damn thing working or moving its functions elsewhere. But again, I'd rather control this aspect than leave it to chance.
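On a Linux box that kind of controlled, scheduled reboot can be as simple as a cron entry (a sketch; the day and time are just examples):

    # /etc/crontab: reboot at 04:00 on the 1st of every month
    0 4 1 * * root /sbin/shutdown -r now "scheduled monthly reboot"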
 
Had to change a UPS battery recently, so had to down everything briefly; other than that we only ever restart things for software updates.
 
Agreed. I can remember quite a few instances where, due to reasons such as moving offices, re-organizing an office and moving the servers, or some part replacement... when you have a server that's been running for years on end, and you power it down and let her cool... you go to power it up and you'll find some RAID error because a drive went offline. Thermal changes can create issues like this.

We had a Novell box that had been online for like 5 years. When we moved offices we were very scared that it would even start back up. We had to keep old servers around just to have parts for it. It was an old Proliant 8000 series box. Even though we only moved about .5 miles, it was still risky to move something that had been online for so long.
 
We had a Novell box that had been online for like 5 years. When we moved offices we were very scared that it would even start back up. We had to keep old servers around just to have parts for it. It was an old Proliant 8000 series box. Even though we only moved about .5 miles, it was still risky to move something that had been online for so long.

My scary one was last winter... a hospice agency I take care of, their primary database server, about 4 years old and running constantly, a PowerEdge 1600. I think it's RAID 1 and RAID 5 or 10, if I recall... moved about 3 miles to the new building, went to power up... two drives on the data array were being a pain. Finally one of them cooperated and it was up enough until replacements came in.
 
The only times I can remember our servers being shut down...

A/C and redundant A/C both failed, and when the server room hit 92 degrees we shut everything down until the A/C came back online.

Failed UPS system (see below)

Replacing our UPS system.

Blown (bad) main breaker to the server room.

Physically moving the entire data center to a new location.

Hardware failure or hardware upgrade; now that most of them are virtual the latter just requires a vMotion and not a shutdown.
 
My Windows servers get restarted every bloody month due to updates that always seem to need a reboot. I have had an Ubuntu Squid proxy running for over 12 months without a reboot and a Trixbox server running for about the same.
 
I'm in a single-server environment, and from what I can tell the server has been up for 140 days. I'm wondering if I ever need to shut it down, even for a couple of hours over a weekend. The server provides email, is our file server, and is our DNS server. It's pretty important to our organization during the week, and there are people who work from home over the weekend who use email, but I'm thinking of shutting it down from about 11pm Saturday to 4am Sunday.
Shutting down a server is worse than leaving it running all the time.

Heat expands most materials and cooling contracts them.
Those expansions and contractions are what destroy most hardware.

Heck, the only time I lost a drive was recently, when I powered down my HTPC to add a new drive.
The old drive died right then and there.

My Windows servers get restarted every bloody month due to updates that always seem to need a reboot. I have had an Ubuntu Squid proxy running for over 12 months without a reboot and a Trixbox server running for about the same.
Stop updating then. :confused:
It amazes me when people think they have to apply every single update.

Get yourself a firewall (heck even Windows' built-in IP filtering will fit the bill) and update only if there is a new feature you really want or a stability patch you really need.

Even a cheap router with basic port forwarding will protect you better than applying every security patch.
I only apply service packs and only reboot for hardware upgrade.
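As a sketch of the "built-in IP filtering" route on a Server 2003-era box (this is the old netsh firewall context, and the open ports are just examples for a mail/DNS server):

    rem turn the built-in firewall on and open only the ports the server actually serves
    netsh firewall set opmode mode=ENABLE
    netsh firewall add portopening protocol=TCP port=25 name="SMTP"
    netsh firewall add portopening protocol=UDP port=53 name="DNS"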
 
I only apply service packs and only reboot for hardware upgrade.

I could see doing this if you do some serious IPS and server/network monitoring and have things locked down to the VLAN filter-map level.

I would need someone dedicating almost all their time to keeping up to date on 0-day exploits and writing IPS rules for me to sleep at night, though. Never underestimate the power of incompetent users to completely destroy your security by randomly plugging in a cable and completely bypassing any safeguards that are in place.
 
You are joking, right? Enterprise environment servers need all security patches!

You're kidding, right? Criticals are a must; everything else is gravy. I can't imagine you run all your enterprise life-cycle machines through the update process for every update that's put out. You're nuts. You must have one helluva maintenance window. The initial build, OK, but after that most production systems can't be bothered to be taken offline for even the slightest downtime unless it's a patch that critically affects their ability to function.
 
after that most production systems can't be bothered to be taken offline for even the slightest downtime

"Most production systems" is a pretty broad assumption.

*What's the percentage of system admins here that have servers needing 24x7x365 uptime?

*OK, what's the percentage of system admins here that have servers that are pretty much just daytime and early evening production..and can afford reboots of said servers late in the evening, early in the morning, and/or over weekends?

I bet that 2nd group is the larger percentage..by far. ;) As an SMB consultant (and I know there are many more here on the boards)....I VPN into clients' networks early morning/late evening/weekends and take care of updates 'n reboots.
 
"Most production systems" is a pretty broad assumption.

*What's the percentage of system admins here that have servers needing 24x7x365 uptime?

*OK, what's the percentage of system admins here that have servers that are pretty much just daytime and early evening production..and can afford reboots of said servers late in the evening, early in the morning, and/or over weekends?

I bet that 2nd group is the larger percentage..by far. ;) As an SMB consultant (and I know there are many more here on the boards)....I VPN into clients' networks early morning/late evening/weekends and take care of updates 'n reboots.

I'm talking enterprise, which is what he brought up. I don't consider SMB on the same level as Enterprise (/ducks, no offense seriously. I'm sure you work your asses off just like everyone else.) Enterprise to me means 24/7/365. You don't take it offline, even if it's doing nothing at night/early morning/whatever, unless you absolutely have to. I used to use that excuse to take things down during off hours all the time. But changes for the sake of changes, when not absolutely necessary, are just asking for trouble, IMHO and in my experience.

No one tolerates downtime, even with scheduled maintenance. In a larger environment it is unacceptable to take a machine offline for patching unless the patch is critical to the system's functionality.

I do agree that most environments are not this way, but every 'enterprise' environment I've seen will not allow maintenance unless it's critical to the business function. And even then they bitch. :D
 
Shutting down a server is always risky. I've seen more failures due to server shutdown than anything else (disks don't wake up again).
I've dealt with that first hand.

Shut the server off to install RAM, and then the PSU went dead...

And I've spent the night getting the damn thing working or moving its functions elsewhere.
Moving elsewhere? Are you pointing that server's DNS record to another server or ???

I'm talking enterprise, which is what he brought up. I don't consider SMB on the same level as Enterprise (/ducks, no offense seriously. I'm sure you work your asses off just like everyone else.) Enterprise to me means 24/7/365.
Enterprise to me also means clustering and failovers, and scheduled maintenance periods in which you can patch systems quarterly or whatnot, without any downtime whatsoever.

I'd maintain it's folly to leave systems unpatched.
 
Moving elsewhere? Are you pointing that server's DNS record to another server or ???
Depends on the service. I usually don't have multiple boxes available for a single service, so if I have to improvise I usually have to install and configure the server on a different box while I fix the old one.
 
"Most production systems" is a pretty broad assumption.

*What's the percentage of system admins here that have servers needing 24x7x365 uptime?

*OK, what's the percentage of system admins here that have servers that are pretty much just daytime and early evening production..and can afford reboots of said servers late in the evening, early in the morning, and/or over weekends?

I bet that 2nd group is the larger percentage..by far. ;) As an SMB consultant (and I know there are many more here on the boards)....I VPN into clients' networks early morning/late evening/weekends and take care of updates 'n reboots.

I used to work somewhere where the machines had to be up 24x7x365, and since we were taxpayer funded (public university) we had a good amount of cash on hand, so we had a fair amount of redundancy.

There was another system we had that was a bit interesting: it only had to be up for around 20 minutes a month, and if it wasn't, between $500k and $2 million would be wasted. :D Fun times.
 
Depends on the service. I usually don't have multiple boxes available for a single service, so if I have to improvise I usually have to install and configure the server on a different box while I fix the old one.

But I'm sure you don't want to touch all your clients connecting to "servername.domain.local", so instead you just point that A record to another server for the time being, or how did you handle that?
 
At my old job we rebooted our old terminal servers daily. Otherwise they'd crap out. FYI, they were 5-year-old P3 boxes running Windows 2000. Sucky, huh?
 
But I'm sure you don't want to touch all your clients connecting to "servername.domain.local", so instead you just point that A record to another server for the time being, or how did you handle that?
Oh sure, I make my job as easy as I can. I use CNAMEs, but it's the same idea.

However, I've run into a few services which encapsulate the server name in the application packet; if they don't match, it won't work (ya, piss-poor design!). For those I have to get creative, either modifying the app behind its back or running a VM. Worse comes to worst, I simply push out new registry entries for my clients.

All depends on the server. I take the easy way out where and when I can.
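A rough sketch of the CNAME trick on a Windows DNS server (the zone, alias, and host names are made up):

    rem clients connect to the alias "files.domain.local", which normally points at server1
    dnscmd . /recordadd domain.local files CNAME server1.domain.local.
    rem if server1 goes down, repoint the alias at the standby box
    dnscmd . /recorddelete domain.local files CNAME server1.domain.local. /f
    dnscmd . /recordadd domain.local files CNAME server2.domain.local.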
 
I have a few Windows servers in the office and one SCO Unix box still kicking around.

Never shut them down unless critical updates require it. We moved about a mile at the beginning of October last year, and I remember carrying the SCO box myself. I handed it to my wife in the car, who was instructed to hug the box and hold on tight for the 2-minute drive...

placed it at the new office and crossed my fingers... worked like a charm!

Note: the SCO box is running an old Medical Manager program for collections and billing for my medical office. We have a new web-based program that does the same thing, but the old version works much better! Still working with the new version's vendor to add features that the old one has... lol.
 
You are joking, right? Enterprise environment servers need all security patches!

Same to you... are you joking?

Enterprise is all about "if it ain't broke, don't fix it".
I cannot even get my operations team to execute needed operations, let alone operations that have no visible impact on daily transactions.
Ah!

Do you even read what the patches are fixing before you install them?
I bet most of those patches don't apply to you.

You do understand that patches always come after the fact, right?
If not having a given patch were a catastrophe, that catastrophe would most likely have happened before you got around to applying it.

Remember, there is no patch for a bad firewall or security configuration.
If your software needs an OS patch, you will know it.

So, just like you don't upgrade to every new kernel and package update in Linux, you don't apply every patch on Windows.
 
One thing I didn't see any mention of: have a spare drive on hand when powering down systems after long periods of uptime.

At one of my jobs we did a power upgrade in our data center with over 250 servers, plus SANs, tape libraries, etc. During the planning we made sure to have spares of every type for our systems, and sure enough, more than a handful of drives didn't make it. Not to mention a few servers, but all staff was on hand that weekend.
 
Once a month I will install the latest patches I have to (usually a bunch of criticals in there, and being SBS... well...), on a Friday night just in case it goes to hell.
 