Got uptime? :)

B00nie ([H]F Junkie, joined Nov 1, 2012, 9,327 messages)
What's your record? I'll post a baseline

[attachment: screenshot 2020-10-15 at 17.52.02, uptime output]
 
Congrats, you haven't upgraded your kernel (and who knows what else) in five years.

Uptime e-peen displays are dumb.

(Also, denyhosts of all things is choking that system, and the load average is above the core count? Either the box is getting slammed or it's beyond time for an upgrade (or both).)
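For anyone who wants to sanity-check their own box the same way, here's a rough sketch that compares the 1-minute load average against the core count; the threshold is just the simple "load above cores" rule of thumb, so interpret it with your workload in mind:

```shell
#!/bin/sh
# Rough check: is the 1-minute load average above the core count?
cores=$(nproc)
load=$(cut -d ' ' -f 1 /proc/loadavg)
echo "cores=$cores load=$load"
awk -v l="$load" -v c="$cores" 'BEGIN {
    if (l + 0 > c + 0) print "load exceeds core count";
    else print "load within core count";
}'
```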
https://www.vultr.com/docs/update-linux-kernel-without-rebooting-using-livepatch-on-ubuntu-16-04

I had to shut down yesterday to work on a PC whose GPU has only Mini DisplayPort outputs; my main monitor is the only one I have with a DisplayPort input, and I can't find my adapter. I usually run an uptime of about two weeks to a month between reboots, bearing in mind this is my desktop and not a server.

You need more cores/ram. ;)

[attachment: uptime screenshot]
 
Congrats, you haven't been doing your research and don't know about unattended upgrades, Ksplice, KernelCare, and the multitude of other ways you can maintain a kernel without reboots. And yes, a 5-year-old box/virtual machine is beyond time for an upgrade. It will be replaced next week, which is why I thought to document this.

Denyhosts has grown quite a list of banned IPs over 5 years, so it spikes up for a second or so once in a while. It doesn't stay that way. Most of the load comes from a couple of heavy background processes that run database jobs.

Kernel live patch comes in handy in the kind of weird edge cases where you want to give your customers 99.9999% uptime without having to resort to expensive cluster farms. The services are never down except for a second or two when they're restarted for updates.
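For reference, on Ubuntu the Canonical Livepatch service (the mechanism the Vultr guide linked in this thread walks through) takes only a couple of commands. The token placeholder below is illustrative; you get a real one from Canonical's Livepatch page:

```shell
# Canonical Livepatch on Ubuntu (token comes from ubuntu.com/livepatch)
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-token>
canonical-livepatch status --verbose   # shows which kernel fixes are currently live-patched
```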
 
The node in question has 24 cores and 144 GB of RAM. This was just a single virtual machine that runs background services.
 
But hey - one day of uptime is already great if you were running Windows!!
 
Yeah, but I've never seen or heard of anyone using a kernel live patch mechanism outside of weird edge cases.

I use it because I can.


That load average ain't looking too great for a dual core processor, I'd be adding another core to the VM. Otherwise, impressive uptime!


If you're running Windows 10 with a 5400RPM spinner and 4GB of ram, it'll take a day for the hard drive to stop thrashing as a result of indexing and loading background applications. :D
 
That whole environment will be moved to a new server next week, retiring the old hardware.
 
My power outages exceed my UPS time too often out here in the boonies. Best I've had was 6 months or so in the last couple years. At 15 days and change atm, but haven't been trying too hard for uptime.
 
Comparing uptime - what a blast from the past :ROFLMAO: I remember being so proud of Windows 98SE staying up for a few weeks and posting it on IRC. Windows 2000, and basically every OS after that (especially newer versions of Linux), changed the game - my pfSense router's uptime only depends on the last time I lost power long enough for the UPS to run out of battery. Great thread! (I mean that genuinely, no sarcasm) :)
 

Back when I was running Windows 98, it was more about the time between OS reinstalls than actual system uptime! People hated Windows ME; I actually found it no worse than Windows 98... Meh. Still got a Windows 2000 boxen here - IMO the last true and great version of Windows.
 
Windows 2000 was 100% game changing. Not having to worry about frequent reboots, or reinstalling the OS, etc. XP to me was Windows 2000 with just some graphical tweaks at least before the service packs came out, so I was pretty happy with XP as well.
 
Now I could show an image of my server's uptime... if its SSD hadn't completely dumped its pants.
 
I've had to reboot stuff with around that. Always makes me sad, but usually also means I've managed to upgrade or retire enough machines that I can bump the build machine to a newer build. I haven't had the pleasure of admining Netware boxes drywalled in with 20+ years of uptime though. :)
 
Mine's not that high, but hey... look at that load average! I'm using the crap out of this box.
[attachment: system monitor screenshot]


Using a whopping 4.9GB of 94.4GB, but I guess having 90GB free and 99% of my CPU cycles available is OK for now... but that 9.5MB of swap... really, lol. Maybe I should just remove the 8GB of swap; honestly, if I go over 94GB it's probably a memory leak, and 8GB of swap isn't really going to help much. Prior to this it was up probably 6 months; I finally had some updates that needed a reboot, so that's what I did. One of these days I need to go through this thing and try to clean it up a bit. I could probably reduce these loads a little, but it hasn't been a priority for (hopefully) obvious reasons.

Anyway, the biggest CPU user is my Minecraft server, so this is a super duper important thing at my house (and of course Minecraft is almost completely single-threaded, so this setup might be a bit overkill). I do have other stuff running too: a Plex server, Samba file share, DHCP, caching DNS server, and a few odds and ends (remote development server).
 
Unless you run a Kubernetes node, disabling swap is usually not advisable, and having it doesn't hurt performance. Better to just set vm.swappiness to a low value.
 
I was told that swap should be roughly half of your ram capacity. Not too sure how true that is, but Linux is so lean when it comes to the root file system I may as well devote some of my boot drive to it.

Watching htop I never even get close to using it.
 
That advice is legacy stuff from 16/32-bit era setups. If you have a machine with 512 gigs of RAM, you should allocate 20-30 gigs max to swap, because you don't want swap covering for a lack of RAM like in the old days. Some swap is still good to have, because some processes use swap space to park non-active data pages even when there is plenty of free RAM left. The default for vm.swappiness is 60, which means the system will use swap aggressively. This can be good if you run a fileserver. If you run databases or a desktop, setting vm.swappiness to 1 will maximize the use of RAM for applications and minimize i/o buffers etc. in RAM.
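If anyone wants to try it, tuning vm.swappiness is just the standard sysctl mechanism; the value 1 below is the databases/desktop suggestion from above, so pick whatever suits your workload (the sysctl.d filename is an arbitrary example):

```shell
# Check the current value (60 is the usual distro default)
cat /proc/sys/vm/swappiness

# Change it for the running system (takes effect immediately, lost on reboot)
sudo sysctl vm.swappiness=1

# Persist it across reboots
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf
```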
 
Like B00nie said; swap size as a ratio to ram was good advice in the 90s and maybe early 2000s. Nowadays, I'd say anything more than 512 MB of swap is too much (other people may have a different rule of thumb), but I wouldn't disable it unless you really know that you know what you're doing and everyone who wrote software for your environment also knew what they were doing and that swap was going to be disabled. A small amount of swap helps you know when you're close to the edge of ram, and often buys you enough time for an orderly shutdown instead of the kernel killing whatever it feels like; and swap usage stats (% free and pages/second) are easy to measure and react to.
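A couple of stock commands are enough to keep an eye on those swap stats (free ships with procps on most distros; the raw numbers are always in /proc/meminfo):

```shell
# Swap totals straight from the kernel, in KiB
grep -E '^Swap(Total|Free):' /proc/meminfo

# Human-friendly view; the "Swap:" row shows total/used/free in MiB
free -m

# (`vmstat 5` is also handy: the si/so columns show pages swapped in/out per second)
```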
 

Fair enough.

Perhaps I'll adjust swap usage in the future. It's not affecting anything ATM; memory utilization is good, and it's not like the root drive is running out of space.

[attachments: memory and disk usage screenshots]
 
Hmm... my swappiness is whatever the default was. It seems to be prioritizing RAM, since it's using 4.9GB of RAM and only 9MB of swap. I haven't messed with it, nor did I really plan to; I was mostly laughing at how little swap it had in use. I'm not sure what the 9.5MB is, but it's pretty much there from boot and stays exactly the same at all times.
 
If you have plenty of resources you don't have to worry about it. But if your server/workstation starts to use more than half of its RAM, it's going to start swapping a lot. You may not want that if you run several applications in the background or something memory-intensive like a database or video rendering. An oversized swapfile is a downside mainly because it reserves storage space for nothing. A few gigs of swap will do; no need to waste a hundred.
 
Yeah, makes sense. I guess if I ever see that much being used maybe I'll have to check into it, for now I'm pretty safe.
 
As soon as you hit swap you'll know about it: the system slows to a crawl. It's one of the reasons I like the Intel 5520 LGA1366 chipset - affordable ECC RAM with good capacity density, so you can just keep adding memory without emptying your wallet.
 
Too bad I didn't have a screenshot...we once had an old external monitoring box that broke 2100 days....then we retired it.
 