Fedora 32 Looking At Using EarlyOOM By Default To Better Deal With Low Memory Situations

erek

[H]F Junkie
Considering how light a footprint Linux has on RAM compared to... certain other operating systems... I'm surprised this is even a problem.

My free -h output, with more tabs open in Firefox than God, while editing a high-resolution image in GIMP and playing my favorite music in the Spotify client:

Code:
$ free -h
             total        used        free      shared  buff/cache   available
Mem:            62G        3.0G         57G        270M        2.0G         58G
Swap:            0B          0B          0B

So, 3 GB used, of which 2 GB is cache/buffers; in reality, about 1 GB used. That should cover most non-specialty, non-VM desktop loads right there.

Just size your RAM appropriately, and you don't have to worry about OOM, or even having a swap partition!
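
That said, for anyone who does want the safety net EarlyOOM is aimed at, trying it on current Fedora should only take a couple of commands. This is just a sketch assuming Fedora's stock earlyoom package and service names; adjust if your setup differs.

Code:
$ sudo dnf install earlyoom              # install the userspace OOM daemon (Fedora packaging assumed)
$ sudo systemctl enable --now earlyoom   # start it now and keep it enabled across reboots
$ journalctl -u earlyoom -f              # watch it report free RAM/swap and any processes it terminates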

Linux would be better off if it spent more time worrying about supporting all new hardware from major vendors on launch day, and less about how to get shit to run well on 15-year-old, RAM-constrained systems.
 
I tend to agree; however (and this isn't hypothetical, I have experienced it personally), you still have to design with the edge case in mind. If a program is being evil with memory or has a memory leak, the system has to be able to handle that gracefully. That does not always happen, and every time it fails, bad things happen. I must say it has improved substantially, and I rarely run into issues of the sort anymore, but then I don't have a computer with less than 16GB of RAM, either.
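
If you want to see how your own box copes with a runaway allocator, a crude way to simulate one is below. This is only a sketch; stress-ng availability and the exact flags are assumptions, so check your distro's package and man page first.

Code:
$ sudo dnf install stress-ng
$ stress-ng --vm 2 --vm-bytes 90% --timeout 60s   # two workers grabbing ~90% of RAM for a minute

# in another terminal, watch the kernel log for OOM-killer activity
$ journalctl -kf | grep -iE 'oom|killed'

Without earlyoom you should (eventually) see the kernel OOM killer step in; with it running, the hog should get killed sooner, before the desktop grinds to a halt.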

I honestly think a new Linux-compatible kernel (as in, one that retains most of the interfaces and module compatibility) needs to be written with desktops as the primary focus, with the desktop bits ripped out of, or refactored in, the Linux kernel so it can focus on servers/embedded/thin clients. It would help focus, clean up, and refine the Linux kernel, which I don't think anyone would complain about. I just don't know if anyone would be willing to put in the work...
 