Intel iGPU Linux users are advised to avoid kernel 5.19.12

https://lore.kernel.org/all/[email protected]/
https://www.phoronix.com/news/Intel-iGPU-Avoid-Linux-5.19.12

Kernel 5.19.12 messes up panel power sequencing, which is burning out laptop LCD panels driven by some Intel iGPUs.
They are pushing 5.19.13, which reverts the offending Intel display driver changes.
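
For anyone who wants to check whether a box is on the affected release, a quick check that works on pretty much any distro:

  uname -r    # prints the running kernel, e.g. 5.19.12 (avoid) or 5.19.13 (has the revert)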

Are any current distributions even shipping with 5.19 kernels, or is this only impacting people who pull kernels straight from kernel.org?

Most of my boxes are still on 5.4; my main desktop is on 5.13 or 5.15 as part of an HWE backport, I think (can't remember). Don't think I've seen anything with 5.19 yet.

Either way, as previously mentioned, this is why you let things mature before jumping on them.

Still, poor design from Intel if software drivers can cause hardware harm. Anything that can cause hardware harm should be locked down in firmware.
 
Mr. Stay Puft says "buy Nvidia 4xxx series", because "it would never harm anyone."
 
Same with my Mint install.
I did like to play with experimental kernel stuff, but this one is nasty. You wouldn't even be able to tell it's running out of spec before it died.
I remember trying out Ext4 really early, it accidentally some of my files.
 
I wasn't sure and I was away from a terminal, but I just checked, and yeah, I'm on 5.15, but that's with Mint 20.3. I haven't upgraded to Mint 21 yet, as the 19 and 20 series are still supported, and I haven't seen anything in the 21 release notes that I feel like I need.

No idea what kernel Mint 21 uses, but I am guessing it's probably the same, as the 20 series is based on Ubuntu 20.04 LTS, which I believe shipped with the 5.4 kernel, versions of which are still supported. The only reason I have 5.15 on this machine is through the hardware enablement stack, which backports the kernel from later releases. LM 21 is based on Ubuntu 22.04 LTS, and I think that is the HWE kernel I am pulling, so they are probably the same kernel.
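
For reference, this is roughly how the HWE stack gets pulled in on an Ubuntu 20.04 / Mint 20.x base (the exact metapackage name depends on the release you're on):

  hwe-support-status --verbose                                    # shows whether you're on the HWE track
  sudo apt install --install-recommends linux-generic-hwe-20.04   # opt in to the backported HWE kernel
  uname -r                                                        # confirm which kernel you're actually booted into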
 
5.19.7-1 on Manjaro at the moment. A bunch of updates just dropped earlier and I haven't installed them yet but it's likely there's a kernel update mixed in.

If I really want to live on the edge I could install and attempt to run the 6.0.0rc4-1 experimental kernel as that's an option.

Just took a quick look, the new kernel pushed for Manjaro is 5.19.13-1.
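
For anyone else on Manjaro, listing and switching kernel series is done with mhwd-kernel (the linux519/linux60 names below are just Manjaro's series identifiers):

  mhwd-kernel -li               # list the kernels currently installed
  sudo mhwd-kernel -i linux519  # install/switch to the 5.19 series (linux60 for the 6.0 RC branch)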
 
openSUSE Tumbleweed is at 5.19.13 (in case anyone wonders)
Fedora 36 just pushed out 5.19.13.
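
If you just want to see which kernel your distro is offering before you pull updates, something like this works with the default kernel package names:

  zypper info kernel-default   # openSUSE Tumbleweed
  dnf info kernel              # Fedora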

Wow, really surprising how irresponsible many of these distributions are.

The rule of thumb is you always run the oldest kernel that is still patched for security and fully supports your hardware.

That's the only responsible approach.

For bleeding edge hardware, sure, it's a good idea to use a newer kernel that properly supports that hardware, but to blanket push out the latest kernels to everyone is just a bad idea.

And this goes for software in general.

Always use the oldest version that works and is still patched.

Software is one arena where you really never want to be "first".
 
Irresponsible? You need to look up what the distributions are before you say anything, because you don't know what they are.

OpenSUSE Tumbleweed is a rolling release bleeding edge distribution.

Fedora is also a bleeding edge distribution. Red Hat uses it as a testing ground for updates that eventually make their way into RHEL once they're deemed stable.

If you want to run your crusty old Linux kernel on some ancient Linux distribution, you do you. But don't say anything about those of us who are on the leading edge. If it weren't for us doing the development and testing, you'd still be back on Linux 0.1 on a 386.
 
openSUSE, for bleeding edge, is incredibly well tested. Just saying. They "bleed", but not to the point of going over the cliff. Very stable for what it is.
 
Fedora is generally pretty stable as well. There are the occasional annoyances like UI issues and some applications misbehaving, but you can install it as a general purpose OS and it's pretty reliable.
 
Manjaro isn't exactly a bleeding edge distro. It's a stable rolling release. They don't push updates for the very newest things immediately; there's a certain amount of testing before they release and they don't release what amounts to beta updates.

I've been running it on my main machine as my primary OS for more than two years, and about the only issues I've had are hardware related. And that has to do with hardware that has degraded, such as my 5800X.

It's definitely not fully stable and completely bug free but no OS is. However, it's way more than stable enough for daily use. It's also been stable enough that I swapped my home server over to Manjaro when I had issues with some of the software on openSUSE Leap being too old.
 
Baby bear bed ftw!

(and one of the biggest reasons why there's a new Linux distro every time you turn around)
 
Irresponsible? You need to look up what the distributions are before you say anything, because you don't know what they are.

OpenSUSE Tumbleweed is a rolling release bleeding edge distribution.

Fedora is also a bleeding edge distribution. Red Hat uses it as a testing ground for updates that eventually make their way into RHEL once they're deemed stable.

If you want to run your crusty old Linux kernel on some ancient Linux distribution, you do you. But don't say anything about those of us who are on the leading edge. If it weren't for us doing the development and testing, you'd still be back on Linux 0.1 on a 386.

Bleeding edge is always irresponsible.

Always always always wait for something to be thoroughly tested before implementing.

Bleeding edge stuff should really be considered only for beta testers and debuggers.

In fact, the best approach for any user (home or enterprise), provided their hardware has all the drivers it needs, is to use the oldest long term service branch that still gets patched, and only upgrade when it goes EOL. And when you do upgrade, upgrade not to the newest, but to the new "oldest" long term service branch release that is still patched.

For instance, for an Ubuntu user that would mean being on 18.04 LTS (Bionic) today, and only upgrading to 20.04 LTS (Focal) in April when 18.04 LTS goes EOL. This despite the fact that 22.04 LTS (Jammy) has already been released. There will be plenty of time to get to Jammy in 2027 when Focal goes EOL. Keep the stock kernel unless something doesn't work. Don't even bother with non-LTS releases, especially those right before an LTS release.
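
On an Ubuntu/Mint base, that "LTS to LTS only, and only at EOL" policy is more or less what the stock tooling does if you leave the release-upgrade prompt on its LTS setting; a minimal sketch:

  # /etc/update-manager/release-upgrades
  Prompt=lts              # only offer LTS-to-LTS jumps

  # then, when the current LTS actually reaches end of life:
  sudo do-release-upgrade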

When it comes to software, the older the better, as long as:
1.) It works;
2.) It is patched against vulnerabilities; and
3.) It isn't missing some super important feature you absolutely have to have.

There are no exceptions.

I know, a lot of enthusiasts are excited about getting tech and other things "first", but there is no benefit to being "first" in software. Quite the opposite. Being first, or an early adopter, has no value at all (provided your hardware is supported, and the features you need exist) and has many many drawbacks.

I too once chased the newest, greatest thing back in the '90s and early 2000s. I got that silly notion beaten out of me by experience almost 20 years ago.
 
Bleeding edge is always irresponsible.
As I mentioned, "baby bear".... that is, the "just right" is what people are after, and it's very very very subjective.

I'm a big fan of TW and openSUSE's test bench (fairly unique to them). I might go so far as to "wager" latest TW vs. whatever you think is "responsible", and not on features (because you'd lose), but on sheer stability, including security.

But, with that said, I'm ok with losing that wager (because the fix is already on the way... you know?)

Hanging on to the very oldest (if that's responsible, as you say) would mean something like RHEL 7 (for example), where you have an absolutely ancient (very) set of versioned software that is still supported. Take it from me, it can be a frustrating world to live in.
 
Are you being serious? RHEL7 only came out in 2020. Now, I'm not a Red Hat user (I tried once in 2002, but I hated it), but in general there should be nothing frustrating or outdated about a Linux distribution which is only 2 years old. At 2 years they are barely broken in.

Heck, we could go back 5 or even 10 years, and see very little beyond skin/layout/UI changes in most major software packages.

Desktop computing really hasn't changed a whole lot in 15 years. Sure, we've made CPU cores faster, and added more of them, added more RAM, higher resolutions, and faster GPUs, but the basic things we do really haven't changed that much.

For instance, while I really don't care very much for the initial version of Microsoft's Ribbon interface in Office 2007 (it got a little bit better in later versions), that aside, Office 2007 does just about everything I or 99.9% of Office users need, in almost exactly the same way the latest versions do.

I just don't buy using older software packages being frustrating in any way shape or form. Most software packages have had stagnant feature sets for a very very long time now.

On my daily driver I use Mint 20.3, which is based on and uses most major packages from Ubuntu 20.04 LTS, about the same age as RHEL7. The only reason I use this version is because the older kernels were a little bit iffy with some of my hardware, and I didn't want to use a non-validated kernel combination, so I preferred to go 20.3 and install the HWE stack kernels, as they have been tested together and thus are likely more stable.

Otherwise I'd probably be on Mint 19.3, based on Ubuntu 18.04 LTS.

I still have Mint 19.3 live image USB sticks kicking around. There is very little difference between the software packages and their features between those included with 18.04 LTS from April 2018 and those in 20.04 LTS from April 2020.

I also just installed the new Mint 21 on my better half's laptop (again, hardware issues, so I had to go newer). It is based on 22.04 LTS from April of this year. Again, very few changes to the underlying software packages, even if we compare them to their 2018 versions.

LibreOffice, Gimp, Gparted, the window manager, you name it, except for a few UI tweaks and some subtle functionality changes it is still pretty much the same. I'm not sure what is supposed to be frustrating about it.

The one thing I found frustrating is that 22.04 LTS deprecated apt-key, which is annoying, and that's the other way around, with the new version being frustrating, not the old one. Actually, I also find the fact that the likes of FlatPak and Snaps keep sneaking onto my systems frustrating. I am fundamentally opposed to using anything except apt to install packages. To me, the single unified package manager for your entire system is key, and I am fundamentally opposed to dependency duplication, and having every single little software project manage their own dependencies. You can't trust them to keep them updated in their flatpaks/snaps.
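
For what it's worth, the apt-key replacement isn't too painful once you've done it a couple of times; a rough sketch (the repo URL and key/file names here are made up):

  # old, deprecated way
  curl -fsSL https://example.com/repo/key.asc | sudo apt-key add -

  # current way: store the key yourself and reference it per-repo
  # (create /etc/apt/keyrings first if your release doesn't ship it)
  curl -fsSL https://example.com/repo/key.asc | gpg --dearmor | sudo tee /etc/apt/keyrings/example.gpg > /dev/null
  echo "deb [signed-by=/etc/apt/keyrings/example.gpg] https://example.com/repo stable main" | sudo tee /etc/apt/sources.list.d/example.list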

If anything, at least Debian/Ubuntu based distributions have become more frustrating over time. If I had my way, I'd still be using Ubuntu 14.04 LTS based distributions. I feel like that was when Ubuntu peaked (at least on the server side, I hated Unity).

I mean, having to go to systemd was a major bummer. Not that I loved SysVinit either, but that little project Ubuntu used in the interim called Upstart was actually quite nice. I also loved ifup/ifdown and absolutely hate netplan. It was so convenient and easy to set up my network by just editing the plain-text "/etc/network/interfaces". I really miss that.
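
For comparison, roughly the same static config in both worlds (the interface name and addresses are made up; newer netplan versions prefer a routes: block over gateway4):

  # /etc/network/interfaces (ifupdown)
  auto eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1

  # /etc/netplan/01-netcfg.yaml (netplan)
  network:
    version: 2
    ethernets:
      eth0:
        addresses: [192.168.1.10/24]
        gateway4: 192.168.1.1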

Not everything new is bad though. Wayland promises to be a big improvement, but it still isn't quite ready yet. I'm hoping that by the time I migrate to a version of Mint that uses Wayland, it will be nice and stable.
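
If you're ever unsure whether a desktop session is actually running on Wayland or still on Xorg, a quick check from a terminal:

  echo $XDG_SESSION_TYPE    # prints "wayland" or "x11"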
 
Bleeding edge is always irresponsible.

Always always always wait for something to be thoroughly tested before implementing.

Bleeding edge stuff should really be considered only for beta testers and debuggers.

By this definition, Windows 8, 10 and 11 are still beta quality software and we should all be on Windows 7.


When it comes to software, the older the better, as long as:
1.) It works;
2.) It is patched against vulnerabilities; and
3.) It isn't missing some super important feature you absolutely have to have.

There are no exceptions.

Yeah, no. You may as well unplug your computer and never touch the internet again. There's no such thing as old stable software that is patched against all vulnerabilities and works. That's a vision from a fat bong rip.

The past 12 years have shown that even if you think some piece of software or even hardware is safe, it most definitely isn't.
 
Always always always wait for something to be thoroughly tested before implementing.

Bleeding edge stuff should really be considered only for beta testers and debuggers.

define "thoroughly tested"

And please feel free to elaborate... being that I was a RHEL tester for 3 years, I'm interested in hearing what that is.
Don't worry, I have the Orville Redenbacher truck delivering now.
 
Heck, we could go back 5 or even 10 years, and see very little beyond skin/layout/UI changes in most major software packages.
Normally I agree with you on most things, but this is not one of them.
After 2-3 years with *NIX distros of any kind, versioning differences start to become apparent, perhaps not in the GUI but certainly in CLI.

After 5-6 years the versioning differences start to become a hindrance more than anything, and can become frustrating quickly.
I've noticed this over the years just when attempting to mount CIFS/SMB shares via the CLI: newer and newer options and flags are needed for older operating systems to mount shares from newer operating systems - that is, unless I force the newer operating systems back to the older/slower/vulnerable settings/versions to keep that compatibility smooth.
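
A concrete example of the kind of thing I mean (server and share names are made up): an older client whose mount.cifs still defaults to SMB1 won't talk to a newer server with SMB1 disabled unless you pin the dialect by hand, or else you re-enable the old protocol on the server side:

  # older client, newer server: pin a newer dialect explicitly, since the old default (SMB1) is refused
  sudo mount -t cifs //newbox/share /mnt/share -o vers=2.1,username=me

  # the ugly alternative: force the server back to the old, insecure dialect and mount with vers=1.0
  sudo mount -t cifs //oldstylebox/share /mnt/share -o vers=1.0,username=me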

After 10 years, almost nothing is going to work the way it should without manual user intervention and never ending workarounds.
Also, after 10 years (outside of paid support) patching is virtually non-existent.

Setting up my PS3 cluster back in the late 2010s with Yellow Dog Linux 6.2 (2009) was less than fun, since all of the YDL repositories went poof ages ago, and finding PPC64-compiled programs, or sources compatible with PPC64, was a real chore.
Granted, things are getting better, but that was less than a decade old and was far more troublesome than it should have been - but I did get everything working after doing it all manually - Java, WebDAV, Apache, MPICH, etc.

10 years of support for any *NIX distro is like 50 years of support for anything else in the world.
It just isn't happening outside of megacorp, mega bank, and government contracts.

Need a new part for your 1968 Jeep Gladiator?
Time to go to the junk yard...
 
I guess our usage must be very different, because my experience has been completely different from yours.

I am a huge console fan. I run my servers without any GUI managing them entirely from SSH. Even on my GUI desktop, I do a ton of stuff from the command line simply because it is more convenient. I almost always have a console window open when I am sitting at my computer.

While I occasionally struggle when it comes to syntax differences if I have to use - say - FreeBSD, I have never come across an instance where command line commands have significantly changed in syntax, other than when major components of the operating system have changed, like the aforementioned transition from ifup/ifdown to netplan and its YAML bullshit.

And this is across over 20 years as a Linux user, running Linux servers since day one. (I originally got into Linux when I started running dedicated Counter-Strike servers in college back in 2000)

True, some things have improved and I have gradually transitioned to them. Debian based distributions moving towards a unified "apt" command instead of a ton of different "apt-get", "apt-cache", etc. etc. was a nice convenience update, but all the old commands still work, and I occasionally still use them (Muscle memory is a hell of a thing)
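
For anyone who hasn't made the switch yet, the mapping is basically one-to-one ("foo" being whatever package you're after):

  apt update         # apt-get update
  apt upgrade        # apt-get upgrade
  apt install foo    # apt-get install foo
  apt search foo     # apt-cache search foo
  apt show foo       # apt-cache show foo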

I can't speak to your exact example, as I don't really use SMB/CIFS shares much. Most of my clients are Linux (but dual boot to Windows), and all of my servers are either Linux or BSD based. My Linux clients mount NFS shares, as they both perform better and give me more granular control over share access and file permissions, and the command line syntax there has been the same for at least 15 years, if not longer.
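
A rough sketch of the kind of per-host control I mean (paths and addresses are made up):

  # /etc/exports on the server: read-only for the subnet, read-write for one trusted host
  /srv/media  192.168.1.0/24(ro,all_squash)  192.168.1.10(rw,no_root_squash)
  # re-read /etc/exports after editing:
  sudo exportfs -ra

  # client side, or the equivalent line in /etc/fstab:
  sudo mount -t nfs 192.168.1.2:/srv/media /mnt/media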

I do have a small LXC container on my main VM/container server (which runs Debian Bullseye) dedicated to just one thing: sharing files using SAMBA so that my clients, when booted into Windows, can access my NAS. That container is still running Ubuntu 18.04 LTS, which is 4.5 years old now, and I have never had any issues with it, but granted, it is going the opposite direction of your complaints. It is a Linux SAMBA implementation set up to share folders, which Windows clients access in the GUI.
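
That whole container really boils down to a share definition along these lines (the share name, path and user are made up):

  # /etc/samba/smb.conf
  [media]
     path = /srv/media
     browseable = yes
     read only = yes
     valid users = myuser
  # the Samba user is created separately with:
  #   sudo smbpasswd -a myuser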

So, I don't know what to tell you. I am not an enterprise IT department, but I run what I call a "home production" setup which rivals the capability of what many small to medium businesses would use. (Heck, my home infrastructure far beats out what we have in the office, in a small company with some 50 employees, only one of which is in IT.) I am constantly tinkering with things on the command line and have never come across the incompatibility or syntax change issues you mention.
 
define "thoroughly tested"

And please feel free to elaborate... being that I was a RHEL tester for 3 years, I'm interested in hearing what that is.
Don't worry, I have the Orville Redenbacher truck delivering now.

Fair.

I'm talking about time in the wild, to have real world use bugs and kinks worked out.

Since you appear to be a software QA guy, I am sure you are aware that you never achieve 100% coverage in test. Things are always discovered in the field.

If you give software a year or two to mellow and discovered issues get patched you wind up with a much more stable solution.

In the real world this has to be balanced against the time to next upgrade. The truth is that if you install the latest version, you have a longer support window, and more time until you are forced to upgrade again, and since upgrades can be time consuming and cause interruptions, this might be worth it for many.

More mature releases - as long as they are still actively maintained - tend to be more stable though.
 
Are you being serious? RHEL7 only came out in 2020. Now, I'm not a Red Hat user (I tried once in 2002, but I hated it), but in general there should be nothing frustrating or outdated about a Linux distribution which is only 2 years old. At 2 years they are barely broken in.

RHEL 7 GA'd in 2014. Just saying.

Current RHEL 7.9 uses kernel 3.10 (think about it, think about what is NOT supported by that kernel).

I support RHEL 7 hosts. This is why I understand what it is like to support very old distros.

But, I do hear you. It is painful to not have a stable "thing" that runs forever. Just don't expect it to keep pace. It can't hardware-wise, and oftentimes can't software-wise, leading to eventual deprecation as "good things" from circa 2014 are found to be deficient in 2023 (9 years later).

Think about it: just the idea of buying a "new" server and trying to put RHEL 7 on it can be problematic. Technology moves fast; even when we try really really hard to make it move slow, it eventually breaks and lets us down.

And, God help you if you're using anything "closed".

Heck, we could go back 5 or even 10 years, and see very little beyond skin/layout/UI changes in most major software packages.

Desktop computing really hasn't changed a whole lot in 15 years. Sure, we've made CPU cores faster, and added more of them, added more RAM, higher resolutions, and faster GPUs, but the basic things we do really haven't changed that much.

Interesting opinion. And maybe "ok", today. But I don't think it's wise to hold onto too much of the past, and parts of it are already on the chopping block (full removal).

For instance, while I really don't care very much for the initial version of Microsoft's Ribbon interface in Office 2007 (it got a little bit better in later versions), that aside, Office 2007 does just about everything I or 99.9% of Office users need, in almost exactly the same way the latest versions do.

If security matters not, this is ok. You might be surprised at how hackable Office 2007 is though.

I just don't buy using older software packages being frustrating in any way shape or form. Most software packages have had stagnant feature sets for a very very long time now.
On my daily driver I use Mint 20.3, which is based on and uses most major packages from Ubuntu 20.04 LTS, about the same age as RHEL7. The only reason I use this version is because the older kernels were a little bit iffy with some of my hardware, and I didn't want to use a non-validated kernel combination, so I preferred to go 20.3 and install the HWE stack kernels, as they have been tested together and thus are likely more stable.

Again, you are "baby bear". Old ftw!... well.. except for "my sacred cow", which needs to be "new". Once you diverge off the distro support path, you have something unique. We'll call it the "baby bear distro". It's "just right" for you. It does come with additional responsibility once you choose to diverge, as you are now your own support, at least integration wise.

Otherwise I'd probably be on Mint 19.3, based on Ubuntu 18.04 LTS.

I still have Mint 19.3 live image USB sticks kicking around. There is very little difference between the software packages and their features between those included with 18.04 LTS from April 2018 and those in 20.04 LTS from April 2020.

I also just installed the new Mint 21 on my better half's laptop (again, hardware issues, so I had to go newer). It is based on 22.04 LTS from April of this year. Again, very few changes to the underlying software packages, even if we compare them to their 2018 versions.

LibreOffice, Gimp, Gparted, the window manager, you name it, except for a few UI tweaks and some subtle functionality changes it is still pretty much the same. I'm not sure what is supposed to be frustrating about it.

The one thing I found frustrating is that 22.04 LTS deprecated apt-key, which is annoying, and that's the other way around, with the new version being frustrating, not the old one. Actually, I also find the fact that the likes of FlatPak and Snaps keep sneaking onto my systems frustrating. I am fundamentally opposed to using anything except apt to install packages. To me, the single unified package manager for your entire system is key, and I am fundamentally opposed to dependency duplication, and having every single little software project manage their own dependencies. You can't trust them to keep them updated in their flatpaks/snaps.

If anything, at least Debian/Ubuntu based distributions have become more frustrating over time. If I had my way, I'd still be using Ubuntu 14.04 LTS based distributions. I feel like that was when Ubuntu peaked (at least on the server side, I hated Unity).

I mean, having to go to systemd was a major bummer. Not that I loved SysVinit either, but that little project Ubuntu used in the interim called Upstart was actually quite nice. I also loved ifup/ifdown and absolutely hate netplan. It was so convenient and easy to set up my network by just editing the plain-text "/etc/network/interfaces". I really miss that.

Not everything new is bad though. Wayland promises to be a big improvement, but it still isn't quite ready yet. I'm hoping that by the time I migrate to a version of Mint that uses Wayland, it will be nice and stable.

You say much that isn't "uniform" (again, "baby bear"). So I'll stop.... IMHO, you want what you want (which is "ok"). I just wouldn't call someone else "irresponsible" if they disagree with you.
 
RHEL 7 GA'd in 2014. Just saying.

Current RHEL 7.9 uses kernel 3.10 (think about it, think about what is NOT supported by that kernel).


Ah, my bad, I'm not a RHEL user. I just looked it up on Wikipedia, which suggested RHEL7 was a 2020 thing. Not familiar with their versioning and release dates.

But I see now the mistake I made. I glanced at it a little too fast:

[attached screenshot]


I support RHEL 7 hosts. This is why I understand what it is like to support very old distros.

I mean, I manage about 12 Ubuntu Server 18.04 LTS installs in my house, between bare metal servers, VMs and containers, and I've been upgrading those release by release since about 2010, so I'd argue I have a fair amount of experience running older releases as well.

Think about it: just the idea of buying a "new" server and trying to put RHEL 7 on it can be problematic. Technology moves fast; even when we try really really hard to make it move slow, it eventually breaks and lets us down.

If you look back at my post, all of my claims that "older was better" explicitly mentioned "as long as it has the features and hardware support you need".

Even so, I'd rather run an older release with a Hardware Enablement Stack kernel to make newer hardware work, than a newer release.


And, God help you if you're using anything "closed".

Well, yeah, that's one of the benefits of being a home user and completely in control of my environment. I don't have to cater to any users (or corporate) demands to run closed software or have Microsoft compatibility. I'm not quite a Stallman-esque GNU License open source purist, but I do prefer to avoid closed source stuff unless I absolutely need it.

I want my entire environment to be pulled from the central package repositories, with nothing statically installed, open or closed source, so that's how I design my network.

Interesting opinion. And maybe "ok", today. But I don't think it's wise to hold onto too much of the past, and parts of it are already on the chopping block (full removal).

If security matters not, this is ok. You might be surprised at how hackable Office 2007 is though.

I used it as an example just because it is a software package most people are familiar with. Again, if you refer back to my posts I believe I wrote "as long as it is still maintained/patched" several times.

That said, unless you use Microsoft's pathetic server tie-in and cloud storage stuff, or are in the habit of downloading Word, Excel or PowerPoint files from questionable sources and trying to open them, it's really not a huge issue.

Again, this reinforces my often ranted point that software should never be explicitly network/cloud integrated. Statically save your file on your local file system and manually share it with people if you must. Sharepoint/Teams/OneDrive and all that other cloud bullshit needs to die a fiery death.


Again, you are "baby bear". Old ftw!... well.. except for "my sacred cow", which needs to be "new". Once you diverge off the distro support path, you have something unique. We'll call it the "baby bear distro". It's "just right" for you. It does come with additional responsibility once you choose to diverge, as you are now your own support, at least integration wise.

I do plenty of custom stuff on my servers, if and when it is necessary in order to get what I want to do to work. Why should I reinvent the wheel just because though?

That, and whenever you break the "everything is installed and updated by the system's one package manager" model, which is one of Linux's greatest advantages, you wind up with an unmanageable mix of statically installed bullshit, especially if there are multiple static dependencies, and it becomes that much easier to miss a crucial security patch or something like that.

Generally, if a software package is not available in the official distribution repos, or at the very least in a trusted large project's PPA (not just any random idiot's PPA, but a verifiable source tied to a trustworthy project or company), then I am generally of the opinion that it is better to just pass. You wind up finding yourself in a Windows-like environment, with a ton of unmanaged software installed and not getting updated, or - even worse - trying to run its own update algorithms separate from the system's main package manager/update service.

I've noticed that a lot of Enterprise IT types don't mind doing this, probably because they are used to the shitty Windows experience, but as soon as you wander away from the "one package manager that installs and updates 100% of the software on a system" you are creating a mess for yourself, no matter how good you are.

Again, you are "baby bear". Old ftw!... well.. except for "my sacred cow", which needs to be "new". Once you diverge off the distro support path, you have something unique. We'll call it the "baby bear distro". It's "just right" for you. It does come with additional responsibility once you choose to diverge, as you are now your own support, at least integration wise.

I cut my teeth back in the day when you needed to compile your kernels all the time to change configurations.

I tried Suse back in 1993, but at that time Linux wasn't for me. I was a kid who liked games, and Suse couldn't do that for me in 1993.

I came back to it in the early 2000's with Red Hat but didn't like it; particularly the RPM package manager drove me nuts. I moved to Gentoo in ~2001, using the fat binder of a manual to bootstrap my systems line by line during install, and loved it at the time. The Portage package manager with custom CFLAGS for my hardware appealed to me. It worked, but it was high maintenance, which got to be a bit annoying to keep up with once I graduated and got out in the real world and had to work for a living.

Once I tried Ubuntu in ~2007, I liked how things that I used to have to troubleshoot "just worked", and switched to that. But I got pissed off when they went Unity, and I hated that interface after being used to a Gnome 2-like interface for years, so I switched to Mint.

Back when I switched to Mint in 2011 it was not a beginner distribution. It was more of a refuge for Ubuntu users who hated Unity. Over time it has unfortunately changed though.

Yes, Mint today has a lot of crutches, but you don't have to use them. I actually disable most of the GUI setup stuff, as I feel it just gets in the way, preferring to use the command line. My biggest annoyance with Mint is that they target beginners to Linux and are trying to make everything GUI managed, which IMHO dumbs things down and makes them less usable. In the end this may be what drives me away from Mint. It's just nice to not have to bother messing around with the desktop environment, and have it work and look nice out of the box.

All of my servers are Debian, Ubuntu Server edition or FreeBSD, none of them with window managers or desktop environments installed.

There really is nothing "Baby Bear" about what I do at all. If anything, that's kind of the way I feel about RHEL and CentOS, judging by many of the stupid questions I see from the crowds who use them. It feels like the enterprise releases dumb things down so that even Windows server admins can use them. I'm not exactly impressed by the levels of understanding most "enterprise" folks have when it comes to managing *nix servers.

These are the same folks who usually wind up using "systems in a box" like SonicWall firewall/VPNs rather than something like pfSense or doing it by hand in *nix and spinning up a BSD OpenVPN server, or who wind up using QNAP garbage instead of a custom file server. This is the real "baby bear" bullshit right here, and that's what enterprise types do all day, every day, because they don't want to manage a real server.

Once I tire of Mint's downward spiral, I'll likely give Debian a try on the desktop, probably still with the Cinnamon desktop, as it is nice and sleek and works the same way Gnome 2 did, which honestly was the pinnacle of Linux desktops.

Debian is nice on my servers, and follows a decent configuration and layout mindset that makes sense. Nothing beats apt as a package manager, so I am unwilling to move away from it, which limits my options somewhat. It will likely suffer from the absence of so much of the work that the Ubuntu team does in patching their kernels, but I could always use Ubuntu kernels if I really wanted to.

I don't fear doing off-the-beaten-track advanced stuff on my systems. I do it all the time. I just don't want to reinvent the wheel. Mint is appealing because it comes with the desktop environment set up perfectly out of the box, and I don't need to spend hours setting it up to my liking. I could just as easily run any number of other distributions (preferably apt based ones), but why would I waste the time if I don't have to? I'll put in the effort when it is necessary, not when it isn't.
 
Bleeding edge is always irresponsible.

Always always always wait for something to be thoroughly tested before implementing.

Bleeding edge stuff should really be considered only for beta testers and debuggers.
Not sure I understand. Aren't the distros you quoted, Fedora and openSUSE, considered the testbeds where Red Hat and SUSE test software before repackaging it as stable versions of RHEL and SLES? What's irresponsible about that?
 
LTS releases still receive security and hardware updates even though they run an older kernel, with kernel updates every second point release. I've never had a security issue as a result of running an older kernel, and I've never had a hardware issue. Having said that, I do run Nvidia hardware/drivers, and I admit that with AMD hardware the situation may be different.

When people complain about Nvidia drivers, it's usually a result of one of three things:

  1. Running bleeding edge kernels.
  2. Installing Nvidia drivers using the .sh script, therefore bypassing the package manager altogether (see the example after this list).
  3. Laptops with switchable graphics, something that doesn't always work perfectly under Windows either.
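
On an Ubuntu/Mint base, letting the package manager own the driver looks roughly like this (the exact driver series available depends on your release):

  sudo ubuntu-drivers autoinstall    # picks and installs the recommended packaged driver
  # or pin a specific packaged series explicitly:
  sudo apt install nvidia-driver-525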
 
It's frustrating watching people argue about stuff like this because of a minor (but possibly destructive) bug that hit a few people and was immediately reverted. The number of people hit by this was minuscule. It wasn't even a bug that affected every Intel based setup.

Yet it suddenly makes people like me who run Arch Linux (or any rolling distro) "irresponsible" which is beyond laughable. Am I irresponsible for adding this to my kernel arguments? https://make-linux-fast-again.com/ Short answer: No I'm not. I'm not at risk for a speculative attack so why cripple my performance?
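
For the curious, on recent kernels the long list of flags on that site collapses into roughly a single switch; a sketch of what that looks like in /etc/default/grub (only sensible if, as above, you've decided speculative-execution attacks aren't in your threat model):

  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
  # then regenerate the bootloader config:
  sudo update-grub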

I understand the risks of running Arch Linux on my computer. I also understand the benefits (my computer is 100% AMD based and recent kernel upgrades have increased performance). I also understand that I probably don't want a rolling release to be a server. That's why all my servers are on Ubuntu LTS.

But yeah I guess I'm irresponsible. :rolleyes:
 
I don't think of those running bleeding edge kernels are irresponsible, I simply get equally frustrated when the ignorant few complain that every problem is caused by something other than the fact that they're running bleeding edge kernels.

But if you're aware of the risks and don't clog up r/linuxgaming with arguments about how everything should be FOSS, all the power to you.
 