My 2018 Linux Test

Every movie I like enough to own tends to be at least 20 years old. Makes it easy... I already own the movies I care to own. So streaming is fine for all the crap they make these days. Honestly I pirate a ton as well... sue me, ah never mind, I'm Canadian. lol Really though, streaming to me is for the serials more than movies anyway. Very few modern movies are worth a 2-hour investment.

I know a lot of people love Plex... but I have always used:
http://www.universalmediaserver.com/
It's been my go-to for a long time now. I have no idea if it's in the Ubuntu repos. Last time I messed with Mint I had to install it from source, which works but of course means you have to update by hand as well. I'm a Manjaro/Antergos/Arch user; we have a git-pulling AUR entry for it.
Great software though... simple, streams FLAC and pretty much every video format with no issues, and even converts things like H.265 on the fly for devices that don't have native support, like the PS3 I still have around.
 
No, I'm not stuck on any one browser. However, in this case, everything I've read says Chrome doesn't solve the HBO Go problem. I'm pretty sure the problem is HBO Go uses Flash to stream videos.

I also read that Firefox uses Google's DRM tech, so if it was a DRM issue, neither would work. Don't know how true that is. You know the internet. lol

It's possible they do. It's my understanding Google uses http://www.widevine.com/
If you want to use Chrome on Linux... I normally suggest installing Chromium instead, the open-source project Chrome is based on, and then installing Widevine. Again, I'm not sure if a Widevine DRM package is in the Ubuntu repos or not; check and see if their repos have Widevine first, I guess. I know for me Chromium + Widevine covers off Netflix and anything else I have used... HBO using Flash, though, may be a massive pain in the rear. (Really though, not sure what HBO is/was thinking using Flash when everyone is trying to dump it.)
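I honestly don't remember the exact Ubuntu package names off the top of my head, so treat this as a sketch of the hunt rather than gospel:

apt search chromium      # the browser; chromium-browser was the usual Ubuntu package name back then
apt search widevine      # see if your release packages the Widevine CDM at all
sudo apt install chromium-browser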
 
Wow, I remember using that way back in the day. I didn't realize it was still in active development. I didn't use it for very long, as I was an early user of XBMC (Kodi) back on my original Xbox, but cool to see that it's still around. Does it provide indexing, or does it just set up a browseable share?
 
Almost all browsers going forward are going to disable Flash entirely. I have yet to use HBO Go, but I refrain from using anything Flash-related. I use both Chrome and the open-source Chromium on my system. I also use Iceweasel from time to time.
 
If there was an option to register a default, that would probably solve the problem. Also, not having a way to manage which output device is used by default is super annoying. I'm just glad I remembered the solution I found years ago, because having to start from scratch again trying to figure this one out would have been mind-numbing.

There is a way to set a default sound card. It's just not as user-friendly as it should be.

It does, however, require you to edit .conf files. (I know we convinced you to try Linux again by saying things are easier now... OK, multiple sound cards can still be a bit annoying.)

I am not sure which .conf files are best to create/edit in Ubuntu; the distros do vary a bit in their sound setups... so hit the Google machine a bit to make sure this sounds right for Ubuntu.

cat /proc/asound/modules
should list your sound devices.
Then edit or create (you'll need root):
sudo nano /etc/modprobe.d/alsa-base.conf

Add the names of your cards to the file (or edit the index numbers).

You should see something like
options snd_usb_audio index=-2
options snd_hda_intel index=-1

I know it doesn't seem logical, but if I remember the semantics right, index=-2 means "never auto-assign this card to the first slot"... so give -2 to the cards you DON'T want as the default, and leave your main card at -1 (auto).

It's been a while since I actually messed with multiple sound cards, but I'm pretty sure that should work. The nuclear option is to simply blacklist the second device if you never plan to use it.
sudo nano /etc/modprobe.d/sound.blacklist.conf
and add
blacklist NAMEofCARD
using the module name reported by cat /proc/asound/modules
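After a reboot, a quick sanity check with the standard ALSA tools will tell you which card actually ended up as the default:

cat /proc/asound/cards    # cards listed in index order; index 0 is what most apps treat as the default
aplay -l                  # playback devices as ALSA sees them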
 
Wow, I remember using that way back in the day. I didn't realize it was still in active development. I didn't use it for very long, as I was an early user of XBMC (Kodi) back on my original Xbox, but cool to see that it's still around. Does it provide indexing, or does it just set up a browseable share?

I'm really not sure... I use it as a pretty simple share, to be honest. I believe the answer is yes. I know it does everything I care about and just works. I used to mess with renderer configs and stuff, but honestly the last few years I pretty much just use the default install settings and go... I've never had it fail on a file format or anything, so I haven't had to dig deeper in years. It does remember where I left off in video files... my directory structure is logical to me, so I never messed much with search functions etc., but I do believe they are supported. It also lets me browse compressed archives, which is not a big deal but cool anyway. lol

http://www.universalmediaserver.com/comparison/
 
PulseAudio Volume Control gives me all the control I need. Thx for the info though. :)
 
No doubt, for the most part, even with multiple sound devices the basic controls get the job done. I understand your issue though.

Believe it or not, all this talk about editing modprobe.d files and blacklists etc. stops sounding so cryptic after a while of using Linux. It starts to become more logical as you understand what something like modprobe is, what cat is... what /proc is (not a real file system, but a view of the system hardware and software state in the kernel).
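For instance, these two lines are about as gentle an introduction as it gets (standard tools, nothing distro-specific):

lsmod | grep snd          # list the loaded sound modules -- the stuff modprobe manages
cat /proc/asound/cards    # read kernel state through the virtual /proc filesystem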

I know new Linux users look at every distro as a completely different OS from the others. If you use Linux long enough, you see how they are all connected, use the same subsystems, and in general keep .conf files in the same locations, and things get easier. In the same way that after using Windows for years people know how to pop open regedit, msconfig, etc... it can all be just as odd in its own way; most people simply have more XP with it.
 
The alsaconf program should be able to set up the modules for you, but many distributions don't seem to ship it any more.
 
No doubt, for the most part, even with multiple sound devices the basic controls get the job done. I understand your issue though.

Believe it or not, all this talk about editing modprobe.d files and blacklists etc. stops sounding so cryptic after a while of using Linux. It starts to become more logical as you understand what something like modprobe is, what cat is... what /proc is (not a real file system, but a view of the system hardware and software state in the kernel).

I know new Linux users look at every distro as a completely different OS from the others. If you use Linux long enough, you see how they are all connected, use the same subsystems, and in general keep .conf files in the same locations, and things get easier. In the same way that after using Windows for years people know how to pop open regedit, msconfig, etc... it can all be just as odd in its own way; most people simply have more XP with it.
I agree with this so much. In the almost two years since my switch, the stuff that used to never make sense now does. The biggest thing for me, though, was the realization that every distro is pretty much the same under the hood. They all have the same file structure, same base utilities, etc. What really sets them apart is two things in my mind: project philosophies and package management. That's really where the difference between distros is found. Once you're poking around at the base system, everything is the same for the most part.
 
Big split that I've encountered is between the RHEL-alikes and the Debian-alikes for Linux, and then of course FreeBSD is its own thing. I've found Ubuntu (Debian) to be the best supported for the kind of homelab stuff I've been messing with, and FreeBSD to be a common base for other stuff like network-appliance and storage-appliance distributions.

Hard part is that my employer is hooked on RHEL/CentOS, and I mostly hate it :D.
 
BSD (Berkeley Software Distribution) was Berkeley's version of Unix that first shipped in 1977. FreeBSD is an updated open-source descendant of that OS. It's like Linux... and it has adopted some of the same subsystems (using the same DEs etc.), which makes it look a lot like Linux, but at its core it is a completely different kernel. You find it in appliances and NAS systems etc. because it has a very different license. Basically, the Berkeley license BSD is open-sourced under allows companies to take the BSD kernel and systems, change them however they want, and then close them if they wish. Whereas the Linux license requires any company that adds their own code to share it as well.

That difference is why you find BSD used in commercial devices... such as the PlayStation 3 and 4.

As for Red Hat... yeah, it's an American company, which matters to some folks in some markets. They do offer pretty great support. Red Hat also partners with a bunch of server manufacturers such as HP, IBM and Dell, although those generally also offer Ubuntu Server, so perhaps that isn't a big deal. RHEL has also enjoyed better enterprise-class software support (although Ubuntu is making headway there)... things like IBM Tivoli Storage Manager used to be much easier to deal with on RHEL. (Granted, these days IBM has .rpm and .deb installers.) I understand why you're not a fan, though; not my favorite distro either... but I understand why there is love for it. It's had one of the best sales forces behind it for longer than its competition, and Red Hat isn't a bad company to deal with.
 
Mostly, I don't like the 'look and feel' of RHEL and CentOS, and I've found stuff that runs well on Ubuntu and not so well on CentOS for home-lab stuff (Ubiquiti stuff, for one).

I also see why we use them for our production systems.
 
I think the main difference between RHEL/CentOS and Debian derivatives is that RHEL and CentOS are still based on older kernels. According to DistroWatch, the latest version of RHEL/CentOS is still running the 3.10 kernel, and they only switched from upstart to systemd in the latest version as well. If I had to guess, the issues you experience with them are more due to their use of older versions for stability. They will never be bleeding edge, even less so than Debian or an Ubuntu LTS. However, I don't think the issue is with RPM-based distros in and of themselves. For instance, Fedora uses much more up-to-date software than RHEL and CentOS, as it's basically the test bed for those distros.

What I've learned is that all distros are pretty similar underneath, and usually the choice of distro for me comes down to two things: personal preference, and whether I want rock-solid stability or bleeding-edge software. As an example, in my house I have 2 Linux servers, a Synology NAS, my desktop, and my laptop (my wife still prefers Windows on her machines). The NAS of course runs Synology's DSM software, which is based on whatever it's based on.

My 2 servers are an RPi3 acting as an OpenVPN server and an MSI Cubi mini PC acting as my home TV/media server. My OVPN server is running Raspbian Stretch Lite, and my media server is running Ubuntu MATE 16.04.1 with the display manager disabled. I used MATE to make the initial config easier, and then disabled it once everything was up and running, essentially turning it into Ubuntu Server 16.04.1. Neither of these servers runs the latest and greatest builds of software packages, because I need rock-solid stability. I don't care if they're on the latest kernel; I care that they are reliable and reasonably up to date. They have the latest security patches, but they don't need to be bleeding edge.
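(For anyone curious, disabling the display manager on a setup like that is a one-liner. lightdm is my assumption for Ubuntu MATE 16.04; swap in whatever DM your install actually uses:)

sudo systemctl disable lightdm           # stop the login screen at boot; the desktop stays installed
sudo systemctl set-default multi-user.target   # or: boot straight to a text console regardless of DM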

My laptop and desktop, on the other hand, I'm trying to keep as up to date as possible without going rolling. Although I'm seriously considering going rolling again, because I'd rather not deal with point releases and I want the Cinnamon desktop; outside of Linux Mint, Manjaro Cinnamon has the best implementation of that desktop in my opinion. For these machines, if something breaks, only I am affected. If the TV server craps out due to an unstable update, then my wife can't watch TV anymore (it's a whole-home DVR for OTA broadcast via antenna).

The point of all this is that CentOS and RHEL would probably be just fine for my servers, because stability is a primary concern. The only things that keep me on the Debian side are a personal preference for Debian (never been a fan of RPM) and that I prefer a more up-to-date kernel than 3.10. Ultimately, I've found very few instances where software or services don't work the same on the main distributions. Sure, some package-management stuff is different, but under the hood most distros use the same kernel (with tweaks, of course) and the same init system, and once packages are installed they are the same (besides versioning).
 
I do believe our servers are configured very similarly with the same Ubuntu MATE 16.04 and DM disabled. What do they say about great minds? :D
 
I agree with this so much. In the almost two years since my switch, the stuff that used to never make sense now does. The biggest thing for me, though, was the realization that every distro is pretty much the same under the hood. They all have the same file structure, same base utilities, etc. What really sets them apart is two things in my mind: project philosophies and package management. That's really where the difference between distros is found. Once you're poking around at the base system, everything is the same for the most part.

What annoys me is how there's no standard on where some configurations are located. Depending on the distro/application, it may be in /etc or /etc/app or /user/conf or... It's mega annoying when switching distros, and sometimes there are even redundant conf files at locations which are not even read lol. Those always make for a fun time when you change the configuration and nothing happens :D
 
It is annoying when some software projects step outside the FHS rules. Having said that, for the most part, system stuff that is supposed to go in /etc does... just a few bad actors that way. I agree, though, it's super annoying.

There are other, greyer areas that annoy me, like ~/.config/. A ton of projects still just throw their conf files into their own dot directories instead. Stuff like ~/.dosbox drives me nuts. It's not technically breaking the FHS; I just find it annoying seeing so many dot folders in my /home.
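Easy to see the mess for yourself; these are plain shell one-liners:

ls -ad ~/.??*     # every hidden dot file/folder apps have scattered across $HOME
ls ~/.config      # where the XDG spec says per-user config is supposed to live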
 
I think the main difference between RHEL/CentOS and Debian derivatives is that RHEL and CentOS are still based on older kernels. According to DistroWatch, the latest version of RHEL/CentOS is still running the 3.10 kernel, and they only switched from upstart to systemd in the latest version as well.

https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Kernel_backporting

Red Hat does but doesn't use old kernels. Red Hat uses custom kernels. They backport security fixes and new features into the bones of the older kernel. This gives them very fine-grained control over what their kernel is doing and how it interacts with other packages. It's likely the biggest reason RHEL is quite possibly the most rock-solid Linux distro going. I know Ubuntu is gaining popularity... rightfully so. But kernel backporting is one of the biggest reasons RHEL gets some of the massive big-system wins it gets. For rock solid, it's hard to beat RHEL.
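You can actually see this firsthand on a RHEL/CentOS box: the version string never moves, but the changelog is full of backported fixes. The rpm query flags below are standard; the version string is just an example of the sort of thing RHEL 7 reports:

uname -r                                       # e.g. 3.10.0-862.el7.x86_64 -- still "3.10"
rpm -q --changelog kernel | grep CVE- | head   # yet it's stuffed with backported security fixes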
 
Doesn't Ubuntu do the same thing re: rolling newer features into older kernels? I wouldn't know; I'm currently running 4.15.18.
 
Ubuntu does offer backporting for specific packages... but it's my understanding that they don't backport kernels. They release the official security/bug-fix versions, like 4.17.2 -> 4.17.3, etc. Red Hat, by contrast, will take all the pull requests Linus gets for 4.17.4, for instance... they go through them, and anything security-wise gets pulled and backported; anything feature-wise they test the heck out of and then decide whether to bother backporting it or not (most times they don't). If you are running a 1000-location bank and you need X or Y custom piece of software to never fail... that is what you want. :)

To be honest, RHEL and Ubuntu are really not going after the same customers at all. I know you look around and see a lot of Ubuntu servers, but all the big-iron stuff is going to be running RHEL. (Keep in mind a lot of servers running RHEL or some spin of RHEL are not reporting anything about what they are running lol.) RHEL is more reliable and more secure. SELinux is an NSA/Red Hat project... and the foundation of hardened Linux kernels. RHEL is also what you're going to find running IBM servers... Oracle has their own RHEL spin... and Scientific Linux, which runs things like CERN's Large Hadron Collider, is RHEL-based.

I'm not saying Ubuntu is bad... for a lot of server setups it's easier to work with, as their server version tends to be much more like their desktop version. However, when it comes to banks, data centers, government servers... Ubuntu is still, for the most part, seen as a desktop distro. They do have a MAC system like SELinux in AppArmor. In general, I believe it's accepted that SELinux is more secure... although AppArmor is a lot easier to use. AppArmor confines by file path, whereas SELinux labels the inode directly. This means AppArmor is file-system agnostic, which is nice... but it does make it less secure, as a hard link to the same inode could technically bypass the restriction.

Anyway, I would say most admins are going to say Ubuntu is easier to deal with... it's cheaper to support outside of support contracts, most Linux people can grok Ubuntu server setups as they are very much what they are used to seeing, and things like AppArmor are much easier to deal with. For large projects with lots of sensitive data, though... large banks... the NSA... CERN, etc., RHEL is the best of the best.
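If you ever want to check which MAC system a box is actually enforcing, both ship standard status tools (assuming the userspace packages are installed):

getenforce        # SELinux: prints Enforcing / Permissive / Disabled
sudo aa-status    # AppArmor: lists loaded profiles and their modes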
 
I'm fairly certain Ubuntu does the same thing? Here's the Software & Updates settings on my own PC:

[screenshot: the Software & Updates window showing the update settings]
 
https://help.ubuntu.com/community/UbuntuBackports
https://packages.ubuntu.com/xenial-backports/
https://launchpad.net/ubp

Backports in Ubuntu means software, not kernel stuff. They are basically saying: if you're running an older LTS version of Ubuntu, you can enable the backports repository, which lets you install newer software packages even though your stable LTS shipped with different versions of that software.

As an example, imagine you have an Ubuntu server running 16.04 LTS.
16.04 shipped with Python 2.7.
If you want, you can enable the xenial (16.04's codename) backports and install a newer Python 3.x that Ubuntu shipped with a later non-LTS release.
Now if you need the latest Python 3.7... Ubuntu hasn't shipped it at all yet, so your options there include building it yourself or using a PPA.

(*** Note that Ubuntu backports are also NOT supported by Ubuntu at all... they are 100% community-driven.)
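Mechanically, pulling something from backports looks roughly like this — the repo line is illustrative, and on most installs it's already in /etc/apt/sources.list, just commented out:

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse"
sudo apt update
sudo apt -t xenial-backports install <package>   # backports are never pulled in by default; you must ask explicitly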

Red Hat backports include software and the kernel itself... many of their users have a ton of custom software, and keeping the kernel mostly static for as long as possible saves them time and, more importantly, money.

https://access.redhat.com/security/updates/backporting/?sc_cid=3093
Red Hat, unlike Ubuntu, fully supports their backports. They are not community-driven; they are official.

Red Hat keeps a list of all the stuff their systems are currently vulnerable to... or tests and explains how they are not affected by CVE entries.
https://access.redhat.com/security/security-updates/#/cve
This way admins can easily (somewhat easily) check to ensure they are protected against X or Y. One of the main issues with Red Hat backporting is false positives... someone may look and go "oh no, we have version xxxx.001 and there is a big vulnerability in the news about that." Good chance they are actually protected via a backport.

Anyway, my point is: "backporting" is a word the two of them use in almost completely different ways. For Ubuntu it simply means newer versions of software than what shipped with an LTS. Red Hat is actually patching software and the kernel without changing the version numbers at all.

I get why so many people can't stand RHEL (Red Hat backporting is a polarizing subject)... I'm not saying their way is better for everyone. It is, however, very common and IMO preferable for customers large enough to have a lot of internal custom software. Updating those custom packages to use new libraries and frameworks, and even specific kernel stuff, for no real reason can be expensive... paying programmers to roll with every version of X or Y include, making new binaries for perfectly working software because some library package changed something, is silly. Customers like that like that they can install RHEL 7... and know their internal packages won't need any overhauls (or worse) until the next major version of RHEL hits. RHEL 7 is 4 years old and is scheduled to be replaced in late 2019. Plenty of Red Hat's big-iron customers have software packages that have been running on RHEL 7 servers the entire time with zero downtime... even though the servers are 100% patched security-wise. (OK, well, since May-June 2014 anyway, when Red Hat got kpatch mainlined in the kernel :) I'm sure before that they had to deal with maintenance downtime for kernel updates. Also, I believe the IBM POWER-based machines don't support kpatch at this point.)

A note on live patching: yes, Ubuntu implemented a live update service as well when kernel 4.0 hit. That is when Red Hat upstreamed kpatch and SUSE was working on kGraft at the same time; Ubuntu's tech is based on that work. Ubuntu is the only company that lets users use their service for free, for up to 3 machines anyway.
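For completeness, enrolling in Ubuntu's Livepatch service is just a couple of commands, as I remember it (the token comes from their website):

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-token>
canonical-livepatch status        # shows whether live kernel fixes are applied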
 
OK, I'll take your word for it. As stated, I don't use official kernels, so it's really something I've never actually looked into. ;)
 
It's not something that matters to regular users, or even most company servers for that matter. It's more of a data-center-type concern, where they are running very custom software they don't want to rebuild every time an include library bumps a version, yet at the same time they can't afford to be running anything outdated security-wise. Red Hat really goes after those types of customers, whereas the Ubuntu guys seem to be going after the more medium-size server farms.
 
So I've spent the past few days trying to set up a Plex server on the Linux box I've been playing around with. It took forever to get Plex to be able to read the hard drive I put some media on. Even after I got the permissions set correctly on the drive, I had to completely remove every bit of Plex from the machine with command-line actions in the terminal and reinstall Plex for it to finally find my media.

#$!@#$%!$#%
 
Really? My Plex server is an Ubuntu box. I have all of my media stored on a NAS. I just made sure the media folder was mounted at boot and added the libraries just fine.
 
That's weird. I have my Plex on Ubuntu 16.04 and it was a breeze to configure and set up. I even had to add a drive at one point, and it was just as easy to add the new media to my library. Now the question is whether it will all function after I upgrade to 18.04.1, which was released last week. ;)
 
Yeah, I have no idea what happened to cause the problems. I had completely wiped the media drive and formatted it from within Ubuntu. I created the Media folder in Ubuntu, and then installed Plex. It just wouldn't find the media files, no matter what I did, for days.

Thankfully I got it working now and I can access it from my local network on a Roku. I'm probably going to take a couple of days off from working on it before I try and figure out how to access it from outside the local network.

It does seem to be a pretty nice piece of software. A lot better than anything I've seen for HTPC before it.
 
Unix file-permissions inheritance when copying across filesystems is sufficiently different from Windows to cause these problems. For these kinds of servers, it's easiest to put your files on their own partition/drive and then mount it using the uid/gid options.
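One caveat: the uid/gid mount options only apply to filesystems that don't store Unix ownership themselves (NTFS, FAT); on a native filesystem like ext4 you set ownership once with chown instead. A hypothetical example of each (the UUID, mount point, and user are made up):

# /etc/fstab entry for an NTFS drive: ownership is faked at mount time
UUID=XXXX-XXXX  /media/storage  ntfs-3g  defaults,uid=1000,gid=1000  0  0

# ext4 stores real ownership, so chown the mounted tree once instead
sudo chown -R youruser:youruser /media/storage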
 
Well, that's exactly what I did. The storage drive is in the Linux box, it was wiped and formatted within Linux, and the file structure was created within Linux.
 
It's a single port you need to forward. It's really easy to make accessible from elsewhere.
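Plex listens on TCP 32400 by default, so that's the one to forward on the router. From the server itself you can verify it's up before touching anything else:

ss -tlnp | grep 32400                 # is Plex actually listening?
curl -I http://localhost:32400/web    # does the web UI answer?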
 
Is your media stored on NTFS partitions/drives? If so, that's your issue. Remember, NTFS is not native to Linux.
 
I'm running 18.04 and Plex; it was easier for me to set up than it was on 16.04.

I think I spend more time on my Ubuntu box than my Win 10 boxes anymore.
 
No. I'm pretty sure I did ext4. It's definitely ext3 or ext4, at any rate.
Sounds weird. Are you sure you had the group/user settings correct? For beginners it can be most frustrating when they create folders or shares as root and then bang their heads against walls when their regular users can't access the share, because it's owned and accessible by root and not the user.

For ex-Windows users it's an easy trap, since this doesn't happen on Windows, where the default user created during setup acts more or less as root. So even if you're an experienced Windows user, it's not apparent that on Linux you have to explicitly grant users or groups access to files, folders and shares. It is possible that Plex creates its own user/group during setup, and if you don't grant that user access to the media files, it won't be able to read them.

I haven't used Plex myself, but I've seen this behaviour a lot. For example, if you set up a Tomcat server, it creates a tomcat user and group, which is then used to tell the system what Tomcat can access.
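If that's what's going on, the usual fix is to hand the service's group read access to the media tree. A sketch, assuming the package really does create a plex user and the media lives under /media/storage (both are my assumptions):

sudo chgrp -R plex /media/storage    # hand the tree to the plex group
sudo chmod -R g+rX /media/storage    # group can read files and enter directories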
 
I had looked into whether 'plex' was a user that needed to be granted permission. I did not format and partition the drive from root access; it was straight through the GUI. So if it used root to do it, I don't know how I would have known.

Linux permissions are one of the things that drive me crazy. You can be sitting there looking at the permissions for a given resource, and it's telling you everything is right, but it still doesn't work like it should.

I agree it most likely was some kind of permission issue, given that after I reset all of the permissions on the drive, removed Plex, rebooted, and reinstalled Plex, it finally recognized the correct permissions and just started working.
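For that "the permissions look right but access still fails" situation, namei is worth knowing: it walks every component of a path and shows exactly where traversal breaks (a missing execute bit on a parent directory is the classic gotcha). The path below is illustrative:

namei -l /media/storage/Media    # owner, group, and mode for each directory down the path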
 
Now with Plex comes the fight for more storage space. I filled up a terabyte drive in a couple of days.

28 MKV files wiped out 908GB of storage.

D:
 
it was straight through the GUI.

I recommend learning away from bad Windows habits and using the CLI instead. Everything is so much clearer that way, when you aren't using a dumbed-down interface that masks what really happens underneath.
 
This is actually very true. People fear terminals. Even as someone who grew up with text-based commands rather than a GUI, I used to be against using a terminal in my early days as a PC user. In the many years since adopting Linux as my main OS of choice, I've come to understand that a GUI is necessary for everyday users and the simple tasks they need to accomplish, but for the more serious user the terminal really is a vastly more efficient way of doing things.
 
Yes, the CLI is daunting at first, but it's the best way once you learn the proper syntax. I had played around with Emby and learned about chmod to help set access permissions. You've gotta tinker and fool around with the OS a bit. It's the only way to learn.
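chmod's symbolic mode is the friendliest place to start; a couple of illustrative examples (the paths are made up):

chmod -R u+rwX,g+rX,o-rwx /media/storage   # owner: full; group: read + traverse dirs; others: nothing
chmod 750 backup.sh                        # the same idea in octal: rwx r-x ---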
 