Which distro is for me?

PornoSatan

I'm going to move my 'low end' laptop to a Linux distro and wanted to hear from experienced users which would be the best option. It's an Inspiron 500m with a 1.3 GHz Pentium M (single core, 32-bit, etc.) and 1 GB of RAM. I have used Linux before, but it was Slackware back in the late 90s. During that time I became semi-familiar with basic Linux concepts, command lines, scripts and so on. That said, I have been out of the Linux loop for a long time. What distro would be right for me?

I would consider myself a power user in Windows, with an obsession for cleanliness, speed, and responsiveness. I'm a big fan of Mark Russinovich and his Sysinternals tools that are small, elegant and powerful, like AutoRuns, Process Explorer, and Process Monitor. I like to keep records of what is going on and why; incidentally, because of this I haven't had to reformat in quite a few years, since my computer never becomes bloated and I keep track of things coming and going. I really don't like a lot of things loading on boot unless I know specifically why. I want a Linux OS for my laptop that I could do something similar with.

Here's how I break them down after reading around a bit (comments are welcome):

Ubuntu - This seems nice; it's very popular, comes with quite a bit of software pre-installed, and has a nice software repository. Would the Unity DE work on this laptop? Not sure. I did read up on Kubuntu/Xubuntu/Lubuntu as other options and they seem interesting too. One thing I don't like: with all the stuff it ships with, it seems like it would be hard to keep track of what is actually going on, i.e. "Why is process xxx with PID 193 running?" or why something is installed, i.e. "Oh, that's required for such and such to function."

Linux Mint - I like the desktop environment on this; Cinnamon seems like a traditional Windows-esque UI. Although it comes with a TON of software pre-installed as well, and I'm not sure whether I'd want potentially vulnerable apps or services like Java already installed. I could probably uninstall them.

Arch Linux - This one seems crazy. On the one hand I love how 'fresh' an install you can get with it: if I installed KDE or something, I know that when I reboot I'll get dropped back at the command prompt because I didn't explicitly state that the KDE environment should run on boot. On the other hand, I'm not sure whether, after I'm done installing it, I'll be able to properly get the WiFi drivers installed, as well as GPU acceleration etc., just through the command line. I'm sure a bit of Googling would help though.

Puppy Linux - Seems kinda interesting.

Is there something that has the "barebones" of Arch, except comes with WiFi drivers and has a software repository similar to Ubuntu's? Essentially I want to boot into a blank OS that already has the basic 'essentials' such as hardware drivers and lets me decide what software I want. I want to (re)learn the command line in order to truly progress with Linux, and also be able to run a window manager or desktop environment if I want to, for general web browsing or other terminal-related things. I'll probably be mostly in the GUI to be honest, since I can have terminals open there anyway.

Almost all the distros look interesting. :p What should I go with?
 
Ubuntu and Fedora sound like good starting points. Given what you want, I wouldn't recommend Gentoo, but it's a good learning experience.
 
With hardware like that I'd recommend Xubuntu, Lubuntu, Linux Mint (MATE edition), Crunchbang, or some other distribution using a lightweight desktop environment. If you want to try out Arch without the barebones approach to get your feet wet with Pacman/systemd, use Manjaro.
 
With hardware like that I'd recommend Xubuntu, Lubuntu, Linux Mint (MATE edition), Crunchbang, or some other distribution using a lightweight desktop environment. If you want to try out Arch without the barebones approach to get your feet wet with Pacman/systemd, use Manjaro.

+1 for Mint. It runs well and seems to have good drivers (or detects most stuff so far).
 
Arch Linux - This one seems crazy. On the one hand I love how 'fresh' an install you can get with it: if I installed KDE or something, I know that when I reboot I'll get dropped back at the command prompt because I didn't explicitly state that the KDE environment should run on boot. On the other hand, I'm not sure whether, after I'm done installing it, I'll be able to properly get the WiFi drivers installed, as well as GPU acceleration etc., just through the command line. I'm sure a bit of Googling would help though.

Arch does not install anything you do not tell it to and it does nothing that you did not ask it to do. While it has a learning curve, it also has the ArchWiki which has information on almost every GNU/Linux related subject. Even users of other distributions frequently find themselves consulting it. You may wish to read The Arch Way to see if that fits what you are looking for as well as the comparisons to other distributions. The Beginner's Guide is also useful.

With GNU/Linux, almost all drivers are part of the kernel (with the exception of the proprietary AMD and nVidia graphics drivers) and, as such, you don't have to download most drivers the way you do with Windows. For WiFi and Bluetooth, however, you may need to install a firmware package, which in Arch is as simple as typing pacman -S <firmware-package>. Graphics drivers work similarly (unless you are using Intel graphics, as those come bundled in). Arch has a package repository like Ubuntu, and it also has the Arch User Repository, where users can submit their own package builds (scripts used to download, compile and then assemble a package; there are tools like yaourt to automate this).
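
For example, pulling in firmware from the repos and building something from the AUR by hand looks roughly like this (the package and directory names here are only placeholders; check the wiki for what your hardware actually needs):

Code:
# firmware from the official repositories (linux-firmware covers most WiFi chips)
pacman -S linux-firmware

# AUR package built by hand: grab the package build into a directory, then
# makepkg compiles it and, with -si, installs it along with its dependencies
cd ~/builds/some-aur-package    # hypothetical directory containing a PKGBUILD
makepkg -si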
 
OP here. I was actually messing around with Tiny Core a bit and it's cool as hell. It's probably a little too light for me at present, but it's pretty close to something I want in the future; it's easier to (re)learn Linux when there aren't 15,000 processes running and you're just dealing with the literal barebones. That said, I still haven't found a distro I'm entirely comfortable installing as a daily desktop. I took a look at openSUSE 12.2, but it seems to use close to 400 MB of RAM just sitting at a blank desktop. I'm not sure if that's from all the daemons running or just the desktop environment, since I only saw a screenshot of the usage in a review.

Would any of the 'buntus or Mint even run on this system? It seems like their system requirements would be too high. What if I just installed them and used a lighter DE/WM?

I might just go with Debian and something like XFCE.
 
I run Xubuntu on everything from my single-core E-240 with 8 GB of RAM to my 1090T with 32 GB of RAM.

It rocks & has a somewhat-sane desktop interface.

ALSO: If you are using Radeon graphics from the 4xxx series or below and want 3D acceleration with the official AMD driver, do NOT install anything newer than Ubuntu 12.04. The reason is that fglrx legacy (needed for pre-5k cards) only supports up to xorg 1.12, and Ubuntu 12.10 uses xorg 1.13.

If you have a newer graphics card (5xxx+) or just want 2D (via the 'radeon' driver), then it doesn't matter.
 
Freebsd "arrives' barebones... you may wish to peruse its forum for the two threads "if there is one thing you wish for that Freebsd would change" "Which is your favorite linux" ... (Not their exact titles, but not hard to find via a forum search, each is multi-page, and the former is ongoing today...)
If you wish to try Arch Linux, you should read all of the threads in its forum about their most recent implementatins VS how it was a couple of years ago, or simply following older guides on the web would run into obstacles probably. (I'd recc. FreeBSD over Arch linux though...)
BTW I run it on equivalent hardware, you'd want to remove debugging sections from the kernel and rebuild it for stability *maybe* within a year or so... (or not.)
 
OP here again. So after a lot of self-debate I went with Debian. I find it sufficiently complex when I want it to be, so I'm not disconnected from what's happening in the background, and when I want to sit in GUI browser mode I can do that as well. I loaded it up with Fluxbox too, for minimal resource usage. Debian didn't actually come with the proprietary firmware package needed for my wireless adapter to function, so I learned about that pretty quickly by looking through the kernel logs and using things like modprobe, lsmod, etc. I threw the firmware on a backup partition so that won't be an issue anymore. Also, I got the source and compiled "SCID vs PC" (chess database) as well as "Stockfish" (chess engine). Compiling from source is strangely satisfying, even if you're not a coder. I might tweak the init process later, since some things are starting that aren't really necessary, so I'm familiarizing myself with the System V style init.
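
For anyone curious, the firmware debugging basically boiled down to a few commands (the module name below is just a placeholder for whatever driver your adapter uses):

Code:
dmesg | grep -i firmware     # the kernel logs exactly which firmware file it failed to load
lsmod                        # see whether the wireless driver module is loaded at all
modprobe <module-name>       # load the driver by hand once the firmware is in place
iwconfig                     # check that the wireless interface finally shows up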

One thing that annoys me though is the number of directories binaries can be in. You got

some in /bin
some in /sbin
some in /usr/bin
some in /usr/sbin
some in /usr/local/bin
some in /usr/local/sbin

Now root has access to all of these, and they're even in his $PATH, so as root you can execute any binary from any of these regardless of where you are in the shell. But a normal user doesn't get $PATH access to /sbin, /usr/sbin and /usr/local/sbin by default (though it's easy to fix, as shown below; I'm assuming it's the default for a reason). This makes sense because they're tools that shouldn't really be run by a normal user anyway, but if that's the case, why can a normal user even execute them? They're all marked rwx--x--x, which means any user can execute them. It just doesn't make sense. Also, why so many directories in the first place? What's wrong with just using /bin and /sbin?
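
The "easy to fix" part is just appending the sbin directories to your own PATH, e.g. in ~/.profile (sketch only, adjust to taste):

Code:
# give a normal user the admin tools on their PATH too (they still can't do much with them)
export PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"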

Anyways, coming from Windows I find the filesystem layout (other than the binary thing) very interesting. Like /proc is interesting. Programs read directly from it to get their info, there's a PID directory for every running process with different kinds of info and such. The log files in /var/log and such, nice.
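
A few of the /proc things I've been poking at, for anyone coming from Process Explorer who wants the rough equivalent (using the current shell's PID here):

Code:
cat /proc/cpuinfo | head       # CPU details, read straight out of the kernel
ls /proc/$$                    # every running process gets a directory like this one
cat /proc/$$/status | head     # name, state, memory usage, etc. for that PID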

I still need to get my mind around where configurations go though. Some are in /etc, some are in your $HOME and hidden via .foo, and some are in /usr/share. It's confusing but I'm sure I'll map things out better.
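
One example that helped me start mapping it out: the ssh client reads a system-wide config plus a per-user dotfile, and the per-user one wins (Debian paths, may differ elsewhere):

Code:
cat /etc/ssh/ssh_config    # system-wide defaults, owned by the package
cat ~/.ssh/config          # per-user overrides, a hidden dotfile in $HOME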

Also, I set up an SSH server so I can access the box from anywhere. I hear botnets scan port 22 quite a lot looking for SSH servers to brute force, which I find amusing. I might set up a honeypot sometime to see what goes on, but that's a project for later. In the meantime, the SSH server is on a non-standard port.
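
For reference, moving it off port 22 was just one line in the server config plus a restart (2222 below is only an example port, pick your own):

Code:
# in /etc/ssh/sshd_config -- change the port sshd listens on
Port 2222

# then restart the daemon (Debian, sysvinit style)
/etc/init.d/ssh restart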

Anyways, just wanted to let you all know what I eventually chose.
 
"locate stockfish" (Its pkg-descr suggests it is from stockfishchess.com)
If you were using FreeBSD, you would
"cd /usr/ports/games/stockfish"
"make install" (or) "portmaster -d -B -P -i -g games/stockfish" && rehash
[/usr/ports/games/scid/ ...]
Binaries in FreeBSD also live in /usr/bin, /bin, etc., but the paths differ from Linux in several cases.
Its Makefile suggests it would install to /usr/local/bin/stockfish ...

To answer the directory question more directly: /usr/local is usually for installed (third-party) programs, while paths without /local/ in them belong to the distribution itself, or in the case of FreeBSD, the operating system.
 
I just installed Linux Mint/Cinnamon on an old laptop (1.4 GHz Celeron, 1.5 GB RAM, 40 GB HD) yesterday. The dual boot with XP was a PITFA but otherwise it went smoothly. After install, it downloaded appx. 250 updates for the OS and all of the apps. I decided to go with Mint because I'm a Linux noob and Mint has more info available. Mint with all the apps took appx. 3.8 GB of hard drive. A clean install of XP with my usual com apps and utilities took 11 GB.

So far, it looks like a good way to continue using my old computers after MS stops supporting XP next year.
 
One thing that annoys me though is the number of directories binaries can be in. You got

some in /bin
some in /sbin
some in /usr/bin
some in /usr/sbin
some in /usr/local/bin
some in /usr/local/sbin

This is actually a historical holdover. The UNIX operating system was originally written for mainframes and minicomputers with limited storage. Originally, UNIX came on two disks: one disk was the / (root) partition for the operating system and programs, and the other was the /usr (user) partition for user files.

As UNIX grew in size, they could no longer fit the operating system on the first disk so they started putting system files on the second disk. User files were thus moved to /usr/home and the rest of the /usr directory was populated with its own bin and lib directories to mirror the directory layout of the root partition.

Typically speaking, the directory structure (as far as storing binaries is concerned) on GNU/Linux goes as follows:

/bin - System binaries that do not require root.
/sbin - System binaries that do require root.
/usr/bin - Application binaries that do not require root.
/usr/sbin - Application binaries that do require root.
/usr/local/bin - User-installed binaries that do not require root.
/usr/local/sbin - User-installed binaries that require root.
/opt - Precompiled binaries (e.g. proprietary software). Every program in here gets its own directory and that directory is expected to contain everything (libraries, etc.) needed to run that program.

You can type man hier for a more extensive list of directories and what they are for.

/usr/local is of little import on GNU/Linux because GNU/Linux does not maintain a strict separation between the base system (what comprises the "core" operating system) and installed applications. On the BSDs and many other traditional *nixes, any application that you install that is not part of the core operating system goes in the /usr/local/ hierarchy along with its configuration files (/usr/local/etc) and startup scripts (/usr/local/etc/rc.d). If /usr/local/ is used at all on GNU/Linux, it is to install programs outside of the package manager.
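
The classic example of that last case is building from a source tarball; the default autotools prefix is /usr/local, which keeps hand-built software out of the package manager's directories (a generic sketch, not specific to any particular program):

Code:
./configure          # defaults to --prefix=/usr/local
make
make install         # installs under /usr/local (binaries in /usr/local/bin)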

A lot of newer GNU/Linux distributions have actually started merging these directories:

/bin -> /usr/bin
/sbin -> /usr/bin
/usr/sbin -> /usr/bin
/lib -> /usr/lib
/lib32 -> /usr/lib32
/lib64 -> /usr/lib
/usr/lib64 -> /usr/lib

Leaving:
/usr/bin
/usr/lib32
/usr/lib

Debian is traditionally very conservative about making such changes, however, so it will probably be a while before Debian merges the different directories.
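
On a distribution that has already done the merge you can see it directly; the old locations are just symlinks into /usr (a quick check, output will obviously vary per system):

Code:
ls -ld /bin /sbin /lib    # on a merged system these show up as symlinks
readlink /bin             # typically prints "usr/bin" once the merge has happened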

Now root has access to all of these, and they're even in his $PATH, so as root you can execute any binary from any of these regardless of where you are in the shell. But a normal user doesn't get $PATH access to /sbin, /usr/sbin and /usr/local/sbin by default (though it's easy to fix; I'm assuming it's the default for a reason).

The /sbins are for programs that require root. Putting them in the $PATH of other users would be pointless.

Anyways, coming from Windows I find the filesystem layout (other than the binary thing) very interesting. Like /proc is interesting. Programs read directly from it to get their info, there's a PID directory for every running process with different kinds of info and such. The log files in /var/log and such, nice.

One of the fundamental tenets of UNIX is that everything is a file. It is clean, simple, and elegant. A good example of this is hardware: most hardware is accessed via files in the /dev folder. Each hard disk, for example, has a file that represents a "portal" to that piece of hardware.

For example, the first SATA disk on a system is typically /dev/sda (with the second being /dev/sdb, the third being /dev/sdc and so on). If I want to make an image of that hard disk, I merely need to make a copy of the file representing the hard disk:

Code:
dd if=/dev/sda of=~/backup.image bs=4k
(with ~ representing the path to the current user's home folder).

Another UNIX concept is modularity; each command should do one thing and one thing only and each command is expected to use a simple consistent interface which allows them to be chained together. For example, I can modify the above example to compress the image of the disk through piping, in which the output of one command is redirected into the input of another command.

For example:

Code:
dd if=/dev/sda bs=4k | gzip -c > ~/backup.img.gz

will copy the contents of /dev/sda (the first SATA hard drive on the system) and then pipe it to the gzip command. The gzip command takes the stream of uncompressed incoming data and compresses it, outputting a stream of compressed data. The >, in turn, redirects the output of gzip to the specified file on the disk.
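
The same idea works in reverse to restore the image, which illustrates the chaining nicely (careful: this overwrites the target disk, so treat it as a sketch only):

Code:
gunzip -c ~/backup.img.gz | dd of=/dev/sda bs=4k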
 
This is actually a historical holdover. The UNIX operating system was originally written for mainframes and minicomputers with limited storage. ...

Ah, that's interesting.



The /sbins are for programs that require root. Putting them in the $PATH of other users would be pointless.

Yeah I know, that's my point though: if they aren't in the $PATH, why are they even executable by non-root users? Do an ls -l /sbin and look at the permissions. It's weird. Any user can manually run them; they might not get very far, but they can still run them.



One of the fundamental tenets of UNIX is that everything is a file. It is clean, simple, and elegant. ...

Yeah, being able to clone a hard drive with just a simple dd command is awesome. The "everything is a file" concept is kind of odd, but I'm liking it. I'm relatively familiar with bash scripting concepts; like I said, I have some past Linux experience. Actually, I just got done writing a little script to autobump my thread on the Path of Exile forums every 1.5 hours.

Code:
root@newlife:/home/fyy/poestuff# ./poepost.sh 5
Posting with: bump
Posting with: up
Posting with: another bump
Posting with: up..
Posting with: bump
root@newlife:/home/fyy/poestuff#

The hardest part was pulling and inserting the HTML form variables; I'd never done that before. But when I discovered how powerful 'curl' is, it was a match made in heaven.
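
The core of the bump script is just a curl form POST in a loop, roughly like this (the URL and form field name below are made-up placeholders; the real ones come from reading the forum's HTML form):

Code:
#!/bin/bash
# rough sketch of the autobump loop -- forum URL and form field are hypothetical
count=${1:-1}                            # how many bumps, e.g. ./poepost.sh 5
msgs=("bump" "up" "another bump" "up..")
for ((i = 0; i < count; i++)); do
    msg=${msgs[$RANDOM % ${#msgs[@]}]}
    echo "Posting with: $msg"
    curl -s -d "message=$msg" "http://example.com/forum/reply" > /dev/null
    sleep 5400                           # 1.5 hours between posts
done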


Also wrote this little script that uses AT&T's text-to-speech web interface, because I didn't like how crappy espeak sounded.

Code:
#!/bin/bash

#./voice.sh <text>

voice=rich

###all voices###
#crystal
#mike
#rich
#lauren
#claire
#charles
#audrey

#anjali
#rosa
#alberto
#klara
#reiner
#francesca

#giovanni
#alain
#juliette
#arnaud

txt="$*"
# POST the form fields (URL-encoded so spaces and punctuation in the text survive),
# scrape the .wav link out of the returned HTML, then stream it
link=$(curl -s --data-urlencode "voice=$voice" --data-urlencode "txt=$txt" \
       --data-urlencode "downloadButton=DOWNLOAD" \
       "http://192.20.225.36/tts/cgi-bin/nph-nvdemo" | grep ".wav" | cut -d\" -f2)
mplayer "http://192.20.225.36/$link"

It could probably be better with a little effort, but I'm a little rusty, and it works just fine, so I'm not too disappointed.
 
Yeah I know, that's my point though: if they aren't in the $PATH, why are they even executable by non-root users? Do an ls -l /sbin and look at the permissions. It's weird. Any user can manually run them; they might not get very far, but they can still run them.

I should probably clarify what I stated: sbin is for programs that normally require root for full functionality. A lot of utilities still have limited capabilities when run as a regular user (e.g. using ifconfig to view, but not change, network adapter information), and there are groups you can add regular users to in order to give them limited abilities to do things that normally only root can do.

An outgrowth of the everything-is-a-file philosophy is that device access is controlled through the same kind of permissions as any other file; a number of /sbin utilities manipulate things either through /dev or through /sys. By using group permissions on the files corresponding to devices, regular users can be given limited additional access to administrative functions.
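
Concretely: look at the group ownership on a device node and then add a user to that group (group names vary between distributions; 'disk' and 'cdrom' are typical Debian examples, and 'someuser' is obviously a placeholder):

Code:
ls -l /dev/sda               # shows something like: brw-rw---- 1 root disk ... /dev/sda
usermod -aG cdrom someuser   # run as root: lets a regular user use the optical drive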
 
I like Fedora/CentOS only because I learned Linux on RedHat back when Linux was new. For a newbie I might start with Ubuntu. It seems Ubuntu has the most user support for newbies and good package support.
 
On a system that low-end, I'd go with something like Crunchbang or Xubuntu.
 
Why do people reply to a 6 month old thread with simple comments? I know it was an interesting topic, but I'm pretty sure if OP installed something on his old laptop, he did it months ago now...
 
Why do people reply to a 6 month old thread with simple comments? I know it was an interesting topic, but I'm pretty sure if OP installed something on his old laptop, he did it months ago now...

Heh, missed that...
 
They are all the same. You can do a base install of any Linux distro and make it look/behave like any other if you want. The only thing that really matters is the kernel, and they all use the same one. Everything on top is just icing on the cake.

The reason to use Ubuntu is access to the Launchpad PPA repositories. That is pretty much the only thing unique to the distribution and what makes it more user friendly. Instead of having to compile your software to get updated versions, you can just add a repository and use apt as you normally would. In fact, if it weren't for the PPAs, Ubuntu would be useless, because the packages in the official repos are usually outdated.
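
For example, getting a newer version of something than the stock archive carries is usually just three commands (the PPA name below is a placeholder, not a real repository):

Code:
sudo add-apt-repository ppa:some-team/some-app
sudo apt-get update
sudo apt-get install some-app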

Fedora, Arch, and openSUSE are all similar binary distributions.

Gentoo is the only outlier, because it behaves more like FreeBSD, using ports.
 
I just installed Linux Mint/Cinnamon on an old laptop (1.4 GHz Celeron, 1.5 GB RAM, 40 GB HD) yesterday. The dual boot with XP was a PITFA but otherwise it went smoothly. After install, it downloaded appx. 250 updates for the OS and all of the apps. I decided to go with Mint because I'm a Linux noob and Mint has more info available. Mint with all the apps took appx. 3.8 GB of hard drive. A clean install of XP with my usual com apps and utilities took 11 GB.

So far, it looks like a good way to continue using my old computers after MS stops supporting XP next year.

I've been playing around with Linux Mint 15 with Cinnamon and I've found it really cool so far. I didn't install it on my laptop, but I've been using it from a LiveUSB and it works really well. I agree with you that it's very close to Windows from a GUI perspective; it's familiar enough to be instantly usable and there aren't as many jarring differences. I'm also running Ubuntu 12.04 LTS on a spare system and it's very nice as well, though the differences in the GUI tend to increase my learning curve a bit. I like both pretty well so far, though I give Mint a slight advantage for being more familiar.
 