Backing Up Linux Server

/usr/home

So I have a little experience with Linux, but not like I do with Windows. I'm playing around with CentOS here, and I'm wondering how I can back the server up so that it's easy to restore. Restoring a Windows server from a backup is simple. Say the hard drive in my Linux server completely fails: can I recover that Linux server easily, without a bunch of CLI commands or redoing the partitions or anything?

Oh yeah, and one that's free :p.
 
I wouldn't say the backups are as easy to set up as they are on Windows (I use bash, tar, bzip2, and NASes), but I would say that recovery is faster (I can have a Linux server back up and running in under an hour, data set size notwithstanding).

What I have done is write bash scripts to grab the interesting data, bundle it up and ship it off to my NAS device.
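Stripped to its skeleton, that kind of script is nothing fancy. Roughly this (the paths and NAS hostname here are placeholders, not my real setup):

Code:
#!/bin/bash
# Sketch only: bundle the interesting data, then ship it to the NAS.

DATE=`date +%Y%m%d`
ARCHIVE="/tmp/etc-www-${DATE}.tar.bz2"

tar cjf "$ARCHIVE" /etc /var/www      # grab the interesting data
scp "$ARCHIVE" backup@nas:/backups/   # ship it off to the NAS
rm -f "$ARCHIVE"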

I don't really see a problem with the way Windows or Linux does it. I can say that I have a higher degree of confidence in my Linux backup solution than I do in the Windows one, if for no other reason than that it's far simpler.
 
Well, you could use dd to make a complete disk image, or a complete partition image if you prefer, but this takes a while and requires a lot of disk space.
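For instance, something along these lines (the device and destination are examples; you'd want to boot from a live CD so the disk isn't in use):

Code:
# Image the whole first disk to a file
dd if=/dev/sda of=/mnt/backup/sda.img bs=4M

# Restoring is the same command with if/of swapped
dd if=/mnt/backup/sda.img of=/dev/sda bs=4M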

Alternatively, I chose to write a script that uses rsync to incrementally back up my entire filesystem to a separate disk (or network location), and I run the script from a daily cron job. I can post the script if you'd like to see whether it's something you could get working. To restore, I would partition the new disk, rsync (copy) everything back to it, and then reinstall the GRUB bootloader.
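The restore side, very roughly, looks like this (device names and mount points are examples, and depending on your GRUB version the grub-install options differ):

Code:
# From a live CD, after partitioning and formatting the new disk:
mount /dev/sda1 /mnt                         # new root partition
rsync -aAX /backup/ /mnt/                    # copy everything back
grub-install --root-directory=/mnt /dev/sda  # reinstall the bootloader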

I haven't tried the various GUI-based backup tools like Deja Dup, etc.
 
XOR != OR said:
I wouldn't say the backups are as easy to set up as they are on Windows (I use bash, tar, bzip2, and NASes)...

I'm the opposite, lol. I seem to trust stuff in Windows more. I guess it's because I don't know Linux well enough to recover data if I need to. Even if I had to rebuild a Windows server from scratch, copy over the backed-up files, and set everything up again, I could do it a lot quicker, and I'd feel a lot more confident doing it than I would doing the same thing with a Linux server. I guess my main thing is I don't know the filesystem: where everything is kept and how it all functions together. It doesn't seem logical to me, since I'm used to the way Windows does its file system.
 
I'm honestly interested in knowing how you do your rsync setup as well as XOR != OR's method.

I actually thought that Linux was your home turf due to your username. :)
 

Haha, yeah I have no idea why I picked that... lol. That was when I was 17 so who knows :p
 
This is the script I place in the /etc/cron.daily directory on my Ubuntu install. It runs in the background every day and I don't even notice it running. It does an incremental backup only and deletes any files from the backup location that have been deleted from the source, and I exclude some directories such as my Firefox cache (tons of useless small files) and the .gvfs directory. It retains permissions and timestamps and all that fancy stuff. When I "upgrade" to a new Ubuntu release, it automatically backs up to a new location, so I have my old install left intact... you know... just in case I have to "fix" any issues with the new release. :rolleyes:

It can back up to a local disk or to a disk on a network location. However, it has to run as root (obviously), so if you want to back up to a network location you have to set up trusted ssh for the root accounts. This is a bit tricky, and if you're going over a WAN it's probably not advisable.
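The trusted ssh setup boils down to something like this, run as root on the machine being backed up (madarao is my backup host; root logins have to be allowed in its sshd_config):

Code:
ssh-keygen -t rsa          # leave the passphrase empty so cron can use it
ssh-copy-id root@madarao   # append the public key on the backup host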

The script ends by popping up a notification bubble indicating success/failure status. This requires setting the DISPLAY variable in /etc/crontab and also opening up my account's xhost access for the root user.
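For reference, those two pieces amount to roughly this (the display number and the username are examples from my setup):

Code:
# In /etc/crontab, so notify-send knows which display to talk to:
DISPLAY=:0

# Run from my desktop session, granting root access to my display:
xhost +si:localuser:root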

Code:
#!/bin/bash
# Nightly backup: mirror / to a per-host, per-release directory on the
# backup host using rsync.

backup_host="madarao"
local_host=`hostname`

release=`lsb_release -cs`

server_prefix=""

# If the backup host isn't this machine, rsync over ssh as root.
if [ "$backup_host" != "$local_host" ]
then
   server_prefix="root@${backup_host}:"
fi

backup_path="${server_prefix}/disk2/backup/${local_host}/${release}"

log_file="/var/log/backups.log"

echo "-------------------------------------------------------------------------" >> $log_file
echo " Local Host : ${local_host}" >> $log_file
echo " Backup Host: ${backup_host}" >> $log_file
echo " Release    : ${release}" >> $log_file
echo " Backup Path: ${backup_path}" >> $log_file
echo " Start Time : `date`" >> $log_file
echo "-------------------------------------------------------------------------" >> $log_file

# -a preserves permissions/timestamps/owners, -x stays on one filesystem,
# --delete removes files from the backup that were deleted at the source.
rsync -aqxhW --delete --stats --exclude=".gvfs" --exclude=".mozilla/firefox/ta8h6jrp.default/Cache" --log-file="${log_file}" --log-file-format="" / "${backup_path}"

# Capture rsync's exit status right away; testing $? later would test
# whatever command ran in between.
rsync_status=$?

if [ $rsync_status -eq 0 ]
then
  notify_header="Backup Complete for ${local_host}:/"
  notify_text=`tail -24 ${log_file} | grep -o "Total transferred file size: [[:print:]]*"`
else
  notify_header="Backup Failed for ${local_host}:/"
  notify_text="rsync exited with status ${rsync_status}; see the rsync man page"
fi
notify-send -i /usr/share/icons/hicolor/32x32/apps/deja-dup.png "${notify_header}" "${notify_text}"

echo >> $log_file
 
At least it's not /export/home/ :p Solaris seems to love that setup...

Hmmm... /export is often the mount point for NFS directories. /export/home would seem to indicate that the home directories are actually on a network location, mounted via NFS.
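i.e., on the clients you'd see something like this in the mount table or fstab (server name invented):

Code:
fileserver:/export/home   /home   nfs   defaults   0 0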
 
Might be possible since the Solaris servers at work are networked with a FAS array. :)
 
/usr/home said:
I'm honestly interested in knowing how you do your rsync setup as well as XOR != OR's method.
I implement grandfather-father-son (GFS) rotation in bash.

This is split into two different bash scripts because different backup sets require different handling. Essentially, the first script identifies the targets and sets up the process, while the second does the actual bundling, compression and uploading to the backup target.
backup-home said:
#!/bin/bash
# Driver: work out where we are in the GFS cycle, then run
# backup-dir.sh once for each home directory.

# "12 hours ago" so a run just after midnight still counts as the
# previous day's backup.
WEEKDAYNUM=`date +%w -d "12 hours ago"`
MONTHDAYNUM=`date +%d -d "12 hours ago"`
ABBREVWEEKDAYNAME=`date +%a -d "12 hours ago"`
ABBREVMONTHNAME=`date +%b -d "12 hours ago"`
# The last field of cal's output is the last day of the month.
LASTDAYINMONTH=`echo $(cal) | awk '{print $NF}'`

# No backups on Saturday (6) or Sunday (0).
case $WEEKDAYNUM in
  6|0)
    exit
    ;;
esac

find /home/ -maxdepth 1 -mindepth 1 -type d -exec /root/bin/backup-dir.sh home {} $WEEKDAYNUM $MONTHDAYNUM $ABBREVWEEKDAYNAME $ABBREVMONTHNAME $LASTDAYINMONTH \;

# If find itself failed, the state file can't be trusted; remove it so
# the monitoring side notices.
if [ $? -ne 0 ]; then
  rm -f /mnt/data/staging/home.state
fi
backup-dir.sh said:
#!/bin/bash
# Worker: tar up one directory and ship it to the FTP backup host,
# choosing a daily, weekly, or monthly (GFS) destination.

if [ $# -ne 7 ]; then
  echo 'Usage: backup-dir.sh <setname> <directory> <WeekDayNumber> <MonthDayNumber> <Abbreviated Weekday Name> <Abbreviated Month Name> <lastdayinmonth>'
  exit 1
fi

BACKUPDEST=$1
DIRECTORY=$2
WEEKDAYNUM=$3
MONTHDAYNUM=$4
ABBREVWEEKDAYNAME=$5
ABBREVMONTHNAME=$6
LASTDAYINMONTH=$7
WEEKNUM=`expr \( $MONTHDAYNUM / 7 \) + 1`
HOMEDIR=`basename ${DIRECTORY}`
STAGINGPATH=/mnt/data/staging
BACKUPSTATEFILE=$1.state

# FTP credentials and the remote backup root live in separate files.
. /root/bin/ftpvars
. /root/bin/servervars

if [ ! -d $STAGINGPATH ]; then
  mkdir -p $STAGINGPATH
fi

# Pick the destination: last day of the month -> monthly (grandfather),
# Friday -> weekly (father), any other weekday -> daily (son).
if [ $WEEKDAYNUM -ge 1 -a $WEEKDAYNUM -le 6 ]; then
  if [ $LASTDAYINMONTH -eq $MONTHDAYNUM ]; then
    BACKUPPATH=$BACKUPROOT/monthly/$BACKUPDEST/$ABBREVMONTHNAME
    FILEHEADER=$HOMEDIR.$ABBREVMONTHNAME
    FILENAME=$HOMEDIR.$ABBREVMONTHNAME.tar.bz2
  elif [ $WEEKDAYNUM -eq 5 ]; then
    BACKUPPATH=$BACKUPROOT/weekly/$BACKUPDEST/$ABBREVWEEKDAYNAME$WEEKNUM
    FILEHEADER=$HOMEDIR.$ABBREVWEEKDAYNAME$WEEKNUM
    FILENAME=$HOMEDIR.$ABBREVWEEKDAYNAME$WEEKNUM.tar.bz2
  else
    BACKUPPATH=$BACKUPROOT/daily/$BACKUPDEST/$ABBREVWEEKDAYNAME
    FILEHEADER=$HOMEDIR.$ABBREVWEEKDAYNAME
    FILENAME=$HOMEDIR.$ABBREVWEEKDAYNAME.tar.bz2
  fi
else
  exit
fi

echo "$STAGINGPATH/$FILENAME $DIRECTORY"
time tar cjf "$STAGINGPATH/$FILENAME" $DIRECTORY --exclude "$DIRECTORY/Music" --exclude "*.vdi" --exclude "*.sav"
# Capture tar's status; testing $? after [ has run would test [ instead.
TARSTATUS=$?
if [ $TARSTATUS -ne 0 ]; then
  exit $TARSTATUS
fi

# Upload in one piece if it fits under the FTP size cutoff; otherwise
# split it into chunks and upload those one at a time.
if [ `stat -c %s $STAGINGPATH/$FILENAME` -le $FTPCUTOFFBYTES ]; then
  time lftp -u $FTPUSERNAME,$FTPPASSWORD -e "mkdir -p $BACKUPPATH; cd $BACKUPPATH; rm $FILENAME; mput $STAGINGPATH/$FILENAME; quit" $FTPHOSTNAME
  UPLOADSTATUS=$?
else
  cd $STAGINGPATH
  time split -b $FTPCUTOFFBYTES -d $FILENAME ${FILEHEADER}.
  UPLOADSTATUS=0
  for f in $STAGINGPATH/${FILEHEADER}.??
  do
    mv $f $f.tar.bz2
    time lftp -u $FTPUSERNAME,$FTPPASSWORD -e "mkdir -p $BACKUPPATH; cd $BACKUPPATH; mput ${f}.tar.bz2; quit" $FTPHOSTNAME
    RC=$?
    if [ $RC -ne 0 ]; then
      UPLOADSTATUS=$RC
    fi
    rm -f $f.tar.bz2
  done
fi

if [ $UPLOADSTATUS -eq 0 ]; then
  # On success, record the backup size in the state file for monitoring.
  if [ ! -f $STAGINGPATH/$BACKUPSTATEFILE ]; then
    FILESIZE=`stat -c %s $STAGINGPATH/$FILENAME`
    echo $FILESIZE > $STAGINGPATH/$BACKUPSTATEFILE
  else
    STATEFILETIME=`date --utc --reference=$STAGINGPATH/$BACKUPSTATEFILE +%s`
    CURRENTTIME=`date +%s`
    STATEDELTA=`expr $CURRENTTIME - $STATEFILETIME`
    # If the state file was modified less than 12 hours ago, this is
    # another home dir from the same run, so add to the running total
    # instead of replacing it.
    if [ $STATEDELTA -ge 43200 ]; then
      echo `stat -c %s $STAGINGPATH/$FILENAME` > $STAGINGPATH/$BACKUPSTATEFILE
    else
      LASTBYTEVALUE=`cat $STAGINGPATH/$BACKUPSTATEFILE`
      expr $LASTBYTEVALUE + `stat -c %s $STAGINGPATH/$FILENAME` > $STAGINGPATH/$BACKUPSTATEFILE
    fi
  fi
else
  exit $UPLOADSTATUS
fi

rm -f "$STAGINGPATH/$FILENAME"

I'll grant, this is much more complex than the Windows counterpart. But it's a method that has evolved over the years, and it works really, really well. Oh, and it also ties into nagios: it leaves behind state files with interesting data that nagios grabs. I know if a backup failed the night before, or even if the size of the backup set changed significantly (a possible problem, depending on the set). I don't have to religiously check all of my backups every day, which is a nice time saver. I only have to do my quarterly restore tests, but that's normal.
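The nagios plugin itself isn't shown here, but in spirit the check is just something like this (the age threshold is an example):

Code:
#!/bin/bash
# Sketch of a nagios-style check: complain if the home backup state
# file is missing or hasn't been touched in the last 24 hours.
STATEFILE=/mnt/data/staging/home.state
MAXAGE=86400

if [ ! -f "$STATEFILE" ]; then
  echo "CRITICAL: $STATEFILE missing - last backup likely failed"
  exit 2
fi

AGE=$(( `date +%s` - `date --utc --reference=$STATEFILE +%s` ))
if [ $AGE -gt $MAXAGE ]; then
  echo "CRITICAL: $STATEFILE is ${AGE}s old"
  exit 2
fi

echo "OK: state updated ${AGE}s ago, backup set size `cat $STATEFILE` bytes"
exit 0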
 
StorageCraft has a beta out for their ShadowProtect for Linux. I have not used it yet, but you may want to look into it: good-quality, image-based backups.
 
I think the question might be what you're trying to back up in Linux... I've learned to create install logs that let me do unattended installs/setups rather than keep a complete backup. That way I can also upgrade to new versions easily, and I only have to preserve the data and the config settings.
For that you can tar/rsync specific files off, or use SVN/Git to back up and version your config changes. I'm also looking into using Chef to set up new servers, so I can bring up new servers quickly, all configured the same way...
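For the config-tracking piece, the minimal version is just a git repo in /etc (tools like etckeeper automate the same idea):

Code:
cd /etc
git init
git add .
git commit -m "baseline config"

# after any change:
git add -A
git commit -m "describe the change"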
 
Bacula is an enterprise-tier backup solution that'll do a lot.

It just drops right into CentOS or Red Hat (and it's standard on Fedora). The real power is in the CLI, but it will do pretty much everything you want from the GUI.

It's in your repos, or EPEL.
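If memory serves, the EPEL install is along these lines (double-check the package names for your release):

Code:
yum install bacula-director bacula-storage bacula-client bacula-console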

:cool:
 