How do you back up your MySQL databases?

Red Squirrel

I use mysqldump to dump all of the databases to a single file, then tar it. For some servers I also have scripts that back up specific DBs to their own files to make restores easier. These are just bash scripts run from cron.
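Nothing fancy, roughly this shape (a minimal sketch; the paths are placeholders and I'm assuming the cron user's credentials are in ~/.my.cnf):

Code:
#!/bin/bash
# Sketch of the "everything in one file" nightly dump (paths are examples only).
TIMESTAMP=$(date +%Y%m%d%H%M)
cd /home/backups || exit 1
mysqldump --all-databases --single-transaction --events > all-databases.sql
tar czf "all-databases-${TIMESTAMP}.tar.gz" all-databases.sql
rm -f all-databases.sql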

I'm just curious to know what others do. I would like a backup job that backs up all databases but puts each one in its own file, without me having to go in and update the script whenever I add a new DB. Is there an easy way of doing this?

I was just about to restore a backup to a server when it occurred to me that I would probably break a lot of stuff, because the mysql schema and other system DBs are part of that backup. This was not a disaster recovery situation, so it wasn't a huge deal; I just went into phpMyAdmin and grabbed the DBs manually from the old server. But it got me thinking that I definitely need a better approach.
 
I would also like to know; currently I do the same. A bash script on cron using mysqldump, though I doubt it's the best or most foolproof way to do things.
 
With ZFS, I essentially flush the tables with a read lock and take a snapshot.
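Roughly this shape, if anyone's curious (just a sketch of the idea, not my actual script; the dataset name tank/mysql and the ~/.my.cnf credentials are assumptions):

Code:
#!/bin/bash
# Hold a global read lock in one persistent mysql session, snapshot, then release.
coproc MYSQLSESSION { mysql --unbuffered --skip-column-names; }

echo "FLUSH TABLES WITH READ LOCK; SELECT 'locked';" >&"${MYSQLSESSION[1]}"
read -r _ <&"${MYSQLSESSION[0]}"     # the SELECT only answers once the lock is held

zfs snapshot "tank/mysql@backup-$(date +%Y%m%d%H%M)"

echo "UNLOCK TABLES; SELECT 'unlocked';" >&"${MYSQLSESSION[1]}"
read -r _ <&"${MYSQLSESSION[0]}"     # make sure the unlock went through before exiting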
 
MySQL replication + mysqldump on a read-only node every 4 hours.
We do have to add some config for each DB, though.

Not too sure how industry standard this is, but it seems to work pretty well.
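For what it's worth, the cron entry on the read-only node is roughly like this (the paths and dump options here are placeholders, not our exact config):

Code:
# Crontab on the read-only replica; % has to be escaped inside a crontab.
0 */4 * * * mysqldump --all-databases --single-transaction --routines --events | gzip > /backups/replica-$(date +\%Y\%m\%d\%H\%M).sql.gz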
 
AutoMySQLBackup. Though in the past, I've just used homegrown shell scripts to do it.
 
Yeah, I just have a simple bash script as well that dumps each DB to its own folder and gzips it; it works. I then let the system backups handle getting that file somewhere (dump over FTP if remote, BackupExec if local, etc.). If you wanted to get slightly fancier, but without much extra work, you could just start the script with something like this to dump the list of DBs:
mysql -B -N -e 'show databases;'

Then just step through them in a simple for loop running mysqldump. That will at least make sure you never miss a DB if someone adds one without telling you.
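Something along these lines, to make it concrete (just a sketch; the backup path is a placeholder and I'm assuming credentials in ~/.my.cnf):

Code:
#!/bin/bash
# Dump every database to its own gzipped file, skipping the system schemas.
for DB in $(mysql -B -N -e 'show databases;' | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
    mysqldump "$DB" | gzip > "/home/backups/mysql_databases/${DB}.sql.gz"
done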

I'm too lazy for that, and since all our MySQL DB servers just have one main database, I use this super simple script in a cron every night to do the backups:

Code:
#!/bin/bash

# Script Function:
# This bash script backs up DBs with a file name time stamp and tar.gz's the file.
# The DB will be saved in /home/backups/mysql_databases/<DB>/
# DB backups older than 1 day will be deleted.
# Make sure the correct logins are in the ~/.my.cnf file for the cron user

# Change to backup folder
cd /home/backups/mysql_databases/

# Delete old backups
find /home/backups/mysql_databases -name "*.tar.gz" -mtime +1 -exec rm -f {} \;

# Generate the file timestamp
TIMESTAMP=`date +%Y%m%d%H%M`

# Primary DB
DBNAME="database"
DUMP_PATH=/home/backups/mysql_databases/$DBNAME/
mysqldump --opt -c -e $DBNAME > $DBNAME.sql
tar czpf $DUMP_PATH/$DBNAME-$TIMESTAMP.tar.gz $DBNAME.sql
rm -f $DBNAME.sql

# MySQL DB
DBNAME="mysql"
DUMP_PATH=/home/backups/mysql_databases/$DBNAME/
mysqldump --opt -c -e --events $DBNAME > $DBNAME.sql
tar czpf $DUMP_PATH/$DBNAME-$TIMESTAMP.tar.gz $DBNAME.sql
rm -f $DBNAME.sql

Our restore requirements are so minimal (basically it's just going to be a full DR), so I'm not too worried about needing a single table. If I really needed to do that, I'd just restore the dump to a temp DB, grab the table(s) I want, and go from there. I've also used sed/awk to parse out what I want from a dump file in the past, but frankly that ends up taking more time than you'd think, and a simple full restore plus re-dumping what you want is usually quicker.
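If it helps, that temp-DB restore is nothing more than this (the file and table names here are made up):

Code:
# Unpack the dump, load it into a scratch database, then pull out just the one table.
tar xzf database-202401010000.tar.gz
mysql -e "CREATE DATABASE scratch_restore;"
mysql scratch_restore < database.sql
mysqldump scratch_restore some_table > some_table.sql
mysql -e "DROP DATABASE scratch_restore;"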

But I'm not running Facebook here, so a bit of manual work for a super rare restore is fine for us. :)
 
Interesting ideas; so basically similar to what I'm doing now.

I decided to write a bash script that iterates through the DBs using "show databases" as suggested, then tars each one as an individual file. I'll also make sure the dump file omits the CREATE DATABASE statement, so that I can restore to a custom DB name.

Ex: on a shared web host a DB might be named username_site, but I may want to call it site_dev locally and also restore it to site_test, so this will allow me to do that.
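In practice that just means dumping the single schema without --databases, since mysqldump then writes no CREATE DATABASE or USE statements and the file can be loaded into any DB name. Something like this (names taken from the example above):

Code:
# Dump just the one schema -- no CREATE DATABASE / USE ends up in the file.
mysqldump username_site > site.sql

# Load the same file under whatever name I want locally.
mysql -e "CREATE DATABASE site_dev;"
mysql site_dev < site.sql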

I'm revamping my dev environment so it's more unified, and as part of that I'll have generic scripts to restore/sync DBs from live servers or backups.

I'll post the script once it's done. I want to keep it as simple as possible, so that when I set up a new server I just have to copy it over and then throw a single line in the crontab.

I may also make it handle version rotation. Probably just hard-code it to do the 7 days of the week, and then the month. Maybe the week number too. Though I may make that a separate script, as in most cases I just want a single dump, which will then be handled by regular file backups.
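The rotation would probably be nothing smarter than baking the period into the file name so the copies overwrite themselves (just a sketch of the idea for now; the file names are placeholders):

Code:
# Keep a day-of-week copy (7 rotating files), plus week-number and month copies.
BACKUP="site.tar.gz"                                 # whatever the dump script produced
mkdir -p archive
cp "$BACKUP" "archive/site-$(date +%a).tar.gz"       # Mon..Sun
cp "$BACKUP" "archive/site-week$(date +%V).tar.gz"   # ISO week number
cp "$BACKUP" "archive/site-$(date +%b).tar.gz"       # Jan..Dec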
 
I ended up going with something like this. The idea is that I can just drop this script on any box and then call it from cron with whatever parameters I need for that specific server. It's kinda crude and lacks proper error checking, but I think it will get the job done. For the file rotation I'll just write a separate script that I can call through cron as well.

Code:
#!/bin/bash

user=$1;
pass=$2;
bakdest=$3;

#work from tmp so tar files don't end up with all sorts of folders based on the destination specified.  we just want a file.
cd /tmp

#put db list in temp file:
mysql -u ${user} -p${pass} -B -N -e "show databases;" | grep -iv "information_schema" | grep -iv "mysql" > dblist.tmp


#ensure dest folder exists or whole backup will fail
mkdir -p ${bakdest} >/dev/null


#iterate through file to backup those dbs:

while read p; do

date
echo "...dumping ${p}..."  
mysqldump -u ${user} -p${pass} -c --dump-date ${p} > ${p}.sql
echo "...taring..."
tar -czf ${bakdest}${p}.tar.gz ${p}.sql
rm ${p}.sql


done <dblist.tmp


#delete temp file
rm dblist.tmp

date
echo Done.


Edit: I had totally forgotten to add the tarring.


I should have done this a long time ago; it's much better having each DB in its own file.
 
There's no need to write the dblist to its own file unless you are using it elsewhere. Other than that it looks good.
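i.e. you can pipe the list straight into the loop, reusing your variables (untested sketch, same commands, just no temp file):

Code:
mysql -u ${user} -p${pass} -B -N -e "show databases;" \
  | grep -iv "information_schema" | grep -iv "mysql" \
  | while read -r p; do
      mysqldump -u ${user} -p${pass} -c --dump-date ${p} > ${p}.sql
      tar -czf ${bakdest}${p}.tar.gz ${p}.sql
      rm ${p}.sql
    done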
 
In fact, having just tried AutoMySQLBackup: honestly, use that. It does everything you'd want it to, and does it more tidily and effectively.
 
Wanted something simple, with no installation or config files needed, so this works for me. I throw a one-liner in the cron file to dump the backup to a location for my regular file backup to pick up.

The only downside is that having the password on the actual command line means that on a shared host anyone could see it using ps aux, so it's obviously not something you'd want to use on a host that other people can SSH into.
 
You could also set the MYSQL_PWD env var in the cron to the correct password; then it at least wouldn't show up in ps on the command line. It's not as ideal as putting the username/password in a ~/.my.cnf file, but it seems like an OK compromise. I'm not sure whether a person can use ps to show env vars for other users (unless they're root), so it's worth checking that out first.
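For reference, the two options look roughly like this (user name, password, schedule, and paths are all placeholders):

Code:
# Option 1: ~/.my.cnf for the cron user, chmod 600 -- mysql/mysqldump read it automatically:
[client]
user=backupuser
password=secret

# Option 2: set MYSQL_PWD in the crontab itself so it never appears on a command line:
MYSQL_PWD=secret
0 2 * * * mysqldump -u backupuser --all-databases | gzip > /home/backups/all-databases.sql.gz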
 