The new script is split in two. The first part runs on the Drupal server: it dumps the MySQL database and bundles it, along with the site's files, into a tarball. If all goes well, it then deletes the archive that is X days old, X being an integer of your choice. This is just what the original fullsitebackup.sh does, plus a few lines to keep the disk quota on the server from filling up too quickly.
The first script:
#!/bin/bash
# Database connection information
dbname="dbname" # (e.g.: dbname=drupaldb)
dbhost="dbhost" # Usually not localhost
dbuser="dbuser" # (e.g.: dbuser=drupaluser)
dbpw="dbpasswd" # password for dbuser
# Website Files
webrootdir="/path/to/web/root/" # (e.g.: webrootdir=/home/user/public_html)
# Variables
# Default TAR Output File Base Name
tarnamebase=sitebackup-
datestamp=`date +'%m-%d-%Y'`
trashafter="6 day" # age after which an archive is deleted (a GNU date offset, e.g. "6 day" or "2 week")
timelimit=`date -d "-$trashafter" +'%m-%d-%Y'`
tartobeswept=$tarnamebase$timelimit.tgz
# Execution directory (script start point)
startdir="/path/to/backup/dir"
cd $startdir
logfile=$startdir"/logsite.log" # file path and name of log file to use
# Temporary Directory
tempdir=$datestamp
# Input Parameter Check
if test "$1" = ""
then
tarname=$tarnamebase$datestamp.tgz
else
tarname=$1
fi
# Begin logging
echo "Beginning drupal site backup using fullsitebackup.sh ..." >> $logfile
# Create temporary working directory
echo " Creating temp working dir ..." >> $logfile
mkdir $tempdir
# TAR website files
echo "TARing website files in $webrootdir ..." >> $logfile
cd $webrootdir
tar cf $startdir/$tempdir/filecontent.tar .
# sqldump database information
echo " Dumping drupal database, using ..." >> $logfile
echo " user:$dbuser; database:$dbname host:$dbhost " >> $logfile
cd $startdir/$tempdir
mysqldump --user=$dbuser --password=$dbpw --host=$dbhost --add-drop-table $dbname > dbcontent.sql 2>>$logfile
if [ $? -ne 0 ] ; then
echo "Dump of $dbname failed. " >> $logfile
exit 1
fi
# Create final backup file
echo "Creating final compressed (tgz) TAR file: $tarname ..." >> $logfile
tar czf $startdir/$tarname filecontent.tar dbcontent.sql 1>>$logfile 2>&1
if [ $? -ne 0 ] ; then
endtime=`date`
echo "Archiving of $tarname failed at $endtime. " >> $logfile
exit 1
else
endtime=`date`
echo "Archiving of $tarname completed at $endtime. " >> $logfile
if [ -e $startdir/$tartobeswept ] ; then
chmod 600 $startdir/$tartobeswept
rm $startdir/$tartobeswept 1>> $logfile 2>&1 ; echo "Removed $tartobeswept at $endtime " >> $logfile
fi
fi
# Restrict permissions
echo " making archive read-only " >> $logfile
chmod 400 $startdir/$tarname
# Cleanup
echo " Removing temp dir $tempdir ..." >> $logfile
cd $startdir
rm -r $tempdir
# The End
endtime=`date`
echo "Backup completed $endtime, TAR file at $tarname. " >> $logfile
Obviously, read and execute rights for the owner of this file alone are enough.
Here you may object that, if the web server is compromised or fails completely, you lose all this hard work, as the backup sits on the server itself. You would be right.
Hence a second script, which copies the freshly created archive to a distant machine, where this second script is installed. For it to work, you need an SSH account on the web server, and you must add the public SSH key of your account on the local machine to the server's ~/.ssh/authorized_keys file. That way, the transfer script contains no account password, which would be a potential security issue. It keeps the last Y days/weeks/months of archives.
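The key setup can be sketched as follows, run once on the local machine. The key path is an assumption (any path works), and the ssh-copy-id target in the comment stands for your own $sshuser and $sshhost:

```shell
# Generate a passphrase-less key pair dedicated to the backup transfer.
keyfile="$(mktemp -d)/backup_key"
ssh-keygen -t ed25519 -N "" -f "$keyfile" -q
# The public key is what goes into ~/.ssh/authorized_keys on the web server,
# e.g. with: ssh-copy-id -i "$keyfile.pub" sshuser@sshhost
cat "$keyfile.pub"
```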
#!/bin/sh
# Variables
tarnamebase=sitebackup- # prefix of tarballs
datestamp=`date +'%m-%d-%Y'`
backupdir="backupdir_path_on_web_server" # where backups are found on distant server
localdir="/path/to/local/backup/dir" # where they are copied on local machine
sshhost="hostname"
sshuser="username"
keepitem="n" # n (an integer): the number of daily/weekly/monthly archives to keep
logfile=$localdir"/logfilename.log"
weeklybackupdir="$localdir/semaine" # semaine is french for "week" (create that directory !)
monthlybackupdir="$localdir/mois" # mois is french for "month" (create that directory !)
admin="webmaster@yourdomain.tld"
# Input Parameter Check
if test "$1" = ""
then
tarname=$tarnamebase$datestamp.tgz
else
tarname=$1
fi
# Secure copy from the server
# A public key of the host must be present in the authorized keys file on the server side
cd $localdir
scp $sshuser"@"$sshhost:$backupdir/$tarname $localdir/$tarname 1> $logfile 2>&1
# If previous operation fails, the script doesn't touch older backups
if [ $? -ne 0 ] ; then
endtime=`date`
echo "Transfer of $tarname failed at $endtime. End of script. " >> $logfile
else
endtime=`date`
echo "Transfer of $tarname completed at $endtime. " >> $logfile
# Otherwise, it checks whether the count of past backups exceeds the defined number;
# if so, and the day of the month is the 1st, it copies the oldest as a monthly backup
# and trims the monthly backups to the number specified in $keepitem
if [ `ls -1t $tarnamebase* | wc -l` -gt $keepitem ] ; then
if [ `date +%d` = "01" ] ; then
cp `ls -1t $tarnamebase* | tail -n 1` $monthlybackupdir 1>> $logfile 2>&1
cd $monthlybackupdir
while [ `ls -1t $tarnamebase* | wc -l` -gt $keepitem ] ; do
chmod 600 `ls -1t $tarnamebase* | tail -n 1`
rm `ls -1t $tarnamebase* | tail -n 1` 1>> $logfile 2>&1
done
cd $localdir
else
# Or, if the day is Friday (date +%u = 5), copies the oldest as a weekly backup
# and does some cleanup in the weekly backup directory
if [ `date +%u` = "5" ] ; then
cp `ls -1t $tarnamebase* | tail -n 1` $weeklybackupdir 1>> $logfile 2>&1
cd $weeklybackupdir
while [ `ls -1t $tarnamebase* | wc -l` -gt $keepitem ] ; do
chmod 600 `ls -1t $tarnamebase* | tail -n 1`
rm `ls -1t $tarnamebase* | tail -n 1` 1>> $logfile 2>&1
done
cd $localdir
fi
fi
# In any case, it makes sure there are only the defined number of daily backups left
while [ `ls -1t $tarnamebase* | wc -l` -gt $keepitem ] ; do
chmod 600 `ls -1t $tarnamebase* | tail -n 1`
rm `ls -1t $tarnamebase* | tail -n 1` 1>> $logfile 2>&1
done
fi
fi
# Mail the log, including anything that went wrong, to the admin
mail -s "logfile" $admin < $logfile
exit 0
These scripts need to be executed at a fixed interval, which you specify in the crontab (crontab -e). In my experience they work reasonably well with different shells (sh, ksh and bash), different systems (Debian, NetBSD) and different hardware architectures (MacPPC64, Mac68k, x86_64). Anyway, hardware shouldn't be of any concern with a high-level language.
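The schedule might look like the entries below; the script names, paths and times are placeholders, so adapt them to wherever you installed each script. The local fetch runs an hour after the server-side backup to leave it time to finish.

```crontab
# On the web server: nightly dump + tarball at 03:00
0 3 * * * /path/to/backup/dir/fullsitebackup.sh
# On the local machine: fetch and rotate at 04:00
0 4 * * * /path/to/local/backup/dir/sitetransfer.sh
```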