Implementing Automatic Backups for a Ghost Blog

Once our site and server are up and running, the biggest worry is a catastrophic incident: losing the site's content, or having the server shut down.

Is there a way to prevent this? There is.

Add an automatic backup mechanism to the server.

Necessity

Every VPS provider offers backup mechanisms and services of its own, but these may cost extra, and they defeat the purpose of isolating backups from risk: if the VPS provider's service fails, backups held by that same provider cannot guarantee your data's safety either. The recent incident in which a Tencent Cloud employee deleted a customer's website data is the best illustration:
Tencent Cloud failure causes customer data loss, customer demands tens of millions in compensation: https://www.yicai.com/news/100007159.html

Implementation

On the principle that it is better to rely on yourself than on others, and drawing on material I found online, I set up automatic blog backups on the server with the following steps.

nano ~/backup.sh

Code

Enter the following:

#!/usr/bin/env bash
#
# Auto backup script
#
# Copyright (C) 2016 Teddysun <i@teddysun.com>
#
# URL: https://teddysun.com/469.html
#
# You must modify the config before running it!!!
# Backs up MySQL/MariaDB/Percona databases, files and directories
# Backup file is encrypted with AES256-cbc with SHA1 message-digest (optional)
# Auto transfer backup file to Google Drive (requires the gdrive command) (optional)
# Auto transfer backup file to FTP server (optional)
# Auto delete Google Drive's or FTP server's remote file (optional)
#

[[ $EUID -ne 0 ]] && echo "Error: This script must be run as root!" && exit 1

########## START OF CONFIG ##########

# Encrypt flag (true: encrypt, false: not encrypt)
ENCRYPTFLG=true

# WARNING: KEEP THE PASSWORD SAFE!!!
# The password used to encrypt the backup
# To decrypt backups made by this script, run the following command:
# openssl enc -aes256 -in [encrypted backup] -out decrypted_backup.tgz -pass pass:[backup password] -d -md sha1
BACKUPPASS="11111111"

# Directory to store backups
LOCALDIR="/root/backups/"

# Temporary directory used during backup creation
TEMPDIR="/root/backups/temp/"

# File to log the outcome of backups
LOGFILE="/root/backups/backup.log"

# OPTIONAL: If you want to back up MySQL databases, enter the MySQL root password below
#MYSQL_ROOT_PASSWORD=""

# Below is a list of MySQL database names that will be backed up
# If you want to back up ALL databases, leave it blank.
#MYSQL_DATABASE_NAME[0]=""

# Below is a list of files and directories that will be backed up in the tar backup
# For example:
# File: /data/www/default/test.tgz
# Directory: /data/www/default/test
BACKUP[0]="/var/www/ghost"

# Number of days to store daily local backups (default 7 days)
LOCALAGEDAILIES="14"

# Delete Google Drive's & FTP server's remote file flag (true: delete, false: not delete)
DELETE_REMOTE_FILE_FLG=false

# Upload to FTP server flag (true: upload, false: not upload)
FTP_FLG=false

# FTP server
# OPTIONAL: If you want to upload to an FTP server, enter the hostname or IP address below
FTP_HOST=""

# FTP username
# OPTIONAL: If you want to upload to an FTP server, enter the FTP username below
FTP_USER=""

# FTP password
# OPTIONAL: If you want to upload to an FTP server, enter that username's password below
FTP_PASS=""

# FTP server remote folder
# OPTIONAL: If you want to upload to an FTP server, enter the FTP remote folder below
# For example: public_html
FTP_DIR=""

########## END OF CONFIG ##########



# Date & Time
DAY=$(date +%d)
MONTH=$(date +%m)
YEAR=$(date +%C%y)
BACKUPDATE=$(date +%Y%m%d%H%M%S)
# Backup file name
TARFILE="${LOCALDIR}""$(hostname)"_"${BACKUPDATE}".tgz
# Encrypted backup file name
ENC_TARFILE="${TARFILE}.enc"
# Backup MySQL dump file name
#SQLFILE="${TEMPDIR}mysql_${BACKUPDATE}.sql"

log() {
    echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
    echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

# Check for list of mandatory binaries
check_commands() {
    # This section checks for all of the binaries used in the backup
    BINARIES=( cat cd du date dirname echo openssl pwd rm tar )
    
    # Iterate over the list of binaries, and if one isn't found, abort
    for BINARY in "${BINARIES[@]}"; do
        if [ ! "$(command -v "$BINARY")" ]; then
            log "$BINARY is not installed. Install it and try again"
            exit 1
        fi
    done

    # check gdrive command
    GDRIVE_COMMAND=false
    if [ "$(command -v "gdrive")" ]; then
        GDRIVE_COMMAND=true
    fi

    # check ftp command
    if ${FTP_FLG}; then
        if [ ! "$(command -v "ftp")" ]; then
            log "ftp is not installed. Install it and try again"
            exit 1
        fi
    fi
}

calculate_size() {
    local file_name=$1
    local file_size=$(du -h $file_name 2>/dev/null | awk '{print $1}')
    if [ "x${file_size}" = "x" ]; then
        echo "unknown"
    else
        echo "${file_size}"
    fi
}

# Backup MySQL databases
#mysql_backup() {
#    if [ -z ${MYSQL_ROOT_PASSWORD} ]; then
#        log "MySQL root password not set, MySQL backup skipped"
#    else
#        log "MySQL dump start"
#        mysql -u root -p"${MYSQL_ROOT_PASSWORD}" 2>/dev/null <<EOF
#exit
#EOF
#        if [ $? -ne 0 ]; then
#            log "MySQL root password is incorrect. Please check it and try again"
#            exit 1
#        fi
#    
#        if [ "${MYSQL_DATABASE_NAME[*]}" == "" ]; then
#            mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases > "${SQLFILE}" 2>/dev/null
#            if [ $? -ne 0 ]; then
#                log "MySQL all databases backup failed"
#                exit 1
#            fi
#            log "MySQL all databases dump file name: ${SQLFILE}"
#            #Add MySQL backup dump file to BACKUP list
#            BACKUP=(${BACKUP[*]} ${SQLFILE})
#        else
#            for db in ${MYSQL_DATABASE_NAME[*]}
#            do
#                unset DBFILE
#                DBFILE="${TEMPDIR}${db}_${BACKUPDATE}.sql"
#                mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" ${db} > "${DBFILE}" 2>/dev/null
#                if [ $? -ne 0 ]; then
#                    log "MySQL database name [${db}] backup failed, please check database name is correct and try again"
#                    exit 1
#                fi
#                log "MySQL database name [${db}] dump file name: ${DBFILE}"
#                #Add MySQL backup dump file to BACKUP list
#                BACKUP=(${BACKUP[*]} ${DBFILE})
#            done
#        fi
#        log "MySQL dump completed"
#    fi
#}

start_backup() {
    [ "${BACKUP[*]}" == "" ] && echo "Error: You must to modify the [$(basename $0)] config before run it!" && exit 1

    log "Tar backup file start"
    tar -zcPf ${TARFILE} ${BACKUP[*]}
    if [ $? -gt 1 ]; then
        log "Tar backup file failed"
        exit 1
    fi
    log "Tar backup file completed"

    # Encrypt tar file
    if ${ENCRYPTFLG}; then
        log "Encrypt backup file start"
        openssl enc -aes256 -in "${TARFILE}" -out "${ENC_TARFILE}" -pass pass:"${BACKUPPASS}" -md sha1
        log "Encrypt backup file completed"

        # Delete unencrypted tar
        log "Delete unencrypted tar file: ${TARFILE}"
        rm -f ${TARFILE}
    fi

    # Delete MySQL temporary dump file
#    for sql in `ls ${TEMPDIR}*.sql`
#    do
#        log "Delete MySQL temporary dump file: ${sql}"
#        rm -f ${sql}
#    done

    if ${ENCRYPTFLG}; then
        OUT_FILE="${ENC_TARFILE}"
    else
        OUT_FILE="${TARFILE}"
    fi
    log "File name: ${OUT_FILE}, File size: `calculate_size ${OUT_FILE}`"
}

# Transfer backup file to Google Drive
# If you want to install gdrive command, please visit website:
# https://github.com/prasmussen/gdrive
# or, of course, you can use the commands below to install it
# For x86_64: wget -O /usr/bin/gdrive http://dl.lamp.sh/files/gdrive-linux-x64; chmod +x /usr/bin/gdrive
# For i386: wget -O /usr/bin/gdrive http://dl.lamp.sh/files/gdrive-linux-386; chmod +x /usr/bin/gdrive
gdrive_upload() {
    if ${GDRIVE_COMMAND}; then
        log "Tranferring backup file to Google Drive"
        gdrive upload --no-progress ${OUT_FILE} >> ${LOGFILE}
        if [ $? -ne 0 ]; then
            log "Error: Tranferring backup file to Google Drive failed"
            exit 1
        fi
        log "Tranferring backup file to Google Drive completed"
    fi
}

# Transferring backup file to FTP server
#ftp_upload() {
#    if ${FTP_FLG}; then
#        [ -z ${FTP_HOST} ] && log "Error: FTP_HOST can not be empty!" && exit 1
#        [ -z ${FTP_USER} ] && log "Error: FTP_USER can not be empty!" && exit 1
#        [ -z ${FTP_PASS} ] && log "Error: FTP_PASS can not be empty!" && exit 1
#        [ -z ${FTP_DIR} ] && log "Error: FTP_DIR can not be empty!" && exit 1
#
#        local FTP_OUT_FILE=$(basename ${OUT_FILE})
#        log "Tranferring backup file to FTP server"
#        ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
#user $FTP_USER $FTP_PASS
#binary
#lcd $LOCALDIR
#cd $FTP_DIR
#put $FTP_OUT_FILE
#quit
#EOF
#        log "Tranferring backup file to FTP server completed"
#    fi
#}

# Get file date
get_file_date() {
    #Approximate a 30-day month and 365-day year
    DAYS=$(( $((10#${YEAR}*365)) + $((10#${MONTH}*30)) + $((10#${DAY})) ))

    unset FILEYEAR FILEMONTH FILEDAY FILEDAYS FILEAGE
    FILEYEAR=$(echo "$1" | cut -d_ -f2 | cut -c 1-4)
    FILEMONTH=$(echo "$1" | cut -d_ -f2 | cut -c 5-6)
    FILEDAY=$(echo "$1" | cut -d_ -f2 | cut -c 7-8)

    if [[ "${FILEYEAR}" && "${FILEMONTH}" && "${FILEDAY}" ]]; then
        #Approximate a 30-day month and 365-day year
        FILEDAYS=$(( $((10#${FILEYEAR}*365)) + $((10#${FILEMONTH}*30)) + $((10#${FILEDAY})) ))
        FILEAGE=$(( 10#${DAYS} - 10#${FILEDAYS} ))
        return 0
    fi

    return 1
}

# Delete Google Drive's old backup file
delete_gdrive_file() {
    local FILENAME=$1
    if ${DELETE_REMOTE_FILE_FLG} && ${GDRIVE_COMMAND}; then
        local FILEID=$(gdrive list -q "name = '${FILENAME}'" --no-header | awk '{print $1}')
        if [ -n "${FILEID}" ]; then
            gdrive delete ${FILEID} >> ${LOGFILE}
            log "Google Drive's old backup file name: ${FILENAME} has been deleted"
        fi
    fi
}

# Delete FTP server's old backup file
#delete_ftp_file() {
#    local FILENAME=$1
#    if ${DELETE_REMOTE_FILE_FLG} && ${FTP_FLG}; then
#        ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
#user $FTP_USER $FTP_PASS
#cd $FTP_DIR
#del $FILENAME
#quit
#EOF
#        log "FTP server's old backup file name: ${FILENAME} has been deleted"
#    fi
#}

# Clean up old file
clean_up_files() {
    cd ${LOCALDIR} || exit

    if ${ENCRYPTFLG}; then
        LS=($(ls *.enc))
    else
        LS=($(ls *.tgz))
    fi

    for f in ${LS[@]}
    do
        get_file_date ${f}
        if [ $? == 0 ]; then
            if [[ ${FILEAGE} -gt ${LOCALAGEDAILIES} ]]; then
                rm -f ${f}
                log "Old backup file name: ${f} has been deleted"
                delete_gdrive_file ${f}
                # Disabled: delete_ftp_file is commented out along with the other FTP functions
                #delete_ftp_file ${f}
            fi
        fi
    done
}

# Main progress
STARTTIME=$(date +%s)

# Check if the backup folders exist and are writeable
if [ ! -d "${LOCALDIR}" ]; then
    mkdir -p ${LOCALDIR}
fi
if [ ! -d "${TEMPDIR}" ]; then
    mkdir -p ${TEMPDIR}
fi

log "Backup progress start"
check_commands
#mysql_backup
start_backup
log "Backup progress complete"

log "Upload progress start"
gdrive_upload
#ftp_upload
log "Upload progress complete"

clean_up_files

ENDTIME=$(date +%s)
DURATION=$((ENDTIME - STARTTIME))
log "All done"
log "Backup and transfer completed in ${DURATION} seconds"

Notes:

  • Change the password below to one of your own.
    BACKUPPASS="11111111"
  • Backups, the temporary folder, and the log all live in the directory below; inspect them there as needed.
    /root/backups/
  • The backup target is the default Ghost directory; change it to suit your setup (see the sketch after this list for adding more targets).
    BACKUP[0]="/var/www/ghost"
  • By default, local backups are kept for 14 days and remote files are never deleted; change as needed.
    LOCALAGEDAILIES="14"
    DELETE_REMOTE_FILE_FLG=false
  • FTP upload is disabled; enable it if you need it.
    FTP_FLG=false
  • Since this blog uses SQLite, all MySQL-related parts are commented out; adjust them if you need MySQL backups.
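
If you want the archive to cover more than the Ghost directory, append entries to the BACKUP array. A minimal sketch, assuming typical nginx and Let's Encrypt locations (the two extra paths are illustrative, not part of the original config):

BACKUP[0]="/var/www/ghost"
BACKUP[1]="/etc/nginx/sites-available"   # assumed nginx vhost configs
BACKUP[2]="/etc/letsencrypt"             # assumed TLS certificates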

Preparation

After saving backup.sh, a few more steps are needed before running the first backup:

  • Install OpenSSL.
    sudo apt-get install openssl
  • Make the script executable.
    sudo chmod +x backup.sh
  • Install gdrive.
    sudo wget -O /usr/bin/gdrive http://dl.lamp.sh/files/gdrive-linux-x64
    sudo chmod +x /usr/bin/gdrive
  • Link your Google Drive account (a quick sanity check follows this list).
    gdrive about
    gdrive prints a link; open it in your local browser, sign in to your Google account, then paste the returned string back into the gdrive prompt to authorize gdrive to access your Drive.
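
To confirm the authorization succeeded, list a few files from your Drive; this sanity check uses the same gdrive list command the backup script itself relies on:

gdrive list --no-header | head -n 5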

Testing

Once the steps above are complete, run the first backup with sudo ./backup.sh and check the log output for errors.
Normally the backup should succeed at this point.
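
It is also worth confirming that an encrypted backup can actually be restored. Below is a minimal sketch based on the decryption command from the script's own comments; the .enc file name is a hypothetical example, so substitute your actual backup file and password:

# Decrypt the backup (adjust the file name and password to your own)
openssl enc -aes256 -d -md sha1 \
    -in /root/backups/myhost_20190101033000.tgz.enc \
    -out /tmp/decrypted_backup.tgz \
    -pass pass:"your-backup-password"

# List the archive contents to verify the backup is usable
tar -tzf /tmp/decrypted_backup.tgz | head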

Going Live

  • Disable the on-screen log output by changing the log function to:
log() {
 #   echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
    echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}
  • Next, copy the script and the gdrive credential directory into place.
    sudo cp ./backup.sh /root/backup.sh
    sudo cp -r ./.gdrive /root/.gdrive

  • Then schedule the backup to run automatically every day.
    sudo nano /etc/crontab

  • Edit the crontab file along the lines below and save it.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:~/bin
MAILTO=root
HOME=/root

# m h dom mon dow user  command
30 3    * * *   root    bash /root/backup.sh

With this in place, every day at 3:30 a.m. the server will compress and encrypt the Ghost blog's files and upload them to Google Drive, meeting our goals of automatic backup and off-site availability of the backup files.
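
To verify that cron actually fired, check the script's log after the scheduled time; on Debian/Ubuntu systems you can also look for the entry in syslog (log locations assumed for Debian-family systems):

# The script's own log
tail -n 20 /root/backups/backup.log

# Debian/Ubuntu record cron activity in syslog
grep CRON /var/log/syslog | tail -n 5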
