Bash Scripting for Server Automation

By Anurag Singh

Updated on May 24, 2025

In this tutorial, learn Bash scripting for server automation: rotating logs, cleaning up old files, dumping PostgreSQL databases to S3, and alerting the team.

When it comes to managing servers at scale, manual intervention quickly becomes a bottleneck. Bash scripting empowers us to automate routine tasks—from provisioning new users and rotating logs to orchestrating backups and deployments—so we can focus on solving higher-level challenges. In this guide, we’ll take a deep dive into the foundational concepts and best practices that underpin effective Bash scripts for server automation, ensuring our scripts are reliable, maintainable, and secure.

Why Bash?

Bash is the de facto standard shell on most Linux distributions and macOS. It’s lightweight, widely supported, and integrates seamlessly with Unix utilities. By learning Bash scripting, we unlock the ability to glue together powerful command-line tools (e.g., awk, sed, rsync, ssh) into reusable scripts that run unattended.
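
As a quick illustration (the hostname and log path are placeholders), a single pipeline combining ssh, awk, sort, and uniq can summarize the busiest client IPs in a remote access log:

ssh deploy@web01 "cat /var/log/nginx/access.log" \
  | awk '{print $1}' | sort | uniq -c | sort -rn | head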

Prerequisites

Before proceeding, make sure you have a Linux server (or macOS machine) with Bash installed, a user account with permission to run the commands being automated, and basic familiarity with the command line.

Getting Started with a Script Header

Every Bash script should begin with a shebang line that specifies the interpreter:

#!/usr/bin/env bash

Using env locates Bash through the user's PATH instead of hard-coding a path such as /bin/bash, which matters when multiple Bash versions coexist. Immediately after the shebang, we set safety flags:

set -euo pipefail
IFS=$'\n\t'

  • -e: Exit on any command failure
  • -u: Treat unset variables as errors
  • -o pipefail: Catch failures in pipelines
  • IFS: Define a safe internal field separator

These flags help us fail fast and avoid common pitfalls like unhandled errors or word-splitting surprises.
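
To see the safety flags in action, here is a minimal sketch (the variable names are invented for illustration): with -u, a typo in a variable name aborts the script instead of silently expanding to an empty string:

#!/usr/bin/env bash
set -euo pipefail

GREETING="hello"
echo "${GREETNG}"   # Typo: aborts with "GREETNG: unbound variable" instead of printing nothing
echo "This line is never reached"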

Defining and Using Variables Securely

Variables let us parameterize scripts. We should follow naming conventions (uppercase, underscores) and always quote expansions:

BACKUP_DIR="/var/backups"
TIMESTAMP=$(date +'%Y-%m-%d_%H%M')
TARGET_FILE="${BACKUP_DIR}/backup_${TIMESTAMP}.tar.gz"

Key points:

  • Wrap command substitutions in $(…) instead of backticks for readability and nesting.
  • Always quote "${VARIABLE}" to prevent word splitting or globbing (see the example below).
  • Use braces (${…}) when appending text or ensuring correct expansion.
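
To see why quoting matters, consider a path containing a space (the filename below is purely illustrative):

FILE_NAME="/tmp/monthly report.txt"

ls -l ${FILE_NAME}     # Unquoted: word-splits into /tmp/monthly and report.txt
ls -l "${FILE_NAME}"   # Quoted: passes the single intended path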

Reading Command-Line Arguments

We often need to pass parameters to scripts. Bash offers $1, $2, … or "$@" for all arguments:

if [[ $# -lt 2 ]]; then
  echo "Usage: $0 <source_dir> <destination_dir>"
  exit 1
fi

SRC_DIR="$1"
DST_DIR="$2"

For more robust parsing of flags like -h or -v, we can use getopts (note that getopts handles short options only; long options such as --help require manual parsing or GNU getopt):

while getopts ":hvd:" opt; do
  case ${opt} in
    h ) echo "Usage: …"; exit 0 ;;
    v ) VERBOSE=true ;;
    d ) DST_DIR="${OPTARG}" ;;
    \? ) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
  esac
done
shift $((OPTIND -1))
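
With this parsing in place, an invocation might look like the following (the script name and paths are placeholders); -v turns on verbose output, -d overrides the destination, and the remaining argument is the positional source directory:

./sync_backup.sh -v -d /srv/backups /var/www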

Control Flow: Conditions and Loops

Decision-making in Bash is straightforward:

# If-else
if [[ -d "${SRC_DIR}" ]]; then
  echo "Source exists"
else
  echo "Source missing" >&2
  exit 1
fi

# For loop over files
for file in "${SRC_DIR}"/*; do
  echo "Processing ${file}"
done

# While loop reading stdin safely
while IFS= read -r line; do
  echo "Line: ${line}"
done < "${SRC_DIR}/list.txt"

By combining conditions and loops, we can traverse directories, handle retries, or batch-process resources.
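
As one sketch of the retry idea (the command, delay, and retry limit are placeholders), an until loop with a counter can reattempt a flaky operation before giving up:

MAX_RETRIES=3
attempt=1
until rsync -az "${SRC_DIR}/" "${DST_DIR}/"; do
  if (( attempt >= MAX_RETRIES )); then
    echo "rsync failed after ${MAX_RETRIES} attempts" >&2
    exit 1
  fi
  echo "Attempt ${attempt} failed; retrying in 5 seconds" >&2
  attempt=$((attempt + 1))
  sleep 5
done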

Encapsulating Logic in Functions

Functions promote reuse and clarity:

log() {
  local level="$1"; shift
  printf '[%s] %s\n' "${level}" "$*"
}

backup() {
  local src="$1" dst="$2"
  tar czf "${dst}" -C "${src}" . || { log ERROR "Backup failed"; exit 1; }
  log INFO "Backup created at ${dst}"
}

Best practices for functions:

  • Use local to limit variable scope
  • Return meaningful exit codes (0 success, non-zero failure); see the sketch after this list
  • Write one function per logical task
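
As a small sketch of the exit-code convention (the health check is just an illustration), the caller can branch directly on a function's return status:

service_healthy() {
  local name="$1"
  systemctl is-active --quiet "${name}"  # Returns 0 if the unit is active, non-zero otherwise
}

if service_healthy nginx; then
  log INFO "nginx is running"
else
  log WARN "nginx is down; attempting restart"
  sudo systemctl restart nginx
fi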

Error Handling and Cleanup with Traps

To ensure our scripts clean up temporary files or roll back partial changes, we use trap:

TEMP_DIR=$(mktemp -d)
cleanup() {
  rm -rf "${TEMP_DIR}"
  log INFO "Cleaned up temporary files"
}
trap cleanup EXIT

# … do work in "${TEMP_DIR}" …

  • trap … EXIT runs regardless of script exit status. For more granular control (e.g., on SIGINT), list specific signals: trap cleanup SIGINT SIGTERM.
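
If we also want to know where a failure occurred, a small extension (using the log helper defined earlier) is an ERR trap that reports the failing line; adding set -E makes the trap fire inside functions as well:

set -E
trap 'log ERROR "Command failed at line ${LINENO}"' ERR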

Logging, Debugging, and Dry-Run Modes

Clear logs help diagnose failures on remote servers:

VERBOSE=false
log() {
  local level="$1"; shift
  local timestamp=$(date +'%F %T')
  echo "[${timestamp}] [${level}] $*"
}

if [[ "${VERBOSE}" == true ]]; then
  set -x  # Echo commands as they run
fi

A dry-run mode can simulate actions without making changes:

DRY_RUN=false
run_cmd() {
  local cmd="$*"
  if [[ "${DRY_RUN}" == true ]]; then
    log INFO "Would run: ${cmd}"
  else
    eval "${cmd}"
  fi
}

Scheduling with Cron or Systemd Timers

Once our script is rock-solid, we can schedule it:

Cron: Edit crontab -e and add:

0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

This runs daily at 2 AM, appending logs for auditing.

Systemd Timer (modern alternative):

Create /etc/systemd/system/backup.service:

nano /etc/systemd/system/backup.service

Add the following content:

[Unit]
Description=Daily Backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

Create /etc/systemd/system/backup.timer:

[Unit]
Description=Run backup.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable and start:

sudo systemctl enable --now backup.timer

Systemd timers provide better logging (runs are captured in the journal), catch-up of missed runs via Persistent=true, and finer control over dependencies compared to cron.
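
Once the timer is active, standard systemd commands let us verify the schedule and read the logs from past runs:

systemctl list-timers backup.timer          # Shows last and next trigger times
journalctl -u backup.service --since today  # Output from today's runs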

Real-World Example: Automating Log Rotation and Cleanup

Let’s tie everything together by rotating web server logs and deleting files older than 30 days:

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

LOG_DIR="/var/log/nginx"
ARCHIVE_DIR="/var/backups/nginx_logs"
DAYS_TO_KEEP=30

log() {
  local lvl="$1"; shift
  echo "[$(date +'%F %T')] [${lvl}] $*"
}

rotate_logs() {
  mkdir -p "${ARCHIVE_DIR}"
  for logfile in "${LOG_DIR}"/*.log; do
    [[ -e "${logfile}" ]] || continue  # Skip if the glob matched no files
    local base=$(basename "${logfile}")
    local archive="${ARCHIVE_DIR}/${base}-$(date +'%Y%m%d').gz"
    log INFO "Rotating ${logfile} to ${archive}"
    gzip -c "${logfile}" > "${archive}"
    : > "${logfile}"  # Truncate original
  done
}

cleanup_old() {
  log INFO "Deleting archives older than ${DAYS_TO_KEEP} days"
  find "${ARCHIVE_DIR}" -type f -mtime +"${DAYS_TO_KEEP}" -print -delete
}

rotate_logs
cleanup_old

We’ve combined variables, loops, functions, logging, and safe flags into a compact script that can be scheduled daily.

Best Practices and Final Tips

  • Use ShellCheck: Run shellcheck ./cleanup.sh (or cat ./cleanup.sh | shellcheck - to read from stdin), or install the ShellCheck plugin for your editor, to catch common mistakes (unquoted variables, deprecated syntax, etc.).
  • Enable Strict Mode: Always include set -euo pipefail and a safe IFS.
  • Document Your Script: Add comments describing overall purpose, parameters, and example usage.
  • Version Control: Store scripts in Git to track changes and collaborate.
  • Limit External Dependencies: Rely on standard utilities or check for prerequisite commands at startup (see the sketch after this list).
  • Secure Secrets: Never hard-code passwords. Use environment variables or a secrets manager.
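
For the dependency check, one minimal sketch (the command list is only an example) verifies each required tool with command -v before doing any work:

for cmd in pg_dump gzip aws mail; do
  command -v "${cmd}" >/dev/null 2>&1 \
    || { echo "Missing required command: ${cmd}" >&2; exit 1; }
done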

Real-World Example: Automating PostgreSQL database backup

In many environments, regularly backing up critical data to a remote location is essential for disaster recovery. Let’s see how we can automate PostgreSQL database dumps, upload them to an S3-compatible bucket, and notify our team on completion.

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

# Configuration
DB_NAME="myapp_db"
DB_USER="backup_user"
BACKUP_DIR="/var/backups/db"
TIMESTAMP=$(date +'%Y-%m-%d_%H%M')
DUMP_FILE="${BACKUP_DIR}/${DB_NAME}_${TIMESTAMP}.sql.gz"
S3_BUCKET="s3://our-backups-bucket/databases"
NOTIFY_EMAIL="ops-team@example.com"

# Logger function
log() {
  local lvl="$1"; shift
  echo "[$(date +'%F %T')] [${lvl}] $*"
}

# Ensure backup directory exists
prepare() {
  mkdir -p "${BACKUP_DIR}"
  log INFO "Backup directory ready at ${BACKUP_DIR}"
}

# Dump and compress the database
dump_db() {
  log INFO "Starting dump of ${DB_NAME}"
  PGPASSWORD="${PGPASS:-}" pg_dump -U "${DB_USER}" "${DB_NAME}" | gzip > "${DUMP_FILE}"
  log INFO "Database dump created at ${DUMP_FILE}"
}

# Upload to remote storage
upload_s3() {
  log INFO "Uploading ${DUMP_FILE} to ${S3_BUCKET}"
  aws s3 cp "${DUMP_FILE}" "${S3_BUCKET}/" \
    || { log ERROR "Upload failed"; exit 1; }
  log INFO "Upload successful"
}

# Send notification email
notify_team() {
  SUBJECT="Backup Complete: ${DB_NAME} at ${TIMESTAMP}"
  BODY="The backup for database ${DB_NAME} completed successfully and was uploaded to ${S3_BUCKET}/${DB_NAME}_${TIMESTAMP}.sql.gz"
  echo "${BODY}" | mail -s "${SUBJECT}" "${NOTIFY_EMAIL}"
  log INFO "Notification sent to ${NOTIFY_EMAIL}"
}

# Cleanup old backups (older than 7 days)
cleanup_old() {
  log INFO "Removing dumps older than 7 days in ${BACKUP_DIR}"
  find "${BACKUP_DIR}" -type f -name "${DB_NAME}_*.sql.gz" -mtime +7 -print -delete
}

# Main execution flow
main() {
  prepare
  dump_db
  upload_s3
  notify_team
  cleanup_old
  log INFO "Backup workflow completed"
}

# Run
main

How this works:

Preparation
We create our local backup directory if it doesn’t exist, ensuring a safe place for dumps.

Database Dump
Using pg_dump with gzip compression, we export the database securely. We rely on a dedicated backup_user and avoid exposing superuser credentials.
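
As an alternative to passing the password through an environment variable, PostgreSQL can read credentials from a ~/.pgpass file (the values below are placeholders); the file must be owned by the user running the backup and readable only by them:

# ~/.pgpass  (chmod 600)
localhost:5432:myapp_db:backup_user:REPLACE_WITH_PASSWORD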

Remote Upload
We leverage the AWS CLI to push our compressed dump to S3, with error handling that aborts on failure.

Notification
A simple mail alert keeps the operations team informed without manual checks.

Retention Policy
By pruning dumps older than a week, we control storage costs and comply with our retention requirements.

With this script scheduled via cron or a systemd timer, we offload routine backups entirely to automation—freeing us to focus on scaling and feature delivery, confident that our data is safely archived offsite.
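
For instance, assuming the script is installed as /usr/local/bin/db_backup.sh (the path and schedule are placeholders), a nightly cron entry could look like this:

30 3 * * * /usr/local/bin/db_backup.sh >> /var/log/db_backup.log 2>&1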

In this tutorial, we've learned Bash scripting for server automation. By mastering these concepts, we equip ourselves with a versatile toolkit for automating virtually any server task. As our infrastructure grows, well-crafted Bash scripts become the backbone of reliable, repeatable operations, so let's embrace automation and keep our servers humming smoothly!