📒 The Bash Script Template I Wish I Had 5 Years Ago

Bash · CI/CD · DevOps · AWS

Monday, October 6, 2025

We've all been there. You write a Bash script on a Tuesday afternoon. It works perfectly. You deploy it to production, pat yourself on the back, and move on.

Then Friday rolls around and suddenly your script is deleting the wrong files, or worse—silently failing and pretending everything is fine.

I learned this the hard way. After one too many 3 AM debugging sessions, I finally sat down and built myself a template. Not because I'm organized (I'm not), but because I was tired of fixing the same mistakes over and over.

Here's what I wish someone had told me when I started writing Bash scripts.

1- Start with the safety net

The first three lines of every script I write now look like this:

#!/usr/bin/env bash
# shellcheck shell=bash

set -euo pipefail

set -E

The # shellcheck shell=bash line tells shellcheck what it's dealing with. Without it, shellcheck can get confused and throw false positives.

That set -euo pipefail line? It's basically saying "please fail loudly if something goes wrong." Without it, your script will happily continue running even after a command fails. I once spent an hour debugging why my script wasn't working, only to realize it had errored out on line 3 and kept going anyway.

The -e exits on errors. The -u catches undefined variables (because $BUCKET_NAM is not the same as $BUCKET_NAME). And pipefail makes sure errors in pipes don't get swallowed.

The set -E is the less famous sibling that makes sure your ERR trap actually works inside functions. Without it, a function can fail silently even though you set up error handling.
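If you want to see the difference for yourself, here's a throwaway demo (not part of the template): it runs the same failing function in a child bash with and without `-E` and shows whether the ERR trap fires. `err_trap_demo` and `boom` are names I made up for the demo.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo: the ERR trap fires inside a function only when -E (errtrace) is set.
err_trap_demo() {   # $1 = "yes" to add -E
  local flags="-eu"
  if [[ "$1" == "yes" ]]; then flags="-euE"; fi
  bash "$flags" -c '
    trap "echo ERR-trap-fired" ERR
    boom() { false; }
    boom
  ' 2>&1 || true
}

echo "without -E: [$(err_trap_demo no)]"   # trap stays silent
echo "with    -E: [$(err_trap_demo yes)]"  # trap fires before the script dies
```

Without `-E`, the child script still exits with an error, but the trap never gets a chance to tell you about it.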

2- Know what you're running

I started adding metadata to my scripts after I found a year-old script in a repo and had no idea what version it was or who wrote it. Now every script starts with:

# === Metadata ===
SCRIPT_NAME="$(basename "$0")"
readonly SCRIPT_NAME
readonly SCRIPT_VERSION="1.0.0"
readonly SCRIPT_AUTHOR="Osama"

It sounds boring, but when you're maintaining 20+ scripts across different projects, this little header becomes your best friend.

3- Tell people what they're looking at

Right after metadata, add a short description of what the script does and what it needs:

# === Description ===
# ------------------------------------------------------------
# Deploys application to S3 and invalidates CloudFront cache
# Args: -e ENV -b BRANCH -r REGION -p PROFILE [-n] [-d]
# ------------------------------------------------------------

Future you will thank present you for this. Especially when you're juggling five different scripts and trying to remember which one does what.

4- Organize your variables like you organize your kitchen

I split mine into two sections:

# === Main Vars ===
BUCKET_NAME="my-bucket"
REGION="${REGION:-}"
PROFILE="${PROFILE:-}"
ENV="${ENV:-}"
BRANCH="${BRANCH:-}"
DOMAIN_NAME="osamalabs.com"

# === Helper Vars ===
DRY_RUN=false
DEBUG=false
TMP_DIR="$(mktemp -d -t "${SCRIPT_NAME:-script}.XXXXXX")"

Main variables are the ones that actually do things. Helper variables control how the script behaves. It's a small thing, but it makes scripts way easier to scan when you come back to them six months later.

5- Give yourself good logging

I can't count how many times I've stared at a wall of output trying to figure out what went wrong. Now I use simple formatting functions:

# === Styling Functions ===
info() { printf "[INFO] %s\n" "$*"; }
ok() { printf "[ OK ] %s\n" "$*"; }
warn() { printf "[WARN] %s\n" "$*"; }
err() { printf "[ERR ] %s\n" "$*" >&2; }
section() { printf "\n== %s ==\n" "$*"; }

The output looks clean, and more importantly, I can grep for [ERR ] when things go sideways. Errors go to stderr (>&2), which is how it should be.
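A natural extension (my addition, not part of the template above) is a `debug()` that only speaks when `DEBUG=true`, which pairs with the `-d` flag later on. Like `err()`, it writes to stderr so it never pollutes output you might be capturing.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical debug() helper: prints only when DEBUG=true, always to stderr.
DEBUG="${DEBUG:-false}"

debug() {
  if [[ "$DEBUG" == true ]]; then
    printf "[DBG ] %s\n" "$*" >&2
  fi
}

DEBUG=true
debug "request payload built"   # printed
DEBUG=false
debug "request payload built"   # silent
```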

6- Add a usage function

People (including future you) need to know how to run your script:

# === Usage Function ===
usage() {
  cat <<EOF
Usage: $0 -e ENV -b BRANCH -r REGION -p PROFILE [-n] [-d]

Options:
  -e ENV       Environment (dev, test, prod)
  -b BRANCH    Git branch (main, feature-*)
  -r REGION    AWS region (eu-central-1)
  -p PROFILE   AWS profile (production)
  -n           Dry run: simulate actions without making changes
  -d           Enable verbose output
  -v           Show script version
  -h           Show this help message

Example:
  $0 -e dev -b main -n -d   
EOF
  exit 1
}

7- Use getopts, not guesswork

For the longest time I just used positional arguments. Then I had to remember: is environment the first argument or the second? Now I use getopts:

# === Argument Parsing ===
while getopts ":e:b:r:p:ndvh" opt; do
  case $opt in
    e) ENV="$OPTARG" ;;
    b) BRANCH="$OPTARG" ;;
    r) REGION="$OPTARG" ;;
    p) PROFILE="$OPTARG" ;;
    n) DRY_RUN=true ;;
    d) DEBUG=true ;;
    v) echo "$SCRIPT_VERSION"; exit 0 ;;
    h) usage ;;
    \?) err "Unknown option: -$OPTARG"; usage ;;
    :)  err "Option -$OPTARG requires an argument"; usage ;;
  esac
done
shift $((OPTIND - 1))

# Validate required args
[[ -z "${ENV:-}" ]] && { err "ENV is required (-e)"; exit 1; }
[[ -z "${BRANCH:-}" ]] && { err "BRANCH is required (-b)"; exit 1; }
[[ -z "${REGION:-}" ]] && { err "REGION is required (-r)"; exit 1; }
[[ -z "${PROFILE:-}" ]] && { err "PROFILE is required (-p)"; exit 1; }

# Optional: validate shapes
case "$ENV" in dev|test|prod) :;; *) err "ENV must be dev|test|prod"; exit 2;; esac

Self-documenting and way harder to mess up.
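One refinement worth knowing: `OPTIND` is a global, so if you ever wrap parsing in a function (handy for sourcing or testing), declare `OPTIND` local or a second call will resume where the first left off. A minimal sketch with just two of the flags; `parse_args` is my name for it:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: getopts inside a function. `local OPTIND` makes it re-runnable.
parse_args() {
  local OPTIND opt env="" dry_run=false
  while getopts ":e:n" opt; do
    case $opt in
      e) env="$OPTARG" ;;
      n) dry_run=true ;;
      \?) echo "Unknown option: -$OPTARG" >&2; return 2 ;;
      :)  echo "Option -$OPTARG requires an argument" >&2; return 2 ;;
    esac
  done
  echo "env=$env dry_run=$dry_run"
}

parse_args -e dev -n   # → env=dev dry_run=true
parse_args -e prod     # → env=prod dry_run=false (OPTIND reset worked)
```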

8- Keep your code DRY with helper functions

If you're typing the same command over and over, wrap it in a helper:

# === Helper Functions ===
aws_cmd() {
  aws --profile "$PROFILE" --region "$REGION" --no-cli-pager "$@"
}

req() {
  curl -fsS --retry 3 --retry-all-errors --max-time 25 -L "$@"
}

run_cmd() {
  if $DRY_RUN; then
    info "[DRY-RUN] $*"
  else
    "$@"
  fi
}

Now instead of typing aws --profile production --region eu-central-1 --no-cli-pager fifty times, you just write aws_cmd (and run_cmd aws_cmd when you want the dry-run guard on top). Your fingers will thank you, and the script becomes way more readable.

Note: The --no-cli-pager option disables output pagination in the AWS CLI. Without it, some commands may open a pager (like less) to display lengthy output, which can interrupt your workflow. This option is useful for scripts or when you want to process output without interruptions.

Adding a dry-run mode lets you test your script without actually doing anything destructive. It's like the safety on a gun: you hope you never need it, but you're glad it's there.
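To make the dry-run behavior concrete, here's the pattern in isolation, using mkdir/rm as stand-ins for the AWS calls:

```shell
#!/usr/bin/env bash
set -euo pipefail

info() { printf "[INFO] %s\n" "$*"; }

DRY_RUN=true
run_cmd() {
  if $DRY_RUN; then
    info "[DRY-RUN] $*"    # log what WOULD run, run nothing
  else
    "$@"                   # execute the command verbatim
  fi
}

run_cmd rm -rf /tmp/deploy-staging   # prints: [INFO] [DRY-RUN] rm -rf /tmp/deploy-staging
DRY_RUN=false
run_cmd mkdir -p /tmp/deploy-staging # actually runs
run_cmd rm -rf /tmp/deploy-staging
```

One caveat: the logged line uses `$*`, which flattens quoting, so a dry-run line containing arguments with spaces isn't always copy-paste safe.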

9- Make cleanup automatic: Even Failures Deserve a Happy Ending!

Here's a pattern that saved me countless times:

# === Cleanup & Traps ===
cleanup() {
  info "Cleaning up temporary resources..."
  if [[ -n "${TMP_DIR:-}" && -d "$TMP_DIR" ]]; then
    rm -rf -- "$TMP_DIR"
    # Delete test objects, whatever you created
  fi
}

trap 'rc=$?; cleanup; exit $rc' EXIT
trap 'err "Error on line $LINENO: $BASH_COMMAND"' ERR
trap 'warn "Interrupted"; exit 130' INT
trap 'warn "Terminated"; exit 143' TERM

The traps ensure cleanup runs no matter what, even if your script crashes halfway through (though nothing can save you from SIGKILL or other abrupt terminations).
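If you want to convince yourself, here's a throwaway demo: the child script aborts on `false`, yet the EXIT trap still runs.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo: the EXIT trap runs even when set -e kills the script mid-way.
trap_demo() {
  bash -c '
    set -euo pipefail
    trap "echo cleanup-ran" EXIT
    echo step-1
    false          # aborts here because of set -e
    echo step-2    # never reached
  ' 2>&1 || true
}

trap_demo   # prints: step-1, then cleanup-ran (but never step-2)
```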

10- Check before you wreck

Preflight checks are non-negotiable now:

# === Preflight Checks ===
section "Preflight checks"

# Check required commands
command -v aws >/dev/null 2>&1 || { err "AWS CLI not found"; exit 1; }
command -v jq >/dev/null 2>&1 || { err "jq not found"; exit 1; }
ok "Required commands available"

# Validate required variables
[[ -z "${ENV:-}" ]] && { err "ENV is required"; exit 1; }
[[ -z "${BUCKET_NAME:-}" ]] && { err "BUCKET_NAME not set"; exit 1; }
ok "Required variables set"

# Verify permissions
if ! aws_cmd sts get-caller-identity >/dev/null 2>&1; then
  err "AWS credentials invalid or expired"
  exit 1
fi
ok "AWS credentials valid"

# Check resources exist
if ! aws_cmd s3api head-bucket --bucket "$BUCKET_NAME" >/dev/null 2>&1; then
  err "Bucket '$BUCKET_NAME' not accessible"
  exit 1
fi
ok "Bucket is reachable: $BUCKET_NAME"

This is where I catch 80% of problems before they become problems. Missing dependencies? Invalid credentials? Bucket doesn't exist? Catch 'em all!

If the preflight checks pass, the script will probably work. If they don't, I know immediately what's wrong.

It feels like extra work, but catching problems early beats debugging them later. Always.
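When the dependency list grows past two or three tools, a small loop keeps the checks DRY and reports every missing command at once instead of stopping at the first. A sketch (`require_cmds` is my name for it; in the real script you'd call it with aws and jq):

```shell
#!/usr/bin/env bash
set -euo pipefail

err() { printf "[ERR ] %s\n" "$*" >&2; }
ok()  { printf "[ OK ] %s\n" "$*"; }

# Check every command, report all that are missing, fail if any were.
require_cmds() {
  local missing=0 cmd
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { err "Required command not found: $cmd"; missing=1; }
  done
  return "$missing"
}

require_cmds bash grep awk || exit 1
ok "Required commands available"
```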

11- Think idempotent

Your script should be safe to run twice. Check if things already exist before creating them:

# === Main Logic ===
section "Main Logic"

deploy_lambda() {
  if aws_cmd lambda get-function --function-name "$FUNCTION_NAME" >/dev/null 2>&1; then
    info "Function exists, updating code..."
    run_cmd aws_cmd lambda update-function-code \
      --function-name "$FUNCTION_NAME" \
      --zip-file "fileb://function.zip"
    ok "Lambda updated"
  else
    info "Function doesn't exist, creating..."
    run_cmd aws_cmd lambda create-function \
      --function-name "$FUNCTION_NAME" \
      --zip-file "fileb://function.zip" \
      --handler index.handler \
      --runtime nodejs20.x \
      --role "$LAMBDA_ROLE"
    ok "Lambda created"
  fi
}

This saved me when I accidentally ran a deployment script twice in a row. Instead of crashing, it just... worked.
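The same check-then-act shape works outside AWS too; a filesystem analogue makes the pattern obvious (`ensure_dir` is just an illustration, not part of the template):

```shell
#!/usr/bin/env bash
set -euo pipefail

info() { printf "[INFO] %s\n" "$*"; }

# Idempotent: running it twice is safe, and it tells you what it did.
ensure_dir() {
  local dir="$1"
  if [[ -d "$dir" ]]; then
    info "Exists, nothing to do: $dir"
  else
    mkdir -p -- "$dir"
    info "Created: $dir"
  fi
}

demo="$(mktemp -d)/releases"
ensure_dir "$demo"   # first run: creates
ensure_dir "$demo"   # second run: no-op
rm -rf -- "$(dirname "$demo")"
```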

12- Validate as you go

Preflight checks catch problems before you start. But what about checking that things actually worked?

# === Post-flight Checks ===
section "Post-flight Checks"

if aws_cmd s3 ls "s3://$BUCKET_NAME/index.html" >/dev/null 2>&1; then
  ok "Files uploaded successfully"
else
  err "Upload verification failed - index.html not found"
  exit 1
fi

info "Invalidating CloudFront cache..."
INVALIDATION_ID=$(aws_cmd cloudfront create-invalidation \
  --distribution-id "$DISTRIBUTION_ID" \
  --paths "/*" \
  --query 'Invalidation.Id' \
  --output text)

# Wait and verify
aws_cmd cloudfront wait invalidation-completed --distribution-id "$DISTRIBUTION_ID" --id "$INVALIDATION_ID"
ok "Cache invalidation completed: $INVALIDATION_ID"

I learned this after a deployment "succeeded" but the files were corrupted. The script exited cleanly, but the site was broken. Now I verify each critical step immediately after it runs.
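One cheap way to catch that kind of corruption is to verify content, not just existence. This hypothetical helper (not part of the template) compares a file against an expected md5sum; for an S3 deployment you could compare against the object's ETag from `s3api head-object`, keeping in mind the ETag equals the MD5 only for single-part, non-KMS uploads.

```shell
#!/usr/bin/env bash
set -euo pipefail

ok()  { printf "[ OK ] %s\n" "$*"; }
err() { printf "[ERR ] %s\n" "$*" >&2; }

verify_checksum() {   # $1 = file, $2 = expected md5 hex digest
  local actual
  actual="$(md5sum "$1" | awk '{print $1}')"
  if [[ "$actual" == "$2" ]]; then
    ok "Checksum verified: $1"
  else
    err "Checksum mismatch for $1 (expected $2, got $actual)"
    return 1
  fi
}

f="$(mktemp)"
printf 'hello\n' > "$f"
verify_checksum "$f" "b1946ac92492d2347c6235b4d2611184"
rm -f -- "$f"
```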

13- End with a summary

After everything's done, tell yourself what happened:

# === Analysis ===
section "Summary"

info "Environment: $ENV"
info "Branch: $BRANCH"
info "Bucket: $BUCKET_NAME"
info "Region: $REGION"
info "Files uploaded: $(aws_cmd s3 ls "s3://$BUCKET_NAME/" --recursive | wc -l)"
info "CloudFront invalidation: $INVALIDATION_ID"

ok "Deployment completed successfully at $(date -u +"%Y-%m-%d %H:%M:%S UTC")"
info "Site URL: https://$DOMAIN_NAME"

This is the part that gets saved in your CI logs. When someone asks "did the deployment work?" you can just paste this summary. No guessing, no digging through hundreds of log lines.

AWS scripts need special care

If you're working with AWS:

  • Never hardcode credentials
  • Always set region and profile as variables
  • Turn off the pager: export AWS_PAGER=""

That last one is subtle but important. The AWS CLI v2 tries to be helpful by piping output through a pager, which breaks your script when it's running in CI.

Run shellcheck

Just do it. Shellcheck catches so many silly mistakes:

shellcheck your-script.sh

Every time I think "this script is fine," shellcheck finds three things I missed. It's humbling and helpful in equal measure.

If you need to disable a warning, be explicit about why:

# shellcheck disable=SC2086  # Intentional word splitting for arguments

The template that keeps me sane

These days, every script I write follows the same structure:

#!/usr/bin/env bash
# shellcheck shell=bash
set -euo pipefail
set -E

# === Metadata ===
# === Description ===
# === Main Vars ===
# === Helper Vars ===
# === Styling Functions ===
# === Usage Function ===
# === Argument Parsing ===
# === Helper Functions ===
# === Cleanup & Traps ===
# === Preflight Checks ===
# === Main Logic ===
# === Post-flight Checks ===
# === Analysis ===

It sounds like a lot, but it's become muscle memory. And when something breaks at 2 AM, I know exactly where to look. The dividers are like chapter headings—they help me navigate the script without reading every line.

What I've learned

Good Bash scripts aren't about being clever. They're about being predictable. They fail loudly, clean up after themselves, and tell you what they're doing.

The best script is the one you can run six months from now and it just works. The one your teammate can read without asking you questions. The one that doesn't surprise you in production.

These practices didn't come from a book. They came from broken scripts, angry Slack messages, and that sinking feeling when you realize you just deleted something important.

But now? My scripts are boring, no more adrenaline rushes. They do what they say they'll do. They clean up their mess. They tell me when something's wrong.

And honestly, boring is exactly what you want in a Bash script.
