Complete Backup & Restore Guide: Linux Data Protection Strategies

This comprehensive guide covers everything from basic file backups to enterprise disaster recovery strategies. Learn what to back up, how to back it up, when to back up, and, most importantly, how to restore when disaster strikes.

1. Backup Fundamentals & Principles

Understand the core principles of data protection before implementing any backup strategy. These fundamentals ensure your backups are reliable and effective.

The 3-2-1 Backup Rule

Critical

What is the 3-2-1 rule?

3 Copies of your data: Original + 2 backups
2 Different Media: Hard drive + Cloud/Tape
1 Off-site Copy: Geographic separation

Why this matters:

Protects against multiple failure scenarios: hardware failure, theft, fire, ransomware, and human error. A single backup is never enough.

Implementation example (a minimal script sketch follows this list):

1. Primary: Live data on server
2. Local backup: External HDD/NAS (daily)
3. Cloud backup: AWS S3/Backblaze (weekly)
4. Off-site: Tape at different location (monthly)
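
As an illustration only, here is a minimal bash sketch of tiers 1-3 above (the monthly tape copy stays a manual/offline step). The paths, NAS mount point, and bucket name are placeholders, not recommendations:

#!/bin/bash
# 3-2-1 sketch: live data stays on the server (copy 1),
# a daily rsync goes to a local NAS (copy 2, second medium),
# and a weekly archive goes to cloud object storage (copy 3, off-site).
set -euo pipefail

DATA="/var/www /home /etc"            # live data (copy 1)
NAS="/mnt/nas/backups"                # local NAS mount (copy 2)
BUCKET="s3://example-backup-bucket"   # off-site cloud copy (copy 3)

# Daily: incremental sync to the NAS
rsync -a --delete $DATA "$NAS/daily/"

# Weekly (Sunday): compressed archive to the cloud
if [ "$(date +%u)" -eq 7 ]; then
    tar -czf "/tmp/offsite-$(date +%F).tar.gz" $DATA
    aws s3 cp "/tmp/offsite-$(date +%F).tar.gz" "$BUCKET/" --sse AES256
    rm -f "/tmp/offsite-$(date +%F).tar.gz"
fi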

Critical Warning

Having only local backups means you lose everything if your building burns down. Always maintain at least one geographically separated copy.

RPO vs RTO Explained

Intermediate

RPO - Recovery Point Objective

Definition: Maximum acceptable data loss measured in time.
Example: RPO of 1 hour means you can afford to lose up to 1 hour of data.
Implies: You need backups at least every hour.

RTO - Recovery Time Objective

Definition: Maximum acceptable downtime for restoration.
Example: RTO of 4 hours means system must be restored within 4 hours.
Implies: You need efficient, automated restore procedures.

Business impact:

E-commerce: RPO=minutes, RTO=hours (critical revenue loss)
Development server: RPO=1 day, RTO=1 day (acceptable delay)
Personal computer: RPO=1 week, RTO=2 days (low priority)

Quick Assessment Tool

Ask these questions to determine your RPO/RTO:

1. How much data can you afford to lose? (Time)
2. How long can your business operate without this data?
3. What is the financial impact per hour of downtime?
4. What regulatory/compliance requirements apply?

2. Essential Backup Tools & Commands

Master the fundamental Linux commands for creating reliable backups. Each tool has specific use cases and advantages.

tar - Tape Archive

Beginner
tar -czvf backup-$(date +%Y%m%d).tar.gz /path/to/backup

What each flag does:

c: Create new archive
z: Compress with gzip
v: Verbose (show progress)
f: File name follows
$(date +%Y%m%d): Auto-date in filename

When to use tar:

• Creating compressed archives of directories
• Long-term storage backups
• Transferring multiple files as one
• Simple, reliable file-level backups

Practical tar Examples

# Backup with progress and exclude patterns
tar --exclude='*.log' --exclude='*.tmp' \
    -czvf backup-$(date +%F).tar.gz \
    /home /etc /var/www

# Split large backup into multiple files
tar -czvf - /large/directory | split -b 2G - backup-part-

# Verify backup integrity after creation
tar -tzf backup.tar.gz | head -20

# Extract specific files from backup
tar -xzvf backup.tar.gz path/to/specific/file.txt

rsync - Remote Synchronization

Intermediate
rsync -avz --delete /source/ user@remote:/destination/

Why rsync is superior:

Incremental: Only copies changed files
Resumable: Can continue interrupted transfers
Bandwidth efficient: Compresses during transfer
Versatile: Local, remote, SSH, and more

Essential rsync flags:

-a: Archive mode (preserves permissions)
-v: Verbose output
-z: Compress during transfer
--delete: Delete extra files at destination
--progress: Show transfer progress
--exclude: Exclude patterns

Production rsync Script

#!/bin/bash
# rsync-backup.sh - Production backup script
SOURCE="/var/www /home /etc"
DEST="/backups/daily"
LOG="/var/log/backup-$(date +%Y%m%d).log"
EXCLUDE_FILE="/etc/backup-exclude.txt"

echo "Starting backup $(date)" >> $LOG

# Rotate first so daily.0 always holds the newest backup
rm -rf $DEST/daily.7
for i in {6..0}; do
    [ -d $DEST/daily.$i ] && mv $DEST/daily.$i $DEST/daily.$((i+1))
done

# Incremental copy: unchanged files are hard-linked against yesterday's run
rsync -avz \
    --delete \
    --progress \
    --exclude-from="$EXCLUDE_FILE" \
    --link-dest="../daily.1" \
    $SOURCE $DEST/daily.0/ 2>> $LOG

echo "Backup completed $(date)" >> $LOG

dd - Disk Duplicator

Advanced
dd if=/dev/sda of=/backup/sda-backup.img bs=4M status=progress

What dd does:

Creates an exact byte-for-byte copy of an entire disk or partition, including empty space, partition tables, and boot sectors.

When to use dd:

• Complete system imaging
• Disaster recovery preparation
• Disk cloning/migration
• Forensic analysis
• Boot disk creation

DANGER Warning!

dd is called "Data Destroyer" for a reason! Reversing the source and target can wipe your entire disk. Always double-check the if= (input) and of= (output) parameters.

Safe dd Practices

# 1. ALWAYS verify disk identifiers first
lsblk
fdisk -l

# 2. Use small test first
dd if=/dev/sda of=test.img bs=1M count=100

# 3. Verify the test (compare the first 100 MiB against the image)
cmp -n 100M /dev/sda test.img

# 4. Full backup with compression
dd if=/dev/sda bs=4M status=progress | gzip > sda-backup.img.gz

# 5. Restore: decompress the image back onto the disk
gzip -dc sda-backup.img.gz | dd of=/dev/sda bs=4M status=progress

3. Backup Strategies & Scheduling

Different data requires different backup frequencies and retention policies. Implement a tiered approach for optimal protection.

Grandfather-Father-Son (GFS) Strategy

Intermediate

How GFS works:

Son (Daily): Keep 7 daily backups
Father (Weekly): Keep 4 weekly backups
Grandfather (Monthly): Keep 12 monthly backups
Yearly: Keep 3-7 yearly backups

Advantages:

• Provides multiple recovery points over time
• Efficient storage usage
• Protects against gradual data corruption
• Meets compliance requirements

GFS Rotation Schedule

1. Daily Backups (Son)
Monday-Sunday: daily.0 to daily.6
Keep for 7 days, then discard or promote
2. Weekly Backups (Father)
Each Sunday's backup → weekly.0
Keep 4 weeks, then promote to monthly
3. Monthly Backups (Grandfather)
Last backup of each month → monthly.0
Keep 12 months, then promote to yearly
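
The promotion logic above can be scripted in a few lines. The following is a hedged sketch, not a drop-in tool: it assumes backup sets are directories named daily.N, weekly.N, and monthly.N under /backups (as in the rsync script earlier), that the daily job has already written today's copy to daily.0, and that hard-link copies via cp -al are acceptable:

#!/bin/bash
# gfs-promote.sh - illustrative GFS promotion (paths and naming are assumptions)
# Run after the daily backup job, so $BASE/daily.0 is today's newest copy.
BASE="/backups"

rotate() {                       # rotate <prefix> <max>, e.g. "rotate weekly 3"
    local prefix=$1 max=$2 i
    rm -rf "$BASE/$prefix.$max"
    for ((i = max - 1; i >= 0; i--)); do
        [ -d "$BASE/$prefix.$i" ] && mv "$BASE/$prefix.$i" "$BASE/$prefix.$((i+1))"
    done
    return 0
}

# Son -> Father: every Sunday, promote today's backup to the weekly set (keep 4)
if [ "$(date +%u)" -eq 7 ]; then
    rotate weekly 3
    cp -al "$BASE/daily.0" "$BASE/weekly.0"
fi

# Father -> Grandfather: on the last day of the month, promote to monthly (keep 12)
if [ "$(date -d tomorrow +%d)" = "01" ]; then
    rotate monthly 11
    cp -al "$BASE/daily.0" "$BASE/monthly.0"
fi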

Incremental vs Differential

Intermediate

Incremental Backup:

How it works: Backs up only changed files since last backup (any type)
Storage: Minimal - only changes
Restore: Requires full + all incrementals
Best for: Frequent backups, limited storage

Differential Backup:

How it works: Backs up changes since last full backup
Storage: Grows over time
Restore: Requires full + latest differential
Best for: Medium frequency, faster restore

Backup Type Comparison

# Full Backup (Weekly Sunday)
tar -czvf full-$(date +%Y%m%d).tar.gz /data
date "+%Y-%m-%d %H:%M:%S" > /var/backup/last-backup   # record when the last backup ran

# Incremental (Monday-Saturday)
# Only files changed since the last backup of ANY type (--newer-mtime / --after-date)
tar -czvf inc-$(date +%Y%m%d).tar.gz \
    --newer-mtime="$(cat /var/backup/last-backup)" \
    /data
date "+%Y-%m-%d %H:%M:%S" > /var/backup/last-backup

# Differential (Daily after full)
# Only files changed since the last FULL backup (taken on Sunday)
tar -czvf diff-$(date +%Y%m%d).tar.gz \
    --newer-mtime="last Sunday" \
    /data

Automated Scheduling with Cron

Beginner

Cron syntax explained:

Format: minute hour day month day-of-week command
* = any value
*/5 = every 5 units
1,3,5 = specific values
1-5 = range of values

Complete Backup Schedule

# Edit crontab: crontab -e

# Daily incremental at 2 AM
0 2 * * * /usr/local/bin/daily-backup.sh

# Weekly full backup Sunday at 3 AM
0 3 * * 0 /usr/local/bin/weekly-backup.sh

# Monthly backup 1st of month at 4 AM
0 4 1 * * /usr/local/bin/monthly-backup.sh

# Verify backups daily at 5 AM
0 5 * * * /usr/local/bin/verify-backups.sh

# Clean old backups Saturday at 6 AM
0 6 * * 6 /usr/local/bin/clean-backups.sh

4. Database Backup & Recovery

Databases require special handling for consistent backups. Learn application-aware backup techniques for MySQL, PostgreSQL, and MongoDB.

MySQL/MariaDB Backup

Intermediate
mysqldump -u root -p --all-databases --single-transaction > full-backup-$(date +%F).sql

Critical flags explained:

--single-transaction: Creates consistent backup without locking (InnoDB)
--routines: Includes stored procedures/functions
--triggers: Includes database triggers
--events: Includes scheduled events
--master-data: Includes binary log position (replication)

Production MySQL Backup Script

#!/bin/bash
# mysql-backup.sh - Production database backup
# Credentials are expected via ~/.my.cnf or socket authentication.
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

# Create backup directory
mkdir -p $BACKUP_DIR/$DATE

# Backup all databases
mysqldump --all-databases \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    --master-data=2 \
    --flush-logs \
    > $BACKUP_DIR/$DATE/full-backup.sql

# Compress backup
gzip $BACKUP_DIR/$DATE/full-backup.sql

# Backup binary logs (for point-in-time recovery)
cp /var/lib/mysql/mysql-bin.* $BACKUP_DIR/$DATE/ 2>/dev/null || true

# Clean old backups (only the dated directories directly under $BACKUP_DIR)
find $BACKUP_DIR -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;
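
Restoring is the other half of the job. Here is a hedged sketch of a full restore followed by point-in-time roll-forward using the binary logs saved above; the dated directory, binlog file name, and stop time are placeholders:

# 1. Restore the full dump (recreates all databases)
gunzip < /backup/mysql/20251216_020000/full-backup.sql.gz | mysql -u root -p

# 2. Roll forward to just before the failure using the saved binary logs
mysqlbinlog --stop-datetime="2025-12-16 14:30:00" \
    /backup/mysql/20251216_020000/mysql-bin.000042 | mysql -u root -p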

PostgreSQL Backup

Intermediate
pg_dumpall -U postgres -f full-backup-$(date +%F).sql

PostgreSQL backup methods:

pg_dump: Logical backup of single database
pg_dumpall: Logical backup of all databases + roles
pg_basebackup: Physical backup (filesystem level)
WAL Archiving: Continuous backup for PITR

PostgreSQL Continuous Backup

# 1. Enable WAL archiving in postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /backup/wal/%f'

# 2. Take base backup
pg_basebackup -D /backup/base -Ft -z -P

# 3. Restore process
# Stop PostgreSQL
systemctl stop postgresql

# Restore base backup
rm -rf /var/lib/postgresql/data/*
tar -xzf /backup/base/base.tar.gz -C /var/lib/postgresql/data

# Configure recovery (recovery.conf applies to PostgreSQL 11 and earlier;
# version 12+ uses a recovery.signal file plus the same settings in postgresql.conf)
cat > /var/lib/postgresql/data/recovery.conf << EOF
restore_command = 'cp /backup/wal/%f %p'
recovery_target_time = '2025-12-16 14:30:00'
EOF

# Start PostgreSQL (will recover to target time)
systemctl start postgresql
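
The intro to this section also mentions MongoDB. For completeness, a minimal hedged sketch of a logical backup and restore with mongodump/mongorestore (paths are illustrative; on replica sets add --oplog on dump and --oplogReplay on restore for a consistent point in time):

# Dump all databases to a compressed archive file
mongodump --gzip --archive=/backup/mongo-$(date +%F).gz

# Restore the archive, dropping existing collections first
mongorestore --gzip --archive=/backup/mongo-2025-12-16.gz --drop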

5. Disaster Recovery Procedures

When disaster strikes, having documented recovery procedures is crucial. Test these regularly to ensure they work when needed.

Complete System Recovery

Critical

Step-by-step recovery:

Step 1: Assess the damage

lsblk               # Check disks
df -h               # Check filesystems
systemctl --failed  # Check for failed services
Step 2: Boot from recovery media
# Use SystemRescueCD or Live USB
# Mount original disks
mount /dev/sda2 /mnt
mount /dev/sda1 /mnt/boot
Step 3: Restore data
# Restore from backup
rsync -av /backup/latest/ /mnt/

# Or restore tar backup
tar -xzvf backup.tar.gz -C /mnt/
Step 4: Reinstall bootloader
# Bind-mount the virtual filesystems so grub-install/update-grub work inside the chroot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda
update-grub
exit
Step 5: Verify and reboot
# Unmount everything, check the filesystem while it is offline, then reboot
umount -R /mnt
fsck -y /dev/sda2
reboot

File-Level Recovery

Beginner

Common recovery scenarios:

Accidental deletion: Restore from backup
File corruption: Restore previous version
Ransomware: Restore from clean backup
Permission issues: Restore original permissions

Quick File Recovery Commands

# Find which backup contains file
find /backup -name "important.txt" -type f

# Extract single file from tar backup
tar -xzvf backup.tar.gz path/to/file.txt

# Restore file with rsync (from latest backup)
rsync -av /backup/daily.0/home/user/file.txt /home/user/

# Restore previous version (if using snapshots)
cp /home/.snapshots/daily.1/user/file.txt /home/user/

# Check file integrity before restore
sha256sum /backup/file.txt
sha256sum /original/location/file.txt

6. Cloud & Remote Backup Strategies

Cloud storage provides geographic redundancy and scalability. Implement secure, automated cloud backup solutions.

AWS S3 Backup

Advanced
aws s3 sync /local/backup s3://my-backup-bucket/ --delete

AWS CLI setup:

1. Install AWS CLI: apt install awscli
2. Configure credentials: aws configure
3. Create S3 bucket: aws s3 mb s3://my-backup-bucket
4. Enable versioning: aws s3api put-bucket-versioning
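
Item 4 above truncates the command; spelled out against this guide's example bucket, it would be:

aws s3api put-bucket-versioning \
    --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled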

Automated S3 Backup Script

#!/bin/bash
# s3-backup.sh - Automated AWS S3 backup
BACKUP_DIR="/backup"
S3_BUCKET="s3://my-backup-bucket"
DATE=$(date +%Y%m%d)
RETENTION_DAYS=90

# Create local backup
tar -czf $BACKUP_DIR/full-$DATE.tar.gz \
    --exclude='*.log' \
    --exclude='*.tmp' \
    /home /etc /var/www

# Upload to S3 with encryption
aws s3 cp $BACKUP_DIR/full-$DATE.tar.gz \
    $S3_BUCKET/ \
    --sse AES256 \
    --storage-class STANDARD_IA

# Apply lifecycle policy (auto-delete after 90 days)
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "DeleteOldBackups",
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {"Days": 90}
        }]
    }'

# Clean local old backups
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete

Rsync over SSH (Remote Backup)

Intermediate
rsync -avz -e ssh /local/data user@remote-server:/backup/

SSH key setup for automation:

# Generate SSH key (no password for automation)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/backup-key

# Copy public key to remote server
ssh-copy-id -i ~/.ssh/backup-key.pub user@remote-server

# Test connection
ssh -i ~/.ssh/backup-key user@remote-server "echo test"

# Use in rsync
rsync -avz -e "ssh -i ~/.ssh/backup-key" \
    /local/data user@remote-server:/backup/

Security Considerations

Passwordless SSH keys are convenient but risky. Implement additional security:

# 1. Restrict key usage in ~/.ssh/authorized_keys
from="192.168.1.100",command="/usr/local/bin/backup-script" ssh-rsa AAA...

# 2. Use dedicated backup user with limited permissions
useradd -m -s /bin/rbash backup-user
chmod 750 /home/backup-user

# 3. Implement fail2ban for SSH protection
apt install fail2ban
systemctl enable fail2ban

7. Backup Verification & Testing

A backup is only as good as your ability to restore from it. Regular testing is non-negotiable for reliable data protection.

Backup Integrity Checks

Critical

What to verify:

File integrity: No corruption in backup files
Completeness: All required files are backed up
Consistency: Databases are transactionally consistent
Accessibility: Backup media is readable
Encryption: Encrypted backups can be decrypted

Automated Verification Script

#!/bin/bash
# verify-backup.sh - Comprehensive backup verification
BACKUP_FILE="/backup/full-$(date +%Y%m%d).tar.gz"
LOG_FILE="/var/log/backup-verify-$(date +%Y%m%d).log"
MIN_SIZE=1000000000   # ~1 GB minimum expected size, in bytes

echo "=== Backup Verification $(date) ===" > $LOG_FILE

# 1. Check backup exists and has minimum size
if [ ! -f "$BACKUP_FILE" ]; then
    echo "ERROR: Backup file not found" >> $LOG_FILE
    exit 1
fi

SIZE=$(stat -c%s "$BACKUP_FILE")
echo "Backup size: $SIZE bytes" >> $LOG_FILE
if [ $SIZE -lt $MIN_SIZE ]; then
    echo "WARNING: Backup smaller than expected" >> $LOG_FILE
fi

# 2. Verify tar archive integrity
if ! tar -tzf "$BACKUP_FILE" > /dev/null 2>&1; then
    echo "ERROR: Tar archive corrupted" >> $LOG_FILE
    exit 1
fi
echo "Tar integrity check: PASSED" >> $LOG_FILE

# 3. Verify critical files are included
CRITICAL_FILES=("/etc/passwd" "/etc/shadow" "/etc/fstab")
for file in "${CRITICAL_FILES[@]}"; do
    if tar -tzf "$BACKUP_FILE" | grep -q "^${file#/}$"; then
        echo "✓ $file found in backup" >> $LOG_FILE
    else
        echo "✗ $file MISSING from backup" >> $LOG_FILE
    fi
done

# 4. Generate checksum for future comparison
sha256sum "$BACKUP_FILE" > "${BACKUP_FILE}.sha256"

echo "Verification completed: $(date)" >> $LOG_FILE

Restore Testing Schedule

Recovery

Testing frequency:

Weekly: Test single file restore (a sample check script appears after this list)
Monthly: Test database restore
Quarterly: Test full application restore
Annually: Full disaster recovery drill
After changes: Test whenever backup process changes
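
A minimal sketch of the weekly single-file restore test, assuming the /backup/daily.0 layout used earlier in this guide; the canary file is just an example:

#!/bin/bash
# restore-test.sh - weekly single-file restore check (illustrative sketch)
CANARY="/etc/fstab"                      # small file that should always be backed up
BACKUP_COPY="/backup/daily.0$CANARY"     # same path inside the newest backup set
RESTORE_TO="/tmp/restore-test$CANARY"

# 1. Can the file be pulled out of the backup at all?
mkdir -p "$(dirname "$RESTORE_TO")"
if ! cp "$BACKUP_COPY" "$RESTORE_TO"; then
    echo "FAIL: could not restore $CANARY from $BACKUP_COPY" >&2
    exit 1
fi

# 2. Does the restored copy still match the live file?
if cmp -s "$RESTORE_TO" "$CANARY"; then
    echo "OK: restored $CANARY matches the live file"
else
    echo "WARNING: restored copy differs from the live file (expected only if it changed since the last backup)"
fi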

Quarterly DR Test Procedure

1. Preparation (1 week before)
• Schedule maintenance window
• Notify stakeholders
• Prepare test environment
2. Test Execution
• Simulate disaster scenario
• Restore from latest backup
• Verify system functionality
• Measure recovery time
3. Documentation
• Record actual RTO/RPO
• Note issues encountered
• Update recovery procedures
• Share lessons learned

8. Complete Backup Implementation

Putting it all together: A complete, production-ready backup solution with monitoring, alerting, and documentation.

Complete Backup Architecture

Advanced

Multi-tier backup architecture:

Level 1: Local Snapshots
• Technology: LVM/ZFS/Btrfs snapshots (a Btrfs sketch follows this list)
• Frequency: Hourly
• Retention: 24 hours
• Purpose: Quick file recovery

Level 2: Local Backup Server
• Technology: rsync + hard links
• Frequency: Daily
• Retention: 30 days (GFS rotation)
• Purpose: Server recovery

Level 3: Cloud Storage
• Technology: AWS S3/Backblaze
• Frequency: Weekly
• Retention: 1 year
• Purpose: Disaster recovery

Level 4: Off-site Archive
• Technology: Tape/LTO
• Frequency: Monthly
• Retention: 7 years
• Purpose: Compliance/archival
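
As a hedged illustration of Level 1, here is a minimal hourly Btrfs snapshot rotation. It assumes /home is a Btrfs subvolume and /home/.snapshots already exists; ZFS and LVM offer equivalent snapshot commands:

#!/bin/bash
# hourly-snapshot.sh - Level 1 sketch: read-only Btrfs snapshots, keep the newest 24
SUBVOL="/home"
SNAPDIR="/home/.snapshots"
STAMP=$(date +%Y%m%d-%H00)

# Create a read-only snapshot of the subvolume
btrfs subvolume snapshot -r "$SUBVOL" "$SNAPDIR/hourly-$STAMP"

# Drop snapshots beyond the newest 24
ls -1d "$SNAPDIR"/hourly-* | sort | head -n -24 | while read -r old; do
    btrfs subvolume delete "$old"
done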

Monitoring & Alerting

Intermediate

What to monitor:

• Backup success/failure status
• Backup duration and size
• Storage capacity trends
• Verification test results
• Recovery time measurements

Nagios/Icinga Backup Check

#!/bin/bash
# check_backup.sh - Nagios plugin for backup monitoring
BACKUP_DIR="/backup"
WARNING_AGE=86400    # 24 hours in seconds
CRITICAL_AGE=172800  # 48 hours in seconds

# Find latest backup
LATEST=$(find $BACKUP_DIR -name "*.tar.gz" -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -f2 -d" ")

if [ -z "$LATEST" ]; then
    echo "CRITICAL: No backups found"
    exit 2
fi

# Check backup age
NOW=$(date +%s)
BACKUP_TIME=$(stat -c %Y "$LATEST")
AGE=$((NOW - BACKUP_TIME))

# Check backup size (minimum 100MB)
SIZE=$(stat -c %s "$LATEST")
MIN_SIZE=104857600

if [ $AGE -gt $CRITICAL_AGE ]; then
    echo "CRITICAL: Last backup $((AGE/3600)) hours ago"
    exit 2
elif [ $AGE -gt $WARNING_AGE ]; then
    echo "WARNING: Last backup $((AGE/3600)) hours ago"
    exit 1
elif [ $SIZE -lt $MIN_SIZE ]; then
    echo "WARNING: Backup size only $((SIZE/1048576))MB"
    exit 1
else
    echo "OK: Backup $((AGE/3600)) hours ago, size $((SIZE/1048576))MB"
    exit 0
fi

Backup Success Checklist

Implementation Checklist:

1. Define RPO/RTO for each system
2. Implement 3-2-1 rule with off-site copy
3. Choose appropriate tools (rsync, tar, database-specific)
4. Set up GFS rotation with proper retention
5. Automate scheduling with cron/systemd timers
6. Enable monitoring and alerting
7. Document procedures for restoration
8. Test regularly - backup without restore is useless
9. Review annually - update with system changes
10. Train staff - ensure multiple people can restore

Pro Tips for Success

Start small: Protect critical data first, expand later
Automate everything: Manual backups are forgotten backups
Test restores: The only way to know backups work
Monitor proactively: Don't wait for failure to check backups
Keep it simple: Complex systems fail in complex ways
Document thoroughly: You won't remember details during crisis
Budget appropriately: Backup storage costs money, but data loss costs more

Common Backup Failures to Avoid

1. Backing up to same disk: Disk failure loses original AND backup
2. No off-site copy: Fire/theft/ransomware takes everything
3. Untested restores: Backups that don't work are worthless
4. Insufficient retention: Can't recover from corruption discovered weeks later
5. No monitoring: Silent failures go unnoticed until needed
6. Single point of knowledge: Only one person knows how to restore
7. Ignoring database consistency: File-level backup of live database
8. No encryption for cloud: Sensitive data exposed
9. Backing up everything: Wasting storage on non-critical data
10. Forgetting to update: Not adapting backup strategy to system changes