Linux Hands-on Practice Tasks for DevOps

Practical Linux exercises and real-world tasks to build your DevOps skills. Each task includes requirements, step-by-step guidance, and solutions. Practice these exercises to prepare for interviews and daily operations.

1. Filesystem Operations Tasks

Practice filesystem operations, file management, and directory manipulation tasks.

Task 1: Log File Management

Beginner

Task Description:

You're managing a server with limited disk space. The /var/log directory is growing too large. Your task is to clean up log files while keeping recent logs for debugging.

Requirements:

  1. Create a practice directory: /tmp/log-practice
  2. Generate sample log files with different dates and sizes
  3. Find and list log files older than 7 days
  4. Compress logs older than 3 days instead of deleting them
  5. Delete logs older than 30 days
  6. Create a script to automate this cleanup

Step-by-Step Implementation:

1 Setup practice environment:

# Create practice directory
mkdir -p /tmp/log-practice
cd /tmp/log-practice

# Create sample log files with different dates
for i in {1..30}; do
    logfile="app.log.$(date -d "$i days ago" +%Y%m%d)"
    # Write content first, then set the timestamp; writing after
    # touch would reset the modification time to "now"
    echo "Log entry from $(date -d "$i days ago" '+%Y-%m-%d') - Error $RANDOM" > "$logfile"
    touch -d "$i days ago" "$logfile"
done

# Create some large log files
for i in {1..5}; do
    yes "Large log entry for testing disk space. " | head -1000 > "large_log_$i.log"
done

# Verify creation
ls -la *.log* | head -5
du -sh /tmp/log-practice

2 Find and list old log files:

# List files older than 7 days
find /tmp/log-practice -name "*.log*" -type f -mtime +7 -ls

# Count files by age bucket
find /tmp/log-practice -name "*.log*" -type f -mtime +30 | wc -l
find /tmp/log-practice -name "*.log*" -type f -mtime +7 -mtime -30 | wc -l
find /tmp/log-practice -name "*.log*" -type f -mtime -7 | wc -l
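
A common gotcha here: -mtime counts whole 24-hour periods, so -mtime +7 matches files strictly more than 7 full days old. A quick check you can run in the practice directory:

# -mtime truncates age to whole 24-hour periods: +7 means "8 or more days"
touch -d "7 days ago" exactly7.log
find . -name exactly7.log -mtime +7   # no match: age 7 is not greater than 7
find . -name exactly7.log -mtime +6   # matches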

3 Compress old logs:

# Compress logs older than 3 days but newer than 30 days
find /tmp/log-practice -name "*.log*" -type f -mtime +3 -mtime -30 -exec gzip {} \;

# Verify compression
ls -la *.gz | head -5
file *.gz | head -3

4 Delete very old logs:

# Delete logs older than 30 days
find /tmp/log-practice -name "*.log*" -type f -mtime +30 -delete

# Verify deletion
find /tmp/log-practice -name "*.log*" -type f | wc -l

5 Create cleanup script:

#!/bin/bash
# log-cleanup.sh

LOG_DIR="/tmp/log-practice"
RETENTION_DAYS=30
COMPRESS_DAYS=3

echo "Starting log cleanup in $LOG_DIR"
echo "Current disk usage:"
du -sh "$LOG_DIR"

# Compress logs older than COMPRESS_DAYS days
echo "Compressing logs older than $COMPRESS_DAYS days..."
find "$LOG_DIR" -name "*.log" -type f -mtime +$COMPRESS_DAYS -exec gzip {} \;

# Delete compressed logs older than RETENTION_DAYS days
echo "Deleting logs older than $RETENTION_DAYS days..."
find "$LOG_DIR" -name "*.log.gz" -type f -mtime +$RETENTION_DAYS -delete

echo "Cleanup completed. Final disk usage:"
du -sh "$LOG_DIR"

# After saving, make the script executable:
# chmod +x log-cleanup.sh

Complete Solution:

📁 LOG MANAGEMENT COMPLETE SOLUTION
=====================================

1. SETUP:
   mkdir -p /tmp/log-practice
   # Generate test files as shown

2. FINDING OLD LOGS:
   find /tmp/log-practice -name "*.log*" -type f -mtime +7 -exec ls -lh {} \;

3. COMPRESSION STRATEGY:
   # Keep current logs (0-3 days): Uncompressed for easy access
   # Archive medium logs (3-30 days): Compressed to save space
   # Delete old logs (30+ days): Remove to free disk

4. AUTOMATION SCRIPT:
   # Save as /usr/local/bin/log-cleanup.sh
   # Add to crontab: 0 2 * * * /usr/local/bin/log-cleanup.sh

5. MONITORING:
   # Add logging to script
   echo "$(date): Cleaned $(find ... | wc -l) files" >> /var/log/log-cleanup.log
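
On production systems this policy is usually handed to logrotate rather than a hand-rolled script. A minimal sketch of an equivalent rule, installed the same way the later examples write config files; the path and glob are illustrative:

# Install an equivalent logrotate policy (illustrative path and glob)
sudo tee /etc/logrotate.d/app-logs > /dev/null << 'EOF'
/var/log/app/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
}
EOF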

Advanced variations to try:

# 1. Rotate logs into dated archive directories instead of deleting
mkdir -p /tmp/log-practice/archive/$(date +%Y)/$(date +%m)
find /tmp/log-practice -name "*.log" -type f -mtime +30 -exec mv {} /tmp/log-practice/archive/$(date +%Y)/$(date +%m) \;

# 2. Calculate space saved
BEFORE=$(du -sb /tmp/log-practice | cut -f1)
# ... run cleanup here ...
AFTER=$(du -sb /tmp/log-practice | cut -f1)
SAVED=$(( (BEFORE - AFTER) / 1024 / 1024 ))
echo "Saved ${SAVED}MB"

# 3. Email a report (add to the script)
echo "Log cleanup completed. Saved ${SAVED}MB." | mail -s "Log Cleanup Report" admin@example.com

# 4. Exclude certain logs
find /tmp/log-practice -name "*.log" ! -name "important*.log" -type f -mtime +30 -delete

Task 2: Find and Process Files

Intermediate

Task Description:

You need to find specific files across the system and perform operations on them. This is common for security audits, cleanup tasks, or data migration.

Requirements:

  1. Find all files larger than 100MB in /home directory
  2. Find all executable files owned by specific users
  3. Find files modified in the last 24 hours
  4. Find and delete empty directories
  5. Find files with specific permissions (world-writable)
  6. Create a summary report of findings

Practice Setup:

# Create practice environment
mkdir -p /tmp/file-search-practice
cd /tmp/file-search-practice

# Create test files with different characteristics

# Large files
dd if=/dev/zero of=large_file_1.img bs=1M count=150
dd if=/dev/zero of=large_file_2.img bs=1M count=120

# Executable files
echo '#!/bin/bash' > script1.sh
echo 'echo "Hello"' >> script1.sh
chmod +x script1.sh
echo '#!/usr/bin/env python3' > script2.py
echo 'print("World")' >> script2.py
chmod +x script2.py

# Recently modified file (write first, then set the timestamp)
echo "Modified recently" > recent_file.txt
touch -d "1 hour ago" recent_file.txt

# World-writable file
touch world_writable.txt
chmod 777 world_writable.txt

# Empty directories
mkdir -p empty_dir1/empty_subdir
mkdir empty_dir2

# Files owned by different users (if possible)
sudo touch root_owned.txt
sudo chown root:root root_owned.txt

# Create directory structure
mkdir -p {backup,logs,temp,config}

Implementation Tasks:

1 Find large files:

# Find files larger than 100MB
find /tmp/file-search-practice -type f -size +100M -exec ls -lh {} \;

# Alternative with human-readable sizes
find /tmp/file-search-practice -type f -size +100M -exec du -h {} \;

# Count large files
find /tmp/file-search-practice -type f -size +100M | wc -l

# Get total size of large files
find /tmp/file-search-practice -type f -size +100M -exec du -b {} + | awk '{sum += $1} END {print sum/1024/1024 " MB"}'

2 Find executable files:

# Find all executable files
find /tmp/file-search-practice -type f -executable -exec ls -lh {} \;

# Find executable files by specific owner
find /tmp/file-search-practice -type f -executable -user $(whoami) -exec ls -lh {} \;

# Find and check file type
find /tmp/file-search-practice -type f -executable -exec file {} \;

# Find scripts (executables whose first line is a shebang)
find /tmp/file-search-practice -type f -executable -exec sh -c 'head -1 "$1" | grep -q "^#!" && echo "$1"' sh {} \;

3 Find recently modified files:

# Files modified in last 24 hours
find /tmp/file-search-practice -type f -mtime -1 -exec ls -lh {} \;

# Files modified in last 1 hour
find /tmp/file-search-practice -type f -mmin -60 -exec ls -lh {} \;

# Files modified between 1 and 24 hours ago
find /tmp/file-search-practice -type f -mmin +60 -mmin -1440 -exec ls -lh {} \;

# With modification time details (GNU find)
find /tmp/file-search-practice -type f -mtime -1 -printf "%p - %TY-%Tm-%Td %TH:%TM\n"

4 Find and process empty directories:

# Find empty directories
find /tmp/file-search-practice -type d -empty

# Find and list empty directories
find /tmp/file-search-practice -type d -empty -ls

# Find and delete empty directories (careful!)
find /tmp/file-search-practice -type d -empty -delete

# Find and delete empty directories older than 7 days
find /tmp/file-search-practice -type d -empty -mtime +7 -delete
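
Note that -delete implies -depth, so a single pass removes nested empty directories from the bottom up; a quick sketch to convince yourself:

# -delete implies -depth: b goes first, which empties a, and so on up
mkdir -p /tmp/empty-demo/a/b
find /tmp/empty-demo -type d -empty -delete
ls -d /tmp/empty-demo 2>/dev/null || echo "all gone"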

5 Find files with specific permissions:

# Find world-writable files (security risk!)
find /tmp/file-search-practice -type f -perm -o=w -exec ls -lh {} \;

# Find files writable by group
find /tmp/file-search-practice -type f -perm -g=w -exec ls -lh {} \;

# Find SUID files (security check)
find /tmp/file-search-practice -type f -perm /4000 -exec ls -lh {} \;

# Find files with no owner or no group (orphaned); the parentheses
# make -exec apply to both tests
find /tmp/file-search-practice \( -nouser -o -nogroup \) -exec ls -lh {} \;
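
The same permission tests can be written in octal, which is how most hardening checklists state them; these pairs should be equivalent:

# Symbolic vs octal forms of the same tests
find /tmp/file-search-practice -type f -perm -o=w     # world-writable
find /tmp/file-search-practice -type f -perm -0002    # same test, octal
find /tmp/file-search-practice -type f -perm -u=s     # SUID bit set
find /tmp/file-search-practice -type f -perm -4000    # same test, octal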

Complete Search Script:

#!/bin/bash
# file-audit.sh - Comprehensive file system audit

REPORT_FILE="/tmp/file_audit_$(date +%Y%m%d_%H%M%S).txt"
SEARCH_DIR="/tmp/file-search-practice"

echo "File System Audit Report - $(date)" > "$REPORT_FILE"
echo "========================================" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"

echo "1. LARGE FILES (>100MB):" >> "$REPORT_FILE"
echo "-------------------------" >> "$REPORT_FILE"
find "$SEARCH_DIR" -type f -size +100M -exec ls -lh {} \; >> "$REPORT_FILE" 2>/dev/null
echo "" >> "$REPORT_FILE"

echo "2. EXECUTABLE FILES:" >> "$REPORT_FILE"
echo "-------------------" >> "$REPORT_FILE"
find "$SEARCH_DIR" -type f -executable -exec ls -lh {} \; >> "$REPORT_FILE" 2>/dev/null
echo "" >> "$REPORT_FILE"

echo "3. RECENTLY MODIFIED (last 24h):" >> "$REPORT_FILE"
echo "--------------------------------" >> "$REPORT_FILE"
find "$SEARCH_DIR" -type f -mtime -1 -exec ls -lh {} \; >> "$REPORT_FILE" 2>/dev/null
echo "" >> "$REPORT_FILE"

echo "4. WORLD-WRITABLE FILES (Security Risk):" >> "$REPORT_FILE"
echo "---------------------------------------" >> "$REPORT_FILE"
find "$SEARCH_DIR" -type f -perm /o=w -exec ls -lh {} \; >> "$REPORT_FILE" 2>/dev/null
echo "" >> "$REPORT_FILE"

echo "5. EMPTY DIRECTORIES:" >> "$REPORT_FILE"
echo "---------------------" >> "$REPORT_FILE"
find "$SEARCH_DIR" -type d -empty >> "$REPORT_FILE" 2>/dev/null
echo "" >> "$REPORT_FILE"

echo "6. SUMMARY STATISTICS:" >> "$REPORT_FILE"
echo "----------------------" >> "$REPORT_FILE"
echo "Total files: $(find "$SEARCH_DIR" -type f | wc -l)" >> "$REPORT_FILE"
echo "Total directories: $(find "$SEARCH_DIR" -type d | wc -l)" >> "$REPORT_FILE"
echo "Total size: $(du -sh "$SEARCH_DIR" | cut -f1)" >> "$REPORT_FILE"

echo "Report generated: $REPORT_FILE"
head -50 "$REPORT_FILE"

Advanced search patterns:

# 1. Find files containing a content pattern
find /tmp/file-search-practice -type f -exec grep -l "pattern" {} \;

# 2. Find duplicate files by content (md5 hashes are 32 hex chars)
find /tmp/file-search-practice -type f -exec md5sum {} \; | sort | uniq -w32 -d

# 3. Find files by extension and process each one
find /tmp/file-search-practice -name "*.log" -type f -exec sh -c '
    echo "Processing: $1"
    wc -l "$1"
' sh {} \;

# 4. Find and archive old files
find /tmp/file-search-practice -type f -mtime +30 -exec tar -rvf /tmp/archive.tar {} \;

# 5. Find and change permissions
find /tmp/file-search-practice -type f -name "*.sh" -exec chmod 755 {} \;

# 6. Find symbolic links
find /tmp/file-search-practice -type l -exec ls -la {} \;
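
Filenames containing spaces or newlines can break piped patterns like the duplicate finder above. A safer variant, assuming GNU find and xargs, uses NUL-delimited output; -exec ... + batches files similarly without a pipe:

# NUL-delimited pipeline: safe for any filename
find /tmp/file-search-practice -type f -print0 | xargs -0 md5sum | sort | uniq -w32 -d

# -exec ... + groups many files into one invocation, like xargs
find /tmp/file-search-practice -type f -size +100M -exec du -h {} +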

2. Process & System Management Tasks

Practice process management, system monitoring, and performance optimization tasks.

Task 3: System Monitoring Dashboard

Intermediate

Task Description:

Create a system monitoring dashboard script that displays real-time system metrics. This is essential for DevOps engineers to quickly assess system health.

Requirements:

  1. Display system uptime and load average
  2. Show CPU usage percentage
  3. Display memory usage with details
  4. Show disk usage for all mounted filesystems
  5. List top 5 processes by CPU and memory
  6. Show network interface statistics
  7. Create a continuously updating dashboard

Practice Environment:

# Install useful monitoring tools
sudo apt install sysstat net-tools bc    # Debian/Ubuntu
# or
sudo yum install sysstat net-tools bc    # RHEL/CentOS

# Test individual commands
uptime
free -h
df -h
top -bn1 | grep "Cpu(s)"
ss -s

Building the Dashboard:

1 Basic system information:

#!/bin/bash
# system-dashboard.sh

clear
echo "================================================"
echo "          SYSTEM MONITORING DASHBOARD"
echo "================================================"
echo ""

# System information
echo "1. SYSTEM INFO:"
echo "   Hostname: $(hostname)"
echo "   Kernel: $(uname -r)"
echo "   OS: $(grep PRETTY_NAME /etc/os-release | cut -d'"' -f2)"
echo "   Uptime: $(uptime -p | sed 's/up //')"
echo "   Load Average: $(uptime | awk -F'load average:' '{print $2}')"
echo ""

2 CPU and memory metrics:

# CPU Usage
echo "2. CPU USAGE:"
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}')
echo "   Usage: ${CPU_USAGE}%"
echo "   Cores: $(nproc)"
echo ""

# Memory Usage
echo "3. MEMORY USAGE:"
MEM_TOTAL=$(free -h | grep Mem | awk '{print $2}')
MEM_USED=$(free -h | grep Mem | awk '{print $3}')
MEM_FREE=$(free -h | grep Mem | awk '{print $4}')
MEM_AVAIL=$(free -h | grep Mem | awk '{print $7}')
echo "   Total: $MEM_TOTAL"
echo "   Used: $MEM_USED"
echo "   Free: $MEM_FREE"
echo "   Available: $MEM_AVAIL"
echo ""
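
Parsing top's output is fragile: its column layout varies with version and locale. A more robust sketch reads /proc/stat directly and diffs two samples one second apart (it ignores the iowait/irq fields, so treat it as an approximation):

# CPU usage from /proc/stat deltas; no dependency on top's output format
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
total=$(( busy + (i2 - i1) ))
echo "CPU: $(( 100 * busy / total ))%"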

3 Disk and process information:

# Disk Usage
echo "4. DISK USAGE:"
df -h --output=source,size,used,avail,pcent,target | grep -E "^/dev" | head -5 | while read line; do
    echo "   $line"
done
echo ""

# Top Processes
echo "5. TOP PROCESSES BY CPU:"
ps aux --sort=-%cpu | head -6 | awk '{printf "   %-10s %-6s %-6s %-10s %s\n", $11, $1, $2, $3, $4}'
echo ""

echo "6. TOP PROCESSES BY MEMORY:"
ps aux --sort=-%mem | head -6 | awk '{printf "   %-10s %-6s %-6s %-10s %s\n", $11, $1, $2, $3, $4}'
echo ""

4 Network and system load:

# Network Interfaces
echo "7. NETWORK INTERFACES:"
ip -o addr show | awk '{print $2": "$4}' | head -5 | while read line; do
    echo "   $line"
done
echo ""

# System Load
echo "8. SYSTEM LOAD:"
echo "   Running processes: $(ps aux | wc -l)"
echo "   Users logged in: $(who | wc -l)"
echo "   Open files: $(lsof | wc -l)"
echo ""

5 Continuous monitoring loop:

#!/bin/bash
# monitoring-loop.sh

INTERVAL=5        # Update every 5 seconds
CLEAR_SCREEN=true

while true; do
    if [ "$CLEAR_SCREEN" = true ]; then
        clear
    fi

    echo "================================================"
    echo "  REAL-TIME SYSTEM MONITOR - $(date '+%H:%M:%S')"
    echo "================================================"
    echo ""

    # Compact display
    echo "CPU:  $(top -bn1 | grep "Cpu(s)" | awk '{printf "%.1f%%", $2 + $4}')"
    echo "Mem:  $(free -m | awk '/Mem:/ {printf "%.1f%%", $3/$2*100}')"
    echo "Load: $(uptime | awk -F'load average:' '{print $2}' | tr -d ' ')"

    # Disk usage bar
    echo -n "Disk [/]: "
    ROOT_USAGE=$(df -h / | awk 'NR==2 {print $5}' | tr -d '%')
    BAR_WIDTH=20
    FILLED=$((ROOT_USAGE * BAR_WIDTH / 100))
    EMPTY=$((BAR_WIDTH - FILLED))
    printf "["
    # Guard against zero-length runs; "--" stops printf from reading
    # the leading dash as an option
    [ "$FILLED" -gt 0 ] && printf "#%.0s" $(seq 1 $FILLED)
    [ "$EMPTY" -gt 0 ] && printf -- "-%.0s" $(seq 1 $EMPTY)
    printf "] %s\n" "$ROOT_USAGE%"

    # Top process
    TOP_PROC=$(ps aux --sort=-%cpu | head -2 | tail -1)
    PROC_NAME=$(echo $TOP_PROC | awk '{print $11}')
    PROC_CPU=$(echo $TOP_PROC | awk '{print $3}')
    echo "Top process: $PROC_NAME ($PROC_CPU% CPU)"

    echo ""
    echo "Press Ctrl+C to exit"
    sleep $INTERVAL
done
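
For quick ad-hoc checks you can get a similar effect without writing a loop at all; watch re-runs a command at a fixed interval:

# Re-run a status snapshot every 2 seconds; -d highlights changes
watch -n 2 -d 'uptime; free -h | head -2; df -h / | tail -1'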

Advanced Dashboard Features:

#!/bin/bash
# advanced-dashboard.sh

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'   # No Color

# Thresholds
CPU_WARN=80
MEM_WARN=80
DISK_WARN=80

get_cpu_usage() {
    top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}' | cut -d'.' -f1
}

get_mem_usage() {
    free | awk '/Mem:/ {printf "%.0f", $3/$2*100}'
}

get_disk_usage() {
    df -h / | awk 'NR==2 {print $5}' | tr -d '%'
}

colorize() {
    local value=$1
    local warn=$2
    local crit=$3
    if [ "$value" -ge "$crit" ]; then
        echo -e "${RED}$value%${NC}"
    elif [ "$value" -ge "$warn" ]; then
        echo -e "${YELLOW}$value%${NC}"
    else
        echo -e "${GREEN}$value%${NC}"
    fi
}

# Shared progress bar (5% per cell, 20 cells)
draw_bar() {
    local value=$1
    echo -n "  ["
    for i in $(seq 1 20); do
        if [ $((i * 5)) -le "$value" ]; then
            echo -n "█"
        else
            echo -n "░"
        fi
    done
    echo "]"
}

while true; do
    clear
    CPU=$(get_cpu_usage)
    MEM=$(get_mem_usage)
    DISK=$(get_disk_usage)

    echo -e "${BLUE}╔════════════════════════════════════════════╗${NC}"
    echo -e "${BLUE}║      SYSTEM DASHBOARD - $(date '+%H:%M:%S')           ║${NC}"
    echo -e "${BLUE}╚════════════════════════════════════════════╝${NC}"
    echo ""

    echo -n "CPU Usage:  "; colorize "$CPU" "$CPU_WARN" 90
    draw_bar "$CPU"

    echo -n "Mem Usage:  "; colorize "$MEM" "$MEM_WARN" 90
    draw_bar "$MEM"

    echo -n "Disk Usage: "; colorize "$DISK" "$DISK_WARN" 90
    draw_bar "$DISK"

    echo ""
    echo "Top Processes:"
    ps aux --sort=-%cpu | head -4 | tail -3 | awk '{printf "  %-20s %6s%% CPU %6s%% MEM\n", $11, $3, $4}'

    echo ""
    echo "Network:"
    # netstat -i reports cumulative counters, not per-second rates
    RX=$(netstat -i | awk 'NR==3 {print $3}')
    TX=$(netstat -i | awk 'NR==3 {print $7}')
    echo "  RX: $RX packets  TX: $TX packets (cumulative)"

    echo ""
    echo -e "${YELLOW}Press Ctrl+C to exit${NC}"
    sleep 2
done
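
If you want an actual packets-per-second figure, sample the kernel's interface counters twice and diff; a sketch reading /sys/class/net (the interface name eth0 is an assumption, adjust to yours):

IFACE=eth0   # assumption: set to your interface (see: ip -o link)
RX1=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
TX1=$(cat /sys/class/net/$IFACE/statistics/tx_packets)
sleep 1
RX2=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
TX2=$(cat /sys/class/net/$IFACE/statistics/tx_packets)
echo "RX: $((RX2 - RX1)) pkt/s  TX: $((TX2 - TX1)) pkt/s"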

Monitoring with alerts:

#!/bin/bash
# monitor-with-alerts.sh
# Reuses get_cpu_usage / get_mem_usage / get_disk_usage and the color
# variables from advanced-dashboard.sh; source that file or copy the
# helpers in before running.

ALERT_EMAIL="admin@example.com"
LOG_FILE="/var/log/system-monitor.log"

check_thresholds() {
    local cpu=$(get_cpu_usage)
    local mem=$(get_mem_usage)
    local disk=$(get_disk_usage)
    local alert=false
    local message=""

    if [ "$cpu" -gt 90 ]; then
        alert=true
        message+="CRITICAL: CPU usage at ${cpu}%\n"
    elif [ "$cpu" -gt 80 ]; then
        alert=true
        message+="WARNING: CPU usage at ${cpu}%\n"
    fi

    if [ "$mem" -gt 90 ]; then
        alert=true
        message+="CRITICAL: Memory usage at ${mem}%\n"
    elif [ "$mem" -gt 80 ]; then
        alert=true
        message+="WARNING: Memory usage at ${mem}%\n"
    fi

    if [ "$disk" -gt 90 ]; then
        alert=true
        message+="CRITICAL: Disk usage at ${disk}%\n"
    elif [ "$disk" -gt 80 ]; then
        alert=true
        message+="WARNING: Disk usage at ${disk}%\n"
    fi

    if [ "$alert" = true ]; then
        echo "$(date): $message" >> "$LOG_FILE"
        # Uncomment to send email:
        # echo -e "$message" | mail -s "System Alert from $(hostname)" "$ALERT_EMAIL"
        echo -e "${RED}ALERT!${NC}\n$message"
    fi
}

# Run as a daemon
while true; do
    check_thresholds
    sleep 60   # Check every minute
done
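
Rather than backgrounding this loop by hand, you can let systemd supervise it. A minimal sketch; the unit name and script path are assumptions:

# Install a supervising unit (hypothetical name and script path)
sudo tee /etc/systemd/system/system-monitor.service > /dev/null << 'EOF'
[Unit]
Description=Threshold-based system monitor
After=network.target

[Service]
ExecStart=/usr/local/bin/monitor-with-alerts.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now system-monitor.service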

3. Scripting & Automation Tasks

Practice Bash scripting, automation, and task scheduling exercises.

Task 4: Backup Automation Script

Intermediate

Challenge:

Create a comprehensive script that automates backups of critical directories. The script should include rotation, compression, logging, and error handling.

Requirements:

  1. Backup multiple directories specified in a config file
  2. Compress backups using tar and gzip
  3. Implement backup rotation (keep last N backups)
  4. Add comprehensive logging
  5. Include error checking and email notifications
  6. Make script configurable and reusable
  7. Test backup restoration process

Building the Backup Script:

1 Setup configuration:

#!/bin/bash
# backup-manager.sh

# Configuration
CONFIG_FILE="/etc/backup-manager.conf"
LOG_FILE="/var/log/backup-manager.log"
BACKUP_DIR="/backups"
RETENTION_DAYS=7
COMPRESSION_LEVEL=9

# Load configuration if it exists
if [ -f "$CONFIG_FILE" ]; then
    source "$CONFIG_FILE"
fi

# Default directories to backup if not specified
DEFAULT_DIRS=(
    "/etc"
    "/home"
    "/var/www"
    "/var/log"
)

# Logging function. The console copy goes to stderr so that functions
# whose stdout is captured (e.g. backup_directory) can still log.
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
    echo "$1" >&2
}

# Error handling
error_exit() {
    log "ERROR: $1"
    exit 1
}

# Check requirements
check_requirements() {
    if [ ! -d "$BACKUP_DIR" ]; then
        mkdir -p "$BACKUP_DIR" || error_exit "Cannot create backup directory"
    fi
    if ! command -v tar &> /dev/null; then
        error_exit "tar command not found"
    fi
    if ! command -v gzip &> /dev/null; then
        error_exit "gzip command not found"
    fi
}

2 Backup function:

# Backup a single directory; prints the archive path on stdout
backup_directory() {
    local dir="$1"
    local backup_name="backup_$(basename "$dir")_$(date +%Y%m%d_%H%M%S).tar.gz"
    local backup_path="$BACKUP_DIR/$backup_name"

    log "Starting backup of $dir"

    # Check if directory exists
    if [ ! -d "$dir" ]; then
        log "WARNING: Directory $dir does not exist, skipping"
        return 1
    fi

    # Create backup
    tar -czf "$backup_path" -C "$(dirname "$dir")" "$(basename "$dir")" 2>> "$LOG_FILE"

    if [ $? -eq 0 ]; then
        local size=$(du -h "$backup_path" | cut -f1)
        log "Backup completed: $backup_name ($size)"
        echo "$backup_path"
    else
        log "ERROR: Failed to backup $dir"
        return 1
    fi
}

# Backup all directories
backup_all() {
    local backup_list=()

    # Use directories from config or defaults
    if [ ${#DIRS_TO_BACKUP[@]} -gt 0 ]; then
        local dirs=("${DIRS_TO_BACKUP[@]}")
    else
        local dirs=("${DEFAULT_DIRS[@]}")
    fi

    log "Starting backup of ${#dirs[@]} directories"

    for dir in "${dirs[@]}"; do
        backup_path=$(backup_directory "$dir")
        if [ -n "$backup_path" ]; then
            backup_list+=("$backup_path")
        fi
    done

    echo "${backup_list[@]}"
}

3 Rotation and cleanup:

# Cleanup old backups
cleanup_old_backups() {
    log "Cleaning up backups older than $RETENTION_DAYS days"

    # Count and delete in one pass; process substitution keeps the
    # counter in the current shell (a pipe would run it in a subshell)
    local count=0
    while read -r backup; do
        log "Deleting old backup: $(basename "$backup")"
        rm -f "$backup"
        count=$((count + 1))
    done < <(find "$BACKUP_DIR" -name "backup_*.tar.gz" -type f -mtime +$RETENTION_DAYS)

    log "Deleted $count old backup(s)"
}

# List backups
list_backups() {
    echo "Available backups in $BACKUP_DIR:"
    echo "=================================="

    find "$BACKUP_DIR" -name "backup_*.tar.gz" -type f -exec ls -lh {} \; | \
        awk '{print $6" "$7" "$8" "$5" "$9}' | \
        while read line; do
            echo "  $line"
        done

    local total=$(find "$BACKUP_DIR" -name "backup_*.tar.gz" -type f | wc -l)
    local total_size=$(find "$BACKUP_DIR" -name "backup_*.tar.gz" -type f -exec du -ch {} + | tail -1 | cut -f1)
    echo ""
    echo "Total: $total backups ($total_size)"
}

4 Restore function:

# Restore function
restore_backup() {
    local backup_file="$1"
    local restore_dir="$2"

    if [ ! -f "$backup_file" ]; then
        error_exit "Backup file not found: $backup_file"
    fi

    if [ ! -d "$restore_dir" ]; then
        mkdir -p "$restore_dir" || error_exit "Cannot create restore directory"
    fi

    log "Restoring $backup_file to $restore_dir"
    tar -xzf "$backup_file" -C "$restore_dir" 2>> "$LOG_FILE"

    if [ $? -eq 0 ]; then
        log "Restore completed successfully"
    else
        error_exit "Restore failed"
    fi
}

# Verify backup
verify_backup() {
    local backup_file="$1"

    if [ ! -f "$backup_file" ]; then
        error_exit "Backup file not found: $backup_file"
    fi

    log "Verifying backup: $backup_file"

    # Test archive integrity
    if gzip -t "$backup_file" 2>> "$LOG_FILE"; then
        log "Backup integrity check passed"
        return 0
    else
        log "ERROR: Backup integrity check failed"
        return 1
    fi
}
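
Note that gzip -t only validates the compression layer. A sketch of two complementary checks: walking the tar index catches a truncated archive, and a recorded checksum catches later bit-rot:

# Walk the tar index, not just the gzip CRC
tar -tzf "$backup_file" > /dev/null || echo "tar structure damaged"

# Record a checksum at backup time, re-check before any restore
sha256sum "$backup_file" > "$backup_file.sha256"
sha256sum -c "$backup_file.sha256"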

5 Main script logic:

# Main function
main() {
    local action="${1:-backup}"

    case "$action" in
        backup)
            check_requirements
            log "=== Starting backup process ==="
            backup_all
            cleanup_old_backups
            log "=== Backup process completed ==="
            ;;
        list)
            list_backups
            ;;
        restore)
            if [ -z "$2" ] || [ -z "$3" ]; then
                echo "Usage: $0 restore <backup-file> <restore-dir>"
                exit 1
            fi
            check_requirements
            restore_backup "$2" "$3"
            ;;
        verify)
            if [ -z "$2" ]; then
                echo "Usage: $0 verify <backup-file>"
                exit 1
            fi
            verify_backup "$2"
            ;;
        *)
            echo "Usage: $0 {backup|list|restore|verify}"
            echo ""
            echo "Commands:"
            echo "  backup  - Perform backup of configured directories"
            echo "  list    - List available backups"
            echo "  restore - Restore a backup into a directory"
            echo "  verify  - Verify backup integrity"
            exit 1
            ;;
    esac
}

# Run main function with all arguments
main "$@"
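
Typical invocations might look like the following; the archive name is illustrative, following the backup_<dir>_<timestamp> scheme the script generates:

# Back up the configured directories
sudo ./backup-manager.sh backup

# List existing archives
./backup-manager.sh list

# Verify an archive, then restore it into a scratch directory
./backup-manager.sh verify /backups/backup_etc_20250101_020000.tar.gz
./backup-manager.sh restore /backups/backup_etc_20250101_020000.tar.gz /tmp/restore-test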

Complete Backup System:

#!/bin/bash
# complete-backup-system.sh
# ============================================
# COMPLETE BACKUP SYSTEM WITH ALL FEATURES
# ============================================

# Installer configuration
CONFIG_DIR="/etc/backup-system"
CONFIG_FILE="$CONFIG_DIR/config.sh"
LOG_DIR="/var/log/backup-system"
BACKUP_ROOT="/backups"

# Create necessary directories
mkdir -p "$CONFIG_DIR" "$LOG_DIR" "$BACKUP_ROOT"

# Sample configuration
cat > "$CONFIG_FILE" << 'EOF'
#!/bin/bash
# Backup System Configuration

# Paths and notification target (the runtime script reads these here)
BACKUP_ROOT="/backups"
LOG_FILE="/var/log/backup-system/backup-$(date +%Y%m%d).log"
EMAIL_NOTIFY="admin@example.com"

# Directories to backup (space separated)
BACKUP_DIRS="/etc /home /var/www /opt/app"

# Exclude patterns (tar exclude format)
EXCLUDE_PATTERNS="*.tmp *.log *.cache node_modules"

# Retention policy (in days, since the script compares with find -mtime)
DAILY_RETENTION=7       # Keep daily backups for 7 days
WEEKLY_RETENTION=28     # Keep weekly backups for 4 weeks
MONTHLY_RETENTION=365   # Keep monthly backups for 12 months

# Compression
COMPRESS=true
COMPRESSION_LEVEL=9

# Encryption (optional)
ENCRYPT=false
GPG_RECIPIENT=""

# Remote backup (optional)
REMOTE_BACKUP=false
REMOTE_HOST="backup-server"
REMOTE_USER="backup"
REMOTE_DIR="/backups/$(hostname)"

# Notification
SEND_EMAIL=true
EMAIL_SUCCESS=true
EMAIL_FAILURE=true
EOF

# Main backup script
cat > /usr/local/bin/backup-system << 'EOF'
#!/bin/bash

# Load configuration
source /etc/backup-system/config.sh

# Initialize
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_TYPE="${1:-daily}"   # daily, weekly, monthly
BACKUP_DIR="$BACKUP_ROOT/$BACKUP_TYPE"
mkdir -p "$BACKUP_DIR"

# Logging; the console copy goes to stderr so command substitution on
# functions that log does not swallow log lines
log() {
    local level="$1"
    local message="$2"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] [$level] $message" | tee -a "$LOG_FILE" >&2
    # Also log to the system journal
    logger -t "backup-system" "[$level] $message"
}

# Error handling
fail() {
    log "ERROR" "$1"
    if [ "$SEND_EMAIL" = true ] && [ "$EMAIL_FAILURE" = true ]; then
        echo "Backup failed: $1" | mail -s "Backup Failed - $(hostname)" "$EMAIL_NOTIFY"
    fi
    exit 1
}

# Create backup archive; prints the archive path on stdout
create_backup() {
    local archive_name="backup_${BACKUP_TYPE}_${TIMESTAMP}.tar.gz"
    local archive_path="$BACKUP_DIR/$archive_name"

    log "INFO" "Creating backup: $archive_name"

    # Build exclude options
    local excludes=()
    for pattern in $EXCLUDE_PATTERNS; do
        excludes+=(--exclude="$pattern")
    done

    # GNU tar has no gzip-level flag; -I selects the compressor so the
    # configured level is honored
    local compressor="gzip"
    if [ "$COMPRESS" = true ]; then
        compressor="gzip -$COMPRESSION_LEVEL"
    fi

    tar "${excludes[@]}" -I "$compressor" -cf "$archive_path" $BACKUP_DIRS 2>> "$LOG_FILE"
    if [ $? -ne 0 ]; then
        fail "Failed to create backup archive"
    fi

    # Verify archive
    if ! gzip -t "$archive_path" 2>/dev/null; then
        fail "Backup archive verification failed"
    fi

    # Calculate size
    local size=$(du -h "$archive_path" | cut -f1)
    log "INFO" "Backup created: $archive_name ($size)"
    echo "$archive_path"
}

# Encrypt backup (if enabled); prints the resulting path
encrypt_backup() {
    local archive_path="$1"

    if [ "$ENCRYPT" = true ] && [ -n "$GPG_RECIPIENT" ]; then
        log "INFO" "Encrypting backup"
        gpg --encrypt --recipient "$GPG_RECIPIENT" --output "${archive_path}.gpg" "$archive_path"
        if [ $? -eq 0 ]; then
            rm -f "$archive_path"
            log "INFO" "Backup encrypted successfully"
            echo "${archive_path}.gpg"
        else
            fail "Backup encryption failed"
        fi
    else
        echo "$archive_path"
    fi
}

# Copy to remote (if enabled)
copy_to_remote() {
    local archive_path="$1"

    if [ "$REMOTE_BACKUP" = true ]; then
        log "INFO" "Copying backup to remote server"
        rsync -avz --progress "$archive_path" \
            "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/" 2>> "$LOG_FILE"
        if [ $? -ne 0 ]; then
            log "WARNING" "Remote copy failed, but local backup exists"
        else
            log "INFO" "Remote copy completed"
        fi
    fi
}

# Apply retention policy
apply_retention() {
    log "INFO" "Applying $BACKUP_TYPE retention policy"

    case "$BACKUP_TYPE" in
        daily)   RETENTION=$DAILY_RETENTION ;;
        weekly)  RETENTION=$WEEKLY_RETENTION ;;
        monthly) RETENTION=$MONTHLY_RETENTION ;;
    esac

    # Count before deleting, or the count is always zero
    local deleted_count=$(find "$BACKUP_DIR" -name "backup_${BACKUP_TYPE}_*.tar.gz*" -type f -mtime +$RETENTION 2>/dev/null | wc -l)
    find "$BACKUP_DIR" -name "backup_${BACKUP_TYPE}_*.tar.gz*" -type f -mtime +$RETENTION \
        -exec rm -f {} \; 2>/dev/null
    log "INFO" "Deleted $deleted_count old backup(s)"
}

# Main backup process
main_backup() {
    log "INFO" "=== Starting $BACKUP_TYPE backup ==="

    # Create backup
    archive=$(create_backup)

    # Encrypt if needed
    archive=$(encrypt_backup "$archive")

    # Copy to remote if enabled
    copy_to_remote "$archive"

    # Apply retention
    apply_retention

    log "INFO" "=== $BACKUP_TYPE backup completed ==="

    # Send success notification
    if [ "$SEND_EMAIL" = true ] && [ "$EMAIL_SUCCESS" = true ]; then
        local size=$(du -h "$archive" | cut -f1)
        echo "$BACKUP_TYPE backup completed successfully. Size: $size" | \
            mail -s "Backup Success - $(hostname)" "$EMAIL_NOTIFY"
    fi
}

# Handle command line arguments
case "${1:-daily}" in
    daily|weekly|monthly)
        BACKUP_TYPE="$1"
        main_backup
        ;;
    list)
        echo "Available backups:"
        echo "=================="
        for type in daily weekly monthly; do
            echo ""
            echo "$type backups:"
            find "$BACKUP_ROOT/$type" -name "*.tar.gz*" -type f -exec ls -lh {} \; 2>/dev/null
        done
        ;;
    verify)
        if [ -z "$2" ]; then
            echo "Usage: $0 verify <backup-file>"
            exit 1
        fi
        if gzip -t "$2" 2>/dev/null; then
            echo "Backup verification: PASSED"
        else
            echo "Backup verification: FAILED"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 {daily|weekly|monthly|list|verify}"
        exit 1
        ;;
esac
EOF

# Make executable
chmod +x /usr/local/bin/backup-system

# Add to crontab
echo "Adding to crontab..."
(crontab -l 2>/dev/null; echo "# Backup System Schedule") | crontab -
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/backup-system daily") | crontab -
(crontab -l 2>/dev/null; echo "0 3 * * 0 /usr/local/bin/backup-system weekly") | crontab -
(crontab -l 2>/dev/null; echo "0 4 1 * * /usr/local/bin/backup-system monthly") | crontab -

echo "Backup system installed successfully!"
echo "Configuration: $CONFIG_FILE"
echo "Logs: $LOG_DIR"
echo "Backups: $BACKUP_ROOT"
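
Nightly full archives get expensive for large trees. GNU tar can do incremental snapshots via a state file; a minimal sketch, assuming GNU tar and illustrative paths:

# Level-0 (full) backup; the .snar file records what was archived
tar --listed-incremental=/backups/app.snar -czf /backups/app-full.tar.gz /opt/app

# Later runs against the same .snar capture only what changed
tar --listed-incremental=/backups/app.snar -czf /backups/app-incr-$(date +%Y%m%d).tar.gz /opt/app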

Testing the backup system:

# Test the backup system
echo "=== Testing Backup System ==="

# 1. Create test data
mkdir -p /tmp/test-backup/{data1,data2,data3}
echo "Test file 1" > /tmp/test-backup/data1/file1.txt
echo "Test file 2" > /tmp/test-backup/data2/file2.txt
echo "Test file 3" > /tmp/test-backup/data3/file3.txt

# 2. Point the configuration at the test data: edit
#    /etc/backup-system/config.sh and set
#    BACKUP_DIRS="/tmp/test-backup"
#    BACKUP_ROOT="/tmp/backup-test"

# 3. Run backup
/usr/local/bin/backup-system daily

# 4. Verify backup
find /tmp/backup-test -name "*.tar.gz" -type f -exec echo "Found: {}" \;

# 5. Test restore
mkdir -p /tmp/restore-test
tar -xzf /tmp/backup-test/daily/backup_daily_*.tar.gz -C /tmp/restore-test
ls -la /tmp/restore-test

# 6. Verify restore
diff -r /tmp/test-backup /tmp/restore-test/tmp/test-backup && echo "Restore verified: OK"

# 7. Cleanup
rm -rf /tmp/test-backup /tmp/backup-test /tmp/restore-test

echo "=== Backup system test completed ==="

Practice Exercises Summary

Complete Task Checklist

| Task | Skills Practiced | Commands Used | Difficulty | Time |
|------|------------------|---------------|------------|------|
| Log File Management | find, gzip, cron, scripting | find, gzip, tar, cron | Beginner | 30 min |
| Find and Process Files | find, exec, xargs, file ops | find, exec, ls, du | Intermediate | 45 min |
| System Monitoring Dashboard | Bash scripting, system metrics | top, free, df, ps | Intermediate | 60 min |
| Backup Automation Script | Scripting, automation, error handling | tar, gzip, find, cron | Advanced | 90 min |
| Process Management | Process control, signals, monitoring | ps, top, kill, nice | Intermediate | 40 min |
| Network Configuration | Networking, firewall, troubleshooting | ip, ss, iptables, netstat | Advanced | 60 min |
| User Management | User/group management, permissions | useradd, usermod, chmod, chown | Beginner | 30 min |
| Package Management | Package installation, dependencies | apt, yum, dpkg, rpm | Beginner | 25 min |

Additional Practice Exercises

🎯 EXTRA PRACTICE EXERCISES
============================

1. DISK QUOTA MANAGEMENT:
   - Implement user disk quotas
   - Monitor quota usage
   - Send alerts when quotas exceeded

2. SECURITY HARDENING:
   - Audit system for security issues
   - Remove unnecessary services
   - Configure firewall rules
   - Set up intrusion detection

3. DOCKER CONTAINER MANAGEMENT:
   - Create Dockerfiles for applications
   - Manage container lifecycle
   - Set up Docker networking
   - Implement container monitoring

4. WEB SERVER CONFIGURATION:
   - Configure Nginx/Apache virtual hosts
   - Set up SSL certificates
   - Implement load balancing
   - Configure caching

5. DATABASE ADMINISTRATION:
   - Backup and restore databases
   - Optimize query performance
   - Set up replication
   - Monitor database health

6. LOG ANALYSIS PIPELINE:
   - Set up log collection
   - Parse and analyze logs
   - Create alerts from log patterns
   - Visualize log data

7. CI/CD PIPELINE:
   - Set up Jenkins/GitLab CI
   - Create build pipelines
   - Implement automated testing
   - Configure deployment automation

Quick practice tasks:

# Task 1: Find and kill zombie processes
ps aux | grep defunct
# Zombies cannot be killed directly; restart or signal the parent process

# Task 2: Monitor network traffic in real-time
sudo tcpdump -i eth0 -n port 80

# Task 3: Create a user with specific permissions
sudo useradd -m -s /bin/bash testuser
sudo passwd testuser
sudo usermod -aG sudo testuser

# Task 4: Set up a cron job for daily updates
# (append to the existing crontab; a bare "| crontab -" would replace it)
(crontab -l 2>/dev/null; echo "0 4 * * * apt update && apt upgrade -y") | crontab -

# Task 5: Create a simple HTTP server
python3 -m http.server 8080
# Then test with: curl http://localhost:8080

# Task 6: Monitor file changes in real-time
tail -f /var/log/syslog
# Alternative: inotifywait -m /path/to/watch

# Task 7: Create a compressed archive with progress
tar -czf backup.tar.gz --checkpoint=.1000 /path/to/backup

# Task 8: Find files with SUID permission
find / -perm /4000 -type f 2>/dev/null

# Task 9: Monitor memory usage of a process
watch -n 1 'ps -p $(pgrep process_name) -o pid,cmd,%mem,%cpu'

# Task 10: Create a network share with Samba
sudo apt install samba
sudo systemctl enable --now smbd

Interview Practice Scenarios

Common interview questions to practice:

💼 DEVOPS INTERVIEW SCENARIOS
==============================

SCENARIO 1: SERVER PERFORMANCE
--------------------------------
"You get an alert that a production server's CPU is at 95%. Users are
reporting slow response times. How do you diagnose and resolve this issue?"

Expected actions:
1. SSH into the server
2. Check load average: uptime
3. Identify top processes: top, ps aux --sort=-%cpu
4. Check for specific patterns (DDoS, runaway process)
5. Take appropriate action (kill process, scale, optimize)

SCENARIO 2: DISK SPACE EMERGENCY
---------------------------------
"The / partition is 99% full. Applications are failing to write logs.
How do you quickly free up space and prevent recurrence?"

Expected actions:
1. Check disk usage: df -h, du -sh /*
2. Find large files: find / -type f -size +100M
3. Clean up temporary files, logs, caches
4. Implement log rotation
5. Set up monitoring alerts

SCENARIO 3: NETWORK CONNECTIVITY
---------------------------------
"The application cannot connect to the database server. Connection
timeouts are occurring. How do you troubleshoot?"

Expected actions:
1. Test basic connectivity: ping database-host
2. Check DNS resolution: nslookup, dig
3. Test port connectivity: telnet, nc
4. Check firewall rules: iptables -L
5. Verify the service is running on the database server

SCENARIO 4: SERVICE FAILURE
---------------------------
"A critical service (nginx/mysql) keeps crashing. It starts but dies
after a few minutes. How do you debug?"

Expected actions:
1. Check service status: systemctl status service
2. Examine logs: journalctl -u service, tail -f logfile
3. Check resource limits: ulimit -a
4. Look for patterns in crashes
5. Test with a minimal configuration

Timed practice challenge:

# TIMED CHALLENGE: 15 MINUTES
# --------------------------------------

# Setup:
mkdir -p /tmp/interview-challenge
cd /tmp/interview-challenge

# Task 1: Create 100 test files with random sizes
for i in {1..100}; do
    size=$((RANDOM % 1000 + 1))
    dd if=/dev/urandom of="file_$i.dat" bs=1K count=$size 2>/dev/null
done

# Task 2: Find and list files larger than 500KB
# Task 3: Compress files older than 1 day
# Task 4: Delete empty files
# Task 5: Create a summary report with file counts and total size

# Time yourself! Complete all tasks in 15 minutes.

# Solution skeleton:
# Task 2: find . -type f -size +500k -exec ls -lh {} \;
# Task 3: find . -type f -mtime +1 -exec gzip {} \;
# Task 4: find . -type f -empty -delete
# Task 5: echo "Files: $(find . -type f | wc -l), Size: $(du -sh . | cut -f1)"