Docker on Linux: Complete Installation & Fundamentals Guide

Master Docker on Linux with this comprehensive guide covering installation, container fundamentals, Dockerfile creation, networking, storage management, and production best practices for efficient container deployment.

[Diagram: Docker architecture on a Linux host. The Linux kernel supplies namespaces, cgroups, and storage drivers (OverlayFS, Device Mapper); the Docker Engine layers the Docker CLI and REST API over the daemon (dockerd), containerd, and runc; containers (e.g. nginx:alpine, redis:latest, postgres:15) and registries (Docker Hub, private registries, local images) sit on top. Data flow: Docker CLI → Docker Daemon → containerd → runc → Container]
Docker architecture showing components, runtime layers, and data flow

Why Docker on Linux?

Docker revolutionized application deployment by providing consistent, portable, and isolated environments using Linux container technology.

  • Consistency: Works the same on any Linux distribution
  • Isolation: Secure process and resource isolation
  • Portability: Build once, run anywhere
  • Efficiency: Lightweight compared to virtual machines
  • Version Control: Image layers enable efficient updates
  • Microservices: Perfect for modern application architecture
  • DevOps Integration: Fits seamlessly into CI/CD pipelines

1. Docker Installation on Linux

  • 🐋 Docker Engine (Core Runtime): sudo apt install docker.io. Core Docker runtime and CLI for container management.
  • 📦 Docker Compose (Orchestration): sudo apt install docker-compose. Multi-container application definition and orchestration.
  • 🏗️ Buildx (Build): docker buildx create --use. Advanced image building with cross-platform support.
  • 👤 Rootless Mode (Security): dockerd-rootless-setuptool.sh install. Run the Docker daemon as a non-root user for better security.
  • 🔧 Docker Machine (Management, legacy): docker-machine create default. Provision and manage Docker hosts.
  • Podman (Rootless alternative): sudo apt install podman. Daemonless, Docker-compatible container engine.

Installation Methods Comparison

| Method | Command | Best For | Updates | Stability |
|---|---|---|---|---|
| Official repository | curl -fsSL https://get.docker.com \| sh | Most users, production | Regular | ✅ Stable |
| Distribution package | sudo apt install docker.io | Ubuntu/Debian users | Distro updates | ✅ Very stable |
| Snap package | sudo snap install docker | Easy installation | Auto-updates | ⚠️ Sandboxed |
| Binary installation | Static binaries from download.docker.com/linux/static | Air-gapped systems | Manual | ⚠️ Manual |
| Docker Desktop | GUI installer | Developers wanting a GUI | Auto-updates | ✅ Stable |
| Rootless mode | dockerd-rootless-setuptool.sh | Security-conscious setups | Manual | ⚠️ Experimental |
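Unsure which row applies? A rough shell sketch that suggests a method from /etc/os-release (the ID values below cover the common distributions; derivatives may need ID_LIKE handling):

```shell
# Suggest a Docker install method based on the distribution ID
# (read from /etc/os-release; falls back to the static binary)
if [ -r /etc/os-release ]; then
    . /etc/os-release
fi
case "${ID:-unknown}" in
    ubuntu|debian)               method="official apt repository" ;;
    centos|rhel|rocky|almalinux) method="official yum repository" ;;
    fedora)                      method="official dnf repository" ;;
    *)                           method="static binary from download.docker.com/linux/static" ;;
esac
echo "Suggested install method: $method"
```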

Complete Installation Guide

1 Ubuntu/Debian Installation
#!/bin/bash
# docker-install-ubuntu.sh - Complete Docker installation for Ubuntu/Debian

set -e

echo "=== Docker Installation for Ubuntu/Debian ==="

# 1. Remove old versions
sudo apt-get remove -y docker docker-engine docker.io containerd runc || true  # ok if some packages are absent

# 2. Update package index
sudo apt-get update

# 3. Install prerequisites
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# 4. Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
    sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# 5. Set up stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 6. Update package index again
sudo apt-get update

# 7. Install Docker Engine
sudo apt-get install -y \
    docker-ce \
    docker-ce-cli \
    containerd.io \
    docker-compose-plugin

# 8. Verify installation
sudo docker run hello-world

# 9. Add user to docker group (optional but recommended)
sudo usermod -aG docker $USER

# 10. Enable Docker to start on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service

# 11. (Optional) Install the standalone docker-compose binary
#     (the docker-compose-plugin installed above already provides "docker compose")
DOCKER_COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

echo "=== Installation Complete ==="
echo "Log out and log back in for group changes to take effect"
echo "Test with: docker run hello-world"
2 RHEL/CentOS/Fedora Installation
#!/bin/bash
# docker-install-rhel.sh - Docker installation for RHEL/CentOS/Fedora

set -e

echo "=== Docker Installation for RHEL/CentOS/Fedora ==="

# Check distribution
if [[ -f /etc/redhat-release ]]; then
    echo "Detected RHEL-based distribution"
    
    # 1. Remove old versions
    sudo yum remove -y docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-engine || true  # ok if some packages are absent
    
    # 2. Install prerequisites
    sudo yum install -y yum-utils
    
    # 3. Add Docker repository
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    
    # 4. Install Docker Engine
    sudo yum install -y docker-ce docker-ce-cli containerd.io
    
elif [[ -f /etc/fedora-release ]]; then
    echo "Detected Fedora"
    
    # 1. Remove old versions
    sudo dnf remove -y docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-selinux \
        docker-engine-selinux \
        docker-engine || true  # ok if some packages are absent
    
    # 2. Install prerequisites
    sudo dnf -y install dnf-plugins-core
    
    # 3. Add Docker repository
    sudo dnf config-manager \
        --add-repo \
        https://download.docker.com/linux/fedora/docker-ce.repo
    
    # 4. Install Docker Engine
    sudo dnf install -y docker-ce docker-ce-cli containerd.io
    
else
    echo "Unsupported distribution"
    exit 1
fi

# 5. Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# 6. Verify installation
sudo docker run hello-world

# 7. Add user to docker group
sudo usermod -aG docker $USER

# 8. Install Docker Compose
DOCKER_COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

echo "=== Installation Complete ==="
echo "Log out and log back in for group changes"
echo "Test with: docker run hello-world"
3 Verify Installation
# Verify Docker installation
docker --version
docker-compose --version
docker system info

# Test Docker functionality
docker run hello-world

# Check Docker service status
sudo systemctl status docker

# View Docker daemon logs
sudo journalctl -u docker.service -f

# Test networking
docker run -it --rm alpine ping -c 4 google.com

# Test volume mounting
docker run -v /tmp:/host-tmp alpine ls /host-tmp

# Test port mapping
docker run -d -p 8080:80 nginx:alpine
curl http://localhost:8080

# Clean up test containers
docker container prune -f
docker image prune -f
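The docker group added during installation only takes effect in new login sessions; a small sketch to check whether the current shell already has it (assumes the default group name, docker):

```shell
# Check whether the current user's active groups include "docker"
if id -nG | grep -qw docker; then
    in_group=yes
    echo "docker group: active in this session"
else
    in_group=no
    echo "docker group: not active yet (log out and back in, or run: newgrp docker)"
fi
```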

2. Docker Fundamentals & Core Commands

Essential Docker Commands

# Docker System Information
docker version # Docker version
docker info # System-wide information
docker system df # Disk usage
docker stats # Live container statistics
docker events # Real-time Docker events
# Image Management
docker images # List images
docker pull ubuntu:20.04 # Pull image from registry
docker push myrepo/myimage:tag # Push image to registry
docker rmi ubuntu:20.04 # Remove image
docker image prune # Remove unused images
docker history ubuntu:20.04 # Show image history/layers
docker tag ubuntu:20.04 myubuntu:latest # Tag image
docker save -o ubuntu.tar ubuntu:20.04 # Save image to tar
docker load -i ubuntu.tar # Load image from tar
# Container Lifecycle
docker run -it ubuntu:20.04 /bin/bash # Run interactive container
docker run -d nginx:alpine # Run detached container
docker start container_name # Start stopped container
docker stop container_name # Stop running container
docker restart container_name # Restart container
docker pause container_name # Pause container
docker unpause container_name # Unpause container
docker kill container_name # Kill container (SIGKILL)
docker rm container_name # Remove container
docker container prune # Remove stopped containers
# Container Inspection
docker ps # List running containers
docker ps -a # List all containers
docker ps -q # List only container IDs
docker ps -f "status=running" # Filter containers
docker inspect container_name # Detailed container info
docker logs container_name # View container logs
docker logs -f container_name # Follow logs
docker logs --tail 100 container_name # Last 100 lines
docker top container_name # View container processes
docker diff container_name # Show filesystem changes
docker port container_name # Show port mappings
# Container Execution
docker exec -it container_name /bin/bash # Execute command in container
docker exec container_name ls -la # Run command non-interactive
docker attach container_name # Attach to running container
docker cp file.txt container_name:/tmp/ # Copy file to container
docker cp container_name:/tmp/file.txt . # Copy file from container
# Networking
docker network ls # List networks
docker network create mynetwork # Create network
docker network inspect mynetwork # Inspect network
docker network connect mynetwork container # Connect container
docker network disconnect mynetwork container # Disconnect container
docker network prune # Remove unused networks
# Volumes
docker volume ls # List volumes
docker volume create myvolume # Create volume
docker volume inspect myvolume # Inspect volume
docker volume prune # Remove unused volumes
# System Cleanup
docker system prune # Remove unused data
docker system prune -a # Remove all unused data
docker system df -v # Detailed disk usage
[Diagram: Docker container lifecycle: build (Dockerfile, docker build) → push (registry, docker push) → pull (docker pull) → run (docker run), with logs and exec on running containers; container states and transitions: Created, Running, Paused, Stopped, Exited]
Docker workflow and container state transitions

3. Dockerfile Mastery

Complete Dockerfile Reference

Complete Dockerfile Example
# Dockerfile - Production-ready Node.js Application

# ============================================
# BUILD STAGE
# ============================================

# Use official Node.js LTS as base image
FROM node:18-alpine AS builder

# Set environment variables
ENV NODE_ENV=production \
    APP_PORT=3000 \
    NPM_CONFIG_LOGLEVEL=warn

# Set working directory
WORKDIR /app

# Install dependencies first (caching layer)
COPY package*.json ./
# --omit=dev replaces the deprecated --only=production flag
RUN npm ci --omit=dev && npm cache clean --force

# Copy application source
COPY . .

# Build application (if needed)
# RUN npm run build

# ============================================
# RUNTIME STAGE (Multi-stage build)
# ============================================

# Use smaller runtime image
FROM node:18-alpine

# Re-declare runtime environment (ENV values do not carry over between build stages)
ENV NODE_ENV=production \
    APP_PORT=3000

# Add non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Install runtime dependencies (tini provides the init process; curl is used by the health check)
RUN apk add --no-cache \
    tini \
    curl

# Use tini as init process
ENTRYPOINT ["/sbin/tini", "--"]

# Set working directory
WORKDIR /app

# Copy built artifacts from builder stage
COPY --from=builder --chown=nodejs:nodejs /app /app

# Switch to non-root user
USER nodejs

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:${APP_PORT}/health || exit 1

# Expose application port
EXPOSE ${APP_PORT}

# Define volume for persistent data
VOLUME ["/app/data", "/app/logs"]

# Set labels for metadata
LABEL maintainer="devops@example.com" \
      version="1.0.0" \
      description="Node.js Application" \
      org.opencontainers.image.source="https://github.com/your/repo"

# Command to run the application
CMD ["node", "server.js"]

# ============================================
# ADDITIONAL DOCKERFILE DIRECTIVES
# ============================================

# ARG - Build-time arguments
# ARG BUILD_VERSION=1.0.0
# ARG NODE_VERSION=18

# ENV - Environment variables
# ENV DATABASE_URL=postgres://user:pass@db:5432/app
# ENV REDIS_URL=redis://redis:6379

# COPY - Copy files with patterns
# COPY src/ ./src/
# COPY public/ ./public/
# COPY *.config.js ./

# ADD - Copy with URL and tar extraction
# ADD https://example.com/file.tar.gz /tmp/
# ADD app.tar.gz /app/

# RUN - Execute commands
# RUN apt-get update && apt-get install -y \
#     git \
#     python3 \
#     build-essential \
#     && rm -rf /var/lib/apt/lists/*

# USER - Switch user
# USER nobody

# WORKDIR - Set working directory
# WORKDIR /usr/src/app

# VOLUME - Create mount point
# VOLUME /var/lib/mysql

# EXPOSE - Document ports
# EXPOSE 80/tcp
# EXPOSE 443/tcp

# HEALTHCHECK - Container health
# HEALTHCHECK --interval=5m --timeout=3s \
#   CMD curl -f http://localhost/ || exit 1

# SHELL - Change default shell
# SHELL ["/bin/bash", "-c"]

# STOPSIGNAL - Signal for stopping
# STOPSIGNAL SIGTERM

# ONBUILD - Trigger instructions
# ONBUILD COPY package.json ./
# ONBUILD RUN npm install

Dockerfile Best Practices

| Practice | Good Example | Bad Example | Why |
|---|---|---|---|
| Use official images | FROM node:18-alpine | FROM ubuntu:latest | Security, size, maintenance |
| Multi-stage builds | Separate build and runtime stages | Single stage with all tools | Smaller final images |
| Layer caching | Copy package.json before source | Copy everything, then install | Faster builds |
| Non-root user | USER nodejs | Run as root | Security |
| Cleanup | apt-get clean && rm -rf /var/lib/apt/lists/* | Leave cache files | Smaller images |
| Health checks | HEALTHCHECK CMD curl -f http://localhost/health | No health check | Reliability |
| .dockerignore | Exclude node_modules, .git | Copy everything | Smaller build context |
| Tag explicitly | node:18.15.0-alpine3.17 | node:latest | Reproducibility |
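The .dockerignore row deserves a concrete starting point; a sketch for a Node.js project like the earlier example (entries are typical, adjust per project):

```
# .dockerignore - keep the build context small
node_modules
npm-debug.log
.git
.gitignore
.env
Dockerfile
docker-compose*.yml
coverage
dist
*.md
```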

Sample Application Dockerfiles

1 Python Flask Application
# Dockerfile for Python Flask Application
FROM python:3.11-slim AS builder

# Install build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy requirements first for caching
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Runtime stage
FROM python:3.11-slim

# Create non-root user with a home directory for the copied packages
RUN groupadd -r flask && useradd -r -g flask -m -d /home/flask flask

WORKDIR /app

# Copy Python dependencies from builder into the flask user's home
# (pip install --user in the builder placed them under /root/.local,
#  which the flask user cannot read)
COPY --from=builder --chown=flask:flask /root/.local /home/flask/.local

# Copy application
COPY --chown=flask:flask . .

# Add the user-level bin directory to PATH
ENV PATH=/home/flask/.local/bin:$PATH

# Switch to non-root user
USER flask

# Expose port
EXPOSE 5000

# Health check (urllib avoids depending on the requests package at runtime)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')"

# Run application
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
2 Nginx with Custom Configuration
# Dockerfile for Custom Nginx
FROM nginx:1.23-alpine

# Remove default nginx configuration
RUN rm /etc/nginx/conf.d/default.conf

# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf
COPY sites/ /etc/nginx/sites-available/
COPY ssl/ /etc/nginx/ssl/

# Copy static files
COPY static/ /usr/share/nginx/html/

# Create necessary directories
RUN mkdir -p /var/log/nginx && \
    mkdir -p /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx /var/cache/nginx

# Note: the stock nginx image expects to start as root so it can bind ports 80/443
# and write its pid file; worker processes drop to the nginx user automatically.
# For a fully non-root image, base on nginxinc/nginx-unprivileged instead.

# Expose HTTP and HTTPS
EXPOSE 80
EXPOSE 443

# Health check (busybox wget ships with the alpine base; curl does not)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
    CMD wget -qO- http://localhost/ || exit 1

# Start nginx in foreground
CMD ["nginx", "-g", "daemon off;"]
3 Multi-service Docker Compose
# docker-compose.yml - Full Stack Application
version: '3.8'

services:
  # Frontend - React Application
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      args:
        NODE_ENV: production
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://backend:5000/api
    depends_on:
      - backend
    networks:
      - app-network
    volumes:
      - frontend-node-modules:/app/node_modules
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Backend - Node.js API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://postgres:password@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - app-network
    volumes:
      - backend-data:/app/data
      - ./backend/logs:/app/logs
    restart: unless-stopped

  # Database - PostgreSQL
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Cache - Redis
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  # Reverse Proxy - Nginx
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/sites:/etc/nginx/sites-enabled:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - frontend
      - backend
    networks:
      - app-network
    restart: always

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  postgres-data:
  redis-data:
  frontend-node-modules:
  backend-data:

4. Docker Networking & Storage

Docker Network Types

# Network Management
docker network ls # List networks
docker network create mynet # Create bridge network
docker network create --driver bridge isolated-net
docker network create --driver overlay cluster-net
docker network create --subnet=172.20.0.0/16 --gateway=172.20.0.1 mynet
docker network inspect mynet # Detailed network info
docker network connect mynet container # Connect container
docker network disconnect mynet container
docker network prune # Remove unused networks
# Network Types Examples
# Bridge (default) - Containers on same host can communicate
docker run -d --name web --network bridge nginx
docker run -d --name app --network bridge node
# Host - Share host's network namespace
docker run -d --name web --network host nginx # Uses host network directly
# None - No networking
docker run -d --name test --network none alpine sleep 3600
# Custom bridge with DNS
docker network create --driver bridge app-net
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name app --network app-net -e DB_HOST=db myapp # Can resolve "db"
# Overlay network (Swarm mode)
docker swarm init # Initialize swarm
docker network create --driver overlay --attachable overlay-net
docker service create --name web --network overlay-net --replicas 3 nginx
# Macvlan - Assign MAC addresses to containers
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
macvlan-net
# Port Publishing
docker run -d -p 8080:80 nginx # Map host port 8080 to container 80
docker run -d -p 80:80 -p 443:443 nginx # Multiple ports
docker run -d -p 127.0.0.1:8080:80 nginx # Bind to specific interface
docker run -d -p 8080:80/tcp -p 8080:80/udp nginx # TCP and UDP
docker run -d --expose 3000 node # Expose port without publishing

Docker Storage & Volumes

Volume Management Examples
# Volume Management Commands
docker volume ls                              # List volumes
docker volume create app-data                 # Create volume
docker volume inspect app-data                # Volume details
docker volume rm app-data                     # Remove volume
docker volume prune                           # Remove unused volumes

# Bind Mounts (Host directories)
docker run -d \
  --name mysql \
  -v /home/user/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0

# Named Volumes (Docker managed)
docker run -d \
  --name postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

# Read-only volumes
docker run -d \
  --name nginx \
  -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /usr/share/nginx/html:/usr/share/nginx/html:ro \
  nginx:alpine

# Volume with specific options
docker run -d \
  --name app \
  -v app-logs:/app/logs:rw,noexec,nosuid \
  myapp:latest

# Multi-container volume sharing
docker volume create shared-data
docker run -d --name writer -v shared-data:/data alpine sh -c "echo 'Hello' > /data/file.txt"
docker run -it --name reader -v shared-data:/data alpine cat /data/file.txt

# tmpfs mounts (in-memory)
docker run -d \
  --name tmpfs-app \
  --tmpfs /tmp:size=100M,mode=1777 \
  alpine:latest sleep 3600

# Volume drivers (NFS example)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/path/to/nfs/share \
  nfs-volume

# Backup volume data
docker run --rm \
  -v pgdata:/source \
  -v /backup:/backup \
  alpine tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /source .

# Restore volume from backup
docker run --rm \
  -v pgdata:/target \
  -v /backup:/backup \
  alpine sh -c "rm -rf /target/* && tar xzf /backup/pgdata-20231210.tar.gz -C /target"

5. Production Best Practices

Docker Production Best Practices:
1. Use specific image tags: Avoid latest, use ubuntu:20.04
2. Implement health checks: HEALTHCHECK CMD curl -f http://localhost/health
3. Set resource limits: --memory=512m --cpus=1.0
4. Use non-root users: USER nodejs in Dockerfile
5. Implement logging: Use JSON logging driver for structured logs
6. Secure secrets: Use Docker secrets or external vaults
7. Regular updates: Update base images and dependencies
8. Monitor containers: Implement monitoring and alerting
9. Backup volumes: Regular backups of persistent data
10. Test thoroughly: Test images in staging before production
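Point 5 (logging) is usually set daemon-wide in /etc/docker/daemon.json; a sketch enabling the json-file driver with log rotation (the size and file counts are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the daemon afterwards (sudo systemctl restart docker); only containers created after the change pick up the new settings.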

Production Deployment Script

#!/bin/bash
# docker-deploy.sh - Production deployment script

set -euo pipefail

# Configuration
APP_NAME="myapp"
APP_VERSION="1.2.3"
REGISTRY="registry.example.com"
ENVIRONMENT="production"
DOCKER_COMPOSE_FILE="docker-compose.prod.yml"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log() {
    echo -e "${GREEN}[$(date '+%Y-%m-%d %H:%M:%S')]${NC} $1"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" >&2
    exit 1
}

check_prerequisites() {
    log "Checking prerequisites..."
    
    # Check Docker
    if ! command -v docker &> /dev/null; then
        error "Docker is not installed"
    fi
    
    # Check Docker Compose
    if ! command -v docker-compose &> /dev/null; then
        error "Docker Compose is not installed"
    fi
    
    # Check disk usage of Docker's storage (abort above 90% used)
    local used_space
    used_space=$(df -h /var/lib/docker | awk 'NR==2 {print $5}' | sed 's/%//')
    if [[ $used_space -gt 90 ]]; then
        error "Low disk space on Docker storage: ${used_space}% used"
    fi
    
    log "Prerequisites satisfied"
}

build_images() {
    log "Building Docker images..."
    
    # Build application image
    docker build \
        -t ${REGISTRY}/${APP_NAME}:${APP_VERSION} \
        -t ${REGISTRY}/${APP_NAME}:latest \
        --build-arg BUILD_VERSION=${APP_VERSION} \
        --build-arg NODE_ENV=production \
        --no-cache \
        .
    
    # Push to registry
    log "Pushing images to registry..."
    docker push ${REGISTRY}/${APP_NAME}:${APP_VERSION}
    docker push ${REGISTRY}/${APP_NAME}:latest
    
    log "Images built and pushed successfully"
}

deploy_application() {
    log "Deploying application..."
    
    # Stop existing containers
    log "Stopping existing containers..."
    docker-compose -f ${DOCKER_COMPOSE_FILE} down --remove-orphans
    
    # Pull latest images
    log "Pulling latest images..."
    docker-compose -f ${DOCKER_COMPOSE_FILE} pull
    
    # Start services
    log "Starting services..."
    docker-compose -f ${DOCKER_COMPOSE_FILE} up -d
    
    # Wait for services to be healthy
    log "Waiting for services to be healthy..."
    local timeout=300
    local start_time=$(date +%s)
    
    while true; do
        local current_time=$(date +%s)
        local elapsed=$((current_time - start_time))
        
        if [[ $elapsed -gt $timeout ]]; then
            error "Deployment timeout after ${timeout} seconds"
        fi
        
        # Check all services health
        local unhealthy_count=$(docker-compose -f ${DOCKER_COMPOSE_FILE} ps | grep -c "unhealthy\|starting")
        
        if [[ $unhealthy_count -eq 0 ]]; then
            log "All services are healthy"
            break
        fi
        
        log "Waiting for ${unhealthy_count} services to become healthy..."
        sleep 10
    done
}

run_migrations() {
    log "Running database migrations..."
    
    # Run migrations in a temporary container
    # (DATABASE_URL must be exported by the caller; with set -u an unset value aborts here)
    docker run --rm \
        --network ${APP_NAME}_default \
        -e DATABASE_URL="${DATABASE_URL:?DATABASE_URL must be set}" \
        ${REGISTRY}/${APP_NAME}:${APP_VERSION} \
        npm run migrate
    
    log "Migrations completed"
}

cleanup() {
    log "Cleaning up old images..."
    
    # Remove old images (keep last 5)
    docker images ${REGISTRY}/${APP_NAME} \
        --format "{{.Tag}} {{.CreatedAt}}" \
        | sort -rk2 \
        | awk 'NR>5 {print $1}' \
        | xargs -I {} docker rmi ${REGISTRY}/${APP_NAME}:{} 2>/dev/null || true
    
    # Clean up Docker system
    docker system prune -f
    
    log "Cleanup completed"
}

monitor_deployment() {
    log "Monitoring deployment..."
    
    echo "=== Service Status ==="
    docker-compose -f ${DOCKER_COMPOSE_FILE} ps
    
    echo -e "\n=== Resource Usage ==="
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
    
    echo -e "\n=== Recent Logs ==="
    docker-compose -f ${DOCKER_COMPOSE_FILE} logs --tail=20
    
    log "Deployment monitoring completed"
}

main() {
    log "Starting deployment of ${APP_NAME} v${APP_VERSION} to ${ENVIRONMENT}"
    
    check_prerequisites
    build_images
    deploy_application
    run_migrations
    cleanup
    monitor_deployment
    
    log "Deployment completed successfully!"
    log "Application is available at: https://${APP_NAME}.example.com"
}

# Handle signals
trap 'error "Deployment interrupted"' INT TERM

# Run main function
main "$@"
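The tag-retention pipeline in cleanup() can be dry-run against canned docker images output (the tags and dates below are made up), which also shows why it works: ISO-formatted dates sort correctly even lexically:

```shell
# Simulated `docker images --format "{{.Tag}} {{.CreatedAt}}"` output (hypothetical tags)
tags='1.2.3 2023-12-10
1.2.2 2023-12-09
1.2.1 2023-12-08
1.2.0 2023-12-07
1.1.9 2023-12-06
1.1.8 2023-12-05
1.1.7 2023-12-04'

# Same sort/awk logic as cleanup(): newest first, keep 5, print removal candidates
printf '%s\n' "$tags" | sort -rk2 | awk 'NR>5 {print $1}'
```

Here the two oldest tags, 1.1.8 and 1.1.7, are the removal candidates; on a live host the real docker images command replaces the canned variable.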

6. Troubleshooting & Monitoring

Common Docker Issues & Solutions

| Issue | Symptoms | Solution | Command |
|---|---|---|---|
| Container won't start | Exits immediately, no logs | Check entrypoint/cmd, run interactively | docker run -it --entrypoint /bin/bash image |
| Port already in use | bind: address already in use | Change port or stop the conflicting container | docker ps -a \| grep :80 |
| Out of memory | Container killed, exit code 137 | Increase memory limit or optimize the app | docker run --memory=2g app |
| Permission denied | Cannot write to volume | Fix volume permissions or use named volumes | chown 1000:1000 /path |
| DNS issues | Cannot resolve hostnames | Configure DNS or use a custom network | --dns 8.8.8.8 |
| Slow builds | docker build takes too long | Optimize the Dockerfile, use the build cache | --no-cache for clean builds |
| Disk space full | No space left on device | Clean up unused images/containers | docker system prune -a |
| Network connectivity | Containers can't talk | Use a custom bridge network | docker network create app-net |
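For the port-conflict row, the container holding a host port can be picked out of docker ps output; the parsing is sketched here against canned output (container names hypothetical) so the awk filter itself can be checked offline:

```shell
# Simulated `docker ps --format "{{.Names}}\t{{.Ports}}"` output
ps_out=$(printf 'web1\t0.0.0.0:8080->80/tcp\ncache\t6379/tcp\nproxy\t0.0.0.0:80->80/tcp')

# Which container publishes host port 80? (":80->" matches only the host-side port)
printf '%s\n' "$ps_out" | awk -F'\t' '$2 ~ /:80->/ {print $1}'
```

On a live host, replace the canned variable with: ps_out=$(docker ps --format '{{.Names}}\t{{.Ports}}').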

Monitoring & Debugging Commands

# Container Inspection
docker inspect container_name | jq '.[0]' # JSON output with jq
docker inspect --format='{{.State.Status}}' container_name
docker inspect --format='{{json .NetworkSettings.Networks}}' container_name
# Logs Analysis
docker logs --tail 100 -f container_name # Follow last 100 lines
docker logs --since 10m container_name # Last 10 minutes
docker logs container_name 2>&1 | grep -i error # Search for errors
# Resource Monitoring
docker stats --no-stream # One-time stats
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
docker top container_name # Container processes
docker exec container_name ps aux # Process list inside
# Network Debugging
docker exec container_name ping google.com # Test connectivity
docker exec container_name nslookup db # DNS resolution
docker exec container_name netstat -tulpn # Listening ports
docker network inspect bridge | jq '.[0].Containers'
# Filesystem Debugging
docker diff container_name # Changed files
docker exec container_name df -h # Disk usage inside
docker exec container_name ls -la /app # Directory listing
# System Diagnostics
docker system df -v # Detailed disk usage
docker info # Docker system info
docker version # Version info
docker events --since '10m' --until '0s' # Recent events
# Debugging Containers
docker run -it --rm --net container:web alpine sh # Debug network namespace
docker run -it --rm --pid container:web alpine sh # Debug PID namespace
nsenter -t $(docker inspect -f '{{.State.Pid}}' container) -n ip addr # Enter network namespace

Master Docker on Linux

Docker on Linux provides a powerful platform for building, shipping, and running applications in consistent, isolated containers. By mastering Docker installation, container fundamentals, Dockerfile creation, networking, storage, and production practices, you can efficiently deploy applications at scale.

Key Takeaways: Start with proper installation using official repositories. Master essential Docker commands for daily operations. Write efficient Dockerfiles using best practices and multi-stage builds. Implement proper networking and storage for production workloads. Follow security best practices and monitor containers effectively.

Next Steps: Practice with real applications by containerizing existing projects. Explore Docker Compose for multi-service applications. Learn Docker Swarm or Kubernetes for orchestration at scale. Implement CI/CD pipelines with Docker. Monitor production containers with tools like Prometheus and Grafana. Stay updated with Docker security advisories and best practices.