Master Docker on Linux with this comprehensive guide covering installation, container fundamentals, Dockerfile creation, networking, storage management, and production best practices for efficient container deployment.
Why Docker on Linux?
Docker revolutionized application deployment by providing consistent, portable, and isolated environments using Linux container technology.
- Consistency: Works the same on any Linux distribution
- Isolation: Secure process and resource isolation
- Portability: Build once, run anywhere
- Efficiency: Lightweight compared to virtual machines
- Version Control: Image layers enable efficient updates
- Microservices: Perfect for modern application architecture
- DevOps Integration: Fits seamlessly into CI/CD pipelines
1. Docker Installation on Linux
Installation Methods Comparison
| Method | Command | Best For | Updates | Stability |
|---|---|---|---|---|
| Official Repository | curl -fsSL https://get.docker.com \| sh | Most users, production | Regular | ✅ Stable |
| Distribution Package | sudo apt install docker.io | Ubuntu/Debian users | Distro updates | ✅ Very Stable |
| Snap Package | sudo snap install docker | Easy installation | Auto-updates | ⚠️ Sandboxed |
| Binary Installation | Static tarball from download.docker.com/linux/static | Air-gapped systems | Manual | ⚠️ Manual |
| Docker Desktop | GUI installer | Developers wanting a GUI | Auto-updates | ✅ Stable |
| Rootless Mode | dockerd-rootless-setuptool.sh install | Security-conscious users | Manual | ✅ Stable |
Complete Installation Guide
#!/bin/bash
# docker-install-ubuntu.sh - Complete Docker installation for Ubuntu/Debian
set -e
echo "=== Docker Installation for Ubuntu/Debian ==="
# 1. Remove old versions
sudo apt-get remove -y docker docker-engine docker.io containerd runc
# 2. Update package index
sudo apt-get update
# 3. Install prerequisites
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# 4. Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# 5. Set up stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# 6. Update package index again
sudo apt-get update
# 7. Install Docker Engine
sudo apt-get install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-compose-plugin
# 8. Verify installation
sudo docker run hello-world
# 9. Add user to docker group (optional but recommended)
sudo usermod -aG docker $USER
# 10. Enable Docker to start on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
# 11. Install standalone Docker Compose binary (optional: the docker-compose-plugin
#     installed above already provides the `docker compose` subcommand)
DOCKER_COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
echo "=== Installation Complete ==="
echo "Log out and log back in for group changes to take effect"
echo "Test with: docker run hello-world"
#!/bin/bash
# docker-install-rhel.sh - Docker installation for RHEL/CentOS/Fedora
set -e
echo "=== Docker Installation for RHEL/CentOS/Fedora ==="
# Check distribution
if [[ -f /etc/redhat-release ]]; then
echo "Detected RHEL-based distribution"
# 1. Remove old versions
sudo yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# 2. Install prerequisites
sudo yum install -y yum-utils
# 3. Add Docker repository
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# 4. Install Docker Engine
sudo yum install -y docker-ce docker-ce-cli containerd.io
elif [[ -f /etc/fedora-release ]]; then
echo "Detected Fedora"
# 1. Remove old versions
sudo dnf remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# 2. Install prerequisites
sudo dnf -y install dnf-plugins-core
# 3. Add Docker repository
sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
# 4. Install Docker Engine
sudo dnf install -y docker-ce docker-ce-cli containerd.io
else
echo "Unsupported distribution"
exit 1
fi
# 5. Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker
# 6. Verify installation
sudo docker run hello-world
# 7. Add user to docker group
sudo usermod -aG docker $USER
# 8. Install Docker Compose
DOCKER_COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
echo "=== Installation Complete ==="
echo "Log out and log back in for group changes"
echo "Test with: docker run hello-world"
# Verify Docker installation
docker --version
docker-compose --version
docker system info
# Test Docker functionality
docker run hello-world
# Check Docker service status
sudo systemctl status docker
# View Docker daemon logs
sudo journalctl -u docker.service -f
# Test networking
docker run -it --rm alpine ping -c 4 google.com
# Test volume mounting
docker run -v /tmp:/host-tmp alpine ls /host-tmp
# Test port mapping
docker run -d -p 8080:80 nginx:alpine
curl http://localhost:8080
# Clean up test containers
docker container prune -f
docker image prune -f
2. Docker Fundamentals & Core Commands
Essential Docker Commands
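The staples for day-to-day work, grouped by what they act on (names like web and myapp are placeholders):

```shell
# --- Images ---
docker pull nginx:alpine          # Download an image from a registry
docker images                     # List local images
docker rmi nginx:alpine           # Remove an image
docker build -t myapp:1.0 .       # Build an image from a Dockerfile

# --- Containers ---
docker run -d --name web -p 8080:80 nginx:alpine   # Create and start
docker ps                         # List running containers
docker ps -a                      # Include stopped containers
docker logs -f web                # Follow a container's logs
docker exec -it web sh            # Shell into a running container
docker stop web && docker rm web  # Stop and remove

# --- System ---
docker inspect web                # Full container metadata (JSON)
docker system df                  # Disk usage by images/containers/volumes
docker system prune               # Remove unused data
```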
3. Dockerfile Mastery
Complete Dockerfile Reference
# Dockerfile - Production-ready Node.js Application
# ============================================
# BUILD STAGE
# ============================================
# Use official Node.js LTS as base image
FROM node:18-alpine AS builder
# Set environment variables
ENV NODE_ENV=production \
APP_PORT=3000 \
NPM_CONFIG_LOGLEVEL=warn
# Set working directory
WORKDIR /app
# Install dependencies first (caching layer)
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
# Copy application source
COPY . .
# Build application (if needed)
# RUN npm run build
# ============================================
# RUNTIME STAGE (Multi-stage build)
# ============================================
# Use smaller runtime image
FROM node:18-alpine
# Re-declare variables needed at runtime (ENV values do not carry across build stages)
ENV NODE_ENV=production \
APP_PORT=3000
# Add non-root user for security
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
# Install runtime dependencies (tini as the init process, curl for the health check)
RUN apk add --no-cache \
tini \
curl
# Use tini as init process
ENTRYPOINT ["/sbin/tini", "--"]
# Set working directory
WORKDIR /app
# Copy built artifacts from builder stage
COPY --from=builder --chown=nodejs:nodejs /app /app
# Switch to non-root user
USER nodejs
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:${APP_PORT}/health || exit 1
# Expose application port
EXPOSE ${APP_PORT}
# Define volume for persistent data
VOLUME ["/app/data", "/app/logs"]
# Set labels for metadata
LABEL maintainer="devops@example.com" \
version="1.0.0" \
description="Node.js Application" \
org.opencontainers.image.source="https://github.com/your/repo"
# Command to run the application
CMD ["node", "server.js"]
# ============================================
# ADDITIONAL DOCKERFILE DIRECTIVES
# ============================================
# ARG - Build-time arguments
# ARG BUILD_VERSION=1.0.0
# ARG NODE_VERSION=18
# ENV - Environment variables
# ENV DATABASE_URL=postgres://user:pass@db:5432/app
# ENV REDIS_URL=redis://redis:6379
# COPY - Copy files with patterns
# COPY src/ ./src/
# COPY public/ ./public/
# COPY *.config.js ./
# ADD - Copy with URL and tar extraction
# ADD https://example.com/file.tar.gz /tmp/
# ADD app.tar.gz /app/
# RUN - Execute commands
# RUN apt-get update && apt-get install -y \
# git \
# python3 \
# build-essential \
# && rm -rf /var/lib/apt/lists/*
# USER - Switch user
# USER nobody
# WORKDIR - Set working directory
# WORKDIR /usr/src/app
# VOLUME - Create mount point
# VOLUME /var/lib/mysql
# EXPOSE - Document ports
# EXPOSE 80/tcp
# EXPOSE 443/tcp
# HEALTHCHECK - Container health
# HEALTHCHECK --interval=5m --timeout=3s \
# CMD curl -f http://localhost/ || exit 1
# SHELL - Change default shell
# SHELL ["/bin/bash", "-c"]
# STOPSIGNAL - Signal for stopping
# STOPSIGNAL SIGTERM
# ONBUILD - Trigger instructions
# ONBUILD COPY package.json ./
# ONBUILD RUN npm install
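Once a Dockerfile like the one above exists, the build-and-run cycle looks like this (the tag myapp:1.0.0 and container name are illustrative):

```shell
# Build the image, passing a build-time argument
docker build -t myapp:1.0.0 --build-arg BUILD_VERSION=1.0.0 .

# Run it with the port and volumes the Dockerfile declares
docker run -d \
  --name myapp \
  -p 3000:3000 \
  -v myapp-data:/app/data \
  -v myapp-logs:/app/logs \
  myapp:1.0.0

# Confirm the HEALTHCHECK is passing
docker inspect --format '{{.State.Health.Status}}' myapp
```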
Dockerfile Best Practices
| Practice | Good Example | Bad Example | Why |
|---|---|---|---|
| Use Official Images | FROM node:18-alpine | FROM ubuntu:latest | Security, size, maintenance |
| Multi-stage Builds | Separate build and runtime stages | Single stage with all tools | Smaller final images |
| Layer Caching | Copy package.json before source | Copy everything then install | Faster builds |
| Non-root User | USER nodejs | Run as root | Security |
| Cleanup | apt-get clean && rm -rf /var/lib/apt/lists/* | Leave cache files | Smaller images |
| Health Checks | HEALTHCHECK CMD curl -f http://localhost/health | No health check | Reliability |
| .dockerignore | Exclude node_modules, .git | Copy everything | Smaller context |
| Tag Explicitly | node:18.15.0-alpine3.17 | node:latest | Reproducibility |
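The .dockerignore row is worth expanding: a typical Node.js project's file might look like this (adjust to your repository's layout):

```
node_modules
npm-debug.log
.git
.gitignore
.env
*.md
Dockerfile
docker-compose*.yml
coverage/
.vscode/
```

Everything listed here is excluded from the build context, which speeds up `docker build` and keeps secrets like .env out of image layers.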
Sample Application Dockerfiles
# Dockerfile for Python Flask Application
FROM python:3.11-slim AS builder
# Install build dependencies
RUN apt-get update && apt-get install -y \
gcc \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy requirements first for caching
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.11-slim
# Create non-root user
RUN groupadd -r flask && useradd -r -g flask flask
WORKDIR /app
# Copy Python dependencies from builder into the flask user's home
# (leaving them under /root would make them unreadable after USER flask)
COPY --from=builder --chown=flask:flask /root/.local /home/flask/.local
# Copy application
COPY --chown=flask:flask . .
# Add the user-level bin directory to PATH
ENV PATH=/home/flask/.local/bin:$PATH
# Switch to non-root user
USER flask
# Expose port
EXPOSE 5000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
CMD python -c "import requests; requests.get('http://localhost:5000/health')"
# Run application
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
# Dockerfile for Custom Nginx
FROM nginx:1.23-alpine
# Remove default nginx configuration
RUN rm /etc/nginx/conf.d/default.conf
# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf
COPY sites/ /etc/nginx/sites-available/
COPY ssl/ /etc/nginx/ssl/
# Copy static files
COPY static/ /usr/share/nginx/html/
# Create necessary directories
RUN mkdir -p /var/log/nginx && \
mkdir -p /var/cache/nginx && \
chown -R nginx:nginx /var/log/nginx /var/cache/nginx
# Note: the official nginx image starts as root and drops privileges for its
# worker processes itself; adding USER nginx here would prevent binding to
# ports 80/443. Use the nginxinc/nginx-unprivileged image for a fully
# non-root container.
# Expose HTTP and HTTPS
EXPOSE 80
EXPOSE 443
# Health check (the alpine base ships busybox wget; curl is not installed)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
CMD wget -q --spider http://localhost/ || exit 1
# Start nginx in foreground
CMD ["nginx", "-g", "daemon off;"]
# docker-compose.yml - Full Stack Application
version: '3.8'
services:
# Frontend - React Application
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
args:
NODE_ENV: production
ports:
- "3000:3000"
environment:
- REACT_APP_API_URL=http://backend:5000/api
depends_on:
- backend
networks:
- app-network
volumes:
- frontend-node-modules:/app/node_modules
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
# Backend - Node.js API
backend:
build:
context: ./backend
dockerfile: Dockerfile
ports:
- "5000:5000"
environment:
- NODE_ENV=production
- DATABASE_URL=postgres://postgres:password@db:5432/app
- REDIS_URL=redis://redis:6379
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
networks:
- app-network
volumes:
- backend-data:/app/data
- ./backend/logs:/app/logs
restart: unless-stopped
# Database - PostgreSQL
db:
image: postgres:15-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
volumes:
- postgres-data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- app-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
# Cache - Redis
redis:
image: redis:7-alpine
command: redis-server --appendonly yes
volumes:
- redis-data:/data
networks:
- app-network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
# Reverse Proxy - Nginx
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/sites:/etc/nginx/sites-enabled:ro
- ./ssl:/etc/nginx/ssl:ro
depends_on:
- frontend
- backend
networks:
- app-network
restart: always
networks:
app-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
volumes:
postgres-data:
redis-data:
frontend-node-modules:
backend-data:
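Assuming this file is saved as docker-compose.yml next to the frontend/, backend/, and nginx/ directories it references, the stack is driven with a few commands (service names match the file):

```shell
# Start the full stack in the background
docker compose up -d

# Check service state and health
docker compose ps

# Follow one service's logs
docker compose logs -f backend

# Restart a single service after a config change
docker compose restart backend

# Tear down, keeping named volumes
docker compose down

# Tear down and delete volumes too (destroys database data)
docker compose down -v
```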
4. Docker Networking & Storage
Docker Network Types
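Docker ships three built-in drivers — bridge (the default), host, and none — plus user-defined bridge networks, which add automatic DNS resolution between containers. A quick tour (image and container names are illustrative):

```shell
# List networks; bridge, host, and none exist by default
docker network ls

# Default bridge: containers get NAT'd outbound access,
# but can only reach each other by IP
docker run -d --name web1 nginx:alpine

# User-defined bridge: containers resolve each other by name
docker network create app-net
docker run -d --name web2 --network app-net nginx:alpine
docker run --rm --network app-net alpine ping -c 2 web2

# Host network: shares the host's network stack (no -p mapping needed)
docker run -d --name web3 --network host nginx:alpine

# None: fully isolated, only a loopback interface
docker run --rm --network none alpine ip addr

# Inspect a network's subnet and attached containers
docker network inspect app-net
```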
Docker Storage & Volumes
# Volume Management Commands
docker volume ls # List volumes
docker volume create app-data # Create volume
docker volume inspect app-data # Volume details
docker volume rm app-data # Remove volume
docker volume prune # Remove unused volumes
# Bind Mounts (Host directories)
docker run -d \
--name mysql \
-v /home/user/mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
mysql:8.0
# Named Volumes (Docker managed)
docker run -d \
--name postgres \
-v pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:15
# Read-only volumes
docker run -d \
--name nginx \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /usr/share/nginx/html:/usr/share/nginx/html:ro \
nginx:alpine
# Volume with explicit access mode (mount flags such as noexec/nosuid
# apply to tmpfs mounts, not to named volumes)
docker run -d \
--name app \
-v app-logs:/app/logs:rw \
myapp:latest
# Multi-container volume sharing
docker volume create shared-data
docker run -d --name writer -v shared-data:/data alpine sh -c "echo 'Hello' > /data/file.txt"
docker run -it --name reader -v shared-data:/data alpine cat /data/file.txt
# tmpfs mounts (in-memory)
docker run -d \
--name tmpfs-app \
--tmpfs /tmp:size=100M,mode=1777 \
alpine:latest
# Volume drivers (NFS example)
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw \
--opt device=:/path/to/nfs/share \
nfs-volume
# Backup volume data
docker run --rm \
-v pgdata:/source \
-v /backup:/backup \
alpine tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /source .
# Restore volume from backup
docker run --rm \
-v pgdata:/target \
-v /backup:/backup \
alpine sh -c "rm -rf /target/* && tar xzf /backup/pgdata-20231210.tar.gz -C /target"
5. Production Best Practices
1. Use specific image tags: Avoid latest; use ubuntu:20.04
2. Implement health checks: HEALTHCHECK CMD curl -f http://localhost/health
3. Set resource limits: --memory=512m --cpus=1.0
4. Use non-root users: USER nodejs in Dockerfile
5. Implement logging: Use JSON logging driver for structured logs
6. Secure secrets: Use Docker secrets or external vaults
7. Regular updates: Update base images and dependencies
8. Monitor containers: Implement monitoring and alerting
9. Backup volumes: Regular backups of persistent data
10. Test thoroughly: Test images in staging before production
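Items 3 and 5 combine naturally on the command line. A sketch of a hardened run (flag values and the myapp:1.2.3 tag are illustrative — tune limits to your workload):

```shell
# Run with memory/CPU limits, a restart policy, and bounded JSON log files
docker run -d \
  --name myapp \
  --memory=512m \
  --memory-swap=512m \
  --cpus=1.0 \
  --restart=unless-stopped \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:1.2.3

# Verify the limits took effect (Memory in bytes, CPUs in nano-units)
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' myapp
```

Capping max-size/max-file matters: without it, the default json-file driver grows logs without bound and can fill the host disk.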
Production Deployment Script
#!/bin/bash
# docker-deploy.sh - Production deployment script
set -euo pipefail
# Configuration
APP_NAME="myapp"
APP_VERSION="1.2.3"
REGISTRY="registry.example.com"
ENVIRONMENT="production"
DOCKER_COMPOSE_FILE="docker-compose.prod.yml"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
echo -e "${GREEN}[$(date '+%Y-%m-%d %H:%M:%S')]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
exit 1
}
check_prerequisites() {
log "Checking prerequisites..."
# Check Docker
if ! command -v docker &> /dev/null; then
error "Docker is not installed"
fi
# Check Docker Compose
if ! command -v docker-compose &> /dev/null; then
error "Docker Compose is not installed"
fi
# Check disk space
local used_space=$(df /var/lib/docker | awk 'NR==2 {print $5}' | sed 's/%//')
if [[ $used_space -gt 90 ]]; then
error "Low disk space on Docker storage: ${used_space}% used"
fi
log "Prerequisites satisfied"
}
build_images() {
log "Building Docker images..."
# Build application image
docker build \
-t ${REGISTRY}/${APP_NAME}:${APP_VERSION} \
-t ${REGISTRY}/${APP_NAME}:latest \
--build-arg BUILD_VERSION=${APP_VERSION} \
--build-arg NODE_ENV=production \
--no-cache \
.
# Push to registry
log "Pushing images to registry..."
docker push ${REGISTRY}/${APP_NAME}:${APP_VERSION}
docker push ${REGISTRY}/${APP_NAME}:latest
log "Images built and pushed successfully"
}
deploy_application() {
log "Deploying application..."
# Stop existing containers
log "Stopping existing containers..."
docker-compose -f ${DOCKER_COMPOSE_FILE} down --remove-orphans
# Pull latest images
log "Pulling latest images..."
docker-compose -f ${DOCKER_COMPOSE_FILE} pull
# Start services
log "Starting services..."
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d
# Wait for services to be healthy
log "Waiting for services to be healthy..."
local timeout=300
local start_time=$(date +%s)
while true; do
local current_time=$(date +%s)
local elapsed=$((current_time - start_time))
if [[ $elapsed -gt $timeout ]]; then
error "Deployment timeout after ${timeout} seconds"
fi
# Check all services health
local unhealthy_count=$(docker-compose -f ${DOCKER_COMPOSE_FILE} ps | grep -c "unhealthy\|starting")
if [[ $unhealthy_count -eq 0 ]]; then
log "All services are healthy"
break
fi
log "Waiting for ${unhealthy_count} services to become healthy..."
sleep 10
done
}
run_migrations() {
log "Running database migrations..."
# Run migrations in a temporary container
docker run --rm \
--network ${APP_NAME}_default \
-e DATABASE_URL=${DATABASE_URL} \
${REGISTRY}/${APP_NAME}:${APP_VERSION} \
npm run migrate
log "Migrations completed"
}
cleanup() {
log "Cleaning up old images..."
# Remove old images (keep last 5)
docker images ${REGISTRY}/${APP_NAME} \
--format "{{.Tag}} {{.CreatedAt}}" \
| sort -rk2 \
| awk 'NR>5 {print $1}' \
| xargs -I {} docker rmi ${REGISTRY}/${APP_NAME}:{} 2>/dev/null || true
# Clean up Docker system
docker system prune -f
log "Cleanup completed"
}
monitor_deployment() {
log "Monitoring deployment..."
echo "=== Service Status ==="
docker-compose -f ${DOCKER_COMPOSE_FILE} ps
echo -e "\n=== Resource Usage ==="
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
echo -e "\n=== Recent Logs ==="
docker-compose -f ${DOCKER_COMPOSE_FILE} logs --tail=20
log "Deployment monitoring completed"
}
main() {
log "Starting deployment of ${APP_NAME} v${APP_VERSION} to ${ENVIRONMENT}"
check_prerequisites
build_images
deploy_application
run_migrations
cleanup
monitor_deployment
log "Deployment completed successfully!"
log "Application is available at: https://${APP_NAME}.example.com"
}
# Handle signals
trap 'error "Deployment interrupted"' INT TERM
# Run main function
main "$@"
6. Troubleshooting & Monitoring
Common Docker Issues & Solutions
| Issue | Symptoms | Solution | Command |
|---|---|---|---|
| Container Won't Start | Exits immediately, no logs | Check entrypoint/cmd, run interactively | docker run -it --entrypoint /bin/bash image |
| Port Already in Use | Bind: address already in use | Change port or stop conflicting container | docker ps -a \| grep :80 |
| Out of Memory | Container killed, exit code 137 | Increase memory limit or optimize app | docker run --memory=2g app |
| Permission Denied | Cannot write to volume | Fix volume permissions or use named volumes | chown 1000:1000 /path |
| DNS Issues | Cannot resolve hostnames | Configure DNS or use custom network | --dns 8.8.8.8 |
| Slow Builds | Docker build takes too long | Optimize Dockerfile, use build cache | --no-cache for clean builds |
| Disk Space Full | No space left on device | Clean up unused images/containers | docker system prune -a |
| Network Connectivity | Containers can't talk | Use custom bridge network | docker network create app-net |
Monitoring & Debugging Commands
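A starting kit for diagnosing the issues in the table above (the container name myapp is a placeholder):

```shell
# Live resource usage for all containers
docker stats --no-stream

# Processes running inside a container
docker top myapp

# Last 100 log lines, with timestamps
docker logs --tail=100 -t myapp

# Why did a container die? Exit code and OOM-kill flag
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' myapp

# Stream daemon-wide events (starts, stops, health state changes)
docker events --filter container=myapp

# Open a shell inside the container for hands-on debugging
docker exec -it myapp sh

# Disk usage breakdown by images, containers, and volumes
docker system df -v
```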
Master Docker on Linux
Docker on Linux provides a powerful platform for building, shipping, and running applications in consistent, isolated containers. By mastering Docker installation, container fundamentals, Dockerfile creation, networking, storage, and production practices, you can efficiently deploy applications at scale.
Key Takeaways: Start with proper installation using official repositories. Master essential Docker commands for daily operations. Write efficient Dockerfiles using best practices and multi-stage builds. Implement proper networking and storage for production workloads. Follow security best practices and monitor containers effectively.
Next Steps: Practice with real applications by containerizing existing projects. Explore Docker Compose for multi-service applications. Learn Docker Swarm or Kubernetes for orchestration at scale. Implement CI/CD pipelines with Docker. Monitor production containers with tools like Prometheus and Grafana. Stay updated with Docker security advisories and best practices.