Linux Container Networking: Complete Guide to Network Configuration

Master Linux container networking with this comprehensive guide covering network namespaces, virtual networking, CNI plugins, Docker networking models, Kubernetes networking, and advanced container network configurations for production environments.

Complete container networking architecture showing network namespaces, connectivity, and advanced features

Why Container Networking?

Container networking enables isolated, secure, and scalable communication between containers while providing connectivity to external networks.

  • Isolation: Network namespaces provide complete network stack isolation
  • Security: Network policies and firewalls control traffic flow
  • Scalability: Support for thousands of container networks
  • Multi-host: Overlay networks span multiple hosts
  • Service Discovery: Automatic DNS resolution for containers
  • Load Balancing: Distributed traffic across containers
  • Observability: Network monitoring and traffic analysis

1. Linux Network Namespaces Fundamentals

  • 🔗 Network Namespace: ip netns add myns. Isolated network stack with its own interfaces, routing, and iptables. [Isolation, Security]
  • 🌉 Virtual Ethernet (veth): ip link add veth0 type veth peer name veth1. Virtual Ethernet pairs connect namespaces to bridges. [Connectivity]
  • 🌐 Linux Bridge: ip link add br0 type bridge. Layer 2 bridge connecting multiple network namespaces. [L2 Switching]
  • 🔄 iptables/nftables: iptables -t nat -A POSTROUTING. Packet filtering and NAT for container networking. [Firewall]
  • 📡 Overlay Networks: ip link add vxlan0 type vxlan. VXLAN/GRE tunnels for multi-host container networks. [Multi-host]
  • 🔌 CNI Plugins: CNI_PATH=/opt/cni/bin. Container Network Interface plugins for Kubernetes. [Kubernetes]

Network Namespace Types

| Namespace Type | Command | Isolation Level | Use Case | Performance |
|---|---|---|---|---|
| Private Network | unshare --net | Complete isolation | Security, testing | ✅ High |
| Bridge Network | docker network create | Isolated with NAT | Default Docker | ✅ Good |
| Host Network | --network host | No isolation | Performance | ⚡ Excellent |
| Container Network | --network container:id | Shared namespace | Sidecar patterns | ✅ Good |
| Overlay Network | --network overlay | Multi-host | Swarm, Kubernetes | ⚠️ Medium |
| Macvlan Network | --network macvlan | Direct physical | Legacy integration | ✅ Good |
| IPvlan Network | --network ipvlan | Shared MAC | High density | ✅ Good |
| None Network | --network none | No networking | Security, air-gapped | N/A |
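The two extremes in this table are easy to try by hand. A minimal sketch, assuming util-linux (unshare) and Docker are installed:

# Private namespace: a brand-new stack contains only a loopback interface (DOWN)
sudo unshare --net ip addr show

# Host network: the container sees the host's interfaces directly
docker run --rm --network host alpine ip addr show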
Container → Network Namespace → Bridge (docker0) → Host Network → Internet
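This path can be observed directly on a Docker host. A hedged sketch, assuming a running container named web on the default bridge network:

# Container side of the veth pair (shown as eth0@ifN, where N is the host-side peer index)
docker exec web ip addr show eth0

# Host side: veth interfaces attached to the docker0 bridge
bridge link show | grep docker0

# NAT rule that masquerades container traffic on its way to the internet
sudo iptables -t nat -S POSTROUTING | grep 172.17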

2. Manual Network Namespace Configuration

Creating Custom Network Namespaces

# Create and manage network namespaces
ip netns add red # Create namespace "red"
ip netns add blue # Create namespace "blue"
ip netns list # List all namespaces
ip netns delete red # Delete namespace
ip -n red link show # Show interfaces in namespace
ip netns exec red ip addr show # Execute command in namespace
# Create veth pair to connect namespaces
ip link add veth-red type veth peer name veth-blue
ip link set veth-red netns red # Move interface to namespace
ip link set veth-blue netns blue
# Configure IP addresses
ip -n red addr add 10.0.0.1/24 dev veth-red
ip -n blue addr add 10.0.0.2/24 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
# Test connectivity
ip netns exec red ping 10.0.0.2 # Ping from red to blue
ip netns exec blue ping 10.0.0.1 # Ping from blue to red
# Create Linux bridge for multiple containers
ip link add br0 type bridge # Create bridge
ip link set br0 up # Enable bridge
ip addr add 172.17.0.1/16 dev br0 # Assign IP to bridge
# Connect namespaces to bridge
ip netns add container1 # Create the namespace first
ip link add veth-container1 type veth peer name veth-br1
ip link set veth-container1 netns container1
ip link set veth-br1 master br0 # Connect to bridge
ip link set veth-br1 up
ip -n container1 addr add 172.17.0.2/16 dev veth-container1
ip -n container1 link set veth-container1 up
ip -n container1 route add default via 172.17.0.1
# Enable NAT for internet access
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o br0 -j MASQUERADE
iptables -A FORWARD -i br0 -j ACCEPT
iptables -A FORWARD -o br0 -j ACCEPT
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/ip_forward # Equivalent alternative to sysctl
# Configure DNS in namespace
mkdir -p /etc/netns/red
echo "nameserver 8.8.8.8" > /etc/netns/red/resolv.conf
echo "nameserver 1.1.1.1" >> /etc/netns/red/resolv.conf
# Create network namespace with Docker
docker network create --driver bridge isolated-net
docker run -d --name container1 --network isolated-net nginx
docker run -d --name container2 --network isolated-net nginx
# Inspect Docker network configuration
docker network inspect isolated-net
docker exec container1 ip addr show
docker exec container1 route -n
docker exec container1 cat /etc/resolv.conf

Advanced Network Namespace Script

advanced-network-setup.sh - Complete Network Namespace Configuration
#!/bin/bash
# advanced-network-setup.sh - Complete network namespace configuration

set -euo pipefail

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log() {
    echo -e "${GREEN}[+]${NC} $1"
}

error() {
    echo -e "${RED}[!]${NC} $1" >&2
    exit 1
}

cleanup() {
    log "Cleaning up network configuration..."
    
    # Delete namespaces
    ip netns delete red 2>/dev/null || true
    ip netns delete blue 2>/dev/null || true
    ip netns delete green 2>/dev/null || true
    
    # Delete bridge
    ip link delete br0 2>/dev/null || true
    
    # Clean iptables rules
    iptables -t nat -D POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE 2>/dev/null || true
    iptables -D FORWARD -i br0 -j ACCEPT 2>/dev/null || true
    iptables -D FORWARD -o br0 -j ACCEPT 2>/dev/null || true
    
    log "Cleanup completed"
}

trap cleanup EXIT

create_namespaces() {
    log "Creating network namespaces..."
    
    # Create namespaces
    ip netns add red
    ip netns add blue
    ip netns add green
    
    # Create loopback interfaces
    ip netns exec red ip link set lo up
    ip netns exec blue ip link set lo up
    ip netns exec green ip link set lo up
    
    log "Namespaces created: red, blue, green"
}

create_bridge() {
    log "Creating Linux bridge..."
    
    # Create bridge
    ip link add br0 type bridge
    ip link set br0 up
    ip addr add 10.0.0.1/24 dev br0
    
    log "Bridge br0 created with IP 10.0.0.1/24"
}

connect_namespace_to_bridge() {
    local namespace="$1"
    local ip_address="$2"
    
    log "Connecting namespace $namespace to bridge (IP: $ip_address)..."
    
    # Create veth pair
    local veth_container="veth-$namespace"
    local veth_bridge="veth-br-$namespace"
    
    ip link add ${veth_container} type veth peer name ${veth_bridge}
    
    # Move container end to namespace
    ip link set ${veth_container} netns ${namespace}
    
    # Connect bridge end to bridge
    ip link set ${veth_bridge} master br0
    ip link set ${veth_bridge} up
    
    # Configure container interface
    ip netns exec ${namespace} ip addr add ${ip_address}/24 dev ${veth_container}
    ip netns exec ${namespace} ip link set ${veth_container} up
    ip netns exec ${namespace} ip route add default via 10.0.0.1
    
    log "Namespace $namespace connected to bridge with IP $ip_address"
}

setup_dns() {
    log "Setting up DNS in namespaces..."
    
    # Create custom resolv.conf for each namespace
    for ns in red blue green; do
        mkdir -p /etc/netns/${ns}
        cat > /etc/netns/${ns}/resolv.conf << EOF
nameserver 8.8.8.8
nameserver 1.1.1.1
search localdomain
EOF
    done
    
    log "DNS configured for all namespaces"
}

enable_nat() {
    log "Enabling NAT for internet access..."
    
    # Enable IP forwarding
    sysctl -w net.ipv4.ip_forward=1
    
    # NAT configuration
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE
    iptables -A FORWARD -i br0 -j ACCEPT
    iptables -A FORWARD -o br0 -j ACCEPT
    
    log "NAT enabled for 10.0.0.0/24 network"
}

setup_firewall() {
    log "Setting up firewall rules..."
    
    # Allow established connections
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
    # Allow red to access internet
    iptables -A FORWARD -i veth-br-red -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o veth-br-red -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
    # Allow blue to access only specific ports
    iptables -A FORWARD -i veth-br-blue -o eth0 -p tcp --dport 80 -j ACCEPT
    iptables -A FORWARD -i veth-br-blue -o eth0 -p tcp --dport 443 -j ACCEPT
    
    # Block green from internet
    iptables -A FORWARD -i veth-br-green -o eth0 -j DROP
    
    log "Firewall rules configured"
}

test_connectivity() {
    log "Testing connectivity..."
    
    echo "=== Testing internal connectivity ==="
    
    # Test between namespaces
    ip netns exec red ping -c 2 10.0.0.3 && echo "✓ Red can reach Blue" || echo "✗ Red cannot reach Blue"
    ip netns exec blue ping -c 2 10.0.0.2 && echo "✓ Blue can reach Red" || echo "✗ Blue cannot reach Red"
    ip netns exec green ping -c 2 10.0.0.2 && echo "✓ Green can reach Red" || echo "✗ Green cannot reach Red"
    
    echo -e "\n=== Testing internet connectivity ==="
    
    # Test internet access
    ip netns exec red ping -c 2 8.8.8.8 && echo "✓ Red has internet access" || echo "✗ Red no internet"
    ip netns exec blue ping -c 2 8.8.8.8 && echo "✓ Blue has internet access" || echo "✗ Blue no internet"
    ip netns exec green ping -c 2 8.8.8.8 && echo "✓ Green has internet access" || echo "✗ Green no internet"
    
    echo -e "\n=== Testing DNS resolution ==="
    
    # Test DNS
    ip netns exec red nslookup google.com && echo "✓ Red DNS working" || echo "✗ Red DNS failed"
    ip netns exec blue nslookup github.com && echo "✓ Blue DNS working" || echo "✗ Blue DNS failed"
}

show_configuration() {
    log "Current network configuration:"
    
    echo -e "\n=== Network Namespaces ==="
    ip netns list
    
    echo -e "\n=== Bridge Configuration ==="
    ip addr show br0
    bridge link show
    
    echo -e "\n=== Namespace Interfaces ==="
    for ns in red blue green; do
        echo -e "\n--- Namespace: $ns ---"
        ip netns exec $ns ip addr show
        ip netns exec $ns ip route show
    done
    
    echo -e "\n=== iptables Rules ==="
    iptables -t nat -L POSTROUTING -n -v
    iptables -L FORWARD -n -v
}

main() {
    log "Starting advanced network namespace setup..."
    
    # Cleanup any existing configuration
    cleanup
    
    # Setup sequence
    create_namespaces
    create_bridge
    connect_namespace_to_bridge "red" "10.0.0.2"
    connect_namespace_to_bridge "blue" "10.0.0.3"
    connect_namespace_to_bridge "green" "10.0.0.4"
    setup_dns
    enable_nat
    setup_firewall
    
    # Show and test configuration
    show_configuration
    test_connectivity
    
    log "Network setup completed successfully!"
    log "Namespaces are running. Press Ctrl+C to clean up."
    
    # Keep running
    sleep infinity
}

# Check root privileges
if [[ $EUID -ne 0 ]]; then
    error "This script must be run as root"
fi

# Run main function
main "$@"
Network namespace architecture showing bridge connectivity and veth pairs

3. Docker Networking Models

Docker Network Drivers

# Docker Network Management
docker network ls # List networks
docker network create mynet # Create bridge network
docker network create --driver bridge isolated-net
docker network create --driver overlay swarm-net
docker network create --driver macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 macvlan-net
docker network create --driver ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 -o ipvlan_mode=l2 ipvlan-net
docker network inspect mynet # Detailed network info
docker network connect mynet container # Connect container
docker network disconnect mynet container # Disconnect container
docker network prune # Remove unused networks
# Bridge Network (Default)
docker run -d --name web --network bridge nginx
docker run -d --name app --network bridge alpine sleep infinity
docker exec web ping -c 2 app # Fails on the default bridge: name resolution only works on user-defined networks
# Custom Bridge Network
docker network create --driver bridge app-network
docker run -d --name db --network app-network -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name api --network app-network -e DB_HOST=db myapp
docker exec api ping db # DNS resolution works
# Host Network (No isolation)
docker run -d --name nginx-host --network host nginx # Uses host network directly
ss -tulpn | grep :80 # Check port on host
# Container Network (Share namespace)
docker run -d --name web nginx
docker run -it --name debug --network container:web alpine sh # Share web's network
docker exec debug ip addr show # Same interfaces as web
# None Network (No networking)
docker run -d --name isolated --network none alpine sleep 3600
docker exec isolated ip addr show # Only loopback interface
# Overlay Network (Swarm mode)
docker swarm init # Initialize swarm
docker network create --driver overlay --attachable overlay-net
docker service create --name web --network overlay-net --replicas 3 nginx
docker run -d --name test --network overlay-net alpine ping web
# Macvlan Network (Direct physical)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net
docker run -d --name macvlan-container --network macvlan-net --ip=192.168.1.100 nginx
# IPvlan Network (Shared MAC)
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 \
  ipvlan-net
# Network Aliases
docker network create app-net
docker run -d --name service1 --network app-net --network-alias api --network-alias backend nginx
docker run -it --network app-net alpine ping api # Resolves to service1
docker run -it --network app-net alpine ping backend # Also resolves
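Port publishing is the other common reachability path: for every -p mapping, Docker inserts a DNAT rule into the DOCKER chain of the nat table. A hedged sketch for inspecting this (the container name web-pub is just an example):

# Publish container port 80 on host port 8080
docker run -d --name web-pub -p 8080:80 nginx

# Docker's DNAT rule for the published port
sudo iptables -t nat -L DOCKER -n -v | grep 8080

# Verify from the host
curl -I http://localhost:8080/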

Docker Network Driver Comparison

| Driver | Isolation | Performance | Use Case | Multi-host | Configuration |
|---|---|---|---|---|---|
| bridge | Container isolation | Good | Default, single host | ❌ No | Automatic |
| host | No isolation | Excellent | Performance critical | ❌ No | Simple |
| overlay | Swarm isolation | Medium | Multi-host, Swarm | ✅ Yes | Complex |
| macvlan | Direct physical | Excellent | Legacy integration | ✅ Yes | Complex |
| ipvlan | Shared MAC | Excellent | High density | ✅ Yes | Complex |
| none | Complete | N/A | Security, air-gapped | ❌ No | Simple |
| container | Shared namespace | Good | Sidecar patterns | ❌ No | Simple |

4. Kubernetes Networking (CNI)

CNI Plugins and Configuration

CNI Configuration Examples
# CNI Configuration Directory Structure
/etc/cni/net.d/
├── 10-flannel.conflist          # Flannel plugin configuration
├── 20-calico.conflist           # Calico plugin configuration
├── 30-weave.conflist            # Weave Net configuration
└── 99-loopback.conf             # Loopback plugin

# Basic CNI Configuration (JSON format)
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [
        {
          "subnet": "10.22.0.0/16",
          "gateway": "10.22.0.1"
        }
      ]
    ],
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  },
  "dns": {
    "nameservers": ["8.8.8.8", "1.1.1.1"]
  }
}

# Flannel CNI Configuration
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

# Calico CNI Configuration
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "__KUBERNETES_NODE_NAME__",
      "mtu": __CNI_MTU__,
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "__KUBECONFIG_FILEPATH__"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}

# Weave Net CNI Configuration
{
  "cniVersion": "0.3.0",
  "name": "weave",
  "type": "weave-net",
  "hairpinMode": true
}

# Cilium CNI Configuration
{
  "cniVersion": "0.3.1",
  "name": "cilium",
  "type": "cilium-cni",
  "enable-debug": false
}

# Multus CNI Configuration (Multiple networks)
{
  "name": "multus-cni-network",
  "type": "multus",
  "confdir": "/etc/cni/multus/net.d",
  "binDir": "/opt/cni/bin",
  "readinessindicatorfile": "",
  "delegates": [{
    "name": "default-network",
    "cniVersion": "0.3.1",
    "type": "flannel",
    "delegate": {
      "isDefaultGateway": true
    }
  }]
}
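CNI plugins are plain executables driven by environment variables and a JSON config on stdin, so a configuration like the basic bridge example above can be exercised by hand. A minimal sketch, assuming the reference plugins live in /opt/cni/bin and that the bridge configuration shown earlier is saved as /etc/cni/net.d/10-mynet.conf (both assumptions):

# Create a target namespace for the plugin to wire up
ip netns add testns

# Invoke the bridge plugin directly, exactly as a container runtime would
CNI_COMMAND=ADD \
CNI_CONTAINERID=test1 \
CNI_NETNS=/var/run/netns/testns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf

# The plugin prints the allocated address as JSON; verify inside the namespace
ip netns exec testns ip addr show eth0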

Kubernetes Network Policies

# Network Policy Examples
# Allow all traffic (default in some CNIs)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - {}
---
# Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow traffic from specific namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# Allow traffic to specific IP ranges
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-external
spec:
  podSelector:
    matchLabels:
      app: external-client
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32
    ports:
    - protocol: TCP
      port: 53
  - to:
    - ipBlock:
        cidr: 1.1.1.1/32
    ports:
    - protocol: TCP
      port: 53
---
# Multi-port policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-policy
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 8080

# Apply network policies
kubectl apply -f network-policy.yaml
kubectl get networkpolicy
kubectl describe networkpolicy allow-frontend
kubectl delete networkpolicy allow-frontend
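After applying a policy, verify it from throwaway pods. A hedged sketch for the allow-frontend policy above, assuming a namespace labelled name=frontend and a backend Service named backend on port 8080 in the default namespace (all assumptions):

# From the frontend namespace: should succeed
kubectl run np-test-ok --rm -it --restart=Never -n frontend --image=busybox -- \
  wget -qO- -T 3 http://backend.default.svc.cluster.local:8080

# From any other namespace: should time out if the policy is enforced
kubectl run np-test-blocked --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 3 http://backend.default.svc.cluster.local:8080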

5. Advanced Container Networking

Service Mesh Networking (Istio/Linkerd)

1. Istio Service Mesh Configuration
# Istio Sidecar Injection
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
  template:
    metadata:
      labels:
        app: productpage
      annotations:
        sidecar.istio.io/inject: "true"  # Enable sidecar
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.16.2
        ports:
        - containerPort: 9080

# Istio Gateway Configuration
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

# Virtual Service (Traffic Routing)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
      weight: 90
    - destination:
        host: productpage
        subset: v2
      weight: 10

# Destination Rule (Subset definition)
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

# Service Entry (External services)
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
2. Linkerd Service Mesh
# Linkerd Installation
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

# Inject Linkerd into deployments
kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -

# Linkerd Service Profile
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web-svc.linkerd-namespace.svc.cluster.local
  namespace: linkerd-namespace
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /
  - name: POST /api
    condition:
      method: POST
      pathRegex: /api/.*

# Traffic Split (Canary deployment)
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
spec:
  service: web-svc
  backends:
  - service: web-v1
    weight: 90
  - service: web-v2
    weight: 10
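Whichever mesh is used, confirm that the sidecar proxy actually landed next to the application container. A short sketch, assuming an app=productpage deployment and, for the last command, that the Linkerd viz extension is installed:

# An injected pod lists the app container plus istio-proxy (or linkerd-proxy)
kubectl get pod -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'

# Istio: configuration sync status of every proxy in the mesh
istioctl proxy-status

# Linkerd: live traffic statistics for meshed deployments
linkerd viz stat deploy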

Network Monitoring and Troubleshooting

# Container Network Troubleshooting
# Check container network configuration
docker inspect --format='{{json .NetworkSettings}}' container_name | jq '.'
docker exec container_name ip addr show
docker exec container_name ip route show
docker exec container_name cat /etc/resolv.conf
docker exec container_name nslookup google.com
# Network connectivity tests
docker exec container_name ping -c 4 8.8.8.8
docker exec container_name curl -I http://google.com
docker exec container_name telnet google.com 80
docker exec container_name traceroute 8.8.8.8
docker exec container_name mtr 8.8.8.8
# Port and connection checks
docker exec container_name netstat -tulpn
docker exec container_name ss -tulpn
docker exec container_name lsof -i
# Network namespace inspection
nsenter -t $(docker inspect -f '{{.State.Pid}}' container) -n ip addr
nsenter -t $(docker inspect -f '{{.State.Pid}}' container) -n ip route
nsenter -t $(docker inspect -f '{{.State.Pid}}' container) -n iptables -L -n -v
# Docker network inspection
docker network inspect bridge | jq '.[0].Containers'
brctl show docker0
bridge link show
# iptables rules for Docker
iptables -t nat -L -n -v
iptables -t filter -L DOCKER -n -v
iptables -t filter -L DOCKER-ISOLATION-STAGE-1 -n -v
iptables -t filter -L DOCKER-ISOLATION-STAGE-2 -n -v
# TCP dump from container
docker run --rm --net container:web nicolaka/netshoot tcpdump -i eth0 -n
docker run --rm --net container:web nicolaka/netshoot tcpdump -i eth0 port 80 -w /tmp/capture.pcap
# Network performance testing
docker run --rm networkstatic/iperf3 -s
docker run --rm --net container:web networkstatic/iperf3 -c iperf-server
# Kubernetes network troubleshooting
kubectl get pods -o wide
kubectl describe pod pod-name
kubectl logs pod-name -c container-name
kubectl exec pod-name -- ip addr show
kubectl exec pod-name -- nslookup kubernetes.default
kubectl get networkpolicy
kubectl describe networkpolicy policy-name
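For repeated investigations, the checks above can be wrapped into a small helper that snapshots a container's network state in one pass. A minimal sketch (the script name and output format are arbitrary):

#!/bin/bash
# container-net-info.sh - quick network snapshot for a running container
set -euo pipefail

CONTAINER="${1:?usage: $0 <container>}"
PID=$(docker inspect -f '{{.State.Pid}}' "$CONTAINER")

echo "=== Addresses ==="; nsenter -t "$PID" -n ip -br addr
echo "=== Routes ===";    nsenter -t "$PID" -n ip route
echo "=== Sockets ===";   nsenter -t "$PID" -n ss -tulpn
echo "=== DNS ===";       docker exec "$CONTAINER" cat /etc/resolv.conf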

6. Production Network Configurations

Container Networking Best Practices:
1. Use network policies: Implement zero-trust networking
2. Separate network tiers: Frontend, backend, data layers
3. Implement service mesh: For microservices communication
4. Monitor network traffic: Use tools like Cilium Hubble
5. Limit container capabilities: Drop the NET_RAW capability (see the example after this list)
6. Use dedicated CNI plugins: Calico/Cilium for production
7. Implement network encryption: Use WireGuard or IPsec
8. Regular security audits: Check iptables/network policies
9. Plan for scale: Use overlay networks for multi-host
10. Test failure scenarios: Network partition, DNS failures
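Item 5 in practice: NET_RAW is what allows raw sockets (ping, packet crafting) inside a container, and dropping it is a cheap hardening step. A hedged sketch:

# Default: containers are granted NET_RAW, so raw ICMP sockets (ping) work
docker run --rm alpine ping -c 1 8.8.8.8

# Hardened: drop NET_RAW; ping now fails unless the kernel allows
# unprivileged ICMP datagram sockets (net.ipv4.ping_group_range)
docker run --rm --cap-drop NET_RAW alpine ping -c 1 8.8.8.8

# Kubernetes equivalent: list NET_RAW under securityContext.capabilities.drop
# in the container spec of the pod template.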

Production Network Architecture

production-network-architecture.sh - Complete Production Setup
#!/bin/bash
# production-network-architecture.sh - Production container networking setup

set -euo pipefail

# Configuration
CLUSTER_NAME="production"
NETWORK_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/12"
DNS_SERVICE_IP="10.96.0.10"
CALICO_VERSION="3.25"
METALLB_VERSION="0.13"

log() {
    echo "[+] $1"
}

error() {
    echo "[!] $1" >&2
    exit 1
}

setup_calico_network() {
    log "Setting up Calico CNI..."
    
    # Download Calico manifests
    curl -L https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
    
    # Customize Calico configuration
    sed -i "s|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|g" calico.yaml
    sed -i "s|#   value: \"192.168.0.0/16\"|  value: \"${NETWORK_CIDR}\"|g" calico.yaml
    
    # Enable eBPF dataplane (optional, for performance)
    sed -i '/# Enable eBPF dataplane/a\            - name: FELIX_BPFENABLED\n              value: \"true\"' calico.yaml
    
    # Apply Calico
    kubectl apply -f calico.yaml
    
    # Wait for Calico to be ready
    kubectl wait --for=condition=ready pod -l k8s-app=calico-node -n kube-system --timeout=300s
    
    log "Calico CNI installed and configured"
}

setup_network_policies() {
    log "Setting up network policies..."
    
    # Default deny all ingress
    # Default deny all ingress traffic (pods must opt in via explicit policies)
    cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

    log "Default network policies applied"
}

main() {
    log "Starting production networking setup for cluster: ${CLUSTER_NAME}"

    # Check prerequisites
    if ! command -v kubectl &> /dev/null; then
        error "kubectl is not installed"
    fi
    
    # Setup sequence
    setup_calico_network
    setup_network_policies
    setup_metallb_loadbalancer
    setup_ingress_controller
    setup_monitoring
    create_network_diagram
    
    log "Production networking setup completed!"
    log ""
    log "Summary:"
    log "  β€’ Calico CNI installed with CIDR: ${NETWORK_CIDR}"
    log "  β€’ Network policies: Default deny ingress, allow egress"
    log "  β€’ MetalLB load balancer configured"
    log "  β€’ NGINX Ingress Controller deployed"
    log "  β€’ Network monitoring tools installed"
    log ""
    log "Next steps:"
    log "  1. Deploy your applications"
    log "  2. Configure service mesh (if needed)"
    log "  3. Set up monitoring alerts"
    log "  4. Test network policies"
}

# Run main function
main "$@"

7. Troubleshooting Guide

Common Network Issues & Solutions

| Issue | Symptoms | Root Cause | Solution |
|---|---|---|---|
| Container cannot reach internet | Ping to external IPs fails | Missing NAT rules, IP forwarding disabled | iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE |
| DNS resolution fails | Can ping IPs but not hostnames | Incorrect DNS configuration | Check /etc/resolv.conf and make sure the DNS server is reachable |
| Containers cannot communicate | Ping between containers fails | Different networks, firewall rules | Put them on the same network, check iptables rules |
| Port already in use | "bind: address already in use" | Another process is using the port | Find it with ss -tulpn, then change the port or stop the process |
| Slow network performance | High latency, low throughput | MTU issues, network congestion | Check MTU settings; use host networking for performance-critical workloads |
| Bridge network not working | Containers get no IP address | Docker daemon network issues | Restart Docker, check that the docker0 bridge exists |
| Overlay network issues | Cross-host communication fails | Firewall blocking VXLAN ports | Open UDP 4789 for VXLAN and 7946 for gossip |
| Network policy blocking traffic | Traffic denied between pods | Network policy too restrictive | Review and adjust the network policies |

Network Debugging Toolkit

# Network debugging commands toolkit
# Basic connectivity tests
ping -c 4 8.8.8.8 # Test internet connectivity
ping -c 4 google.com # Test DNS and connectivity
traceroute 8.8.8.8 # Trace route to destination
mtr 8.8.8.8 # Continuous traceroute
# Port and service checks
telnet host port # Test TCP connectivity
nc -zv host port # Test if port is open
curl -I http://host:port # HTTP connectivity test
openssl s_client -connect host:port -servername host # SSL test
# Network configuration
ip addr show # Show IP addresses
ip route show # Show routing table
ip link show # Show network interfaces
ss -tulpn # Show listening ports
netstat -tulpn # Alternative port listing
# DNS troubleshooting
nslookup domain # DNS lookup
dig domain # Detailed DNS query
dig domain ANY # All DNS records
cat /etc/resolv.conf # Check DNS configuration
# Packet analysis
tcpdump -i any -n port 80 # Capture HTTP traffic
tcpdump -i docker0 -w capture.pcap # Capture to file
tshark -i docker0 -Y "http" # Wireshark CLI
# Firewall and iptables
iptables -L -n -v # List iptables rules
iptables -t nat -L -n -v # List NAT rules
nft list ruleset # List nftables rules
# Bridge and veth
brctl show # Show bridge configuration
bridge link show # Show bridge links
ip -d link show # Detailed link information
# Performance testing
iperf3 -s # Start iperf server
iperf3 -c server -t 30 # Client test for 30 seconds
nping --tcp -p 80 --flags syn google.com # TCP SYN test

Master Container Networking

Linux container networking provides the foundation for modern distributed applications. By understanding network namespaces, virtual networking components, Docker networking models, Kubernetes CNI plugins, and advanced networking features, you can design robust, secure, and scalable container networks.

Key Takeaways: Start with network namespace fundamentals. Master veth pairs and Linux bridges. Choose appropriate Docker network drivers for your use case. Implement Kubernetes networking with CNI plugins like Calico or Cilium. Apply network policies for security. Consider service mesh for advanced traffic management. Monitor network performance and troubleshoot effectively.

Next Steps: Practice manual network namespace creation. Experiment with different Docker network drivers. Deploy a Kubernetes cluster with Calico CNI. Implement network policies in a test environment. Explore service mesh technologies like Istio or Linkerd. Monitor container networks with Cilium Hubble. Stay updated with evolving container networking standards and technologies.