Master Linux container networking with this comprehensive guide covering network namespaces, virtual networking, CNI plugins, Docker networking models, Kubernetes networking, and advanced container network configurations for production environments.
Why Container Networking?
Container networking enables isolated, secure, and scalable communication between containers while providing connectivity to external networks.
- Isolation: Network namespaces provide complete network stack isolation
- Security: Network policies and firewalls control traffic flow
- Scalability: Support for thousands of container networks
- Multi-host: Overlay networks span multiple hosts
- Service Discovery: Automatic DNS resolution for containers
- Load Balancing: Distributed traffic across containers
- Observability: Network monitoring and traffic analysis
1. Linux Network Namespaces Fundamentals
Network Namespace Types
| Namespace Type | Command | Isolation Level | Use Case | Performance |
|---|---|---|---|---|
| Private Network | unshare --net | Complete isolation | Security, testing | High |
| Bridge Network | docker network create | Isolated with NAT | Default Docker | Good |
| Host Network | --network host | No isolation | Performance | Excellent |
| Container Network | --network container:id | Shared namespace | Sidecar patterns | Good |
| Overlay Network | --network overlay | Multi-host | Swarm, Kubernetes | Medium |
| Macvlan Network | --network macvlan | Direct physical | Legacy integration | Good |
| IPvlan Network | --network ipvlan | Shared MAC | High density | Good |
| None Network | --network none | No networking | Security, air-gapped | N/A |
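A quick way to see this isolation firsthand: a freshly unshared network namespace contains only a loopback interface, and that interface starts out down.
# A new network namespace sees only an (initially down) loopback interface
sudo unshare --net ip link show
# Compare with the host's full interface list
ip link show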
2. Manual Network Namespace Configuration
Creating Custom Network Namespaces
Advanced Network Namespace Script
#!/bin/bash
# advanced-network-setup.sh - Complete network namespace configuration
set -euo pipefail
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log() {
echo -e "${GREEN}[+]${NC} $1"
}
error() {
echo -e "${RED}[!]${NC} $1" >&2
exit 1
}
cleanup() {
log "Cleaning up network configuration..."
# Delete namespaces
ip netns delete red 2>/dev/null || true
ip netns delete blue 2>/dev/null || true
ip netns delete green 2>/dev/null || true
# Remove per-namespace DNS configuration
rm -rf /etc/netns/red /etc/netns/blue /etc/netns/green 2>/dev/null || true
# Delete bridge
ip link delete br0 2>/dev/null || true
# Clean iptables rules (arguments must match the -A rules below exactly)
iptables -t nat -D POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE 2>/dev/null || true
iptables -D FORWARD -i br0 -o br0 -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -s 10.0.0.2 ! -o br0 -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -s 10.0.0.3 ! -o br0 -p tcp --dport 80 -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -s 10.0.0.3 ! -o br0 -p tcp --dport 443 -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -s 10.0.0.3 ! -o br0 -p udp --dport 53 -j ACCEPT 2>/dev/null || true
iptables -D FORWARD -s 10.0.0.0/24 ! -o br0 -j DROP 2>/dev/null || true
log "Cleanup completed"
}
trap cleanup EXIT
create_namespaces() {
log "Creating network namespaces..."
# Create namespaces
ip netns add red
ip netns add blue
ip netns add green
# Create loopback interfaces
ip netns exec red ip link set lo up
ip netns exec blue ip link set lo up
ip netns exec green ip link set lo up
log "Namespaces created: red, blue, green"
}
create_bridge() {
log "Creating Linux bridge..."
# Create bridge
ip link add br0 type bridge
ip link set br0 up
ip addr add 10.0.0.1/24 dev br0
log "Bridge br0 created with IP 10.0.0.1/24"
}
connect_namespace_to_bridge() {
local namespace="$1"
local ip_address="$2"
log "Connecting namespace $namespace to bridge (IP: $ip_address)..."
# Create veth pair
local veth_container="veth-$namespace"
local veth_bridge="veth-br-$namespace"
ip link add ${veth_container} type veth peer name ${veth_bridge}
# Move container end to namespace
ip link set ${veth_container} netns ${namespace}
# Connect bridge end to bridge
ip link set ${veth_bridge} master br0
ip link set ${veth_bridge} up
# Configure container interface
ip netns exec ${namespace} ip addr add ${ip_address}/24 dev ${veth_container}
ip netns exec ${namespace} ip link set ${veth_container} up
ip netns exec ${namespace} ip route add default via 10.0.0.1
log "Namespace $namespace connected to bridge with IP $ip_address"
}
setup_dns() {
log "Setting up DNS in namespaces..."
# Create custom resolv.conf for each namespace
for ns in red blue green; do
mkdir -p /etc/netns/${ns}
cat > /etc/netns/${ns}/resolv.conf << EOF
nameserver 8.8.8.8
nameserver 1.1.1.1
search localdomain
EOF
done
log "DNS configured for all namespaces"
}
enable_nat() {
log "Enabling NAT for internet access..."
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
# NAT traffic leaving the bridge network
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE
# Allow traffic between namespaces on the bridge
iptables -A FORWARD -i br0 -o br0 -j ACCEPT
# Allow return traffic back into the bridge network
iptables -A FORWARD -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
log "NAT enabled for 10.0.0.0/24 network"
}
setup_firewall() {
log "Setting up firewall rules..."
# Routed packets from the namespaces arrive on br0, not on the individual
# veth bridge ports, so match on source IP rather than input interface
# Red: full internet access
iptables -A FORWARD -s 10.0.0.2 ! -o br0 -j ACCEPT
# Blue: only HTTP, HTTPS and DNS
iptables -A FORWARD -s 10.0.0.3 ! -o br0 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 10.0.0.3 ! -o br0 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -s 10.0.0.3 ! -o br0 -p udp --dport 53 -j ACCEPT
# Green gets no allow rule: block all other egress from the bridge network
iptables -A FORWARD -s 10.0.0.0/24 ! -o br0 -j DROP
log "Firewall rules configured"
}
test_connectivity() {
log "Testing connectivity..."
echo "=== Testing internal connectivity ==="
# Test between namespaces
ip netns exec red ping -c 2 10.0.0.3 && echo "✓ Red can reach Blue" || echo "✗ Red cannot reach Blue"
ip netns exec blue ping -c 2 10.0.0.2 && echo "✓ Blue can reach Red" || echo "✗ Blue cannot reach Red"
ip netns exec green ping -c 2 10.0.0.2 && echo "✓ Green can reach Red" || echo "✗ Green cannot reach Red"
echo -e "\n=== Testing internet connectivity ==="
# Test internet access
ip netns exec red ping -c 2 8.8.8.8 && echo "✓ Red has internet access" || echo "✗ Red no internet"
ip netns exec blue ping -c 2 8.8.8.8 && echo "✓ Blue has internet access" || echo "✗ Blue no internet (expected: only ports 80/443/53 allowed)"
ip netns exec green ping -c 2 8.8.8.8 && echo "✓ Green has internet access" || echo "✗ Green no internet (expected: blocked by firewall)"
echo -e "\n=== Testing DNS resolution ==="
# Test DNS
ip netns exec red nslookup google.com && echo "✓ Red DNS working" || echo "✗ Red DNS failed"
ip netns exec blue nslookup github.com && echo "✓ Blue DNS working" || echo "✗ Blue DNS failed"
}
show_configuration() {
log "Current network configuration:"
echo -e "\n=== Network Namespaces ==="
ip netns list
echo -e "\n=== Bridge Configuration ==="
ip addr show br0
bridge link show
echo -e "\n=== Namespace Interfaces ==="
for ns in red blue green; do
echo -e "\n--- Namespace: $ns ---"
ip netns exec $ns ip addr show
ip netns exec $ns ip route show
done
echo -e "\n=== iptables Rules ==="
iptables -t nat -L POSTROUTING -n -v
iptables -L FORWARD -n -v
}
main() {
log "Starting advanced network namespace setup..."
# Cleanup any existing configuration
cleanup
# Setup sequence
create_namespaces
create_bridge
connect_namespace_to_bridge "red" "10.0.0.2"
connect_namespace_to_bridge "blue" "10.0.0.3"
connect_namespace_to_bridge "green" "10.0.0.4"
setup_dns
enable_nat
setup_firewall
# Show and test configuration
show_configuration
test_connectivity
log "Network setup completed successfully!"
log "Namespaces are running. Press Ctrl+C to clean up."
# Keep running
sleep infinity
}
# Check root privileges
if [[ $EUID -ne 0 ]]; then
error "This script must be run as root"
fi
# Run main function
main "$@"
3. Docker Networking Models
Docker Network Drivers
Docker Network Driver Comparison
| Driver | Isolation | Performance | Use Case | Multi-host | Configuration |
|---|---|---|---|---|---|
| bridge | Container isolation | Good | Default, single host | No | Automatic |
| host | No isolation | Excellent | Performance critical | No | Simple |
| overlay | Swarm isolation | Medium | Multi-host, Swarm | Yes | Complex |
| macvlan | Direct physical | Excellent | Legacy integration | Yes | Complex |
| ipvlan | Shared MAC | Excellent | High density | Yes | Complex |
| none | Complete | N/A | Security, air-gapped | No | Simple |
| container | Shared namespace | Good | Sidecar patterns | No | Simple |
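A few representative commands for the drivers above (subnets and the parent interface eth0 are illustrative):
# Custom bridge network with a user-defined subnet
docker network create --driver bridge --subnet 172.20.0.0/16 app-net
# Macvlan network attached directly to the physical interface eth0
docker network create -d macvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=eth0 lan-net
# Run a container on the custom bridge and inspect its interfaces
docker run --rm --network app-net alpine ip addr
# Inspect a network's configuration and connected containers
docker network inspect app-net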
4. Kubernetes Networking (CNI)
CNI Plugins and Configuration
# CNI Configuration Directory Structure
/etc/cni/net.d/
├── 10-flannel.conflist # Flannel plugin configuration
├── 20-calico.conflist # Calico plugin configuration
├── 30-weave.conflist # Weave Net configuration
└── 99-loopback.conf # Loopback plugin
# Basic CNI Configuration (JSON format)
{
"cniVersion": "0.4.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[
{
"subnet": "10.22.0.0/16",
"gateway": "10.22.0.1"
}
]
],
"routes": [
{ "dst": "0.0.0.0/0" }
]
},
"dns": {
"nameservers": ["8.8.8.8", "1.1.1.1"]
}
}
# Flannel CNI Configuration
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
# Calico CNI Configuration
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
# Weave Net CNI Configuration
{
"cniVersion": "0.3.0",
"name": "weave",
"type": "weave-net",
"hairpinMode": true
}
# Cilium CNI Configuration
{
"cniVersion": "0.3.1",
"name": "cilium",
"type": "cilium-cni",
"enable-debug": false
}
# Multus CNI Configuration (Multiple networks)
{
"name": "multus-cni-network",
"type": "multus",
"confdir": "/etc/cni/multus/net.d",
"binDir": "/opt/cni/bin",
"readinessindicatorfile": "",
"delegates": [{
"name": "default-network",
"cniVersion": "0.3.1",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}]
}
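Runtimes invoke these plugins through a small stdin-plus-environment protocol, which you can drive by hand when debugging a configuration. A minimal sketch, assuming the basic bridge config above is saved as /etc/cni/net.d/10-mynet.conf and plugins live in /opt/cni/bin:
# (run as root) Create a namespace to stand in for a container
ip netns add cni-test
# Invoke the bridge plugin directly, the way a container runtime would;
# the plugin prints the resulting IP allocation as JSON
CNI_COMMAND=ADD \
CNI_CONTAINERID=test1 \
CNI_NETNS=/var/run/netns/cni-test \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf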
Kubernetes Network Policies
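A representative pair of policies: default-deny all ingress in a namespace, then an explicit allow from frontend pods to backend pods on port 8080 (labels and the namespace name are illustrative):
# Deny all ingress traffic to pods in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow only frontend pods to reach backend pods on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080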
5. Advanced Container Networking
Service Mesh Networking (Istio/Linkerd)
# Istio Sidecar Injection
apiVersion: apps/v1
kind: Deployment
metadata:
name: productpage
spec:
replicas: 1
selector:
matchLabels:
app: productpage
template:
metadata:
labels:
app: productpage
annotations:
sidecar.istio.io/inject: "true" # Enable sidecar
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1:1.16.2
ports:
- containerPort: 9080
# Istio Gateway Configuration
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
# Virtual Service (Traffic Routing)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
weight: 90
- destination:
host: productpage
subset: v2
weight: 10
# Destination Rule (Subset definition)
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
# Service Entry (External services)
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: external-api
spec:
hosts:
- api.example.com
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
location: MESH_EXTERNAL
# Linkerd Installation
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check
# Inject Linkerd into deployments
kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -
# Linkerd Service Profile
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
name: web-svc.linkerd-namespace.svc.cluster.local
namespace: linkerd-namespace
spec:
routes:
- name: GET /
condition:
method: GET
pathRegex: /
- name: POST /api
condition:
method: POST
pathRegex: /api/.*
# Traffic Split (Canary deployment)
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
name: web-split
spec:
service: web-svc
backends:
- service: web-v1
weight: 90
- service: web-v2
weight: 10
Network Monitoring and Troubleshooting
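Before reaching for heavyweight tooling, a few host-level commands answer most "where did my packet go?" questions (container and interface names are placeholders):
# Run tcpdump inside a container's network namespace via its PID
nsenter -t "$(docker inspect -f '{{.State.Pid}}' mycontainer)" -n tcpdump -i eth0 -nn
# List connection-tracking entries for the default Docker subnet
conntrack -L | grep 172.17.
# Per-interface counters: look for drops and errors on the bridge
ip -s link show docker0
# Verify which iptables NAT rules Docker has installed
iptables -t nat -L -n -v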
6. Production Network Configurations
1. Use network policies: Implement zero-trust networking
2. Separate network tiers: Frontend, backend, data layers
3. Implement service mesh: For microservices communication
4. Monitor network traffic: Use tools like Cilium Hubble
5. Limit container capabilities: Drop NET_RAW capability (see the example after this list)
6. Use dedicated CNI plugins: Calico/Cilium for production
7. Implement network encryption: Use WireGuard or IPsec
8. Regular security audits: Check iptables/network policies
9. Plan for scale: Use overlay networks for multi-host
10. Test failure scenarios: Network partition, DNS failures
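For item 5, dropping NET_RAW disables raw sockets and so blocks most packet crafting from a compromised container. In Docker and Kubernetes this looks like the following (image and names are illustrative):
# Docker: drop the capability at run time
docker run --rm --cap-drop NET_RAW nginx:alpine
# Kubernetes: the equivalent securityContext on a container
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      capabilities:
        drop:
        - NET_RAW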
Production Network Architecture
#!/bin/bash
# production-network-architecture.sh - Production container networking setup
set -euo pipefail
# Configuration
CLUSTER_NAME="production"
NETWORK_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/12"
DNS_SERVICE_IP="10.96.0.10"
CALICO_VERSION="3.25"
METALLB_VERSION="0.13"
log() {
echo "[+] $1"
}
error() {
echo "[!] $1" >&2
exit 1
}
setup_calico_network() {
log "Setting up Calico CNI..."
# Download Calico manifests
curl -L https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
# Customize Calico configuration
sed -i "s|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|g" calico.yaml
sed -i "s|# value: \"192.168.0.0/16\"| value: \"${NETWORK_CIDR}\"|g" calico.yaml
# Enable eBPF dataplane (optional, for performance); this sed targets a
# marker comment that is not guaranteed to exist in every calico.yaml release
sed -i '/# Enable eBPF dataplane/a\ - name: FELIX_BPFENABLED\n value: "true"' calico.yaml
grep -q FELIX_BPFENABLED calico.yaml || log "eBPF env var not inserted; enable it manually if desired"
# Apply Calico
kubectl apply -f calico.yaml
# Wait for Calico to be ready
kubectl wait --for=condition=ready pod -l k8s-app=calico-node -n kube-system --timeout=300s
log "Calico CNI installed and configured"
}
setup_network_policies() {
log "Setting up network policies..."
# Default deny all ingress (egress remains open, matching the summary below)
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
log "Default deny ingress policy applied"
}
# setup_metallb_loadbalancer, setup_ingress_controller, setup_monitoring and
# create_network_diagram follow the same pattern (bodies omitted here)
main() {
log "Starting production networking setup..."
# Check prerequisites
if ! command -v kubectl &> /dev/null; then
error "kubectl is not installed"
fi
# Setup sequence
setup_calico_network
setup_network_policies
setup_metallb_loadbalancer
setup_ingress_controller
setup_monitoring
create_network_diagram
log "Production networking setup completed!"
log ""
log "Summary:"
log " β’ Calico CNI installed with CIDR: ${NETWORK_CIDR}"
log " β’ Network policies: Default deny ingress, allow egress"
log " β’ MetalLB load balancer configured"
log " β’ NGINX Ingress Controller deployed"
log " β’ Network monitoring tools installed"
log ""
log "Next steps:"
log " 1. Deploy your applications"
log " 2. Configure service mesh (if needed)"
log " 3. Set up monitoring alerts"
log " 4. Test network policies"
}
# Run main function
main "$@"
7. Troubleshooting Guide
Common Network Issues & Solutions
| Issue | Symptoms | Root Cause | Solution |
|---|---|---|---|
| Container Cannot Reach Internet | Ping fails to external IPs | Missing NAT rules, IP forwarding disabled | iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE |
| DNS Resolution Fails | Can ping IPs but not hostnames | DNS configuration incorrect | Check /etc/resolv.conf, ensure DNS server reachable |
| Containers Cannot Communicate | Ping fails between containers | Different networks, firewall rules | Ensure same network, check iptables rules |
| Port Already in Use | Bind: address already in use | Another process using port | ss -tulpn \| grep :port, change port or stop process |
| Slow Network Performance | High latency, low throughput | MTU issues, network congestion | Check MTU settings, use host network for performance |
| Bridge Network Not Working | Containers have no IP address | Docker daemon network issues | Restart Docker, check docker0 bridge exists |
| Overlay Network Issues | Cross-host communication fails | Firewall blocking VXLAN ports | Open UDP port 4789 for VXLAN, 7946 for gossip |
| Network Policy Blocking Traffic | Traffic denied between pods | Network policy too restrictive | Check and modify network policies |
Network Debugging Toolkit
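A practical starter kit; the nicolaka/netshoot image bundles most common networking tools, and names like web-svc are placeholders:
# Throwaway debugging pod with networking tools preinstalled
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- bash
# Inside the pod: DNS, service reachability, and MTU probing
nslookup kubernetes.default.svc.cluster.local
curl -v http://web-svc:80/
ping -M do -s 1472 10.244.1.5
# On the host: enumerate veth pairs and bridge membership
ip link show type veth
bridge link show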
Master Container Networking
Linux container networking provides the foundation for modern distributed applications. By understanding network namespaces, virtual networking components, Docker networking models, Kubernetes CNI plugins, and advanced networking features, you can design robust, secure, and scalable container networks.
Key Takeaways:
- Start with network namespace fundamentals.
- Master veth pairs and Linux bridges.
- Choose appropriate Docker network drivers for your use case.
- Implement Kubernetes networking with CNI plugins like Calico or Cilium.
- Apply network policies for security.
- Consider service mesh for advanced traffic management.
- Monitor network performance and troubleshoot effectively.
Next Steps:
- Practice manual network namespace creation.
- Experiment with different Docker network drivers.
- Deploy a Kubernetes cluster with Calico CNI.
- Implement network policies in a test environment.
- Explore service mesh technologies like Istio or Linkerd.
- Monitor container networks with Cilium Hubble.
- Stay updated with evolving container networking standards and technologies.