Cloud CLI Tools: Complete Guide to AWS CLI, gcloud, az Command Line

Master cloud command line tools with this comprehensive guide covering AWS CLI, Google Cloud SDK (gcloud), Azure CLI (az), IBM Cloud CLI, Oracle Cloud CLI, and multi-cloud management strategies for cloud automation and operations.

[Diagram] Cloud CLI tools architecture: a local development environment (AWS CLI v2, gcloud SDK, Azure CLI, Terraform, Ansible, config files, credentials, scripts) managing cloud providers (AWS, Google Cloud, Microsoft Azure), with automation and integration layers (CI/CD pipelines, Infrastructure as Code, monitoring, security and compliance, multi-cloud management, backup and disaster recovery).
Cloud CLI tools architecture showing local environment, cloud providers, and automation integration

Why Cloud CLI Tools?

Cloud command line tools enable efficient cloud resource management, automation, and infrastructure as code workflows across multiple cloud providers.

  • Automation: Script and automate cloud operations
  • Efficiency: Faster than web console for repetitive tasks
  • Consistency: Standardized commands across environments
  • Integration: Seamless integration with CI/CD pipelines
  • Multi-cloud: Manage resources across different clouds
  • Infrastructure as Code: Version control for cloud resources
  • Development: Local development and testing workflows
  • Monitoring: Real-time cloud resource monitoring
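
As a small illustration of the automation and multi-environment points above, the hedged sketch below loops over a set of assumed AWS CLI profile names (dev, staging, prod; substitute your own) and prints the caller identity for each:

#!/bin/bash
# whoami-all-profiles.sh - sanity-check every configured environment
# (profile names are assumptions; replace with your own)
set -euo pipefail

for profile in dev staging prod; do
    echo "== Profile: $profile =="
    aws sts get-caller-identity --profile "$profile" --output table \
        || echo "   (authentication failed for $profile)"
done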

1. AWS CLI (Amazon Web Services)

  • ☁️ AWS CLI v2 (aws --version): official AWS command line interface for all AWS services.
  • 🛠️ AWS SAM CLI (sam --version): Serverless Application Model CLI for AWS Lambda development.
  • 🚀 AWS CDK CLI (cdk --version): Cloud Development Kit for infrastructure as code in languages such as TypeScript.
  • 📊 AWS CloudShell: browser-based shell in the AWS Management Console with AWS tools pre-installed.
  • 🔒 AWS SSO (aws sso login): Single Sign-On (IAM Identity Center) authentication for the AWS CLI.
  • 📦 AWS Tools for PowerShell (Get-AWSPowerShellVersion): AWS modules for Windows PowerShell.

AWS CLI Installation and Configuration

# AWS CLI v2 Installation
# Linux/macOS Installation
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
# macOS with Homebrew
brew install awscli
# Windows Installation
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
# Docker installation
docker run --rm -it amazon/aws-cli --version
# Configuration
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
# Multiple profiles
aws configure --profile dev
aws configure --profile prod
aws configure --profile staging
# SSO Configuration
aws configure sso
SSO session name (Recommended): my-sso
SSO start URL [None]: https://my-sso-portal.awsapps.com/start
SSO region [None]: us-east-1
SSO registration scopes [None]: sso:account:access
# Assume role configuration
cat > ~/.aws/config << EOF
[profile cross-account]
role_arn = arn:aws:iam::123456789012:role/CrossAccountRole
source_profile = default
region = us-east-1
EOF
# Environment variables
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
export AWS_PROFILE=dev
export AWS_PAGER="" # Disable pager
# Credential helper for ECR
aws ecr get-login-password --region us-east-1 | docker login \
--username AWS \
--password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# MFA Configuration
cat > ~/.aws/config << EOF
[profile mfa]
region = us-east-1
output = json
[profile mfa-assumed]
role_arn = arn:aws:iam::123456789012:role/AdminRole
source_profile = mfa
mfa_serial = arn:aws:iam::123456789012:mfa/user
EOF
# Verify configuration
aws sts get-caller-identity
aws configure list
aws configure list-profiles
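
For cross-account work without editing ~/.aws/config, temporary credentials from sts assume-role can be exported into the current shell. A minimal sketch (the role ARN is a placeholder; source the file so the exports persist in your shell):

# assume-role.sh - export temporary credentials for a role session
# (role ARN is a placeholder; run with `source assume-role.sh`)
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/CrossAccountRole \
    --role-session-name cli-session \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)

# Text output is tab-separated, so read splits it into three variables
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

aws sts get-caller-identity   # should now report the assumed role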

AWS CLI Service Commands

aws-cli-cheatsheet.sh - AWS CLI Comprehensive Cheat Sheet
#!/bin/bash
# aws-cli-cheatsheet.sh - Comprehensive AWS CLI commands reference

# EC2 (Elastic Compute Cloud)
aws ec2 describe-instances
aws ec2 run-instances \
    --image-id ami-0c55b159cbfafe1f0 \
    --count 1 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-group-ids sg-0abcdef1234567890 \
    --subnet-id subnet-0abcdef1234567890

aws ec2 start-instances --instance-ids i-1234567890abcdef0
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

aws ec2 create-security-group \
    --group-name MySecurityGroup \
    --description "My security group" \
    --vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0

# S3 (Simple Storage Service)
aws s3 ls
aws s3 mb s3://my-bucket
aws s3 rb s3://my-bucket --force
aws s3 cp file.txt s3://my-bucket/
aws s3 sync ./local-folder s3://my-bucket/remote-folder
aws s3 rm s3://my-bucket/file.txt
aws s3 presign s3://my-bucket/file.txt --expires-in 3600

aws s3api create-bucket \
    --bucket my-bucket \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2
# Note: us-east-1 is the default region and must be created WITHOUT --create-bucket-configuration

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --lifecycle-configuration file://lifecycle.json

# IAM (Identity and Access Management)
aws iam create-user --user-name new-user
aws iam create-group --group-name Admins
aws iam add-user-to-group --user-name new-user --group-name Admins
aws iam attach-group-policy --group-name Admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

aws iam create-policy \
    --policy-name MyPolicy \
    --policy-document file://policy.json

aws iam create-role \
    --role-name MyRole \
    --assume-role-policy-document file://trust-policy.json

aws iam list-users
aws iam list-groups
aws iam list-policies
aws iam list-roles

# RDS (Relational Database Service)
aws rds describe-db-instances
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password password123 \
    --allocated-storage 20

aws rds create-db-snapshot \
    --db-snapshot-identifier mydb-snapshot \
    --db-instance-identifier mydb

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydb-restored \
    --db-snapshot-identifier mydb-snapshot

# Lambda
aws lambda list-functions
aws lambda create-function \
    --function-name my-function \
    --runtime python3.12 \
    --role arn:aws:iam::123456789012:role/lambda-role \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip

aws lambda invoke \
    --function-name my-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "value"}' \
    output.txt

aws lambda update-function-code \
    --function-name my-function \
    --zip-file fileb://function.zip

# CloudFormation
aws cloudformation create-stack \
    --stack-name my-stack \
    --template-body file://template.yaml \
    --parameters ParameterKey=InstanceType,ParameterValue=t2.micro

aws cloudformation update-stack \
    --stack-name my-stack \
    --template-body file://template-updated.yaml

aws cloudformation describe-stacks --stack-name my-stack
aws cloudformation delete-stack --stack-name my-stack

# ECS (Elastic Container Service)
aws ecs list-clusters
aws ecs list-services --cluster my-cluster
aws ecs list-tasks --cluster my-cluster --service my-service

aws ecs register-task-definition \
    --family my-task \
    --network-mode awsvpc \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
    --container-definitions file://container-def.json

aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task:1 \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration file://network-config.json

# CloudWatch
aws cloudwatch list-metrics --namespace AWS/EC2
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T23:59:59Z \
    --period 3600 \
    --statistics Average

aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu \
    --metric-name CPUUtilization \
    --namespace AWS/EC2 \
    --statistic Average \
    --period 300 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --evaluation-periods 2 \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic

# VPC (Virtual Private Cloud)
aws ec2 describe-vpcs
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.1.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id vpc-12345678 --internet-gateway-id igw-12345678

aws ec2 create-route-table --vpc-id vpc-12345678
aws ec2 create-route \
    --route-table-id rtb-12345678 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-12345678

# EKS (Elastic Kubernetes Service)
aws eks list-clusters
aws eks describe-cluster --name my-cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1

aws eks create-cluster \
    --name my-cluster \
    --role-arn arn:aws:iam::123456789012:role/eks-role \
    --resources-vpc-config subnetIds=subnet-123,subnet-456,securityGroupIds=sg-789

# SNS (Simple Notification Service)
aws sns list-topics
aws sns create-topic --name my-topic
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic \
    --protocol email \
    --notification-endpoint user@example.com

aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic \
    --message "Hello World" \
    --subject "Test Message"

# SQS (Simple Queue Service)
aws sqs list-queues
aws sqs create-queue --queue-name my-queue
aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --message-body "Test message"

aws sqs receive-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --max-number-of-messages 10

# Route 53
aws route53 list-hosted-zones
aws route53 create-hosted-zone \
    --name example.com \
    --caller-reference 2019-01-01-12:00

aws route53 change-resource-record-sets \
    --hosted-zone-id Z123456789ABC \
    --change-batch file://record-set.json

# Cost Explorer
aws ce get-cost-and-usage \
    --time-period Start=2023-01-01,End=2023-01-31 \
    --granularity MONTHLY \
    --metrics "BlendedCost" "UnblendedCost" "UsageQuantity"

aws ce get-cost-forecast \
    --time-period Start=2023-02-01,End=2023-02-28 \
    --granularity MONTHLY \
    --metric BLENDED_COST

# Systems Manager (SSM)
aws ssm describe-instance-information
aws ssm send-command \
    --instance-ids i-1234567890abcdef0 \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["ls -la"]'

aws ssm get-parameter --name /prod/database/password --with-decryption
aws ssm put-parameter \
    --name /prod/database/password \
    --value "secret123" \
    --type SecureString

# Advanced Queries with JMESPath
aws ec2 describe-instances --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}'
aws ec2 describe-instances --query 'Reservations[].Instances[?State.Name==`running`].InstanceId'
aws ec2 describe-instances --query 'length(Reservations[].Instances[])'
aws ec2 describe-instances --query 'sort_by(Reservations[].Instances[], &LaunchTime)[-1].InstanceId'

# Output Formats
aws ec2 describe-instances --output json
aws ec2 describe-instances --output text
aws ec2 describe-instances --output table
aws ec2 describe-instances --output yaml
aws ec2 describe-instances --output yaml-stream
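
The JMESPath queries above compose naturally with other commands. A hedged example that stops every running instance in the current region (destructive, so test carefully first):

#!/bin/bash
# stop-running-instances.sh - stop every running EC2 instance in the region
ids=$(aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text)

if [[ -n "$ids" ]]; then
    # $ids is intentionally unquoted so each ID becomes a separate argument
    aws ec2 stop-instances --instance-ids $ids
else
    echo "No running instances found"
fi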

2. Google Cloud SDK (gcloud)

gcloud Installation and Configuration

# Google Cloud SDK Installation
# Linux Installation (substitute the current SDK version number)
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-411.0.0-linux-x86_64.tar.gz
tar -xzf google-cloud-sdk-411.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh
source ~/.bashrc
# macOS with Homebrew
brew install --cask google-cloud-sdk
source "$(brew --prefix)/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.bash.inc"
# Windows Installation
# Download from https://cloud.google.com/sdk/docs/install#windows
# Docker installation
docker run -it google/cloud-sdk:alpine gcloud --version
# Initialize gcloud
gcloud init
gcloud auth login
gcloud config set project my-project-123456
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
# Multiple configurations
gcloud config configurations create dev
gcloud config configurations create prod
gcloud config configurations list
gcloud config configurations activate dev
# Service account authentication
gcloud auth activate-service-account --key-file=service-account-key.json
export GOOGLE_APPLICATION_CREDENTIALS="service-account-key.json"
# Application Default Credentials
gcloud auth application-default login
gcloud auth application-default print-access-token
# Configure Docker with gcloud
gcloud auth configure-docker
gcloud auth configure-docker us-central1-docker.pkg.dev
# IAM and permissions
gcloud auth list
gcloud config list
gcloud info
# Update components
gcloud components update
gcloud components install beta
gcloud components list
# Environment variables
export CLOUDSDK_CORE_PROJECT=my-project-123456
export CLOUDSDK_COMPUTE_REGION=us-central1
export CLOUDSDK_COMPUTE_ZONE=us-central1-a
export CLOUDSDK_AUTH_ACCESS_TOKEN=$(gcloud auth print-access-token)
# Verify configuration
gcloud auth print-access-token
gcloud projects list
gcloud config get-value project
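
A small helper, assuming per-environment configuration names and a my-project-<env> project naming scheme, that activates (or bootstraps) the right configuration:

# gcloud-env.sh - activate or bootstrap a per-environment configuration
ENV="${1:-dev}"

gcloud config configurations activate "$ENV" 2>/dev/null || {
    gcloud config configurations create "$ENV"   # create also activates it
    gcloud config set project "my-project-$ENV"
    gcloud config set compute/region us-central1
}

gcloud config list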

gcloud Service Commands

1 Compute Engine (GCE)
# List instances
gcloud compute instances list
gcloud compute instances list --filter="status=RUNNING"
gcloud compute instances list --format="table(name,zone,machineType,status)"

# Create instance
gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --machine-type=e2-micro \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --tags=http-server,https-server

# SSH to instance
gcloud compute ssh my-instance --zone=us-central1-a
gcloud compute scp local-file.txt my-instance:~/remote-file.txt --zone=us-central1-a

# Manage instances
gcloud compute instances start my-instance --zone=us-central1-a
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances delete my-instance --zone=us-central1-a

# Disks and snapshots
gcloud compute disks create my-disk --size=100GB --zone=us-central1-a
gcloud compute snapshots create my-snapshot --source-disk=my-disk --source-disk-zone=us-central1-a

# Instance templates
gcloud compute instance-templates create my-template \
    --machine-type=e2-micro \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --tags=http-server

# Instance groups
gcloud compute instance-groups managed create my-group \
    --base-instance-name=my-instance \
    --size=3 \
    --template=my-template \
    --zone=us-central1-a

# Load balancer
gcloud compute addresses create lb-ip --region=us-central1
gcloud compute forwarding-rules create http-rule \
    --region=us-central1 \
    --ports=80 \
    --address=lb-ip \
    --target-pool=my-pool
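
Scripts that start instances often need to wait until the VM actually reaches RUNNING. A polling sketch using the same assumed instance name and zone:

# wait-for-instance.sh - poll until a GCE instance reaches RUNNING
until [[ "$(gcloud compute instances describe my-instance \
        --zone=us-central1-a --format='value(status)')" == "RUNNING" ]]; do
    echo "Waiting for my-instance..."
    sleep 5
done
echo "Instance is running"
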
2 Cloud Storage (GCS)
# List buckets
gcloud storage ls
gcloud storage buckets list
gcloud storage buckets describe gs://my-bucket

# Create bucket
gcloud storage buckets create gs://my-bucket --location=us-central1
gcloud storage buckets create gs://my-bucket --location=us-central1 --uniform-bucket-level-access

# Copy files
gcloud storage cp local-file.txt gs://my-bucket/
gcloud storage cp gs://my-bucket/remote-file.txt ./
gcloud storage cp -r local-dir/ gs://my-bucket/remote-dir/

# Manage objects
gcloud storage ls gs://my-bucket/
gcloud storage rm gs://my-bucket/file.txt
gcloud storage mv gs://my-bucket/old.txt gs://my-bucket/new.txt

# Set permissions
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
    --member=user:user@example.com \
    --role=roles/storage.objectViewer

gcloud storage objects update gs://my-bucket/file.txt \
    --add-acl-grant=entity=user-user@example.com,role=READER

# Lifecycle rules
gcloud storage buckets update gs://my-bucket \
    --lifecycle-file=lifecycle.json

# Generate signed URL
gcloud storage sign-url gs://my-bucket/file.txt \
    --duration=1h \
    --private-key-file=key.pem
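
For repeatable backups, gcloud storage also offers rsync-style synchronization; a minimal sketch assuming the same bucket name and a local ./data directory:

# gcs-mirror.sh - mirror a local directory to a dated prefix in the bucket
gcloud storage rsync --recursive ./data "gs://my-bucket/backups/$(date +%Y%m%d)/"
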
3 Cloud SQL
# List instances
gcloud sql instances list
gcloud sql instances describe my-instance

# Create instance
gcloud sql instances create my-instance \
    --database-version=MYSQL_8_0 \
    --tier=db-f1-micro \
    --region=us-central1 \
    --root-password=my-password

# Manage databases
gcloud sql databases list --instance=my-instance
gcloud sql databases create my-database --instance=my-instance
gcloud sql databases delete my-database --instance=my-instance

# Users management
gcloud sql users list --instance=my-instance
gcloud sql users create my-user --instance=my-instance --password=my-password
gcloud sql users set-password my-user --instance=my-instance --password=new-password

# Backup and restore
gcloud sql backups create --instance=my-instance --async
gcloud sql backups list --instance=my-instance
gcloud sql backups restore backup-id \
    --restore-instance=my-instance \
    --async

# Export and import
gcloud sql export sql my-instance gs://my-bucket/backup.sql \
    --database=my-database

gcloud sql import sql my-instance gs://my-bucket/backup.sql \
    --database=my-database

# Connect to instance
gcloud sql connect my-instance --user=root
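
The export command above pairs well with dated object names for simple scheduled backups; a sketch under the same assumed instance, database, and bucket names:

# sql-nightly-export.sh - export a database to a dated object in GCS
# (the instance's service account needs write access to the bucket)
gcloud sql export sql my-instance \
    "gs://my-bucket/sql-exports/my-database-$(date +%Y%m%d).sql" \
    --database=my-database --async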

3. Azure CLI (az)

Azure CLI Installation and Configuration

# Azure CLI Installation
# Linux Installation
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# or
sudo apt-get update && sudo apt-get install azure-cli
# macOS Installation
brew update && brew install azure-cli
# Windows Installation
# Download MSI from https://aka.ms/installazurecliwindows
# Docker installation
docker run -it mcr.microsoft.com/azure-cli az --version
# Login and configuration
az login
az login --use-device-code # For headless systems
az login --service-principal -u <app-id> -p <client-secret> --tenant <tenant-id>
# Set subscription
az account list --output table
az account set --subscription "My Subscription"
az account show
# Configure defaults
az configure --defaults location=eastus
az configure --defaults group=my-resource-group
az configure --list-defaults
# Create service principal
az ad sp create-for-rbac --name MyServicePrincipal --role Contributor
az ad sp create-for-rbac --name MyServicePrincipal \
--role Contributor \
--scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group}
# Login with service principal
az login --service-principal \
--username <app-id> \
--password <client-secret> \
--tenant <tenant-id>
# Managed Identity
az login --identity
# Configure output format
az configure --defaults output=table
az configure --defaults output=json
az configure --defaults output=jsonc
az configure --defaults output=tsv
az configure --defaults output=yaml
# Environment variables
export AZURE_SUBSCRIPTION_ID="your-subscription-id"
export AZURE_TENANT_ID="your-tenant-id"
export AZURE_CLIENT_ID="your-client-id"
export AZURE_CLIENT_SECRET="your-client-secret"
export AZURE_DEFAULT_LOCATION="eastus"
export AZURE_DEFAULT_RESOURCE_GROUP="my-resource-group"
# Update Azure CLI
az upgrade
az extension add --name <extension-name>
az extension list
az extension update --name <extension-name>
# Verify installation
az --version
az account show
az config get
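
Putting the service-principal pieces together, a hedged non-interactive login helper that reads the environment variables exported above:

#!/bin/bash
# az-sp-login.sh - non-interactive Azure login from the variables above
set -euo pipefail

az login --service-principal \
    --username "$AZURE_CLIENT_ID" \
    --password "$AZURE_CLIENT_SECRET" \
    --tenant "$AZURE_TENANT_ID" \
    --output none

az account set --subscription "$AZURE_SUBSCRIPTION_ID"
az account show --output table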

Azure CLI Service Commands

Service Category | Common Commands                                            | Description
Virtual Machines | az vm create, list, start, stop, deallocate                | Manage Azure Virtual Machines
Storage          | az storage account create, container create, blob upload  | Manage Azure Storage accounts and blobs
Networking       | az network vnet create, nsg create, public-ip create      | Manage virtual networks and security
App Service      | az webapp create, list, deploy, config set                | Manage web applications
Kubernetes       | az aks create, get-credentials, scale, upgrade            | Manage Azure Kubernetes Service
Database         | az sql server create, db create, firewall-rule create     | Manage SQL databases
Key Vault        | az keyvault create, secret set, secret show               | Manage secrets and keys
Monitor          | az monitor metrics list, alert create, log-analytics      | Monitor resources and set alerts
azure-cli-automation.sh - Azure Automation Script
#!/bin/bash
# azure-cli-automation.sh - Complete Azure infrastructure automation

set -euo pipefail

# Configuration
RESOURCE_GROUP="my-resource-group"
LOCATION="eastus"
VNET_NAME="my-vnet"
SUBNET_NAME="my-subnet"
VM_NAME="my-vm"
STORAGE_ACCOUNT="mystorage$(date +%s)"
SQL_SERVER="mysqlserver$(date +%s)"
SQL_DATABASE="mydatabase"
AKS_CLUSTER="myakscluster"
WEBAPP_NAME="mywebapp$(date +%s)"
KEY_VAULT_NAME="mykeyvault$(date +%s)"  # generated once so every later reference targets the same vault

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

error() {
    echo "[ERROR] $1" >&2
    exit 1
}

check_prerequisites() {
    log "Checking prerequisites..."
    
    # Check Azure CLI
    if ! command -v az &> /dev/null; then
        error "Azure CLI is not installed. Please install it first."
    fi
    
    # Check login status
    if ! az account show &> /dev/null; then
        log "Please login to Azure..."
        az login
    fi
    
    log "Prerequisites check passed"
}

create_resource_group() {
    log "Creating resource group: $RESOURCE_GROUP"
    
    az group create \
        --name $RESOURCE_GROUP \
        --location $LOCATION \
        --tags "Environment=Production" "Project=MyProject"
    
    log "Resource group created"
}

create_virtual_network() {
    log "Creating virtual network: $VNET_NAME"
    
    az network vnet create \
        --resource-group $RESOURCE_GROUP \
        --name $VNET_NAME \
        --address-prefixes 10.0.0.0/16 \
        --subnet-name $SUBNET_NAME \
        --subnet-prefixes 10.0.1.0/24
    
    log "Virtual network created"
}

create_storage_account() {
    log "Creating storage account: $STORAGE_ACCOUNT"
    
    az storage account create \
        --name $STORAGE_ACCOUNT \
        --resource-group $RESOURCE_GROUP \
        --location $LOCATION \
        --sku Standard_LRS \
        --kind StorageV2 \
        --access-tier Hot
    
    # Get connection string
    CONNECTION_STRING=$(az storage account show-connection-string \
        --name $STORAGE_ACCOUNT \
        --resource-group $RESOURCE_GROUP \
        --query connectionString \
        --output tsv)
    
    # Create container
    az storage container create \
        --name mycontainer \
        --connection-string "$CONNECTION_STRING"
    
    log "Storage account created"
}

create_virtual_machine() {
    log "Creating virtual machine: $VM_NAME"
    
    az vm create \
        --resource-group $RESOURCE_GROUP \
        --name $VM_NAME \
        --image Ubuntu2204 \
        --size Standard_B1s \
        --admin-username azureuser \
        --generate-ssh-keys \
        --vnet-name $VNET_NAME \
        --subnet $SUBNET_NAME \
        --public-ip-sku Standard \
        --nsg ""  # Create without NSG
    
    # Open port 80
    az vm open-port \
        --resource-group $RESOURCE_GROUP \
        --name $VM_NAME \
        --port 80
    
    # Install web server
    az vm run-command invoke \
        --resource-group $RESOURCE_GROUP \
        --name $VM_NAME \
        --command-id RunShellScript \
        --scripts "sudo apt-get update && sudo apt-get install -y nginx"
    
    log "Virtual machine created and configured"
}

create_sql_database() {
    log "Creating SQL server and database"
    
    # Generate password
    SQL_PASSWORD=$(openssl rand -base64 16)
    
    # Create SQL server
    az sql server create \
        --resource-group $RESOURCE_GROUP \
        --name $SQL_SERVER \
        --location $LOCATION \
        --admin-user sqladmin \
        --admin-password "$SQL_PASSWORD"
    
    # Configure firewall
    az sql server firewall-rule create \
        --resource-group $RESOURCE_GROUP \
        --server $SQL_SERVER \
        --name AllowAzureServices \
        --start-ip-address 0.0.0.0 \
        --end-ip-address 0.0.0.0
    
    # Create database
    az sql db create \
        --resource-group $RESOURCE_GROUP \
        --server $SQL_SERVER \
        --name $SQL_DATABASE \
        --service-objective Basic \
        --max-size 2GB
    
    log "SQL database created"
    log "SQL Server: $SQL_SERVER.database.windows.net"
    log "Username: sqladmin"
    log "Password: $SQL_PASSWORD"
}

create_aks_cluster() {
    log "Creating AKS cluster: $AKS_CLUSTER"
    
    az aks create \
        --resource-group $RESOURCE_GROUP \
        --name $AKS_CLUSTER \
        --node-count 3 \
        --node-vm-size Standard_B2s \
        --enable-managed-identity \
        --network-plugin azure \
        --enable-addons monitoring \
        --generate-ssh-keys
    
    # Get credentials
    az aks get-credentials \
        --resource-group $RESOURCE_GROUP \
        --name $AKS_CLUSTER \
        --overwrite-existing
    
    log "AKS cluster created"
}

create_app_service() {
    log "Creating App Service plan and web app"
    
    # Create App Service plan
    az appservice plan create \
        --resource-group $RESOURCE_GROUP \
        --name myAppServicePlan \
        --sku B1 \
        --is-linux
    
    # Create web app
    az webapp create \
        --resource-group $RESOURCE_GROUP \
        --plan myAppServicePlan \
        --name "mywebapp$(date +%s)" \
        --runtime "PYTHON|3.9"
    
    log "App Service created"
}

create_key_vault() {
    log "Creating Key Vault"
    
    az keyvault create \
        --resource-group $RESOURCE_GROUP \
        --name "mykeyvault$(date +%s)" \
        --location $LOCATION \
        --sku standard \
        --enabled-for-template-deployment true
    
    # Store SQL password in Key Vault
    az keyvault secret set \
        --vault-name "mykeyvault$(date +%s)" \
        --name sqlPassword \
        --value "$SQL_PASSWORD"
    
    log "Key Vault created with SQL password stored"
}

setup_monitoring() {
    log "Setting up monitoring"
    
    # Create Log Analytics workspace
    az monitor log-analytics workspace create \
        --resource-group $RESOURCE_GROUP \
        --workspace-name myLogAnalyticsWorkspace \
        --location $LOCATION
    
    # Create action group for alerts
    az monitor action-group create \
        --resource-group $RESOURCE_GROUP \
        --name myActionGroup \
        --action email myemail my.email@example.com
    
    # Create metric alert
    az monitor metrics alert create \
        --resource-group $RESOURCE_GROUP \
        --name "HighCPUAlert" \
        --scopes "/subscriptions/$(az account show --query id -o tsv)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Compute/virtualMachines/$VM_NAME" \
        --condition "avg Percentage CPU > 80" \
        --window-size 5m \
        --evaluation-frequency 1m \
        --action-group myActionGroup
    
    log "Monitoring configured"
}

export_outputs() {
    log "Exporting infrastructure outputs"
    
    cat > outputs.json << EOF
{
    "resourceGroup": "$RESOURCE_GROUP",
    "location": "$LOCATION",
    "virtualMachine": {
        "name": "$VM_NAME",
        "publicIP": "$(az vm show -g $RESOURCE_GROUP -n $VM_NAME --show-details --query publicIps -o tsv)"
    },
    "storageAccount": "$STORAGE_ACCOUNT",
    "sqlServer": "$SQL_SERVER.database.windows.net",
    "aksCluster": "$AKS_CLUSTER",
    "keyVault": "mykeyvault$(date +%s)"
}
EOF
    
    log "Outputs exported to outputs.json"
}

cleanup() {
    log "Cleanup function (optional)"
    # Uncomment to enable cleanup
    # az group delete --name $RESOURCE_GROUP --yes --no-wait
}

main() {
    log "Starting Azure infrastructure automation..."
    
    check_prerequisites
    create_resource_group
    create_virtual_network
    create_storage_account
    create_virtual_machine
    create_sql_database
    create_aks_cluster
    create_app_service
    create_key_vault
    setup_monitoring
    export_outputs
    
    log "Azure infrastructure deployment completed successfully!"
    log ""
    log "Summary of created resources:"
    log "  • Resource Group: $RESOURCE_GROUP"
    log "  • Virtual Network: $VNET_NAME"
    log "  • Virtual Machine: $VM_NAME"
    log "  • Storage Account: $STORAGE_ACCOUNT"
    log "  • SQL Database: $SQL_DATABASE"
    log "  • AKS Cluster: $AKS_CLUSTER"
    log "  • App Service: mywebapp$(date +%s)"
    log "  • Key Vault: mykeyvault$(date +%s)"
    log ""
    log "Next steps:"
    log "  1. SSH to VM: ssh azureuser@$(az vm show -g $RESOURCE_GROUP -n $VM_NAME --show-details --query publicIps -o tsv)"
    log "  2. Deploy to AKS: kubectl apply -f deployment.yaml"
    log "  3. Configure CI/CD pipeline"
    log "  4. Set up backup policies"
}

# Run main function
main "$@"

4. Multi-Cloud Management Tools

Terraform - Infrastructure as Code

# Terraform Multi-Cloud Configuration
# main.tf - Multi-cloud Terraform configuration
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

# Azure credentials (supplied via terraform.tfvars or environment variables)
variable "azure_subscription_id" {}
variable "azure_client_id" {}
variable "azure_client_secret" {
  sensitive = true
}
variable "azure_tenant_id" {}

# AWS Provider
provider "aws" {
  region  = "us-east-1"
  profile = "dev"
}

# Google Cloud Provider
provider "google" {
  project = "my-gcp-project"
  region  = "us-central1"
}

# Azure Provider
provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  tenant_id       = var.azure_tenant_id
}

# AWS Resources
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "Main VPC"
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.main.id
  tags = {
    Name = "WebServer"
  }
}

# Google Cloud Resources
resource "google_compute_instance" "web" {
  name         = "web-instance"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }
}

# Azure Resources
resource "azurerm_resource_group" "main" {
  name     = "my-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "my-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

# Terraform Commands
terraform init
terraform plan
terraform apply
terraform apply -auto-approve
terraform destroy
terraform state list
terraform state show aws_instance.web
terraform output
terraform fmt
terraform validate
terraform workspace new dev
terraform workspace select dev
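
In practice the same configuration is promoted across environments with workspaces and per-environment variable files; a sketch assuming an environments/dev.tfvars layout:

# Promote one configuration across environments with workspaces + var files
terraform init
terraform workspace select dev || terraform workspace new dev
terraform plan -var-file=environments/dev.tfvars -out=dev.tfplan
terraform apply dev.tfplan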

Cloud Provider Comparison

Feature           | AWS CLI                 | gcloud (GCP)             | az (Azure)                   | IBM Cloud            | Oracle Cloud
Installation      | curl/bundle/MSI         | curl/brew/MSI            | apt/brew/MSI                 | curl/brew            | curl/yum
Authentication    | IAM keys, SSO, MFA      | OAuth2, service accounts | Azure AD, service principals | IAM API keys         | API keys, federation
Config Profiles   | Multiple profiles       | Configurations           | Subscriptions                | Regions, accounts    | Profiles, compartments
Output Formats    | JSON, text, table, YAML | JSON, CSV, table, YAML   | JSON, table, tsv, YAML       | JSON, table          | JSON, table
Query Language    | JMESPath (--query)      | --filter, --format       | JMESPath (--query)           | JSONPath (--output)  | JMESPath (--query)
Auto-completion   | aws_completer           | source completion.*      | az completion                | ibmcloud completion  | oci setup autocomplete
Plugin System     | Limited                 | gcloud components        | az extension                 | ibmcloud plugin      | Limited
Shell Integration | CloudShell              | Cloud Shell              | Cloud Shell                  | Cloud Shell          | Cloud Shell

5. Advanced Cloud CLI Techniques

Cloud CLI Best Practices:
1. Use profiles/configurations: Separate environments (dev/staging/prod)
2. Implement MFA: Always use multi-factor authentication
3. Rotate credentials: Regularly rotate access keys and secrets (see the key-age audit sketch after this list)
4. Use IAM roles: Prefer roles over long-term credentials
5. Version control: Store scripts and configurations in Git
6. Parameterize scripts: Use variables and configuration files
7. Error handling: Implement proper error checking and logging
8. Security scanning: Scan scripts for hardcoded secrets
9. Documentation: Document all automation scripts
10. Testing: Test scripts in non-production environments first
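
Practice 3 above, credential rotation, is easy to audit from the CLI. A minimal sketch that flags IAM access keys older than 90 days (assumes GNU date):

#!/bin/bash
# key-age-audit.sh - flag IAM access keys older than 90 days
cutoff=$(date -d '90 days ago' +%s)

for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
    aws iam list-access-keys --user-name "$user" \
        --query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' --output text |
    while read -r key created; do
        # CreateDate is ISO 8601, which GNU date parses directly
        if [[ $(date -d "$created" +%s) -lt $cutoff ]]; then
            echo "ROTATE: $user $key (created $created)"
        fi
    done
done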

Cloud CLI Automation Framework

cloud-automation-framework.sh - Enterprise Cloud Automation
#!/bin/bash
# cloud-automation-framework.sh - Enterprise multi-cloud automation framework

set -euo pipefail

# Configuration
CONFIG_FILE="${1:-config.yaml}"
LOG_DIR="${LOG_DIR:-./logs}"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
ENVIRONMENT="${ENVIRONMENT:-dev}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

# Logging functions
log_info() {
    echo -e "${GREEN}[INFO]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_DIR/cloud-automation.log"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_DIR/cloud-automation.log"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_DIR/cloud-automation.log"
}

log_debug() {
    if [[ "${DEBUG:-false}" == "true" ]]; then
        echo -e "${BLUE}[DEBUG]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_DIR/cloud-automation-debug.log"
    fi
}

# Setup directories
setup_directories() {
    mkdir -p "$LOG_DIR"
    mkdir -p "$BACKUP_DIR"
    mkdir -p ./configs
    mkdir -p ./scripts
    mkdir -p ./templates
    
    log_info "Directories setup complete"
}

# Load configuration
load_config() {
    if [[ ! -f "$CONFIG_FILE" ]]; then
        log_error "Configuration file $CONFIG_FILE not found"
        exit 1
    fi
    
    # Load YAML configuration (using yq or python)
    if command -v yq &> /dev/null; then
        export AWS_PROFILE=$(yq e '.aws.profile' "$CONFIG_FILE")
        export AWS_REGION=$(yq e '.aws.region' "$CONFIG_FILE")
        export GCP_PROJECT=$(yq e '.gcp.project' "$CONFIG_FILE")
        export AZURE_SUBSCRIPTION=$(yq e '.azure.subscription' "$CONFIG_FILE")
        export ENVIRONMENT=$(yq e '.environment' "$CONFIG_FILE")
    elif command -v python3 &> /dev/null; then
        # Python fallback: print export statements and eval them in this shell
        # (mutating os.environ inside Python would not affect the parent script)
        eval "$(CONFIG_FILE="$CONFIG_FILE" python3 <<'PYEOF'
import os, yaml
with open(os.environ['CONFIG_FILE']) as f:
    cfg = yaml.safe_load(f) or {}
print("export AWS_PROFILE='%s'" % cfg.get('aws', {}).get('profile', ''))
print("export AWS_REGION='%s'" % cfg.get('aws', {}).get('region', 'us-east-1'))
print("export GCP_PROJECT='%s'" % cfg.get('gcp', {}).get('project', ''))
print("export AZURE_SUBSCRIPTION='%s'" % cfg.get('azure', {}).get('subscription', ''))
print("export ENVIRONMENT='%s'" % cfg.get('environment', 'dev'))
PYEOF
)"
    else
        log_error "Neither yq nor python3 found for YAML parsing"
        exit 1
    fi
    
    log_info "Configuration loaded from $CONFIG_FILE"
}

# Check cloud prerequisites
check_prerequisites() {
    log_info "Checking cloud CLI prerequisites..."
    
    local missing_tools=()
    
    # Check AWS CLI
    if command -v aws &> /dev/null; then
        log_info "AWS CLI: $(aws --version 2>&1 | head -n1)"
    else
        missing_tools+=("aws")
    fi
    
    # Check Google Cloud SDK
    if command -v gcloud &> /dev/null; then
        log_info "Google Cloud SDK: $(gcloud --version 2>&1 | head -n1)"
    else
        missing_tools+=("gcloud")
    fi
    
    # Check Azure CLI
    if command -v az &> /dev/null; then
        log_info "Azure CLI: $(az --version 2>&1 | head -n1)"
    else
        missing_tools+=("az")
    fi
    
    # Check Terraform
    if command -v terraform &> /dev/null; then
        log_info "Terraform: $(terraform version | head -n1)"
    else
        missing_tools+=("terraform")
    fi
    
    # Check kubectl
    if command -v kubectl &> /dev/null; then
        log_info "kubectl: $(kubectl version --client --short 2>&1)"
    else
        missing_tools+=("kubectl")
    fi
    
    if [[ ${#missing_tools[@]} -gt 0 ]]; then
        log_warn "Missing tools: ${missing_tools[*]}"
        log_warn "Some features may not work properly"
    fi
    
    log_info "Prerequisites check completed"
}

# Cloud authentication
authenticate_clouds() {
    log_info "Authenticating to cloud providers..."
    
    # AWS Authentication
    if [[ -n "${AWS_PROFILE:-}" ]]; then
        log_info "Authenticating to AWS with profile: $AWS_PROFILE"
        export AWS_PROFILE
        aws sts get-caller-identity --profile "$AWS_PROFILE" > /dev/null 2>&1 || {
            log_warn "AWS authentication failed for profile $AWS_PROFILE"
            log_info "Attempting interactive login..."
            aws sso login --profile "$AWS_PROFILE"
        }
    fi
    
    # Google Cloud Authentication
    if [[ -n "${GCP_PROJECT:-}" ]]; then
        log_info "Authenticating to Google Cloud for project: $GCP_PROJECT"
        gcloud config set project "$GCP_PROJECT" > /dev/null 2>&1 || true
        if [[ -z "$(gcloud auth list --filter='status:ACTIVE' --format='value(account)' 2>/dev/null)" ]]; then
            log_warn "Google Cloud authentication required"
            log_info "Attempting interactive login..."
            gcloud auth login
        fi
    fi
    
    # Azure Authentication
    if [[ -n "${AZURE_SUBSCRIPTION:-}" ]]; then
        log_info "Authenticating to Azure for subscription: $AZURE_SUBSCRIPTION"
        az account show > /dev/null 2>&1 || {
            log_warn "Azure authentication required"
            log_info "Attempting interactive login..."
            az login
        }
        az account set --subscription "$AZURE_SUBSCRIPTION" > /dev/null 2>&1 || true
    fi
    
    log_info "Cloud authentication completed"
}

# Resource inventory
inventory_resources() {
    log_info "Creating cloud resource inventory..."
    
    local inventory_file="$BACKUP_DIR/inventory-$(date +%Y%m%d-%H%M%S).json"
    
    cat > "$inventory_file" << EOF
{
    "timestamp": "$(date -u +'%Y-%m-%dT%H:%M:%SZ')",
    "environment": "$ENVIRONMENT",
    "inventory": {
EOF
    
    # AWS Inventory
    if command -v aws &> /dev/null && [[ -n "${AWS_PROFILE:-}" ]]; then
        log_info "Inventorying AWS resources..."
        
        cat >> "$inventory_file" << EOF
        "aws": {
            "ec2_instances": $(aws ec2 describe-instances --query 'length(Reservations[].Instances[])' --output text 2>/dev/null || echo "null"),
            "s3_buckets": $(aws s3api list-buckets --query 'length(Buckets)' --output text 2>/dev/null || echo "null"),
            "rds_instances": $(aws rds describe-db-instances --query 'length(DBInstances)' --output text 2>/dev/null || echo "null"),
            "lambda_functions": $(aws lambda list-functions --query 'length(Functions)' --output text 2>/dev/null || echo "null")
        },
EOF
    fi
    
    # GCP Inventory
    if command -v gcloud &> /dev/null && [[ -n "${GCP_PROJECT:-}" ]]; then
        log_info "Inventorying GCP resources..."
        
        cat >> "$inventory_file" << EOF
        "gcp": {
            "compute_instances": $(gcloud compute instances list --format="value(NAME)" 2>/dev/null | wc -l || echo "null"),
            "storage_buckets": $(gcloud storage buckets list --format="value(NAME)" 2>/dev/null | wc -l || echo "null"),
            "sql_instances": $(gcloud sql instances list --format="value(NAME)" 2>/dev/null | wc -l || echo "null"),
            "kubernetes_clusters": $(gcloud container clusters list --format="value(NAME)" 2>/dev/null | wc -l || echo "null")
        },
EOF
    fi
    
    # Azure Inventory
    if command -v az &> /dev/null && [[ -n "${AZURE_SUBSCRIPTION:-}" ]]; then
        log_info "Inventorying Azure resources..."
        
        cat >> "$inventory_file" << EOF
        "azure": {
            "virtual_machines": $(az vm list --query 'length([])' --output tsv 2>/dev/null || echo "null"),
            "storage_accounts": $(az storage account list --query 'length([])' --output tsv 2>/dev/null || echo "null"),
            "sql_servers": $(az sql server list --query 'length([])' --output tsv 2>/dev/null || echo "null"),
            "aks_clusters": $(az aks list --query 'length([])' --output tsv 2>/dev/null || echo "null")
        }
EOF
    fi
    
    cat >> "$inventory_file" << EOF
    }
}
EOF
    
    log_info "Resource inventory saved to: $inventory_file"
}

# Cost reporting
generate_cost_report() {
    log_info "Generating cloud cost report..."
    
    local cost_report="$BACKUP_DIR/cost-report-$(date +%Y%m%d).csv"
    
    echo "Service,Provider,Estimated Monthly Cost,Environment" > "$cost_report"
    
    # AWS Cost Estimation (simplified)
    if command -v aws &> /dev/null && [[ -n "${AWS_PROFILE:-}" ]]; then
        log_info "Calculating AWS costs..."
        
        # Get EC2 instances and estimate cost
        local ec2_count=$(aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceType' --output text 2>/dev/null | wc -w || echo 0)
        local ec2_cost=$(echo "$ec2_count * 10" | bc)  # Simplified estimation
        
        echo "EC2,AWS,\$$ec2_cost,$ENVIRONMENT" >> "$cost_report"
        
        # Estimate S3 from bucket count (list-buckets returns no sizes;
        # accurate sizing requires CloudWatch BucketSizeBytes metrics)
        local s3_count=$(aws s3api list-buckets --query 'length(Buckets)' --output text 2>/dev/null || echo 0)
        local s3_cost=$(echo "$s3_count * 1" | bc)  # Simplified flat estimate per bucket
        
        echo "S3 Storage,AWS,\$$s3_cost,$ENVIRONMENT" >> "$cost_report"
    fi
    
    log_info "Cost report generated: $cost_report"
}

# Security compliance check
security_compliance_check() {
    log_info "Running security compliance checks..."
    
    local security_report="$BACKUP_DIR/security-report-$(date +%Y%m%d).txt"
    
    cat > "$security_report" << EOF
Security Compliance Report
Generated: $(date)
Environment: $ENVIRONMENT
========================================

EOF
    
    # AWS Security Checks
    if command -v aws &> /dev/null && [[ -n "${AWS_PROFILE:-}" ]]; then
        log_info "Running AWS security checks..."
        
        cat >> "$security_report" << EOF
AWS Security Findings:
---------------------

1. IAM Users with Console Access:
$(aws iam list-users --query "Users[?PasswordLastUsed].UserName" --output table 2>/dev/null || echo "   Unable to retrieve")

2. IAM Credential Report (user, access_key_1_active, access_key_1_last_rotated; review key age manually):
$(aws iam generate-credential-report > /dev/null 2>&1; aws iam get-credential-report --query Content --output text 2>/dev/null | base64 -d | cut -d, -f1,9,10 | head -n 20 || echo "   Unable to retrieve")

3. Public S3 Buckets:
$(aws s3api list-buckets --query "Buckets[].Name" --output text 2>/dev/null | xargs -I {} sh -c 'aws s3api get-bucket-acl --bucket {} --query "Grants[?Grantee.URI==\"http://acs.amazonaws.com/groups/global/AllUsers\"].Permission" --output text 2>/dev/null | grep -q . && echo "   {} is public"' || echo "   Unable to retrieve")

EOF
    fi
    
    # GCP Security Checks
    if command -v gcloud &> /dev/null && [[ -n "${GCP_PROJECT:-}" ]]; then
        log_info "Running GCP security checks..."
        
        cat >> "$security_report" << EOF
Google Cloud Security Findings:
------------------------------

1. Project IAM Policy:
$(gcloud projects get-iam-policy "$GCP_PROJECT" --format=json 2>/dev/null | jq -r '.bindings[] | select(.members[] | contains("allUsers") or contains("allAuthenticatedUsers")) | "   Role: \(.role) has public access"' || echo "   Unable to retrieve")

2. Firewall Rules with 0.0.0.0/0:
$(gcloud compute firewall-rules list --format="table(name,sourceRanges.list():label=SRC_RANGES,destinationRanges.list():label=DEST_RANGES,allowed[].map().firewall_rule().list():label=ALLOW)" 2>/dev/null | grep "0.0.0.0/0" || echo "   No wide-open firewall rules found")

EOF
    fi
    
    log_info "Security report generated: $security_report"
}

# Backup cloud configurations
backup_configurations() {
    log_info "Backing up cloud configurations..."
    
    local backup_timestamp=$(date +%Y%m%d-%H%M%S)
    local backup_dir="$BACKUP_DIR/config-backup-$backup_timestamp"
    
    mkdir -p "$backup_dir"
    
    # Backup AWS configurations
    if command -v aws &> /dev/null && [[ -n "${AWS_PROFILE:-}" ]]; then
        log_info "Backing up AWS configurations..."
        
        mkdir -p "$backup_dir/aws"
        
        # Backup customer-managed IAM policies (one JSON document per policy)
        for arn in $(aws iam list-policies --scope Local --query 'Policies[].Arn' --output text 2>/dev/null); do
            version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text 2>/dev/null)
            aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
                --query 'PolicyVersion.Document' \
                > "$backup_dir/aws/$(basename "$arn").json" 2>/dev/null || true
        done
        
        # Backup security groups
        aws ec2 describe-security-groups --query 'SecurityGroups[]' --output json > "$backup_dir/aws/security-groups.json" 2>/dev/null || true
        
        # Backup VPC configurations
        aws ec2 describe-vpcs --query 'Vpcs[]' --output json > "$backup_dir/aws/vpcs.json" 2>/dev/null || true
    fi
    
    # Backup Terraform state
    if [[ -f terraform.tfstate ]]; then
        log_info "Backing up Terraform state..."
        cp terraform.tfstate "$backup_dir/terraform.tfstate"
        cp terraform.tfstate.backup "$backup_dir/" 2>/dev/null || true
    fi
    
    # Backup scripts
    if [[ -d ./scripts ]]; then
        log_info "Backing up automation scripts..."
        cp -r ./scripts "$backup_dir/"
    fi
    
    # Create backup archive
    tar -czf "$backup_dir.tar.gz" -C "$(dirname "$backup_dir")" "$(basename "$backup_dir")"
    rm -rf "$backup_dir"
    
    log_info "Backup completed: $backup_dir.tar.gz"
}

# Main workflow
main() {
    log_info "Starting enterprise cloud automation framework..."
    
    # Setup
    setup_directories
    load_config
    check_prerequisites
    authenticate_clouds
    
    # Core functions
    inventory_resources
    generate_cost_report
    security_compliance_check
    backup_configurations
    
    log_info "Cloud automation framework execution completed!"
    log_info ""
    log_info "Summary of generated reports:"
    log_info "  • Inventory: $(ls -t $BACKUP_DIR/inventory-*.json 2>/dev/null | head -1 || echo "Not generated")"
    log_info "  • Cost Report: $(ls -t $BACKUP_DIR/cost-report-*.csv 2>/dev/null | head -1 || echo "Not generated")"
    log_info "  • Security Report: $(ls -t $BACKUP_DIR/security-report-*.txt 2>/dev/null | head -1 || echo "Not generated")"
    log_info "  • Backup: $(ls -t $BACKUP_DIR/config-backup-*.tar.gz 2>/dev/null | head -1 || echo "Not generated")"
    log_info ""
    log_info "Next steps:"
    log_info "  1. Review the generated reports"
    log_info "  2. Address security findings"
    log_info "  3. Optimize costs based on recommendations"
    log_info "  4. Schedule regular automation runs"
}

# Error handling
trap 'log_error "Script interrupted"; exit 1' INT TERM
trap 'log_error "Error on line $LINENO"; exit 1' ERR

# Run main function
main "$@"

6. Cloud Shell Environments

Cloud Shell Comparison

Feature             | AWS CloudShell     | Google Cloud Shell | Azure Cloud Shell    | IBM Cloud Shell  | Oracle Cloud Shell
Access Method       | Console, CLI, API  | Console, CLI       | Portal, CLI, VS Code | Console          | Console
Pre-installed Tools | AWS CLI, languages | gcloud, languages  | Azure CLI, languages | ibmcloud, tools  | OCI CLI, tools
Storage             | 1GB persistent     | 5GB persistent     | Azure Files share    | 512MB persistent | 5GB persistent
Compute             | 1 vCPU, 2GB RAM    | 2 vCPU, 8GB RAM    | 2 vCPU, 8GB RAM      | 2 vCPU, 4GB RAM  | 2 vCPU, 8GB RAM
Web Preview         | ❌ No              | ✅ Yes (port 8080) | ✅ Yes               | ❌ No            | ✅ Yes
VS Code Integration | ❌ No              | ✅ Yes             | ✅ Yes               | ❌ No            | ❌ No
Free Tier           | ✅ Free            | ✅ Free            | ✅ Free              | ✅ Free          | ✅ Free
Region Availability | All regions        | Multi-region       | Multi-region         | Global           | All regions

7. Troubleshooting & Best Practices

Common Issues & Solutions

Issue                 | Symptoms                     | Root Cause                          | Solution
Authentication Failed | 401/403 errors               | Expired tokens, wrong credentials   | Re-authenticate, check credentials, verify permissions
Rate Limiting         | 429 Too Many Requests        | API call limits exceeded            | Implement retry logic, reduce call frequency
Network Issues        | Timeout, connection refused  | Network connectivity, proxy issues  | Check network, configure proxy, verify endpoints
Permission Denied     | Access denied errors         | Insufficient IAM permissions        | Update IAM policies, use appropriate roles
Command Not Found     | Command not recognized       | CLI not installed or PATH issue     | Install CLI, update PATH, verify installation
Version Conflicts     | Unsupported features, errors | Outdated CLI version                | Update CLI to latest version
Configuration Issues  | Wrong region/project         | Incorrect default configuration     | Set correct defaults, use profiles/configurations
Output Format Issues  | Unreadable output            | Wrong output format specified       | Use appropriate format (table, json, etc.)
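
For the rate-limiting row in particular, a generic retry wrapper with exponential backoff keeps scripts resilient; a minimal sketch:

# retry.sh - run any CLI command with exponential backoff on failure,
# useful for 429/throttling errors
retry() {
    local max_attempts=5 delay=2 attempt=1
    until "$@"; do
        if (( attempt >= max_attempts )); then
            echo "Command failed after $max_attempts attempts: $*" >&2
            return 1
        fi
        echo "Attempt $attempt failed; retrying in ${delay}s..." >&2
        sleep "$delay"
        delay=$(( delay * 2 ))
        attempt=$(( attempt + 1 ))
    done
}

# Usage:
retry aws s3 cp large-file.dat s3://my-bucket/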

Performance Optimization Tips

# Cloud CLI Performance Optimization
# Use JMESPath queries to filter results
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'
az vm list --query "[].name" -o tsv
# Limit output with --max-items
aws s3api list-objects-v2 --bucket my-bucket --max-items 10
# Use pagination for large datasets
aws ec2 describe-instances --page-size 100
# Disable pager for script usage
export AWS_PAGER=""
gcloud config set disable_prompts true
# Or disable it per command in AWS CLI v2
aws ec2 describe-instances --no-cli-pager
# Credential caching: the AWS CLI caches assume-role and SSO credentials
# automatically under ~/.aws/cli/cache and ~/.aws/sso/cache
# Use parallel processing for batch operations
parallel -j 10 aws s3 cp {} s3://my-bucket/ ::: *.txt
# Optimize network settings
export AWS_MAX_ATTEMPTS=10
export AWS_RETRY_MODE=standard
export AZURE_CORE_OUTPUT=jsonc
# Use dry-run / what-if for testing where supported
aws ec2 run-instances --dry-run --image-id ami-0c55b159cbfafe1f0 --instance-type t2.micro
az deployment group what-if --resource-group my-resource-group --template-file template.json
# gcloud has no universal dry-run flag; preview changes with a plan-based tool such as Terraform
# Enable CLI auto-completion
complete -C '/usr/local/bin/aws_completer' aws
source /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/completion.bash.inc
source /usr/local/etc/bash_completion.d/az
# Use aliases for common commands
alias awslist='aws ec2 describe-instances --query "Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}" --output table'
alias gcpvms='gcloud compute instances list --format="table(name,zone,machineType,status)"'
alias azvms='az vm list --query "[].{Name:name,Location:location,Size:hardwareProfile.vmSize}" --output table'

Master Cloud CLI Tools

Cloud command line tools are essential for efficient cloud resource management, automation, and infrastructure as code workflows. By mastering AWS CLI, Google Cloud SDK (gcloud), Azure CLI (az), and multi-cloud management strategies, you can streamline cloud operations across different providers.

Key Takeaways: Each cloud provider has its own CLI with unique features and syntax. Authentication and configuration management are critical for security and efficiency. Infrastructure as Code tools like Terraform enable multi-cloud management. Automation scripts can significantly improve operational efficiency. Cloud Shell environments provide convenient browser-based access. Monitoring, cost management, and security compliance are essential aspects of cloud operations.

Next Steps: Practice with each cloud provider's CLI. Implement Infrastructure as Code with Terraform. Develop automation scripts for common tasks. Set up monitoring and alerting for cloud resources. Implement security best practices across all cloud environments. Explore serverless and container orchestration tools. Stay updated with new CLI features and cloud services.