DevOps with GitHub: Use Cases

Master real-world DevOps practices with GitHub. This guide covers CI/CD pipeline design, GitOps implementation, secrets management strategies, infrastructure as code, and enterprise DevOps patterns through practical use cases and scenarios.

CI/CD Pipeline Design Patterns
1 How would you design a CI/CD pipeline for a microservices architecture?

A microservices CI/CD pipeline needs to handle multiple services independently while maintaining consistency. The key patterns include:

Independent pipelines per service - Each service has its own workflow triggered by changes in its directory. Use path filters to prevent unnecessary runs.

Shared reusable workflows - Create reusable workflows for common patterns (test, build, deploy) that all services consume. This ensures consistency.

Contract testing - When services communicate via APIs, run contract tests to ensure changes don't break consumers.

Versioned artifacts - Each service publishes versioned Docker images or packages. Use semantic versioning with automation.

Environment promotion - Deploy to dev → staging → production with automated promotion when tests pass.
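The path-filter and reusable-workflow patterns above can be combined in a minimal per-service workflow. This sketch assumes a hypothetical `services/orders` directory and a shared `service-ci.yml` reusable workflow living in an illustrative `my-org/platform-workflows` repository:

```yaml
# .github/workflows/orders-service.yml (hypothetical service pipeline)
name: orders-service
on:
  push:
    branches: [main]
    paths:
      - 'services/orders/**'   # path filter: run only when this service changes
  pull_request:
    paths:
      - 'services/orders/**'

jobs:
  ci:
    # shared reusable workflow consumed by every service for consistency
    uses: my-org/platform-workflows/.github/workflows/service-ci.yml@v1
    with:
      service-path: services/orders
    secrets: inherit
```

Each service gets a copy of this thin wrapper, so the actual test/build/deploy logic lives in one place.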

2 How do you implement canary deployments using GitHub Actions?

Canary deployment gradually shifts traffic from the old version to the new version. Implementation steps:

First, deploy the new version alongside the existing version (e.g., new Kubernetes deployment or ECS service). Configure the load balancer or service mesh to route a small percentage of traffic (e.g., 5%) to the new version.

Monitor key metrics (error rate, latency, CPU usage) for the canary. If metrics are healthy, gradually increase traffic (10%, 25%, 50%, 100%). If metrics degrade, automatically roll back by routing all traffic to the old version.

GitHub Actions can orchestrate this using cloud provider SDKs. Use sleep between steps to allow metrics to stabilize. For advanced scenarios, integrate with observability tools via API checks.
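The traffic-shifting loop described above can be sketched as a single job. Here `set-canary-weight.sh` and `check-metrics.sh` are hypothetical wrapper scripts around your load balancer and observability APIs, not real tools:

```yaml
jobs:
  canary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Shift traffic in stages
        run: |
          for weight in 5 10 25 50 100; do
            ./scripts/set-canary-weight.sh "$weight"   # hypothetical: route $weight% to new version
            sleep 300                                  # let metrics stabilize
            if ! ./scripts/check-metrics.sh; then      # hypothetical: query error rate / latency
              ./scripts/set-canary-weight.sh 0         # roll back: all traffic to old version
              exit 1
            fi
          done
```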

3 How do you handle database migrations in a CI/CD pipeline?

Database migrations are the riskiest part of deployment. Best practices include:

Backward-compatible migrations - Make every migration work with both the old and the new application code. Add columns rather than removing or renaming them; defer destructive changes until no running code depends on the old schema.

Two-phase deployment - First deploy the migration (code still expects old schema), then deploy application code that uses the new schema. This allows safe rollback.

Migration testing - Run migrations against a copy of production data in staging before production.

Rollback plan - Have a tested rollback procedure. For destructive migrations, keep backups.

In GitHub Actions, run migrations as a separate job before application deployment. Use a dedicated database user with minimal permissions. Store migration scripts alongside application code for versioning.
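One way to wire up the migrate-then-deploy ordering, assuming a hypothetical `migrate.sh` wrapper around your migration tool and a dedicated low-privilege database user:

```yaml
jobs:
  migrate:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Run database migrations
        run: ./scripts/migrate.sh   # hypothetical wrapper around your migration tool
        env:
          DATABASE_URL: ${{ secrets.MIGRATION_DATABASE_URL }}  # dedicated low-privilege user

  deploy:
    needs: migrate   # application deploys only after migrations succeed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh    # hypothetical deploy script
```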

4 How would you implement a CI pipeline for a mobile app (iOS/Android)?

Mobile app CI/CD has unique challenges: long build times, code signing, and app store submission. Key practices:

Use macOS runners - GitHub-hosted macOS runners are available for iOS builds. For Android, Linux runners work.

Code signing - Store certificates and provisioning profiles as GitHub secrets. Use tools like fastlane match to manage signing across the team.

Fastlane integration - Fastlane automates beta distribution and app store submission. Integrate it into GitHub Actions.

Build caching - Cache derived data, Pods, and Gradle caches to speed up builds.

Distribution to TestFlight/Play Store - Automatically distribute beta builds to testers on every commit to main. Use manual approval for production releases.
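A condensed iOS job illustrating these pieces; the `beta` fastlane lane and the secret names are assumptions about your project's setup:

```yaml
jobs:
  ios:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - name: Cache CocoaPods
        uses: actions/cache@v4
        with:
          path: Pods
          key: pods-${{ hashFiles('Podfile.lock') }}
      - name: Build and upload to TestFlight
        run: bundle exec fastlane beta   # assumes a `beta` lane defined in fastlane/Fastfile
        env:
          MATCH_PASSWORD: ${{ secrets.MATCH_PASSWORD }}                      # assumed secret names
          APP_STORE_CONNECT_API_KEY: ${{ secrets.APP_STORE_CONNECT_API_KEY }}
```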

GitOps: Infrastructure as Code
5 What is GitOps and how do you implement it with GitHub?

GitOps is a practice where the entire infrastructure is managed through Git. Changes are made via pull requests, reviewed, then automatically applied when merged. This brings the same benefits to infrastructure as application code: auditability, review, and rollback.

Implementation steps: Store all infrastructure configuration (Terraform, Kubernetes manifests, Helm charts) in GitHub repositories. Use GitHub Actions to automatically run terraform plan on PRs and post results as comments. When PR merges to main, automatically run terraform apply or kubectl apply.

For Kubernetes specifically, tools like Flux and ArgoCD can be integrated with GitHub to automatically sync cluster state with Git. This creates a fully declarative, version-controlled infrastructure.
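A minimal push-based sketch of this flow for Kubernetes manifests. The `manifests/` layout and the `cluster-login.sh` helper are assumptions; a pull-based tool like Flux or ArgoCD would replace the apply step entirely:

```yaml
name: gitops-apply
on:
  push:
    branches: [main]
    paths: ['manifests/**']
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Authenticate to cluster
        run: ./scripts/cluster-login.sh   # hypothetical: cloud CLI login + get-credentials
      - name: Sync cluster state with Git
        run: kubectl apply -f manifests/
```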

6 How do you manage multiple environments (dev, staging, prod) with GitOps?

Environment management in GitOps uses directory structure or branches. The most common pattern is directory-based:

/environments/dev, /environments/staging, /environments/prod each contain environment-specific configurations. Use Kustomize or Helm values to override common settings.

Deployment automation: Changes to /environments/dev deploy automatically. Changes to /environments/prod require pull request approval. GitHub Actions can enforce this with different triggers and protection rules.

For security, production environment secrets are stored in GitHub Secrets, not in Git. Use OIDC for cloud authentication to avoid long-lived credentials. Environment-specific variables are managed via GitHub Environments.

7 What are the benefits and challenges of using GitHub Actions for GitOps?

Benefits: No additional infrastructure to maintain (unlike ArgoCD/Flux). Tight integration with GitHub's PR workflow. Built-in secrets management. Easy to add custom logic with Actions.

Challenges: Pull-based tools like ArgoCD automatically detect drift; Actions-based GitOps only applies changes when pushed. For large clusters, kubectl apply can be slow. No built-in drift correction—if someone changes infrastructure manually, Actions won't fix it.

Hybrid approach: Use GitHub Actions for pull request planning and approval, and ArgoCD for actual deployment and drift correction. This gives the best of both worlds.

8 How do you implement Terraform with GitHub Actions for infrastructure deployment?

A complete Terraform pipeline includes: Validation - Run terraform fmt -check and terraform validate on every PR. Planning - Run terraform plan and post results as a PR comment. Apply - When PR merges to main, run terraform apply -auto-approve.

Use remote state storage (S3 with DynamoDB locking) for team collaboration. Store state in a separate AWS account for security. Use OIDC for authentication instead of long-lived AWS keys.

For large infrastructures, break Terraform into modules and use terraform workspace or directory-based separation for environments. Use Terragrunt for complex multi-environment setups.
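A condensed version of such a Terraform pipeline might look like the following. The role ARN is a placeholder, and posting the plan output as a PR comment (e.g., via actions/github-script) is omitted for brevity:

```yaml
name: terraform
on:
  pull_request:
  push:
    branches: [main]
permissions:
  id-token: write   # OIDC token for cloud authentication
  contents: read
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/terraform-ci   # placeholder ARN
          aws-region: us-east-1
      - name: Validate
        run: terraform fmt -check && terraform init && terraform validate
      - name: Plan (on pull requests)
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color
      - name: Apply (on merge to main)
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve
```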

Secrets Management & Security
9 What is the most secure way to manage cloud credentials in GitHub Actions?

The most secure approach is OIDC (OpenID Connect) authentication. GitHub generates a short-lived JWT token that your cloud provider validates, returning temporary credentials scoped to specific permissions. No long-lived secrets are stored in GitHub.

To implement OIDC: configure your cloud provider to trust GitHub's OIDC issuer, create an IAM role with specific permissions, and use actions like aws-actions/configure-aws-credentials with the role ARN.

If OIDC isn't available, use GitHub Secrets with encryption at rest and in transit. Rotate secrets regularly. Never store secrets in workflow files or as plain text. Use environment-specific secrets so production credentials aren't accessible to staging workflows.
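A minimal OIDC sketch for AWS; the role ARN is a placeholder, and the role's trust policy must already be configured to accept GitHub's OIDC issuer:

```yaml
permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy  # placeholder role
          aws-region: eu-west-1
      - run: aws sts get-caller-identity   # verify the temporary credentials
```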

10 How do you prevent secrets from being exposed in GitHub Actions logs?

GitHub automatically redacts secrets in logs—any reference to ${{ secrets.NAME }} is shown as ***. However, secrets can be exposed if you echo them or pass them to commands that output them.

Best practices: Never run echo ${{ secrets.MY_SECRET }}. Avoid passing secrets to commands that might print them (e.g., curl -v exposes headers). Use environment variables instead of command-line arguments when possible.

Use add-mask to manually redact additional strings. For debugging, enable step debugging only temporarily. Audit logs regularly for accidental exposure.
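For values that are not stored as GitHub Secrets (and therefore not auto-redacted), `add-mask` can be applied manually. Here `fetch-token.sh` is a hypothetical script that obtains a credential at runtime:

```yaml
steps:
  - id: fetch
    name: Fetch and mask a derived credential
    run: |
      TOKEN=$(./scripts/fetch-token.sh)   # hypothetical: value obtained at runtime
      echo "::add-mask::$TOKEN"           # redact this value in all subsequent log output
      echo "token=$TOKEN" >> "$GITHUB_OUTPUT"
```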

11 How do you rotate secrets in GitHub Actions without downtime?

Secret rotation requires careful coordination. For service accounts that support multiple valid credentials simultaneously (like AWS IAM with multiple access keys), follow this pattern:

Create a new secret value in your cloud provider while keeping the old one active. Update the GitHub Secret with the new value. Deploy this change (workflows now use the new secret). After confirming all workflows succeed, revoke the old secret in the cloud provider.

For secrets that don't support multiple values, schedule a maintenance window. Use GitHub Actions to automate rotation by calling cloud provider APIs to generate new secrets and update GitHub Secrets via the API.
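A scheduled rotation workflow along these lines is one option. The `rotate-key.sh` script is hypothetical (it would call your cloud provider's API and print the new credential), and `SECRETS_ADMIN_PAT` is an assumed PAT with permission to manage repository secrets, since the default `GITHUB_TOKEN` cannot:

```yaml
name: rotate-deploy-key
on:
  schedule:
    - cron: '0 3 1 * *'   # monthly
jobs:
  rotate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Rotate credential and update GitHub Secret
        run: |
          NEW_KEY=$(./scripts/rotate-key.sh)       # hypothetical rotation script
          echo "::add-mask::$NEW_KEY"              # keep the new value out of logs
          gh secret set DEPLOY_KEY --body "$NEW_KEY"
        env:
          GH_TOKEN: ${{ secrets.SECRETS_ADMIN_PAT }}   # assumed PAT with secrets scope
```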

12 How do you handle secrets in pull requests from forks?

GitHub Actions from forks do NOT have access to secrets by default—this is a security feature. To safely handle fork PRs:

Use the pull_request_target event instead of pull_request. This event runs in the base repository's context and has access to secrets, but it executes the workflow definition from the base branch, not the fork. This is safe only as long as the workflow never checks out and runs the fork's code; doing so would hand secrets to untrusted code.

Never check out and run fork code with secrets access. For running tests from forks, use pull_request (no secrets) to run tests safely. For commenting or labeling, use pull_request_target with caution.
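A safe pull_request_target example that labels fork PRs without ever checking out the fork's code (the label name is illustrative):

```yaml
name: label-fork-prs
on:
  pull_request_target:   # base-repo context; workflow definition comes from the base branch
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # Note: no actions/checkout of the fork's code — only API calls run here
      - uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['community'],   // illustrative label
            });
```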

Enterprise DevOps Patterns
13 How do you enforce compliance and governance across multiple repositories?

Enterprise governance requires a multi-layered approach. Use organization-level policies for branch protection, actions permissions, and secret scanning. Repository rulesets can enforce consistent settings across all repos.

Template repositories provide standardized starting points. Reusable workflows enforce consistent CI/CD patterns. GitHub Apps can audit and enforce custom policies via API.

Policy as Code tools like Open Policy Agent (OPA) can validate repository configuration. Store compliance policies in a central repository and run scheduled scans to detect violations.

14 How do you manage cross-repository dependencies in GitHub Actions?

Cross-repository dependencies can be managed using reusable workflows from a central repository. When a shared library changes, trigger dependent repository tests using repository_dispatch or workflow_dispatch.

For build dependencies, use GitHub Packages to publish versioned artifacts. Downstream repositories consume specific versions, ensuring reproducibility. Use Dependabot to automatically update dependencies.

For complex dependency graphs, consider a monorepo with tools like Nx or Bazel. Or implement a CI orchestration service that understands your dependency graph and triggers builds in the correct order.

15 How do you implement a self-service developer platform using GitHub?

A self-service platform empowers developers to provision infrastructure and deploy applications without waiting for operations. Implementation approach:

Template repositories with standard application structures, CI/CD workflows, and infrastructure as code. Use repository templates and GitHub Actions workflow templates.

GitHub Apps to automate repository creation, team assignment, and initial configuration. Use Actions with workflow_dispatch to provide "deploy" buttons for non-developers.

Internal developer portal (using Backstage or similar) integrated with GitHub API. Provide a catalog of services, documentation, and one-click deployment from pull requests.

Real-World Use Cases
16 How would you automate a release process that creates GitHub Releases, updates changelogs, and publishes packages?

A fully automated release process includes several steps. Use semantic versioning based on commit messages (Conventional Commits). Tools like semantic-release automatically determine the next version.

When a PR is merged to main, the release workflow runs: Determine next version based on commits. Generate or update CHANGELOG.md using commit history. Create a Git tag and GitHub Release with release notes. Publish packages to npm, Maven, or PyPI.

The official actions/create-release and actions/upload-release-asset actions are archived and no longer maintained; use softprops/action-gh-release instead for creating GitHub Releases and uploading assets. Use ecosystem-specific publish actions for package registries.
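A typical semantic-release workflow, assuming a Node project with semantic-release already configured (the `NPM_TOKEN` secret is only needed if you publish to npm):

```yaml
name: release
on:
  push:
    branches: [main]
permissions:
  contents: write       # create tags and releases
  issues: write         # comment on released issues
  pull-requests: write  # comment on released PRs
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # semantic-release needs full history to compute the next version
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx semantic-release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}   # assumed: only if publishing to npm
```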

17 How do you implement automated security scanning in the CI pipeline?

A comprehensive security pipeline includes multiple scanning tools:

CodeQL for static application security testing (SAST) - detects vulnerabilities in your code. Dependabot for dependency scanning - alerts on vulnerable packages and opens PRs.

Secret scanning with push protection - prevents secrets from being committed. Container scanning - scan Docker images for vulnerabilities using Trivy or Grype.

Infrastructure scanning - check Terraform/Kubernetes configs with checkov or tfsec. Software Bill of Materials (SBOM) - generate and upload SBOM for supply chain security.

Run these scans on every PR. Block merging on critical findings. Use GitHub Advanced Security for enterprise-grade scanning.
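A container-scanning job along these lines, using aquasecurity/trivy-action; the image name is illustrative:

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .   # illustrative image name
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'   # fail the job (and block merge) on findings
```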

18 How do you implement a monorepo CI strategy with GitHub Actions?

Monorepo CI needs to be efficient—only test changed services. Strategies include:

Path filtering - Use paths-ignore or paths to trigger workflows only when relevant files change. Dynamic matrices - A first job detects changed services and outputs a JSON matrix for parallel testing.

Build caches per service - Use actions/cache with keys that include service names. Remote caching - For large builds, use a remote cache (like GitHub Packages) shared across runners.

Tools like Nx or Turborepo provide sophisticated task orchestration and caching. They can be integrated with GitHub Actions for optimal monorepo CI.
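The detect-then-matrix pattern can be sketched as follows. Here `changed-services.sh` is a hypothetical script that prints a JSON array of changed service directories, e.g. `["orders","billing"]`:

```yaml
jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.changes.outputs.services }}
    steps:
      - uses: actions/checkout@v4
        with: { fetch-depth: 0 }   # full history so the diff can be computed
      - id: changes
        run: echo "services=$(./scripts/changed-services.sh)" >> "$GITHUB_OUTPUT"

  test:
    needs: detect
    if: needs.detect.outputs.services != '[]'   # skip entirely when nothing changed
    strategy:
      matrix:
        service: ${{ fromJSON(needs.detect.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make -C services/${{ matrix.service }} test   # assumed per-service Makefile
```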

19 How would you design a CI/CD pipeline for a serverless application (AWS Lambda)?

Serverless CI/CD focuses on packaging and deployment. The pipeline includes:

Lint and test - Run unit tests and linting. Use serverless-specific testing tools like jest with Lambda emulation.

Package - Install dependencies, bundle code (using esbuild or similar), and create deployment package. Use serverless or SAM CLI for packaging.

Deploy to staging - Deploy to a staging environment, run integration tests against deployed API. Use serverless deploy --stage staging.

Smoke tests - Run smoke tests against staging endpoint. Production deployment - After approval, deploy to production with serverless deploy --stage prod.

Use canary deployments for Lambda (CodeDeploy) to gradually shift traffic.

Disaster Recovery & Rollback
20 How do you implement automated rollback in GitHub Actions?

Automated rollback requires monitoring and a deployment strategy that supports reverting. Implementation approaches:

Blue-green deployment - Keep both old and new versions. If health checks fail, switch traffic back to the blue environment. This is instant and safe.

Canary with monitoring - Gradually increase traffic to the new version. If error rates exceed thresholds, stop the canary and revert to the old version.

Rollback via Git revert - For infrastructure as code, revert the Terraform commit and re-apply. This restores the previous state.

Manual rollback workflow - Create a separate workflow that accepts a version to roll back to. This workflow redeploys the previous version from artifact storage or Git tag.
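The manual rollback workflow above might look like this; `deploy.sh` is a hypothetical deploy script, and tagging every release is assumed:

```yaml
name: rollback
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version tag to roll back to (e.g. v1.4.2)'
        required: true
jobs:
  rollback:
    runs-on: ubuntu-latest
    environment: production   # honors any approval rules on the production environment
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ inputs.version }}   # check out the tagged revision to redeploy
      - run: ./scripts/deploy.sh       # hypothetical deploy script
```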

21 How do you recover from a failed deployment that broke production?

A structured incident response process is essential. First, assess impact - determine scope and severity. Roll back immediately using your deployment strategy (blue-green switch or re-deploy previous version).

Communicate status - Update stakeholders, post to status page, create incident ticket. Preserve evidence - Save logs, metrics, and deployment artifacts for post-mortem.

Post-mortem - Conduct blameless post-mortem to identify root cause. Prevent recurrence - Add safeguards (better tests, canary analysis, staged rollouts).

Use GitHub Issues to track the incident and post-mortem actions. Automate what you can from lessons learned.

Monitoring & Observability
22 How do you integrate GitHub Actions with monitoring tools?

Integration patterns for monitoring include:

Metrics publishing - Use Actions to send deployment metrics to Datadog, Prometheus, or CloudWatch. Track deployment frequency, lead time, and success rate.

Alerting on failures - Configure workflows to send alerts to Slack, Teams, or PagerDuty on deployment failures. Use the slackapi/slack-github-action or custom webhooks.

Status page updates - Automatically update status pages when deployments complete. Use actions/github-script to call status page APIs.

Observability data - Upload test reports, coverage data, and performance metrics as artifacts or send to analytics platforms.

23 How do you measure DevOps metrics (DORA) using GitHub data?

The four DORA metrics can be measured using GitHub data:

Deployment Frequency - Count deployments per day/week using GitHub API to query deployment events or workflow runs.

Lead Time for Changes - Time from commit to deployment. Calculate using commit timestamp and deployment timestamp from workflow runs.

Mean Time to Recovery (MTTR) - Time from deployment failure to successful fix. Measure from failed workflow run to subsequent successful run.

Change Failure Rate - Percentage of deployments causing failures. Compare deployment events with incident tickets or rollback events.

Use GitHub's API to collect this data and send to analytics platforms. GitHub Actions can run scheduled workflows to compute and report these metrics.
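As a sketch of the computation step, two of the DORA metrics can be derived once commit and deployment timestamps have been collected. The field names here (`commit_ts`, `deploy_ts`, `failed`) are illustrative, not the raw GitHub API shape:

```python
from datetime import datetime
from statistics import median

def lead_times_hours(deployments):
    """Lead time for changes: hours from commit to deployment.

    `deployments` is a list of dicts with ISO-8601 `commit_ts` and `deploy_ts`,
    as you might assemble from the workflow-runs and commits APIs
    (field names are illustrative assumptions).
    """
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return [
        (datetime.strptime(d["deploy_ts"], fmt)
         - datetime.strptime(d["commit_ts"], fmt)).total_seconds() / 3600
        for d in deployments
    ]

def change_failure_rate(deployments):
    """Fraction of deployments flagged as causing a failure."""
    if not deployments:
        return 0.0
    return sum(1 for d in deployments if d.get("failed")) / len(deployments)

# Sample data: two deployments, one of which caused an incident
deploys = [
    {"commit_ts": "2024-05-01T10:00:00Z", "deploy_ts": "2024-05-01T12:00:00Z"},
    {"commit_ts": "2024-05-02T09:00:00Z", "deploy_ts": "2024-05-02T15:00:00Z", "failed": True},
]
print(median(lead_times_hours(deploys)))   # median lead time in hours
print(change_failure_rate(deploys))        # fraction of failing deployments
```

A scheduled workflow could run a script like this weekly and push the results to your analytics platform.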

Bonus: Advanced Scenarios
24 How would you implement cross-region or multi-cloud deployments using GitHub Actions?

Multi-cloud deployments require orchestration across providers. Implementation approach:

Use a matrix strategy to deploy to multiple regions or clouds in parallel. Each matrix combination runs a deployment job for its target.

For active-active deployments, deploy to all regions simultaneously. Use a global load balancer (such as Cloudflare or AWS Global Accelerator) to route traffic.

For active-passive (disaster recovery), deploy to primary region, then replicate to secondary. Use GitHub Actions to fail over by updating DNS or load balancer configuration.

Store region-specific configuration in environment variables. Use OIDC with provider-specific roles for each cloud.

25 How do you implement a GitHub Actions self-service platform for developers?

A self-service platform reduces operational bottlenecks. Implementation includes:

Workflow templates - Provide curated workflow templates in your organization's .github repository (under the workflow-templates/ directory) so developers can adopt them with one click.

Action form inputs - Use workflow_dispatch with inputs to create parameterized workflows that developers can trigger manually.

GitHub App with API automation - Build a GitHub App that responds to issues, comments, or labels to trigger infrastructure provisioning.

Repository templates - Provide starter repositories with pre-configured workflows, branch protection, and settings.

Documentation - Create a developer portal with runbooks, examples, and troubleshooting guides.

Interview Tips for DevOps Scenarios
For DevOps scenario interviews, focus on:
  • End-to-end thinking: Show how your solution addresses the entire lifecycle from code commit to production monitoring.
  • Trade-off analysis: Discuss pros and cons of different approaches (e.g., blue-green vs canary).
  • Security by design: Incorporate security considerations (secrets, scanning, access control) from the start.
  • Observability: Include metrics, logging, and alerting in your design.
  • Disaster recovery: Always have a rollback plan and discuss how you'd handle failures.
Practice whiteboarding: Interviewers often ask you to diagram a pipeline on a whiteboard. Practice explaining your design decisions and the flow of data through the system.

DevOps with GitHub is about automating the entire software delivery lifecycle. Master these patterns to design robust, secure, and scalable DevOps pipelines.