# Migration Guide
This guide covers migrating existing Kubernetes workloads and infrastructure to DevOpsGenie — whether from a self-managed cluster, another managed Kubernetes service, or a legacy deployment system.
## Migration Paths
| From | To | Estimated Effort | Strategy |
|---|---|---|---|
| Self-managed K8s on EC2 | EKS + DevOpsGenie | Medium (1–2 weeks) | Blue/green cluster |
| EKS (unmanaged, no IaC) | EKS + DevOpsGenie | Low (3–5 days) | In-place adoption |
| GKE → EKS | AWS + DevOpsGenie | High (2–4 weeks) | Cloud migration |
| AKS → EKS | AWS + DevOpsGenie | High (2–4 weeks) | Cloud migration |
| Helm-only → GitOps | ArgoCD adoption | Low (2–3 days) | Incremental |
| Jenkins → GitHub Actions | CI pipeline migration | Medium (1 week) | Parallel pipelines |
## Phase 1 — Discovery
Before migrating, inventory your current environment:
```bash
# Install the migration scanner
npm install -g @devopsgenie/migrate

# Scan your current cluster
devopsgenie-migrate scan \
  --kubeconfig ~/.kube/config \
  --output migration-report.json

# View the report
devopsgenie-migrate report migration-report.json
```
The scanner identifies:
- All Deployments, StatefulSets, DaemonSets, and CronJobs
- Resource requests and limits (flags missing ones)
- Secrets and ConfigMaps (flags plaintext sensitive values)
- Service dependencies and Ingress configurations
- PersistentVolumes and storage classes
- Custom admission webhooks and CRDs
- IAM permissions required per workload
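A common follow-up to the scan is triaging the workloads flagged for missing resource limits. The sketch below assumes the report is JSON with a top-level `workloads` array whose entries carry a `name` and a `resources.limits` mapping — this schema is illustrative, not documented, so inspect your own `migration-report.json` first:

```python
import json

def workloads_missing_limits(report_path: str) -> list[str]:
    """Return names of workloads with no resource limits in the scan report.

    NOTE: the report layout assumed here (workloads[].name,
    workloads[].resources.limits) is a guess at the schema, not documented.
    """
    with open(report_path) as f:
        report = json.load(f)
    return [
        w["name"]
        for w in report.get("workloads", [])
        if not w.get("resources", {}).get("limits")
    ]
```

Workloads returned by this helper are good candidates for adding requests/limits before migration, since the scanner flags them anyway.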
## Phase 2 — Plan
```bash
# Generate a migration plan
devopsgenie-migrate plan \
  --report migration-report.json \
  --target-provider aws \
  --target-region us-east-1 \
  --output migration-plan.yaml
```
Sample plan output:
```yaml
migrationPlan:
  estimatedDuration: "5 days"
  riskLevel: medium
  phases:
    - name: "Infrastructure Setup"
      duration: "1 day"
      tasks:
        - "Provision EKS cluster via Terraform"
        - "Configure VPC peering for migration window"
        - "Set up ECR and push existing images"
    - name: "Platform Install"
      duration: "0.5 days"
      tasks:
        - "Install DevOpsGenie platform stack"
        - "Configure ArgoCD with existing Git repos"
    - name: "Workload Migration"
      duration: "2 days"
      tasks:
        - "Convert Helm releases to GitOps Kustomize overlays"
        - "Migrate Secrets to Secrets Manager"
        - "Configure IRSA for AWS service access"
    - name: "Traffic Cutover"
      duration: "0.5 days"
      tasks:
        - "Update DNS TTL to 60s"
        - "Shift 5% traffic to new cluster (canary)"
        - "Validate SLOs for 30 minutes"
        - "Shift 100% traffic"
    - name: "Cleanup"
      duration: "1 day"
      tasks:
        - "Decommission old cluster"
        - "Archive old infrastructure code"
```
## Phase 3 — Migrate Secrets
Secrets should be migrated to AWS Secrets Manager (or Azure Key Vault / GCP Secret Manager) before the workload migration:
```bash
# Automatically import Kubernetes Secrets to AWS Secrets Manager
devopsgenie-migrate secrets \
  --source-kubeconfig ~/.kube/config \
  --source-namespace team-payments \
  --destination aws-secretsmanager \
  --region us-east-1 \
  --prefix /payments/production/

# Verify (the AWS CLI flag is --filters, plural)
aws secretsmanager list-secrets \
  --filters Key=name,Values=/payments/production/
```
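Once the values live in Secrets Manager, workloads typically read them back through the External Secrets Operator (see the post-migration checklist). A sketch of such a manifest — the remote key follows the `--prefix` above, but the secret name `db` and the `ClusterSecretStore` name `aws-secretsmanager` are illustrative assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payments-db
  namespace: team-payments
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secretsmanager   # assumed store name; match your ESO setup
  target:
    name: payments-db          # Kubernetes Secret the operator will create
  data:
    - secretKey: password
      remoteRef:
        key: /payments/production/db   # hypothetical secret under the prefix
        property: password
```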
## Phase 4 — Convert to GitOps
```bash
# Export existing Helm releases to GitOps-compatible Kustomize overlays
devopsgenie-migrate gitops \
  --source-kubeconfig ~/.kube/config \
  --namespace team-payments \
  --output-dir ./gitops/apps/team-payments/ \
  --strategy kustomize

# Output:
# gitops/apps/team-payments/
# ├── base/
# │   ├── deployment.yaml
# │   ├── service.yaml
# │   └── kustomization.yaml
# └── overlays/
#     ├── staging/
#     └── production/
```
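To have ArgoCD reconcile one of the generated overlays, you would register it as an Application. A minimal sketch — the repo URL is hypothetical, and `project: default` assumes no stricter AppProject is in place:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-payments-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops.git   # hypothetical repo
    targetRevision: main
    path: apps/team-payments/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-payments
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes
```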
## Phase 5 — Traffic Cutover
Use DNS-based cutover with a short TTL for zero-downtime migration:
```bash
# Step 1 — Reduce DNS TTL 24 hours before cutover
# (do this in Route53 / Cloud DNS / Azure DNS)

# Step 2 — Validate the new cluster with synthetic traffic
devopsgenie cluster validate \
  --cluster devopsgenie-production \
  --run-smoke-tests

# Step 3 — Shift 5% of traffic via weighted DNS
devopsgenie migrate cutover \
  --weight 5 \
  --monitor-duration 30m

# Step 4 — Full cutover after validation
devopsgenie migrate cutover \
  --weight 100

# Step 5 — Confirm old cluster receives no traffic, then decommission
```
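Under the hood, the 5% canary in Step 3 corresponds to weighted DNS records. For Route53, a change batch equivalent to that shift might look like the following sketch — the record name, load balancer targets, and `SetIdentifier` values are all hypothetical:

```json
{
  "Comment": "Shift 5% of traffic to the new cluster (illustrative)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "new-cluster",
        "Weight": 5,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "new-lb.example.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "old-cluster",
        "Weight": 95,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "old-lb.example.com" }]
      }
    }
  ]
}
```

Route53 routes traffic to each record in proportion to its weight over the sum of weights, so 5/(5+95) ≈ 5% reaches the new cluster.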
## Rollback
If issues arise during migration, revert DNS to the old cluster:
```bash
devopsgenie migrate rollback \
  --confirm
# DNS reverted to old cluster within TTL seconds
```
## Post-Migration Checklist
- All workloads running and passing health checks
- SLOs at baseline (error rate, latency p99)
- Alerts configured and Alertmanager routing verified
- Log ingestion confirmed in Loki
- Secrets accessible via External Secrets Operator
- CI/CD pipelines updated to deploy to new cluster
- Old cluster decommissioned and Terraform state archived
- DNS TTL restored to original value
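Several checklist items can be scripted. For example, "all workloads running and passing health checks" can be checked against `kubectl get deployments -A -o json` output. A sketch as a pure function over that JSON (the field paths follow the standard Kubernetes Deployment schema):

```python
def unhealthy_deployments(deploy_list: dict) -> list[str]:
    """Given `kubectl get deployments -A -o json` output, return names of
    deployments whose ready replica count is below the desired count."""
    unhealthy = []
    for item in deploy_list.get("items", []):
        name = item["metadata"]["name"]
        desired = item["spec"].get("replicas", 1)   # defaults to 1 if unset
        ready = item.get("status", {}).get("readyReplicas", 0)
        if ready < desired:
            unhealthy.append(name)
    return unhealthy
```

An empty return value means every Deployment is at its desired replica count; anything listed needs investigation before decommissioning the old cluster.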