# GKE Autopilot

GKE Autopilot is a fully managed mode of GKE operation in which Google runs the entire node infrastructure for you. You declare your workloads; Google provisions, scales, patches, and secures the underlying nodes automatically.

## Autopilot vs Standard GKE

| Feature | GKE Autopilot | GKE Standard |
| --- | --- | --- |
| Node management | Google-managed | User-managed |
| Billing | Per pod (CPU + memory) | Per node (even when idle) |
| Security hardening | Enforced (Shielded VMs, locked node OS) | Configurable |
| Pod Security Standards | `restricted` enforced | Configurable |
| DaemonSets | Not supported | Supported |
| Node SSH | Not supported | Supported |
:::info
DevOpsGenie recommends Autopilot for workload node pools. Platform add-ons that require DaemonSets (Falco, Prometheus node-exporter) run on a separate Standard node pool.
:::

## Provisioning an Autopilot Cluster

```hcl title="terraform/environments/production/gke-autopilot.tf"
resource "google_container_cluster" "autopilot" {
  name     = "devopsgenie-autopilot"
  location = var.region
  project  = var.project_id

  enable_autopilot = true

  network    = google_compute_network.main.name
  subnetwork = google_compute_subnetwork.gke.name

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  release_channel {
    channel = "REGULAR"
  }
}
```

## Workload Configuration for Autopilot

Autopilot enforces resource requests on every container. Set them explicitly:

```yaml title="kubernetes/apps/payments-api/deployment.yaml"
spec:
  template:
    spec:
      containers:
        - name: api
          image: us-central1-docker.pkg.dev/devopsgenie-production/devopsgenie/payments-api:v1.0.0
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          # Autopilot enforces these security settings
          securityContext:
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            runAsUser: 1000
            capabilities:
              drop: ["ALL"]
            seccompProfile:
              type: RuntimeDefault
```
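Beyond requiring requests, Autopilot normalizes requests that don't fit its allowed shapes. The sketch below illustrates the idea with two assumed rules for the general-purpose compute class — CPU rounded up to 250m increments and a 1–6.5 GiB-per-vCPU memory ratio. These specific values have changed over GKE releases, so treat them as illustrative assumptions and check the current documentation:

```python
# Sketch of Autopilot-style request normalization. The 250m CPU
# increment and the 1-6.5 GiB-per-vCPU memory band are assumptions
# modeling the general-purpose compute class, not guarantees.

CPU_INCREMENT_M = 250          # CPU rounded up to 250m steps (assumed)
MIN_MEM_PER_VCPU_GIB = 1.0     # assumed lower bound of memory:CPU ratio
MAX_MEM_PER_VCPU_GIB = 6.5     # assumed upper bound of memory:CPU ratio

def normalize(cpu_m: int, mem_gib: float) -> tuple[int, float]:
    """Return (cpu_millicores, memory_gib) after Autopilot-style rounding."""
    # Round CPU up to the next increment (ceiling division).
    cpu_m = -(-cpu_m // CPU_INCREMENT_M) * CPU_INCREMENT_M
    vcpu = cpu_m / 1000
    # Clamp memory into the allowed ratio band for that CPU.
    mem_gib = max(mem_gib, vcpu * MIN_MEM_PER_VCPU_GIB)
    mem_gib = min(mem_gib, vcpu * MAX_MEM_PER_VCPU_GIB)
    return cpu_m, mem_gib

# The payments-api request above (500m, 512Mi = 0.5 GiB) already fits.
print(normalize(500, 0.5))   # -> (500, 0.5)
# An undersized request gets bumped in both dimensions.
print(normalize(300, 0.25))  # -> (500, 0.5)
```

The practical takeaway: requests that ignore the allowed shapes get silently adjusted upward, and you are billed for the adjusted values, so size requests to match an allowed shape from the start.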

## Spot Pods on Autopilot

Autopilot supports Spot Pods for cost-sensitive, interruption-tolerant workloads:

```yaml title="kubernetes/apps/batch-processor/deployment.yaml"
spec:
  template:
    metadata:
      labels:
        cloud.google.com/gke-spot: "true"
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"
      # Spot capacity can be reclaimed on short notice; keep shutdown fast
      terminationGracePeriodSeconds: 25
      containers:
        - name: processor
          image: us-central1-docker.pkg.dev/devopsgenie-production/devopsgenie/batch:v1.0.0
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```

## Cost Comparison

On Autopilot, you pay per vCPU and GB of memory consumed by running pods:

| Resource | Autopilot Price (us-central1) |
| --- | --- |
| vCPU (on-demand) | $0.0445/vCPU/hour |
| Memory (on-demand) | $0.00490/GB/hour |
| vCPU (Spot) | $0.0133/vCPU/hour (~70% savings) |
| Memory (Spot) | $0.00147/GB/hour (~70% savings) |
```bash
# Estimate Autopilot costs for your workloads
devopsgenie sizing estimate \
  --provider gcp \
  --mode autopilot \
  --workloads-file workloads.yaml
```
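Because billing is per requested vCPU and GB, a monthly estimate is simple arithmetic on the rates in the table above. A back-of-the-envelope sketch (730 is the conventional average hours per month, 8760 / 12; the rates are copied from the table and will drift with pricing changes):

```python
# Quick Autopilot cost estimate using the us-central1 rates from the
# table above. Rates are illustrative and subject to change.

RATES = {
    "on_demand": {"vcpu_hour": 0.0445, "gb_hour": 0.00490},
    "spot":      {"vcpu_hour": 0.0133, "gb_hour": 0.00147},
}
HOURS_PER_MONTH = 730  # 8760 hours per year / 12 months

def monthly_cost(vcpu: float, mem_gb: float, pricing: str = "on_demand") -> float:
    """Monthly USD cost for one pod's requested vCPU and memory."""
    r = RATES[pricing]
    hourly = vcpu * r["vcpu_hour"] + mem_gb * r["gb_hour"]
    return round(hourly * HOURS_PER_MONTH, 2)

# payments-api request from above (500m CPU, 512Mi ~ 0.5 GB), on-demand:
print(monthly_cost(0.5, 0.5))        # -> 18.03
# batch-processor (2 vCPU, 4 GB): Spot vs on-demand
print(monthly_cost(2, 4, "spot"))    # -> 23.71
print(monthly_cost(2, 4))            # -> 79.28
```

The Spot figure works out to roughly 30% of the on-demand figure, consistent with the ~70% savings quoted in the table. Multiply by replica count for a per-Deployment estimate.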