I Built My First EKS Cluster in Auto Mode and Was Shocked by the Simplicity: A Terraform Guide

Herve Khg · 7 min read · Dec 16, 2024

Amazon recently introduced EKS Auto Mode during re:Invent, and I couldn’t resist giving it a try. The results? Absolutely mind-blowing. This new mode takes the complexity out of managing Kubernetes infrastructure and lets developers focus on building and scaling their applications. In this article, I’ll show you how to create an EKS Auto Mode cluster using Terraform step by step.

What is EKS Auto Mode?

Managing Kubernetes clusters often involves configuring node groups, handling networking, and optimizing costs while maintaining performance and security. With EKS Auto Mode, AWS simplifies all of this. Auto Mode abstracts infrastructure details like node configuration and subnet management, while automatically optimizing costs and performance.

This is ideal for developers and newcomers who want a no-fuss Kubernetes experience. By automating most of the cluster management, Auto Mode lets you focus on deploying and scaling applications seamlessly.

Prerequisites

Before we dive into creating a cluster, ensure you have the following:

  1. AWS CLI installed and configured with an IAM user that has permissions to create VPC and EKS resources.
  2. Terraform installed (latest version recommended).
  3. A basic understanding of Terraform and Kubernetes concepts.
  4. An S3 bucket for the Terraform state.
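
Before writing any Terraform, it's worth a quick sanity check that your credentials and tooling are in place, using the same named profile you'll reference in the provider blocks (the `<profile>` placeholder below):

# Verify which IAM identity the CLI resolves to
aws sts get-caller-identity --profile <profile>

# Verify the Terraform version
terraform version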

Step 1: Build Network

Before creating the EKS cluster, you need to set up the underlying network infrastructure. EKS managed nodes, Pods, and other components require subnets to operate. To simplify this process, we’ll leverage the widely-used VPC module from the Terraform community, which provides a robust and reusable way to create VPCs, subnets, and related networking resources.

################################################################################
# Provider
################################################################################
provider "aws" {
  region  = "eu-west-1"
  profile = "<profile>"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.8"
    }
  }
}

################################################################################
# Backend
################################################################################

terraform {
  backend "s3" {
    bucket  = "<bucket>"
    key     = "<network_key>.tfstate"
    region  = "<region>"
    profile = "<profile>"
  }
}

################################################################################
# Locals
################################################################################
data "aws_availability_zones" "available" {
  # Exclude local zones
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

locals {
  project_name    = "cluster13"
  cluster_version = "1.31"
  aws_region      = "eu-west-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Project = local.project_name
  }
}

################################################################################
# VPC Module
################################################################################
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.project_name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
  intra_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}

################################################################################
# Outputs
################################################################################
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnets" {
  value = module.vpc.private_subnets
}
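
If you're curious what those cidrsubnet() expressions evaluate to, terraform console makes it easy to check. For the 10.0.0.0/16 VPC above, the private subnets come out as /20s and the public and intra subnets as /24s:

terraform console
> cidrsubnet("10.0.0.0/16", 4, 0)
"10.0.0.0/20"
> cidrsubnet("10.0.0.0/16", 4, 1)
"10.0.16.0/20"
> cidrsubnet("10.0.0.0/16", 8, 48)
"10.0.48.0/24"
> cidrsubnet("10.0.0.0/16", 8, 52)
"10.0.52.0/24"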

Initialize and apply the configuration with Terraform:

terraform init
terraform apply

Confirm the apply with yes, then wait for Terraform to create all the network resources. It takes around five minutes to finish.
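
Once the resources are created, you can inspect the outputs that the EKS stack will later read through remote state (values below are illustrative placeholders):

terraform output
# vpc_id = "vpc-..."
# private_subnets = [
#   "subnet-...",
#   "subnet-...",
#   "subnet-...",
# ]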

Step 2: Set Up EKS

With the VPC and networking in place, it’s time to set up the EKS cluster in Auto Mode. Begin by creating a directory for your project and adding a file named main.tf. This file will contain the base Terraform configuration for deploying your EKS cluster.

################################################################################
# Provider
################################################################################
provider "aws" {
  region  = "<region>"
  profile = "<profile>"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.8"
    }
  }
}

################################################################################
# Backend
################################################################################

terraform {
  backend "s3" {
    bucket  = "<bucket>"
    key     = "<eks_key>.tfstate" # must differ from the network state key
    region  = "<region>"
    profile = "<profile>"
  }
}

################################################################################
# Locals
################################################################################

locals {
  project_name    = "cluster13"
  cluster_version = "1.31"
  aws_region      = "<region>"

  vpc_id             = data.terraform_remote_state.network.outputs.vpc_id
  private_subnet_ids = data.terraform_remote_state.network.outputs.private_subnets

  tags = {
    Project    = local.project_name
    GithubRepo = "terraform-aws-eks"
    GithubOrg  = "terraform-aws-modules"
  }
}

################################################################################
# Remote State of Network
################################################################################

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket  = "<bucket>"
    key     = "<network_key>.tfstate"
    region  = "<region>"
    profile = "<profile>"
  }
}

################################################################################
# EKS Module
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name                   = local.project_name
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true

  enable_cluster_creator_admin_permissions = true

  # Enables EKS Auto Mode with the built-in general-purpose node pool
  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  vpc_id     = local.vpc_id
  subnet_ids = local.private_subnet_ids

  tags = local.tags
}

  1. Provider block: Defines the AWS region and profile.
  2. Terraform backend: Uses S3 for remote state (DynamoDB can optionally be added for state locking).
  3. EKS module: Leverages the popular terraform-aws-modules/eks module to simplify EKS cluster creation; cluster_compute_config is what turns on Auto Mode.

Initialize and apply the configuration with Terraform:

terraform init
terraform apply

Wait around 10–12 minutes for Terraform to create the cluster.
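
Once the apply finishes, you can confirm the cluster is up. On recent AWS CLI versions, describe-cluster also reports the Auto Mode compute settings in a computeConfig block; treat that exact field name as an assumption to verify on your CLI version:

# The cluster should report ACTIVE
aws eks describe-cluster --name cluster13 --region <region> --profile <profile> \
  --query 'cluster.status'

# Auto Mode compute settings (field name may vary by CLI version)
aws eks describe-cluster --name cluster13 --region <region> --profile <profile> \
  --query 'cluster.computeConfig'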

Step 3: Update kubeconfig for accessing your cluster

To interact with your EKS cluster and its API, you need to update your ~/.kube/config file by adding the context of your newly created cluster. This allows tools like kubectl to communicate with the cluster.

Run the following command to update your kubeconfig:

# Update your kubeconfig file
aws eks update-kubeconfig --name cluster13 --region <region> --profile <profile> --alias <clustername>

# Check that your context is set
kubectl config get-contexts

At this stage, your EKS cluster is successfully created and operational. However, you might notice something unexpected: there are no nodes visible in your cluster. Why is that?

This behavior is perfectly normal when using EKS Auto Mode. In Auto Mode, nodes are not pre-provisioned. Instead, they are created dynamically only when there is a Pod that requires them.

This means that your cluster will automatically scale its infrastructure based on workload demand, ensuring efficient resource usage and cost optimization. Until you deploy a workload (such as a Pod), no nodes will be launched. This is one of the key benefits of EKS Auto Mode — your cluster remains lightweight and cost-effective when idle.
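
You can verify this yourself right after creation:

kubectl get nodes
# No resources found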

[Screenshot: no nodes shown in the AWS Console]
[Screenshot: kubectl also reports no nodes]

Step 4: Create your First deployment

Below is a simple YAML configuration for a Kubernetes Deployment. It creates two nginx Pods running the latest nginx image and exposes container port 80 for HTTP traffic:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    env: sandbox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        env: sandbox
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

Then apply the deployment:

kubectl apply -f deployment.yaml
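
The apply itself returns immediately, but the Pods will sit in Pending until a node is available (output below is illustrative; the Pod name suffix will differ):

kubectl get pods
# NAME                               READY   STATUS    RESTARTS   AGE
# nginx-deployment-xxxxxxxxxx-xxxxx  0/1     Pending   0          5s
# nginx-deployment-xxxxxxxxxx-xxxxx  0/1     Pending   0          5s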

Since your cluster currently has no nodes, you might notice a slight delay when deploying the nginx Pods. This is expected behavior when using EKS Auto Mode.

Behind the scenes, AWS leverages Karpenter, a powerful open-source node provisioning and autoscaling tool, to provision the right EC2 instances to meet the specific resource requirements of your Pods.

Here’s what happens:

  1. Karpenter Analyzes the Deployment: It evaluates the resource requests (CPU, memory, etc.) defined in your Pod specifications.
  2. Provisioning EC2 Instances: Based on the requirements, Karpenter dynamically provisions the most suitable EC2 instance type, ensuring an optimal balance between performance and cost.
  3. Node Initialization: The provisioned instance is then initialized and added to the cluster, allowing your Pods to be scheduled.

This process ensures that the infrastructure adapts precisely to your workloads, but it might take a few moments for the nodes to be ready, especially if this is the first workload being deployed.

While the provisioning might feel slightly slow initially, it ultimately ensures efficient resource utilization and cost savings.
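
A convenient way to watch the whole sequence is from a second terminal. The first commands are standard kubectl; the last one assumes Auto Mode exposes Karpenter-style NodePool resources (including the built-in general-purpose pool), which is worth verifying on your cluster:

# Watch the Pods move from Pending to Running as capacity arrives
kubectl get pods -w

# Watch the node appear, then see which node each Pod landed on
kubectl get nodes -w
kubectl get pods -o wide

# Inspect the Auto Mode node pools (assumes Karpenter CRDs are exposed)
kubectl get nodepools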

[Screenshot: after 47 seconds the first Pod is running]

[Screenshot: a node has been created automatically]

Your application now runs on the EKS Auto Mode cluster without any manual node or scaling configuration.

Why Choose EKS Auto Mode?

EKS Auto Mode is perfect if you:

  • Want to focus on application development rather than infrastructure management.
  • Need a cost-optimized, automatically scalable cluster.
  • Run standard workloads that don’t require highly customized configurations.

While it’s not yet ideal for every use case (e.g., highly specialized workloads), it’s a game-changer for simplifying Kubernetes on AWS.

Conclusion

EKS Auto Mode eliminates much of the complexity of managing Kubernetes clusters, enabling you to focus on building great applications. With Terraform, setting up an Auto Mode cluster becomes even easier, giving you reproducible and version-controlled infrastructure.

I’ve worked with Kubernetes and EKS for a while, but I must admit — setting up an EKS cluster with Auto Mode was by far the quickest and easiest experience I’ve ever had.

The combination of automated node management and the simplicity of Terraform configurations made the entire process seamless. No more worrying about provisioning or scaling nodes manually — AWS handles all of that for you behind the scenes.

Have you tried EKS Auto Mode yet? Share your experiences and thoughts in the comments.

Sources:

Terraform module for EKS : https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/eks-auto-mode/outputs.tf

Terraform module for VPC : https://github.com/terraform-aws-modules/terraform-aws-vpc

— — —

I’m Hervé-Gaël KOUAMO, Founder and CTO at HK-TECH, a French tech company specializing in designing, building, and optimizing applications. We also assist businesses throughout their cloud migration journey, ensuring seamless transitions and maximizing their digital potential.

I published my first tech book — naturally, on Kubernetes (after my first two novels). You can find it here: https://amzn.eu/d/4R3gf5j
