Stop using EBS as Persistent Volume for EKS pods, use EFS instead — (Feb 2023)

Herve Khg
Feb 8, 2023


I am writing this post to explain my journey of replacing all the EBS volumes attached to the pods of our EKS clusters with EFS. I will explain why I decided to make this move after using EBS as a PV for 3 years, and how to do it without downtime and without losing any data.

As technical lead of HK-TECH, I'm in charge of managing the clusters that host the applications we design, develop, and maintain for our clients. We chose a microservice architecture running on EKS. Currently, we maintain more than 200 applications on EKS.

To read and understand this article, you will need general knowledge of Terraform and kubectl. I will not describe the steps for creating an EKS cluster.

Why did I decide to replace EBS with EFS?

  • The main reason: it is impossible to mount an EBS volume on multiple nodes, because EBS does not support ReadWriteMany. We were forced to use node affinity and labels to keep the pods of an application on the same node, so we could not spread an application's load across several nodes.
  • It was not possible to fully automate CI/CD for application updates: we regularly had to delete a Deployment by hand to force Kubernetes to update the pod image, which slowed down the release process.
  • Sometimes a node didn't have enough resources to host a new pod, which made scaling the number of pods complicated.
  • Since some node labels were set through the command line, every node update caused incidents, because AWS removes those labels during the upgrade (in fact, it replaces the nodes).
  • Managing the lifecycle and backups of each EBS volume took time, even though we use Velero. With EFS, the backup and replication process is much easier.

Step 1 — Create the EFS with Terraform

The Terraform code below creates the EFS file system that will be used as a PV for the Kubernetes pods. I attach a security group to it that allows the Kubernetes network to access the EFS on port 2049 (NFS).

As the Kubernetes pods run across multiple availability zones, I create an EFS mount target in each of those AZs, to ensure that every pod in the cluster can access it.
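Here is a minimal sketch of what that Terraform can look like. The resource names and the vpc_id, vpc_cidr, and private_subnet_ids variables are assumptions; adapt them to your own network layout.

```hcl
# Sketch — names, tags, and variables are assumptions, not the article's exact code.
resource "aws_efs_file_system" "eks" {
  creation_token = "hktech-eks-efs"
  encrypted      = true

  tags = {
    Name = "hktech-eks-efs"
  }
}

# Allow the Kubernetes network to reach EFS on port 2049 (NFS)
resource "aws_security_group" "efs" {
  name   = "hktech-efs-sg"
  vpc_id = var.vpc_id # assumed variable

  ingress {
    description = "NFS from the Kubernetes network"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr] # assumed variable
  }
}

# One mount target per AZ that hosts cluster nodes
resource "aws_efs_mount_target" "eks" {
  for_each        = toset(var.private_subnet_ids) # assumed variable
  file_system_id  = aws_efs_file_system.eks.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]
}

output "efs_id" {
  value = aws_efs_file_system.eks.id
}
```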

Keep the efs_id output value; you will need it later. In our case the ID was fs-51500hktech2023.

Step 2 — Set up the EFS CSI driver in your Kubernetes cluster

According to Amazon, the EFS CSI driver is a component installed inside the EKS cluster that provides a CSI interface, allowing Kubernetes clusters running on AWS to manage the lifecycle of Amazon EFS file systems.

In other words, it is the mandatory component that translates Kubernetes instructions and sends them to AWS to manage EFS.

The documentation AWS provides for installing and configuring the EFS CSI driver is complete and works well. I recommend following it instead of digging up incomplete docs on Google. I will not paste the same code AWS provides; follow the official documentation directly.

Step 3 — Add a missing policy to the instance profile role of the nodes

This step fixes a policy that is missing from the AWS documentation. If you don't add this policy to your cluster's nodes, the pods will not be able to resolve the EFS DNS name, making it impossible to mount. It took me many hours to identify this issue, which is why I include it here.

As you can see in the sketch below, the policy is attached to the node role.
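The article does not show the exact policy document, so here is a hedged sketch that grants the EFS actions the CSI driver needs from the nodes; the aws_iam_role.eks_node reference and the policy name are assumptions.

```hcl
# Sketch — the node role reference and the policy name are assumptions.
resource "aws_iam_role_policy" "efs_csi_node" {
  name = "efs-csi-node-access"
  role = aws_iam_role.eks_node.id # assumed node role resource

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "elasticfilesystem:DescribeFileSystems",
        "elasticfilesystem:DescribeMountTargets",
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "ec2:DescribeAvailabilityZones"
      ]
      Resource = "*"
    }]
  })
}
```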

Step 4 — Deploy the Terraform changes

I probably don't need to spell this out: when you deploy your Terraform changes, execute the usual init, plan, and apply commands:
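```sh
terraform init
terraform plan
terraform apply
```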

If you have completed all the previous steps, you've done half of the job. The remaining steps concern only Kubernetes. You will need the EFS ID created in Step 1: fs-51500hktech2023.

Step 5 — Create a Kubernetes StorageClass and PersistentVolumeClaim (dynamic provisioning)

A StorageClass in Kubernetes is an object that describes the parameters for dynamically provisioning Persistent Volumes (PVs). It defines the available storage options, such as performance and durability, and maps to a specific underlying storage provider (in our case, AWS EFS). The StorageClass is used to dynamically provision PVs, and PersistentVolumeClaims consume the dynamically provisioned PVs.

Below is the YAML file that creates the Kubernetes StorageClass and PersistentVolumeClaim.
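Here is a hedged sketch of it. The StorageClass name efs-sc, the directoryPerms value, and the 5Gi request are assumptions; the fileSystemId is the one from Step 1.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # dynamic provisioning through EFS access points
  fileSystemId: fs-51500hktech2023  # the EFS ID from Step 1
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany                 # the key benefit over EBS
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                  # required field; EFS is elastic and does not enforce it
```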

The accessModes: ReadWriteMany setting is the biggest benefit of using EFS instead of EBS, which doesn't support that mode. This simple instruction is precious: it means multiple nodes can mount the PV associated with that PVC at the same time, so pods on different nodes can all mount the volume simultaneously. That's not possible with EBS as a PV.

The file is named pv-efs.yaml. To create the resources in Kubernetes, execute this command:
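```sh
kubectl apply -f pv-efs.yaml
```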

Create the StorageClass and PersistentVolumeClaim

Check that the PVC and the StorageClass are created:
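```sh
kubectl get pvc
kubectl get storageclass
```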

Step 6 — Create a Kubernetes Deployment that mounts the EFS volume

The code below defines a Kubernetes Deployment that creates 3 replicas of a pod called “efs-pod”. Each pod runs a container called “efs-container” using an Nginx image. The container also has a mounted volume called “efs”, associated with the persistent volume claim “efs-pvc” created previously. The mount path for this volume is “/mnt/efs”.
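A minimal sketch of that Deployment — the Deployment name and labels are assumptions; the container name, image, volume name, PVC, and mount path are the ones described above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-pod   # assumed Deployment name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: efs-pod
  template:
    metadata:
      labels:
        app: efs-pod
    spec:
      containers:
        - name: efs-container
          image: nginx
          volumeMounts:
            - name: efs
              mountPath: /mnt/efs
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs-pvc   # the PVC created in Step 5
```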

Check that all the pods are running. They all share the same volume mounted at /mnt/efs:
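```sh
kubectl get pods
```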

Step 7 — Switch an existing volume from EBS to EFS without losing data

Let's say we have an EBS volume created with an EBS StorageClass and PVC, and mounted in a pod's Deployment, as in the configuration below.
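A minimal sketch of such a configuration — the provisioner, volume type, Deployment name hktech-app, image, and full mount path are assumptions (the article truncates the path as ../static/img):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hktech-ebs-sc
provisioner: ebs.csi.aws.com   # assumed; could also be the legacy kubernetes.io/aws-ebs
parameters:
  type: gp3                    # assumed volume type
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hktech-ebs-pvc
spec:
  accessModes:
    - ReadWriteOnce            # EBS only supports single-node access
  storageClassName: hktech-ebs-sc
  resources:
    requests:
      storage: 20Gi            # assumed size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hktech-app             # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hktech-app
  template:
    metadata:
      labels:
        app: hktech-app
    spec:
      containers:
        - name: app            # assumed container name
          image: nginx         # assumed image
          volumeMounts:
            - name: images-ebs
              mountPath: /app/static/img   # assumed full path
      volumes:
        - name: images-ebs
          persistentVolumeClaim:
            claimName: hktech-ebs-pvc
```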

Initial YAML with EBS

How did I switch from EBS to EFS without losing data?

Step 7.1: Create a new StorageClass and PersistentVolumeClaim for the EFS, as below.
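A hedged sketch of pvc-efs.yaml — the StorageClass name hktech-efs-sc is an assumption; the PVC name hktech-efs-pvc is the one referenced below.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hktech-efs-sc          # assumed name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-51500hktech2023
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hktech-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: hktech-efs-sc
  resources:
    requests:
      storage: 5Gi
```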

pvc-efs.yaml — Setup for creating PVC for EFS

Step 7.2: Update the Deployment — mount a new volume in the pod that points to the new EFS PVC

Pay attention to the volumeMounts section: we mount a new volume named images-efs, declared in the volumes section and backed by the PVC (hktech-efs-pvc) created above.
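A sketch of this transition state, with both volumes mounted side by side — the full paths are assumptions (the article truncates them as ../static/img and ../static/img-efs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hktech-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hktech-app
  template:
    metadata:
      labels:
        app: hktech-app
    spec:
      containers:
        - name: app
          image: nginx                         # assumed image
          volumeMounts:
            - name: images-ebs
              mountPath: /app/static/img       # existing EBS mount (assumed full path)
            - name: images-efs
              mountPath: /app/static/img-efs   # new EFS mount (assumed full path)
      volumes:
        - name: images-ebs
          persistentVolumeClaim:
            claimName: hktech-ebs-pvc
        - name: images-efs
          persistentVolumeClaim:
            claimName: hktech-efs-pvc
```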

deployment.yaml — Deploy the pod with both volumes mounted

Then connect to the pod with kubectl and copy the data from the images-ebs mountPath (../static/img) to the images-efs mountPath (../static/img-efs).
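For example — the Deployment name and the full paths are assumptions:

```sh
# Copy everything, including hidden files, from the EBS mount to the EFS mount
kubectl exec -it deploy/hktech-app -- sh -c "cp -a /app/static/img/. /app/static/img-efs/"
```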

At this stage, your data is present in the EFS :-). Now you have to mount the EFS, with the data, at the right path (../static/img) instead of EBS.

To do that, you just have to replace the volumeMounts name in the deployment file with the EFS one (from images-ebs to images-efs), as below.
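An excerpt of the change (the full path is again an assumption):

```yaml
          volumeMounts:
            - name: images-efs             # was images-ebs
              mountPath: /app/static/img   # assumed full path
```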

deployment.yaml — Mount the EFS volume at the application path

Update the deployment with kubectl apply -f deployment.yaml.

Verify that the pod is running and that the images are visible on the application side.

Step 7.3: Clean up the EBS PVC and StorageClass

If all the previous steps are OK, you can now clean up your deployment by removing the unused code and volumes linked to EBS:

  • On deployment.yaml — remove the now-useless mountPath to ../static/img-efs and the images-ebs volume.
  • Remove the StorageClass (hktech-ebs-sc) and PersistentVolumeClaim (hktech-ebs-pvc) linked to EBS (described at the start of Step 7); see the example commands below.
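For example — note that, depending on the reclaim policy, deleting the PVC may delete the underlying EBS volume, so make sure your data is safe on EFS first:

```sh
kubectl delete pvc hktech-ebs-pvc
kubectl delete storageclass hktech-ebs-sc
```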

And Voilà

I’m Hervé-Gaël KOUAMO, Founder and CTO of HK-TECH, a French tech company specializing in designing, building, and optimizing applications. Our mission is to assist businesses throughout their cloud migration journey, ensuring seamless transitions and maximizing their digital potential. You can follow me on LinkedIn: https://www.linkedin.com/in/herv%C3%A9-ga%C3%ABl-kouamo-157633197/
