Introduction

In modern cloud computing, automating infrastructure provisioning is essential for efficiency and scalability. This guide provides a comprehensive tutorial on developing custom Terraform modules to deploy an Amazon Elastic Kubernetes Service (EKS) cluster from scratch. By leveraging Terraform, infrastructure as code (IaC) practices can be implemented to streamline cluster deployment and management.

Prerequisites

Before beginning, ensure the following tools and configurations are in place:

  • AWS CLI installed and configured
  • Terraform installed on the local machine
  • AWS IAM credentials with necessary permissions
  • Basic understanding of Terraform and Kubernetes

Step 1: Setting Up the Terraform Directory Structure

Organizing Terraform files into modules enhances reusability and maintainability. Create the following directory structure:

project-directory/
    modules/
        vpc/
        eks/
        node_group/
    main.tf
    variables.tf
    outputs.tf
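One way to create this layout from a shell (the loop simply creates the three standard files in each module directory):

```shell
# Run from inside project-directory/
mkdir -p modules/vpc modules/eks modules/node_group
touch main.tf variables.tf outputs.tf

# Each module gets its own main.tf, variables.tf, and outputs.tf
for m in vpc eks node_group; do
  touch "modules/$m/main.tf" "modules/$m/variables.tf" "modules/$m/outputs.tf"
done
```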

Step 2: Creating the VPC Module

The first component is the Virtual Private Cloud (VPC) module, which provides networking resources for the EKS cluster.

  1. Inside modules/vpc/, create main.tf, variables.tf, and outputs.tf files.
  2. Define the VPC, subnets, and internet gateway in main.tf.
  3. Export necessary values such as VPC ID and subnet IDs in outputs.tf.

Example main.tf:

resource "aws_vpc" "eks_vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
}
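The example above defines only the VPC itself. A sketch of the subnets, internet gateway, and outputs.tf exports mentioned in steps 2 and 3 might look like the following; the variables var.subnet_cidrs and var.azs are illustrative assumptions, not part of the original configuration:

```hcl
# modules/vpc/main.tf (continued) -- illustrative sketch
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.eks_vpc.id
}

resource "aws_subnet" "public" {
  count                   = length(var.subnet_cidrs)   # assumed list variable
  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = var.subnet_cidrs[count.index]
  availability_zone       = var.azs[count.index]       # assumed list variable
  map_public_ip_on_launch = true
}

# modules/vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.eks_vpc.id
}

output "subnet_ids" {
  value = aws_subnet.public[*].id
}
```

Exporting subnet_ids here is what allows the root module to pass networking details into the EKS module later.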

Step 3: Creating the EKS Module

The EKS module provisions the cluster and integrates it with the VPC module.

  1. Inside modules/eks/, create main.tf, variables.tf, and outputs.tf.
  2. Utilize the aws_eks_cluster resource to create the cluster.
  3. Reference the VPC module’s outputs to link networking components.

Example main.tf:

resource "aws_eks_cluster" "eks" {
  name     = var.cluster_name
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
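The role_arn above must point to an IAM role that the EKS control plane can assume. If you prefer to manage that role in Terraform instead of passing in an existing ARN, a minimal sketch (the role name is an assumed placeholder) is:

```hcl
# Illustrative IAM role for the EKS control plane
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"  # assumed name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
```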

Step 4: Creating the Node Group Module

The node group module provisions worker nodes that form the compute capacity for the cluster.

  1. Inside modules/node_group/, create main.tf, variables.tf, and outputs.tf.
  2. Define an aws_eks_node_group resource; EKS provisions and manages the underlying Auto Scaling group and EC2 worker instances for you.
  3. Associate the node group with the EKS cluster so the nodes can register and communicate with the control plane.

Example main.tf:

resource "aws_eks_node_group" "node_group" {
  cluster_name    = var.cluster_name
  node_group_name = "default"          # node_group_name is required
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids

  scaling_config {                     # scaling_config is also required
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
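The node_role_arn above must reference an IAM role the worker instances can assume, with the standard EKS worker policies attached. A sketch of that role (the role name is an assumed placeholder) might be:

```hcl
# Illustrative IAM role for the worker nodes (node_role_arn)
resource "aws_iam_role" "node" {
  name = "eks-node-role"  # assumed name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "node_policies" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])
  role       = aws_iam_role.node.name
  policy_arn = each.value
}
```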

Step 5: Integrating Modules in the Root Module

Finally, use the created modules in the main Terraform configuration:

module "vpc" {
  source   = "./modules/vpc"
  vpc_cidr = "10.0.0.0/16"
}

module "eks" {
  source           = "./modules/eks"
  cluster_name     = "my-eks-cluster"
  subnet_ids       = module.vpc.subnet_ids
  cluster_role_arn = "arn:aws:iam::123456789012:role/EKSRole"
}
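The root configuration above wires up only the VPC and EKS modules; the node group module from Step 4 can be added the same way. In this sketch, the node_role_arn value is a placeholder and module.eks.cluster_name assumes the EKS module exports the cluster name as an output:

```hcl
module "node_group" {
  source        = "./modules/node_group"
  cluster_name  = module.eks.cluster_name  # assumes an output in modules/eks/outputs.tf
  node_role_arn = "arn:aws:iam::123456789012:role/EKSNodeRole"  # placeholder ARN
  subnet_ids    = module.vpc.subnet_ids
}
```

Referencing module.eks.cluster_name rather than repeating the literal name lets Terraform infer the dependency order: the cluster is created before the node group.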

Step 6: Deploying the Infrastructure

Execute the following commands to provision the EKS cluster:

terraform init
terraform plan
terraform apply -auto-approve

Step 7: Verifying the Deployment

After the deployment, confirm the cluster is running:

aws eks --region <region> describe-cluster --name my-eks-cluster --query cluster.status

Also, configure kubectl to interact with the cluster:

aws eks --region <region> update-kubeconfig --name my-eks-cluster

kubectl get nodes

Conclusion

By following this guide, Terraform modules can be effectively utilized to deploy an EKS cluster from scratch. Modularizing the infrastructure improves maintainability, scalability, and automation. Implementing infrastructure as code principles with Terraform ensures a reproducible and efficient Kubernetes environment.