This guide will explore deploying a Kubernetes cluster on AWS EC2 instances using Terraform, a robust Infrastructure as Code (IaC) tool. Terraform enables scalable, manageable, and repeatable infrastructure setups, making it ideal for orchestrating Kubernetes clusters in a cloud environment like AWS. This post walks you through each step of configuring, creating, and managing a Kubernetes cluster with Terraform on AWS.

1. Introduction to Kubernetes Deployment with Terraform on AWS

Terraform offers a structured approach to automating Kubernetes deployments on AWS. This guide covers setting up a cluster across multiple EC2 instances and using Terraform to streamline the process, ensuring a scalable, highly available cluster configuration.

2. Prerequisites for Setting Up AWS and Terraform

Before beginning, make sure you have:

  • An AWS account with administrative access.
  • The AWS CLI installed and configured for your account.
  • Terraform installed (version 1.0 or above).
  • Basic familiarity with Kubernetes and Terraform.

3. Creating an IAM User with Necessary Permissions

For security, create a dedicated IAM user for Terraform operations with the necessary permissions to launch EC2 instances, manage VPCs, and create IAM roles for Kubernetes.

  • Navigate to IAM > Users in the AWS Management Console.
  • Click Add User and select Programmatic Access.
  • Attach policies such as AmazonEC2FullAccess, IAMFullAccess, and AmazonVPCFullAccess.
  • Download the access credentials, as you’ll need them for Terraform configuration.
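With the credentials downloaded, the Terraform AWS provider can be configured. A minimal sketch, assuming the keys are stored in an AWS CLI named profile called terraform (the profile name and region are placeholders, not requirements):

```hcl
# main.tf (provider setup)
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"  # choose the region for your cluster
  profile = "terraform"  # AWS CLI profile holding the IAM user's credentials
}
```

Using a named profile keeps the access keys out of the Terraform files and out of version control.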

4. Generating an SSH Key Pair for Secure Access

To enable SSH access to the EC2 instances, generate an SSH key pair:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/terraform_k8s

Save the private key securely, as it will be required to access the Kubernetes nodes.
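The public half of the key pair can be registered with AWS directly from Terraform, so instances can reference it by name. A sketch, assuming the key generated above (the resource and key names are illustrative):

```hcl
resource "aws_key_pair" "k8s" {
  key_name   = "terraform-k8s"
  public_key = file("~/.ssh/terraform_k8s.pub")
}
```

The key_name can then be fed to the instance resources, e.g. through a var.ssh_key_name variable.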

5. Overview of the Kubernetes Cluster Architecture

Our setup involves:

  • 1 Master Node: Handles the Kubernetes API and manages the cluster.
  • 2+ Worker Nodes: Run the containerized applications.
  • VPC and Subnets: Isolated network setup for the cluster.
  • Security Groups: Control access to the nodes.

6. Initializing the Master Node and Worker Nodes with Terraform

Define the EC2 instance types, storage, and networking details for both master and worker nodes in a Terraform configuration file. Organize the resources by defining the following:

  • VPC and Subnets: For network isolation.
  • Security Groups: For securing node access.
  • EC2 Instances: For master and worker nodes, specify instance types and AMI IDs.
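The networking pieces listed above might be sketched as follows. The CIDR ranges, resource names, and open ports are illustrative assumptions, not a hardened configuration:

```hcl
resource "aws_vpc" "k8s" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.k8s.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_security_group" "master_sg" {
  name   = "k8s-master-sg"
  vpc_id = aws_vpc.k8s.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # restrict to your own IP in practice
  }

  ingress {
    description = "Kubernetes API server"
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]  # allow workers inside the VPC to reach the API
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

A second, analogous security group for the worker nodes would open the ports your workloads and the kubelet need.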

7. Automating Kubernetes Cluster Setup with Terraform Scripts

Use Terraform scripts to automate the deployment:

  • main.tf: Configures the provider and initializes AWS resources.
  • variables.tf: Defines configurable variables (e.g., instance type, node count).
  • outputs.tf: Captures outputs such as IP addresses and instance IDs for reference.

Example of a simple EC2 instance configuration:

resource "aws_instance" "master" {
  ami           = var.master_ami
  instance_type = var.master_instance_type
  key_name      = var.ssh_key_name

  # In a custom VPC, attach security groups by ID; the security_groups
  # (by name) argument only works for EC2-Classic / default-VPC instances.
  vpc_security_group_ids = [aws_security_group.master_sg.id]

  tags = {
    Name = "Kubernetes-Master"
  }
}
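The variables referenced in the block above could be declared in variables.tf along these lines (the default instance type is an assumption; pick an AMI ID valid in your region):

```hcl
variable "master_ami" {
  type        = string
  description = "AMI ID for the master node (e.g. an Ubuntu LTS image in your region)"
}

variable "master_instance_type" {
  type    = string
  default = "t3.medium"  # kubeadm control planes need at least 2 vCPUs / 2 GiB RAM
}

variable "ssh_key_name" {
  type        = string
  description = "Name of the EC2 key pair used for SSH access"
}
```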

8. Adjusting Worker Node Count and Other Variables

Modify variables.tf to define the number of worker nodes and other settings. This allows easy scaling by adjusting the worker node count and reapplying the configuration.

variable "worker_count" {
  type    = number
  default = 2
}
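The worker instances can then consume this variable through count, so scaling the cluster is a matter of changing one number and re-applying. A sketch, assuming worker-side variables and a worker security group analogous to the master's (the names are illustrative):

```hcl
resource "aws_instance" "worker" {
  count         = var.worker_count
  ami           = var.worker_ami
  instance_type = var.worker_instance_type
  key_name      = var.ssh_key_name

  vpc_security_group_ids = [aws_security_group.worker_sg.id]

  tags = {
    Name = "Kubernetes-Worker-${count.index + 1}"
  }
}
```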

9. Applying Terraform Configuration for Cluster Creation

To create the infrastructure, run the following Terraform commands:

  1. Initialize Terraform:
    terraform init
  2. Plan the Deployment:
    terraform plan -out=tfplan
  3. Apply the Configuration:
    terraform apply tfplan

These steps provision the resources and configure the master and worker nodes.
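To surface the addresses needed in the next step, outputs.tf might look like this, assuming the master instance defined earlier and worker instances created with count (the worker resource name is an assumption):

```hcl
output "master_public_ip" {
  value = aws_instance.master.public_ip
}

output "worker_public_ips" {
  value = aws_instance.worker[*].public_ip
}
```

After terraform apply completes, terraform output master_public_ip prints the IP to use for SSH.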

10. Accessing and Managing the Kubernetes Cluster

Once deployed, access the master node via SSH and set up kubectl to manage the cluster. The kubeconfig file will allow you to control and monitor your cluster from the master node or a local workstation.

ssh -i ~/.ssh/terraform_k8s ec2-user@<master_node_ip>

kubectl get nodes

11. Destroying the Kubernetes Cluster with Terraform

To delete the cluster, use the terraform destroy command. Terraform removes all resources created during the deployment, keeping your AWS account clean and free of unused resources.

terraform destroy

Conclusion: Leveraging Terraform for Efficient Kubernetes Infrastructure Management

Using Terraform for Kubernetes on AWS simplifies cluster management, enhances scalability, and reduces manual configuration. Terraform’s IaC approach ensures the entire infrastructure is versioned, repeatable, and easily adjustable.
