Introduction to K3s: A Beginner-Friendly Kubernetes Distribution

K3s is a lightweight Kubernetes distribution designed for easy use and streamlined performance. Ideal for IoT and edge computing, K3s simplifies the deployment of Kubernetes clusters without sacrificing functionality. This guide will walk you through deploying a K3s cluster on AWS, offering a practical approach to mastering Kubernetes.

Why K3s for Kubernetes Cluster Deployment?

Streamlined Setup and Production Readiness

K3s is a good fit for both beginners and seasoned developers thanks to its simplified installation process and minimal resource requirements. It is designed to run production workloads efficiently, providing a fully certified Kubernetes experience without the overhead of a traditional setup.

Prerequisites: Preparing Your Application and AWS Environment

Before diving into K3s deployment, ensure you have a Dockerized application ready and an AWS environment set up. Here’s a quick rundown of what you need:

Dockerizing Your Application

If you’re new to Docker, refer to Docker’s official documentation for a comprehensive beginner’s guide. Dockerizing your application involves creating a Dockerfile, building your image, and pushing it to a Docker registry.
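
For reference, a typical build-and-push flow looks like the sketch below; the image name your-dockerhub-user/my-app and the 1.0 tag are placeholders for your own registry, repository, and version.

    # Build the image from the Dockerfile in the current directory
    docker build -t your-dockerhub-user/my-app:1.0 .

    # Authenticate to your registry (Docker Hub shown here) and push the image
    docker login
    docker push your-dockerhub-user/my-app:1.0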

Setting up AWS Infrastructure: VPC, Subnets, and EC2 Instances

  1. Create a VPC: Configure a Virtual Private Cloud (VPC) with public and private subnets.
  2. Launch EC2 Instances: Provision EC2 instances for your K3s master and worker nodes; a minimal AWS CLI sketch of both steps follows this list.
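
If you prefer the AWS CLI over the console, the sketch below covers both steps in minimal form. All IDs, the AMI, the instance type, and the key name are placeholders, and a complete setup would also need an internet gateway, route tables, and security groups (not shown here).

    # Create a VPC and one public subnet (example CIDR ranges)
    aws ec2 create-vpc --cidr-block 10.0.0.0/16
    aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24

    # Launch three identical instances: one master and two workers
    aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.medium \
        --key-name your-key --subnet-id subnet-xxxxxxxx --count 3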

Step-by-Step Guide: Configuring the K3s Master Node

Easy Installation with K3s Script

  1. SSH into the Master Node:

    ssh -i /path/to/key.pem ec2-user@your-master-node-public-ip
  2. Install K3s:

    curl -sfL https://get.k3s.io | sh -
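
The install script registers K3s as a systemd service on distributions such as Amazon Linux 2, so a quick status check confirms the server is running before you continue:

    # Confirm the K3s server service is active
    sudo systemctl status k3s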

Verifying Master Node Status with Kubectl

  1. Retrieve K3s Configuration:

    sudo cat /etc/rancher/k3s/k3s.yaml
  2. Set Up Kubectl:

    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get nodes
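
If you would rather run kubectl from your own workstation, one common approach, sketched below, is to copy the kubeconfig down and point it at the master's public IP. This assumes the file is readable by your SSH user (for example, when K3s was installed with --write-kubeconfig-mode 644; otherwise copy it via sudo first), that the paths and IP placeholders are replaced with your own values, and that port 6443 is open to your workstation.

    # Copy the kubeconfig from the master node to your workstation
    scp -i /path/to/key.pem ec2-user@your-master-node-public-ip:/etc/rancher/k3s/k3s.yaml ~/k3s.yaml

    # Point the kubeconfig at the master's public IP instead of localhost
    sed -i 's/127.0.0.1/your-master-node-public-ip/' ~/k3s.yaml
    export KUBECONFIG=~/k3s.yaml
    kubectl get nodes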

Integrating Worker Nodes into Your K3s Cluster

Retrieving the Master Node Token

  1. Retrieve Token:

    sudo cat /var/lib/rancher/k3s/server/node-token

Adding Worker Nodes with K3s Command

  1. SSH into Each Worker Node:

    ssh -i /path/to/key.pem ec2-user@your-worker-node-public-ip

  2. Install K3s Agent:

    curl -sfL https://get.k3s.io | K3S_URL=https://your-master-node-private-ip:6443 K3S_TOKEN=your-master-node-token sh -
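
After the agent install finishes on each worker, you can confirm from the master node (or wherever your kubeconfig lives) that the workers have joined:

    # All master and worker nodes should appear with STATUS Ready
    kubectl get nodes -o wide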

Understanding Kubernetes Deployment and Service Resources

A Deployment declares the desired state of your application, such as which container image to run and how many replicas to keep running, while a Service gives those replicas a stable network endpoint.

Deploying Your Application: Desired State and Replicas

  1. Create a Deployment (save the manifest below as deployment.yaml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: your-docker-image
            ports:
            - containerPort: 80

  2. Apply the Deployment:

    kubectl apply -f deployment.yaml
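
To confirm the Deployment is healthy, you can watch the rollout and list the pods it manages; the commands below use the name and label from the manifest above.

    # Wait for all three replicas to become available
    kubectl rollout status deployment/my-app

    # List the pods created by the Deployment
    kubectl get pods -l app=my-app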

Exposing Your Application: Choosing the Right Service Type

A ClusterIP Service is reachable only from inside the cluster, a NodePort exposes a fixed port on every node, and a LoadBalancer exposes the application externally. On K3s, LoadBalancer Services are handled by the built-in ServiceLB, which publishes the Service port on the nodes themselves.

  1. Create a Service (save the manifest below as service.yaml):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service
    spec:
      selector:
        app: my-app
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
      type: LoadBalancer

  2. Apply the Service:

    kubectl apply -f service.yaml
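
You can then check that the Service is up and note the address it exposes; with K3s's built-in ServiceLB, the node IPs typically appear under EXTERNAL-IP.

    # Inspect the Service and the port it exposes
    kubectl get service my-app-service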

Load Balancing User Traffic with NGINX

Creating an NGINX EC2 Instance

  1. Launch an EC2 Instance: Set up a new EC2 instance as your NGINX load balancer.
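
Whichever way you launch it, the instance's security group must allow inbound HTTP from your users. A minimal AWS CLI sketch is below, with the security group ID as a placeholder.

    # Allow inbound HTTP (port 80) from anywhere to the NGINX instance
    aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port 80 --cidr 0.0.0.0/0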

Configuring NGINX for Load Balancing

  1. Install NGINX (the command below applies to Amazon Linux 2):

    sudo amazon-linux-extras install nginx1

  2. Configure NGINX:

    sudo nano /etc/nginx/nginx.conf

Add the following configuration inside the http block, replacing the placeholder addresses with your worker nodes' private IPs:

    upstream k3s_cluster {
        server your-worker-node-private-ip-1;
        server your-worker-node-private-ip-2;
        server your-worker-node-private-ip-3;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://k3s_cluster;
        }
    }

  3. Restart NGINX:

    sudo systemctl restart nginx
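
Before restarting, it is worth validating the configuration syntax, and afterwards sending a test request through the load balancer; the curl target is a placeholder for your NGINX instance's public IP or DNS name.

    # Validate the NGINX configuration syntax
    sudo nginx -t

    # Send a test request through the load balancer
    curl http://your-nginx-public-ip/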

Conclusion and Next Steps

Recap of Your K3s Cluster Deployment

In this guide, you’ve successfully deployed a K3s cluster on AWS, integrated worker nodes, and set up load balancing with NGINX. This foundational knowledge prepares you for more advanced Kubernetes topics and real-world applications.
