
(Mini Project)

Deploy a Two-Node Kubernetes Cluster on AWS


Name:- Rudraksh Wagh Date:- 11/7/2024

Task:- Deploy a Two-Node Kubernetes Cluster on AWS.

Scenario:- Your team requires a Kubernetes cluster for deploying
containerized applications. This task involves setting up the core
infrastructure for a two-node Kubernetes cluster on AWS.
- Use Terraform for provisioning the resources.
- Reference the official Kubernetes documentation for hardware
recommendations for control plane and worker nodes.
- Use Ubuntu 20.04 as the operating system on your instances.
- Make sure the necessary port numbers are opened in the Security Group
for the Kubernetes cluster to work.
- Kubernetes version 1.30 should be deployed.

Solution:-
Step 1: Install Terraform on Windows:-

1. Download Terraform:-
 Go to https://developer.hashicorp.com/terraform/install.
 Download the appropriate version for Windows.
2. Install Terraform:-
 Extract the downloaded zip file.
 Move the terraform.exe file to a directory included in your
system's PATH. You can add a new directory (e.g., C:\Terraform)
to your PATH if it’s not already included:
 Right-click on This PC or Computer on your desktop or in File
Explorer.
 Select Properties.
 Click on Advanced system settings.
 Click the Environment Variables button.
 In the System variables section, find the Path variable and click
Edit.
 Add the path to the directory containing terraform.exe (e.g.,
C:\Terraform).

3. Verify Installation:-
 Open Command Prompt and type terraform --version to ensure
Terraform is installed correctly.
Step 2: Configure AWS Credentials:-

1. Install AWS CLI:-


 Download and install the AWS CLI from https://aws.amazon.com/cli/.
 Verify the installation by running aws --version in Command
Prompt.

2. Create an IAM user with policies for this task:-


 Log in to the AWS console & search for “IAM”, then go to the “Users”
section & click on “Create user”.
 Give it a name & enable programmatic access so the user can be used
via the AWS CLI, SDKs, etc.
 Select the “Attach policies directly” option & attach the
“AmazonEC2FullAccess” and “AmazonVPCFullAccess” policies, then
click on Create user.
 Create an “Access key” for that user to connect with the AWS CLI.
3. Configure AWS CLI with your IAM user credentials:-
 Run “aws configure” in cmd and enter your AWS Access Key ID,
Secret Access Key, Default region name (us-east-1), and Default
output format (json).
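The same configuration can be scripted instead of answering the interactive prompts. A minimal sketch, assuming the access key pair created in the previous step (the key values below are placeholders, not real credentials):

```shell
# Configure the default AWS CLI profile non-interactively
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX        # placeholder key ID
aws configure set aws_secret_access_key 'your-secret-key-here'  # placeholder secret
aws configure set region us-east-1
aws configure set output json

# Verify the credentials actually work before running Terraform:
# this should print your account ID and the IAM user's ARN
aws sts get-caller-identity
```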

Step 3: Write Terraform Configuration:-


1. Create a directory for your Terraform configuration files:-
 Create a new directory (e.g., C:\terraform-projects\k8s-cluster).

2. Create a Terraform configuration file (main.tf) and enter the following code:-


---------------------------------------------------------------------------------------
provider "aws" {
  region = "us-east-1" # Specify your AWS region
}

variable "key_name" {
  description = "Name of the SSH key pair to use for EC2 instances"
  default     = "demo" # Replace with your SSH key pair name
}

variable "instance_type_control_plane" {
  description = "Instance type for Kubernetes control plane node"
  default     = "t2.medium" # Adjust based on Kubernetes recommendations
}

variable "instance_type_worker_node" {
  description = "Instance type for Kubernetes worker node"
  default     = "t2.medium" # Adjust based on Kubernetes recommendations
}

variable "ami" {
  description = "AMI ID for Ubuntu 20.04 LTS"
  default     = "ami-04a81a99f5ec58529" # Your specified AMI ID
}

# Create VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "main-vpc"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "main-igw"
  }
}

# Create Public Subnet
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  tags = {
    Name = "main-subnet-public"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
  tags = {
    Name = "main-public-rt"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# Security Group for Kubernetes
resource "aws_security_group" "kubernetes_sg" {
  name        = "kubernetes-sg"
  description = "Security group for Kubernetes cluster"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "SSH access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Kubernetes API Server"
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "etcd server client API"
    from_port   = 2379
    to_port     = 2380
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Kubelet API"
    from_port   = 10250
    to_port     = 10250
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "kube-scheduler"
    from_port   = 10251
    to_port     = 10251
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "kube-controller-manager"
    from_port   = 10252
    to_port     = 10252
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "NodePort Services"
    from_port   = 30000
    to_port     = 32767
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 Instances for Kubernetes Control Plane and Worker Nodes
resource "aws_instance" "control_plane" {
  ami           = var.ami
  instance_type = var.instance_type_control_plane
  key_name      = var.key_name
  # Use vpc_security_group_ids (not security_groups) when passing security
  # group IDs to an instance inside a VPC; security_groups expects group
  # names and only works in the default VPC.
  vpc_security_group_ids = [aws_security_group.kubernetes_sg.id]
  subnet_id              = aws_subnet.public.id
  tags = {
    Name = "k8s-control-plane"
  }
}

resource "aws_instance" "worker_node" {
  ami                    = var.ami
  instance_type          = var.instance_type_worker_node
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.kubernetes_sg.id]
  subnet_id              = aws_subnet.public.id
  tags = {
    Name = "k8s-worker-node"
  }
}
---------------------------------------------------------------------------------------
3. Initialize the Terraform project:-
 Navigate to your project directory in Command Prompt.
 Run “terraform init” to initialize the project and download the
necessary provider plugins.

4. Apply the Terraform configuration:-


 Run “terraform apply” to create the VPC and EC2 instances. Confirm the
action when prompted by typing “yes”.
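To avoid hunting for the instance IPs in the console later, output blocks can also be added to main.tf (optional; the output names below are my own choice, not part of the original configuration):

```hcl
output "control_plane_public_ip" {
  value = aws_instance.control_plane.public_ip
}

output "worker_node_public_ip" {
  value = aws_instance.worker_node.public_ip
}
```

After `terraform apply`, the IPs are printed at the end of the run and can be re-read at any time with `terraform output`.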
Step 4: Verify the EC2 & VPC in AWS Console:-

 Log in to your AWS Management Console.


 Navigate to the EC2 & VPC dashboard in the us-east-1 region.
 Verify that the EC2 instances & VPC are created.
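The same check can be done from the AWS CLI. A sketch, assuming the Name tags from main.tf above:

```shell
# List both cluster instances by their Name tags and show state + public IP
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=k8s-control-plane,k8s-worker-node" \
  --query "Reservations[].Instances[].[Tags[?Key=='Name']|[0].Value,State.Name,PublicIpAddress]" \
  --output table
```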
Step 5: Connect to the k8s-controlplane-node & copy the below script
into a file called “master.sh” & execute the file for k8s master node
installation:-
---------------------------------------------------------------------------------------
#!/bin/bash
# master.sh - Kubernetes v1.30 control plane setup on Ubuntu 20.04

# Disable swap (required by the kubelet); swapoff takes effect now,
# the fstab edit makes it persist across reboots
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load the kernel modules needed by the container runtime and CNI
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup; params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot, then verify them
sudo sysctl --system
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# apt-transport-https may be a dummy package; if so, you can skip it
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the directory /etc/apt/keyrings does not exist, create it first:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Switch containerd to the systemd cgroup driver (expected by kubeadm on Ubuntu)
sudo sh -c "containerd config default > /etc/containerd/config.toml"
sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd.service
sudo systemctl enable kubelet.service

# Initialize the control plane
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=10.10.0.0/16

# Set up kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on. Install only ONE CNI plugin; kube-router
# is used here.
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
# Alternative: Calico via the Tigera operator (do not install both)
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
---------------------------------------------------------------------------------------
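Once master.sh finishes, a quick sanity check on the control plane (these commands assume kubectl was configured as in the script above):

```shell
# The node should reach Ready once the CNI pods come up (may take a minute)
kubectl get nodes -o wide

# All kube-system pods should be Running or Completed
kubectl get pods -A
```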
Step 6: Connect to the k8s-worker-node & copy the below script into a
file called “worker.sh” & execute the file for k8s worker node
installation:-
---------------------------------------------------------------------------------------
#!/bin/bash
# worker.sh - Kubernetes v1.30 worker node setup on Ubuntu 20.04.
# Same prerequisites as the control plane, but no kubeadm init and no CNI
# install: the worker is joined to the cluster in Step 7 with kubeadm join.

# Disable swap (required by the kubelet)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load the kernel modules needed by the container runtime and CNI
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup; params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot, then verify them
sudo sysctl --system
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# apt-transport-https may be a dummy package; if so, you can skip it
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the directory /etc/apt/keyrings does not exist, create it first:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Switch containerd to the systemd cgroup driver (expected by kubeadm on Ubuntu)
sudo sh -c "containerd config default > /etc/containerd/config.toml"
sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd.service
sudo systemctl enable kubelet.service

# Note: do NOT copy admin.conf or apply CNI manifests here. Those steps
# belong only on the control plane; /etc/kubernetes/admin.conf does not
# exist on a worker until it joins the cluster.
---------------------------------------------------------------------------------------
Step 7: Now copy the kubeadm join command from the control plane (master
node) and run it on the worker node so both nodes join the same cluster:-
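If the join command printed at the end of kubeadm init has been lost, a fresh one can be generated on the control plane. The IP, token, and hash below are placeholders, not real values:

```shell
# On the control plane: print a fresh join command (valid for 24h by default)
kubeadm token create --print-join-command

# On the worker node: run the printed command with sudo, e.g.
# (placeholder values shown)
sudo kubeadm join 10.0.1.10:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-from-the-command-above>

# Back on the control plane: confirm both nodes appear and reach Ready
kubectl get nodes
```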
