5-k8s With Kind Cluster Sept 2024
------------------------------
agenda:
-------
* Intro to kubernetes
* Setup kind local cluster
* Pods
* Replicaset
* Deployment
* services
* namespace
* probes
* configMap and secret
* persistent volumes and statefulness
* HPA - Horizontal pod autoscaling
* ingress
--------------------
Intro to kubernetes
--------------------
Docker vs k8s
Docker provides OS-level abstraction.
K8s provides infrastructure abstraction on the cloud.
What is docker?
* open-source platform for packaging the app + all dependencies + runtime
* separates the app from the underlying host (server)
* OS abstraction
What is kubernetes?
Kubernetes is an open-source container orchestration system for automating
software deployment,
scaling, and management.
kubernetes setup:
https://k21academy.com/docker-kubernetes/kubernetes-installation-options/
kubernetes Arch:
-----------------
Master: also called the control plane (1 or more masters for HA)
Worker: 1 to many (up to 5000) nodes make a cluster
components of k8s:
kube-controller-manager:
* A process that continuously monitors workloads/nodes etc.
* Reconciles the desired state vs the current state of the cluster
Worker Node:
-----------
kubelet:
* An agent that runs on each node in the cluster.
  It makes sure that containers are running in a Pod.
* It creates and manages the containers inside a pod.
kube-proxy:
* kube-proxy is a network proxy that runs on each node in your cluster,
  implementing part of the Kubernetes Service concept.
Container runtime:
* A fundamental component that empowers Kubernetes to run containers effectively.
  It is responsible for managing the execution and lifecycle of containers
  within the Kubernetes environment.
verify installation:
kind version
kubectl version --output=yaml
-----------------------
Setup kind local cluster
------------------------
ls ~/.kube/config
01-cluster.yaml
----------------
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev-cluster
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30001
    hostPort: 30001
    protocol: TCP
- role: worker
- role: worker
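to create the cluster (assuming the config above is saved as 01-cluster.yaml):
kind create cluster --config 01-cluster.yaml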
to delete cluster:
kind delete cluster --name dev-cluster
What does it do?
This will create a local cluster with 3 nodes:
1 master and 2 worker nodes.
docker ps
This command will list the nodes:
dev-cluster-worker
dev-cluster-worker2
dev-cluster-control-plane
It will also create a bridge network to connect the different nodes.
cat ~/.kube/config
will show where the cluster is running and its IP/networking information.
-------------
get all nodes in the cluster
docker exec -it <cid> bash (now we are inside the master node)
cd /etc/kubernetes/manifests
ls -l will display all yaml files related to the master node
ps -aux shows the processes running within the control plane
-------------
Pods
-------------
* Pods are the smallest deployable units of computing that you can create and
manage in Kubernetes.
* A pod is a group of one or more containers, with shared storage and network
resources, and a specification for how to run the containers.
* Usually only one of the containers contains the app; the other containers run
as helper containers.
https://k21academy.com/docker-kubernetes/kubernetes-pods-for-beginners/
K8s commands
------------
01-simple-pod.yaml
---------------
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
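Typical commands to try the pod out (assuming the file is saved as 01-simple-pod.yaml):
kubectl apply -f 01-simple-pod.yaml
kubectl get pods
kubectl describe pod mypod
kubectl delete -f 01-simple-pod.yaml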
02-falling-pod.yaml
------------------
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx:12
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
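Presumably nginx:12 is a non-existent image tag, so this pod is expected to fail;
the following should show an ErrImagePull / ImagePullBackOff status:
kubectl apply -f 02-falling-pod.yaml
kubectl get pods
kubectl describe pod mypod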
multiple pods:
03-multiple-pod.yaml
------------------------
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    dept: dept-1
    team: team-a
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    dept: dept-2
    team: team-a
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
  labels:
    dept: dept-3
    team: team-b
spec:
  containers:
  - name: nginx
    image: nginx
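The labels become useful with selectors; a few typical queries (assuming the pods above are applied):
kubectl get pods --show-labels
kubectl get pods -l team=team-a
kubectl get pods -l dept=dept-1,team=team-a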
Port forwarding
------------------
04-port-forwarding.yaml
-----------------------
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
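A typical port-forward for this pod (host port 8080 is an arbitrary choice here):
kubectl port-forward pod/mypod 8080:80
curl http://localhost:8080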
----------------
Replicaset
----------------
What is replicaset?
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at
any given time.
As such, it is often used to guarantee the availability of a specified number
of identical Pods.
empapp pod manifest (the app image used in the following examples):
apiVersion: v1
kind: Pod
metadata:
  name: empapp
  labels:
    name: empapp
spec:
  containers:
  - name: empapp
    image: rgupta00/empapp:1.1
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 8080
Example:
01-simple-rs.yaml
-----------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      name: my-pod
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
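A quick way to see the ReplicaSet doing its job (standard kubectl; the pod name is whatever `get pods` shows):
kubectl apply -f 01-simple-rs.yaml
kubectl get rs
kubectl delete pod <one-of-the-my-rs-pods>
kubectl get pods
a replacement pod is created so that 3 replicas keep running
kubectl scale rs my-rs --replicas=5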
What happens if the ReplicaSet selector labels do not match the pod template labels?
---------------------------------------------------------------------------------
(The API server rejects such a ReplicaSet: the selector must match the template labels, so no pods are created.)
02-mismatch.yaml
-----------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  selector:
    matchLabels:
      app: my-app-foo
  replicas: 3
  template:
    metadata:
      name: my-pod
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
What happens if the desired count is 2 and you already have 3 pods running with the same
labels?
Then one pod is deleted.
---------------------
Deployment
---------------------
What is deployment?
Kubernetes saves time and mitigates errors by automating the work and
repetitive manual functions involved in deploying, scaling, and updating
applications in production.
Deployments automate the launching of pod instances and ensure they are
running as defined across all the nodes in the Kubernetes cluster.
Very Important:
---------------
Deployment ----> replicaset --> pod
01-simple-deploy.yaml
-----------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
describe deployment:
kubectl describe deploy
port forwarding:
kubectl port-forward deploy/my-deploy 8080:80
02-deploy-rs.yaml
--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
spec:
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        ports:
        - name: "app-port"
          containerPort: 8080
Deployment revision
--------------------
Note: changing the number of pods managed by a deployment does not change the
deployment revision.
Example 1: if you only change the desired pod count, it is not considered a new
revision.
Example 2: if you change the version of the application, it is considered a new
deployment revision.
https://spacelift.io/blog/kubernetes-deployment-strategies
port forwarding
kubectl port-forward deploy/empapp-deploy 8080:8080
Annotations in kubernetes:
---------------------------
Annotations have special meaning for k8s;
they are a way to attach additional information to an object, e.g.:
annotations:
  kubernetes.io/change-cause: "deploying v3"
This is the way to tell Kubernetes which change cause to show in the rollout history:
kubectl rollout history deploy
03-deploy-rollout.yaml
---------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v3"
spec:
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.3
        imagePullPolicy: Always
        ports:
        - name: "app-port"
          containerPort: 8080
port forwarding
kubectl port-forward deploy/empapp-deploy 8080:8080
Rollout history
---------------
Switching from 1.1 to 1.2.
Let's assume there is a bug in 1.2 and we want to roll back from 1.2 to 1.1.
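A sketch of the rollback commands (standard kubectl; the revision number depends on your own history):
kubectl rollout history deploy/empapp-deploy
kubectl rollout undo deploy/empapp-deploy
kubectl rollout undo deploy/empapp-deploy --to-revision=1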
Ex: 04-min-ready-seconds.yaml
-----------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        ports:
        - name: "app-port"
          containerPort: 8080
Recreate strategy
-----------------
05-deploy-recreate.yaml
-------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v2"
spec:
  minReadySeconds: 10
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.2
        ports:
        - name: "app-port"
          containerPort: 8080
strategy:
  type: Recreate
Switching from v1 to v2
kubectl rollout history deploy
Ex: 05-deploy-recreate.yaml
Note: with a maxSurge of 100% the pod count temporarily doubles; once all new pods
are ready, the old pods are deleted. That is the meaning of maxSurge of 100%.
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
Switching from v1 to v2
kubectl rollout history deploy
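To watch the pods while the rollout strategy is applied (standard kubectl):
kubectl rollout status deploy/empapp-deploy
kubectl get pods -w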
06-deploy-max-surge.yaml
------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v2"
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "100%"
  selector:
    matchLabels:
      app: empapp-service
  replicas: 3
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.2
        ports:
        - name: "app-port"
          containerPort: 8080
maxUnavailable
----------------
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
07-deploy-max-unavailable.yaml
-------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v2"
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  selector:
    matchLabels:
      app: empapp-service
  replicas: 3
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.2
        ports:
        - name: "app-port"
          containerPort: 8080
-----------------------
Service
-----------------------
During a rolling update, when a pod is recreated its IP address changes.
This creates a problem if another pod connects to that pod directly by IP;
a Service gives pods a stable name and virtual IP to connect to instead.
Type of services
------------------
* ClusterIP (internal to the cluster)
* NodePort (external)
* LoadBalancer (external)
curl localhost:30001/hello-world
http://localhost:30001/hello-world
01-nodeport-service.yaml
---------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "100%"
  selector:
    matchLabels:
      app: empapp-service
  replicas: 3
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        ports:
        - name: "app-port"
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: empapp-svc
spec:
  type: NodePort
  selector:
    app: empapp-service
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
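For pod-to-pod traffic inside the cluster a ClusterIP service is the usual choice; a minimal sketch
(not one of the course files; the names are assumed to match the deployment above):
apiVersion: v1
kind: Service
metadata:
  name: empapp-clusterip
spec:
  type: ClusterIP
  selector:
    app: empapp-service
  ports:
  - port: 8080
    targetPort: 8080
Other pods in the same namespace could then reach it at http://empapp-clusterip:8080.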
--------------------
Namespace
---------------------
get all existing namespaces
---------------------------------
kubectl get ns
creating namespace
----------------------
kubectl create ns dev
kubectl create ns qa
01-ns-demo.yaml
-----------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  namespace: dev
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "100%"
  selector:
    matchLabels:
      app: empapp-service
  replicas: 3
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        ports:
        - name: "app-port"
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: empapp-svc
  namespace: dev
spec:
  type: NodePort
  selector:
    app: empapp-service
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30002
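Typical commands to verify the namespaced resources (standard kubectl):
kubectl apply -f 01-ns-demo.yaml
kubectl get all -n dev
kubectl get pods -n dev
kubectl config set-context --current --namespace=dev
(the last command makes dev the default namespace for the current context)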
understanding kube-system
-----------------------------
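kube-system is the namespace where Kubernetes runs its own components
(API server, etcd, scheduler, controller-manager, coredns, kube-proxy, ...).
A quick look:
kubectl get pods -n kube-system
kubectl get all -n kube-system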
probes:
---------
By default, pods are considered live and healthy as soon as their containers are started.
If a pod is ready,
a service will send requests to the pod,
and a rollingUpdate will terminate older pods.
We should ensure that our pods really are live and ready to avoid any surprises.
01-startup-prob.yaml
--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        startupProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        ports:
        - name: "app-port"
          containerPort: 8080
port forwarding
kubectl port-forward deploy/empapp-deploy 8080:8080
02-startup-tcpsocket.yaml
------------------------
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: mongo
    image: mongo
    startupProbe:
      tcpSocket:
        port: 27017
      periodSeconds: 1
      failureThreshold: 5
03-startup-exec.yaml
----------------------
# prefer httpGet / tcpSocket
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    startupProbe:
      exec:
        command:
        - "cat"
        - "/usr/share/nginx/html/index.html"
      periodSeconds: 1
      failureThreshold: 3
      # exec:
      #   command:
      #   - "/bin/healthcheck"
04-liveness-probe.yaml
------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        startupProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        ports:
        - name: "app-port"
          containerPort: 8080
05-readiness-probe.yaml
--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: empapp-deploy
  annotations:
    kubernetes.io/change-cause: "deploying v1"
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      app: empapp-service
  replicas: 2
  template:
    metadata:
      labels:
        app: empapp-service
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: empapp-service
        image: rgupta00/empapp:1.1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        startupProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 1
          failureThreshold: 3
        ports:
        - name: "app-port"
          containerPort: 8080
kubectl apply -f
kubectl describe pod/empapp-deploy-6bd885cff7-8wnvw
readiness probe failed
ConfigMaps
Used to store non-sensitive configuration data, such as environment
variables or configuration files. ConfigMaps store data as key-value pairs,
and are typically defined in a YAML file. ConfigMaps are a good way to store
configuration data that might change across different deployment environments.
Secrets
Used to store sensitive information, such as passwords,
API keys, or TLS certificates. Secrets store data base64-encoded
(not encrypted by default; encryption at rest can be enabled in the cluster).
Secrets are created independently of the pods that use them,
which reduces the risk of the secret being exposed.
kubectl get cm
01-simple-cm.yaml
---------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  appUrl: "http://my-app-service"
  timeout: "30"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: Never
  containers:
  - name: ubuntu
    image: ubuntu
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    env:
    - name: "request.timeout"
      valueFrom:
        configMapKeyRef:
          name: app-properties
          key: timeout
    - name: "application.url"
      valueFrom:
        configMapKeyRef:
          name: app-properties
          key: appUrl
    args:
    - env
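Since the container just runs `env` and exits (restartPolicy: Never), the injected values should show up in the pod logs:
kubectl apply -f 01-simple-cm.yaml
kubectl logs my-pod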
02-inject-cm-as-env.yaml
------------------------
Now we want to inject all key-value pairs in one go as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  appUrl: "http://my-app-service"
  timeout: "30"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: Never
  containers:
  - name: ubuntu
    image: ubuntu
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    envFrom:
    - configMapRef:
        name: app-properties
    args:
    - env
03-inject-cm-as-file.yaml
-------------------------
How to store a multiline string as a file inside the pod.
kubectl get cm
kubectl get cm kube-root-ca.crt -o yaml
03-inject-cm-as-file.yaml:
------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  application.properties: |
    appUrl=http://my-app-service
    timeout=30
    a.b.c.d=something
    username=raj
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  terminationGracePeriodSeconds: 1
  restartPolicy: Never
  containers:
  - name: ubuntu
    image: ubuntu
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/props
    args:
    - sleep
    - "3600"
  volumes:
  - name: config-volume
    configMap:
      name: app-properties
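To verify the ConfigMap was mounted as a file (the pod sleeps, so exec works):
kubectl exec -it my-pod -- cat /usr/share/props/application.properties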
kubectl get all
Secret:
--------------------------
Same as a configMap, but for sensitive data;
values are base64 encoded.
use cases:
* ssh key files
* basic credentials
* etc
Example:
04-simple-secret.yaml
--------------------
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  username: cmFq
  password: YWRtaW4xMjM=
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: Never
  containers:
  - name: ubuntu
    image: ubuntu
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    env:
    - name: "app_username"
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: username
    - name: "app_password"
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: password
    args:
    - env
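The encoded values above are plain base64; for illustration, they can be produced and decoded like this:
echo -n 'raj' | base64          (gives cmFq)
echo -n 'admin123' | base64     (gives YWRtaW4xMjM=)
echo 'cmFq' | base64 -d
The same secret could also be created directly:
kubectl create secret generic app-secret --from-literal=username=raj --from-literal=password=admin123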
HPA template:
-------------
HPA stands for Horizontal Pod Autoscaler,
a Kubernetes feature that automatically scales the number of pods in a deployment
or replica set.
HPA is designed to handle varying user traffic by scaling up and down to meet
demand
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
hpa-template.yaml
----------------
It monitors the deployment and scales the pods horizontally, automatically.
Until now we have been updating the number of pods manually,
but we want it to scale automatically: for example fewer pods on weekdays and more on weekends.
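The course file itself is not shown here; a minimal HPA sketch targeting the empapp deployment
(assumed names; an HPA also needs a metrics source such as metrics-server, which kind does not install by default):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: empapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: empapp-deploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
kubectl get hpa shows the current/target utilization and the replica count.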
ab -n 2000 -c 5 http://ngnix/
Ingress:
---------
Step 1:
----------
Nginx Ingress Controller
Apply the resource below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
docker ps
Example 1: 02-simple-ingress
-------------------
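The example file is not included here; a minimal sketch of what 02-simple-ingress could look like,
assuming it routes to the empapp-svc service from the Service section:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: empapp-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world
        pathType: Prefix
        backend:
          service:
            name: empapp-svc
            port:
              number: 8080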
https://www.harness.io/blog/kubernetes-services-explained