
Simplifying Kubernetes - Day 2

Kubernetes Components
Kubernetes has the following main components:

 Master node
 Worker node
 Services
 Controllers
 Pods
 Namespaces and quotas
 Network and policies
 Storage

The kube-apiserver is the central hub of the Kubernetes cluster. All calls, whether internal or external, are handled by it. It is the only component that connects to ETCD.

The kube-scheduler uses an algorithm to determine which node a specific pod should be hosted on. It checks the available resources of the nodes to identify the best node for that pod.

The ETCD stores the cluster’s state, network, and other persistent
information.

The kube-controller-manager is the main controller. It interacts with the kube-apiserver to determine the cluster's current state; if that state does not match the desired state, it invokes the appropriate controller to reconcile it. Various controllers are in use, such as the endpoints, namespace, and replication controllers.

The kubelet interacts with Docker installed on the node and ensures
that the containers that need to be running are indeed running.
The kube-proxy is responsible for managing the network for
containers, including exposing container ports.

The Supervisord is responsible for monitoring and, if necessary, restarting the kubelet and Docker. For this reason, when there is an issue with the kubelet, such as it using a different cgroup driver than the one Docker runs with, you will notice that Supervisord repeatedly tries to restart the kubelet.

A Pod is the smallest unit you will deal with in Kubernetes. You can
have more than one container per pod, but keep in mind that they will
share the same resources, such as IP. One good reason to have
multiple containers in a pod is to consolidate logs.

Since a Pod can contain multiple containers, it often resembles a virtual machine (VM), where you could have multiple services running while sharing the same IP and other resources.
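To make this concrete, here is a minimal sketch of a pod manifest with two containers. It is only an illustration, and the busybox sidecar command and the emptyDir volume are assumptions, not part of the original example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-com-sidecar
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: logs                 # nginx writes its logs here
      mountPath: /var/log/nginx
  - name: log-sidecar            # shares the pod's IP and volumes with nginx
    image: busybox
    command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}

Both containers are scheduled together, share the pod's IP, and can exchange data through the shared volume, which is what makes the log-consolidation pattern possible.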

Services are a way to expose communication, through a NodePort or LoadBalancer, and to distribute requests across the multiple pods of a deployment. They function as a load balancer.

Container Network Interface

To provide networking for containers, Kubernetes uses the CNI (Container Network Interface) specification.

CNI is a specification that includes libraries for developing plugins to configure and manage container networks. It provides a common interface between the various networking solutions and Kubernetes. You can find plugins for AWS, GCP, Cloud Foundry, and others.

https://github.com/containernetworking/cni

While CNI defines the network for pods, it does not help with communication between pods on different nodes.
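For reference, a CNI plugin is configured through a small JSON file placed on each node, typically under /etc/cni/net.d/. The snippet below is only an illustrative sketch using the bridge and host-local plugins; the network name and subnet are assumptions:

{
  "cniVersion": "0.3.1",
  "name": "minha-rede",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}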

The basic characteristics of Kubernetes networking are:

 All pods can communicate with each other across different nodes.
 All nodes can communicate with all pods.
 No use of NAT.

All pod and node IPs are routed without using NAT. This is achieved by
using software that helps create an overlay network. Some examples
include:

 Weave
 Flannel
 Canal
 Calico
 Romana
 Nuage
 Contiv

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Services

Creating a ClusterIP Service

kubectl expose deployment nginx
service/nginx exposed

kubectl get svc

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2h
nginx        ClusterIP   10.100.186.213   <none>        80/TCP    33s

curl 10.100.186.213
Welcome to nginx!

kubectl logs -f nginx-6f858d4d45-r9zpf

10.32.0.1 - - [11/Jul/2018:03:20:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" "-"
10.32.0.1 - - [11/Jul/2018:03:20:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" "-"

kubectl delete svc nginx


service "nginx" deleted

Now let’s create our ClusterIP service, but we’ll create a YAML file with
its definitions:

vim primeiro-service-clusterip.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-clusterip
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: ClusterIP

kubectl create -f primeiro-service-clusterip.yaml
service/nginx-clusterip created

kubectl get services

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP   2h
nginx-clusterip   ClusterIP   10.104.244.201   <none>        80/TCP    9s

kubectl describe service nginx-clusterip

Name:              nginx-clusterip
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                10.104.244.201
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.1:80
Session Affinity:  None
Events:            <none>

kubectl delete -f primeiro-service-clusterip.yaml


service "nginx-clusterip" deleted

Now let's change a detail in our manifest and play with sessionAffinity:

vim primeiro-service-clusterip.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-clusterip
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: ClientIP
  type: ClusterIP

kubectl create -f primeiro-service-clusterip.yaml
service/nginx-clusterip created

kubectl get services

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP   2h
nginx-clusterip   ClusterIP   10.111.125.37   <none>        80/TCP    12s

kubectl describe service nginx-clusterip

Name:              nginx-clusterip
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                10.111.125.37
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.1:80
Session Affinity:  ClientIP
Events:            <none>

With this, we can now maintain the session: connections from a given client source IP will keep being sent to the same pod.
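If needed, the duration of this affinity can also be tuned. The fields below are a sketch of the relevant part of the service spec; the 10800-second value shown is simply the default, used here for illustration:

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # keep sending the same client to the same pod for up to 3 hours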

Now we can remove the service:

kubectl delete -f primeiro-service-clusterip.yaml


service "nginx-clusterip" deleted

Creating a NodePort Service

kubectl expose deployment nginx --type=NodePort
service/nginx exposed

kubectl get svc

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        2h
nginx        NodePort    10.103.66.10   <none>        80:31059/TCP   11s
kubectl delete svc nginx
service "nginx" deleted

Now let’s create a NodePort service, but we’ll create a YAML manifest
with its definitions:

vim primeiro-service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31111
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort

kubectl create -f primeiro-service-nodeport.yaml
service/nginx-nodeport created

kubectl get services

NAME             TYPE        CLUSTER-IP       ...   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1        ...   443/TCP        2h
nginx-nodeport   NodePort    10.100.250.181   ...   80:31111/TCP   14s

kubectl describe service nginx-nodeport

Name:                     nginx-nodeport
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     NodePort
IP:                       10.100.250.181
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31111/TCP
Endpoints:                10.44.0.1:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

kubectl delete -f primeiro-service-nodeport.yaml


service "nginx-nodeport" deleted

Creating a LoadBalancer Service

kubectl expose deployment nginx --type=LoadBalancer
service/nginx exposed

kubectl get svc

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        2h
nginx        LoadBalancer   10.109.184.120   <pending>     80:31111/TCP   14s

kubectl delete svc nginx


service "nginx" deleted

Now let’s create a LoadBalancer service, but we’ll create a YAML with
its definitions:

vim primeiro-service-loadbalancer.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-loadbalancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31222
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: LoadBalancer

kubectl create -f primeiro-service-loadbalancer.yaml
service/nginx-loadbalancer created

kubectl get services

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP      10.96.0.1       <none>        443/TCP        2h
nginx-loadbalancer   LoadBalancer   10.96.172.176   <pending>     80:31222/TCP   14s

kubectl describe service nginx-loadbalancer

Name:                     nginx-loadbalancer
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.96.172.176
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31222/TCP
Endpoints:                10.44.0.1:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

kubectl delete -f primeiro-service-loadbalancer.yaml


service "nginx-loadbalancer" deleted

Endpoints
Whenever we create a service, an Endpoints object is created automatically. An endpoint is simply the IP of a pod that the service will send traffic to. For example, when we create a ClusterIP service, it has its own IP, right? When we access that IP, the connection is redirected to a pod through one of these IPs, the endpoints.

To list the created endpoints:

kubectl get endpoints


NAME ENDPOINTS AGE

kubernetes 10.142.0.5:6443 4d

Let’s examine this endpoint in more detail:

kubectl describe endpoints kubernetes


Name: kubernetes

Namespace: default

Labels: <none>

Annotations: <none>

Subsets:
  Addresses:          10.142.0.5
  NotReadyAddresses:  <none>
  Ports:
    Name   Port  Protocol
    ----   ----  --------
    https  6443  TCP

Events:  <none>

Let's create an example. We'll create a deployment and then a service, so we can look in more detail at the endpoints that will be created:

kubectl run nginx --image=nginx --port=80 --replicas=3
deployment.apps/nginx created

kubectl expose deployment nginx
service/nginx exposed

kubectl get endpoints


NAME ENDPOINTS AGE

kubernetes 10.142.0.5:6443 4d

nginx 10.44.0.1:80,10.44.0.2:80,10.44.0.3:80 2m

kubectl describe endpoints nginx


Name: nginx

Namespace: default

Labels: run=nginx

Annotations: <none>

Subsets:
  Addresses:          10.44.0.1,10.44.0.2,10.44.0.3
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  80    TCP

Events:  <none>

resourceVersion: ""

selfLink: ""

curl <IP_ENDPOINT>

kubectl delete deployment nginx
deployment.apps "nginx" deleted

kubectl delete service nginx
service "nginx" deleted

Limiting Resources
When we create a pod, we can specify the amount of CPU and memory (RAM) that each container can consume. When a container has resource limit configurations, the scheduler is responsible for allocating that container to the best possible node based on the available resources.

We can configure two types of resources: CPU, specified in units of cores, and memory, specified in units of bytes.
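As a quick reference for these units, the snippet below is an illustrative sketch with arbitrary values: CPU can be expressed in whole cores or in millicores (m), and memory in binary units such as Mi:

resources:
  requests:
    cpu: "250m"      # 250 millicores = 0.25 of a core
    memory: "64Mi"   # 64 mebibytes
  limits:
    cpu: "1"         # one full core
    memory: "256Mi"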

Let’s create our first deployment with resource limits. We’ll use the
nginx image and copy the deployment’s YAML:
kubectl run nginx --image=nginx --port=80 --replicas=1
deployment.apps/nginx created

kubectl get deployments

kubectl get deployment nginx -o yaml > deployment-limitado.yaml

vim deployment-limitado.yaml

...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        memory: "256Mi"
        cpu: "200m"
      requests:
        memory: "128Mi"
        cpu: "50m"
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File

Now let's update our deployment and verify the resources:

kubectl replace -f deployment-limitado.yaml
deployment.extensions/nginx replaced

Let’s access a container and test the configuration:

kubectl get pod


NAME READY STATUS RESTARTS AGE

nginx-7dcffc9bff-pd46r 1/1 Running 0 9s

kubectl exec -ti nginx-7dcffc9bff-pd46r -- /bin/bash


Now, inside the container, install and run stress to simulate load on
our resources (CPU and memory):

apt-get update && apt-get install -y stress

stress --vm 1 --vm-bytes 128M --cpu 1


stress: info: [221] dispatching hogs: 1 cpu, 0 io, 1 vm, 0 hdd

Here, we’re stressing the container, using 128M of RAM and one CPU
core. Experiment with the limits you set.

If you exceed the configured limit, you’ll receive an error like this, as it
won’t be able to allocate the resources:
stress --vm 1 --vm-bytes 512M --cpu 1
stress: info: [230] dispatching hogs: 1 cpu, 0 io, 1 vm, 0 hdd

stress: FAIL: [230] (415) <-- worker 232 got signal 9

stress: WARN: [230] (417) now reaping child worker processes

stress: FAIL: [230] (451) failed run completed in 0s

kubectl delete deployment nginx


deployment.extensions "nginx" deleted

Namespaces
In Kubernetes, we have something called Namespaces. It’s simply a
virtual cluster within the physical Kubernetes cluster.

Namespaces are a way to divide a cluster's resources among multiple environments, teams, or projects.

Let’s create our first namespace:

kubectl create namespace primeiro-namespace


namespace/primeiro-namespace created

Let’s list all namespaces in Kubernetes:

kubectl get namespaces


NAME STATUS AGE

default Active 10d

kube-public Active 10d

kube-system Active 10d

primeiro-namespace Active 3m
Get more information:

kubectl describe namespace primeiro-namespace


Name: primeiro-namespace

Labels: <none>

Annotations: <none>

Status: Active

No resource quota.

No resource limits.

As we can see, our namespace is still raw, without configurations. Let's enhance this namespace by adding resource limits using a LimitRange.

Let’s create the LimitRange manifest:

vim limitando-recursos.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: limitando-recursos
spec:
  limits:
  - default:
      cpu: 1
      memory: 100Mi
    defaultRequest:
      cpu: 0.5
      memory: 80Mi
    type: Container

Now let’s add this LimitRange to the namespace:

kubectl create -f limitando-recursos.yaml -n primeiro-namespace
limitrange/limitando-recursos created

Listing the LimitRange:

kubectl get limitranges


No resources found.

Oops, we didn’t find it, did we? That’s because we forgot to specify the
namespace when listing:

kubectl get limitrange -n primeiro-namespace


NAME CREATED AT

limitando-recursos 2018-07-22T05:25:25Z

Or:

kubectl get limitrange --all-namespaces


NAMESPACE NAME CREATED AT

primeiro-namespace limitando-recursos 2018-07-22T05:25:25Z

Let's describe the LimitRange:

kubectl describe limitrange -n primeiro-namespace

Name:       limitando-recursos
Namespace:  primeiro-namespace
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    500m             1              -
Container   memory    -    -    80Mi             100Mi          -

As we can see, we've added memory and CPU limits for each container created in this namespace. If a container is created in this namespace without its own resource requests and limits, it will inherit these default values from the LimitRange.

Let’s create a pod to verify if the limit is applied:

vim pod-limitrange.yaml

apiVersion: v1
kind: Pod
metadata:
  name: limit-pod
spec:
  containers:
  - name: meu-container
    image: nginx
Now let’s create a pod outside the limited namespace and another
inside the limited namespace (primeiro-namespace) and observe the
resource limits applied to each container:

kubectl create -f pod-limitrange.yaml


pod/limit-pod created

kubectl create -f pod-limitrange.yaml -n primeiro-namespace


pod/limit-pod created

Let’s list these pods and then view more details:

kubectl get pods --all-namespaces


NAMESPACE NAME READY STATUS RESTARTS AGE

default limit-pod 1/1 Running 0 2m

primeiro-namespace limit-pod 1/1 Running 0 2m

kubectl describe pod limit-pod -n primeiro-namespace


Name: limit-pod

Namespace: primeiro-namespace

Node: elliot-03/10.142.0.6

Start Time: Sun, 22 Jul 2018 05:36:06 +0000

Labels: <none>

Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu, memory request for container meu-container; cpu, memory limit for container meu-container

Status: Running
IP: 10.44.0.2

Containers:

meu-container:

Container ID: docker://4085b0c1e716f173378a9352213556f298e2caf3bf750919d9f803151885e4d6

...

Limits:

cpu: 1

memory: 100Mi

Requests:

cpu: 500m

memory: 80Mi

As we can see, the pod in the primeiro-namespace namespace has the configured resource limits.

Kubectl taint
A taint is nothing more than a property added to a cluster node to prevent pods from being scheduled onto inappropriate nodes.

For example, every master node in the cluster is marked to not receive
pods that are not related to cluster management. The master node is
marked with the NoSchedule taint, so the Kubernetes scheduler does
not allocate pods to the master node and looks for other nodes without
this mark:
kubectl describe node elliot-01 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
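That is also why the cluster-management pods still run on the master: they carry a toleration that matches this taint. As an illustrative sketch (not part of the original walkthrough), a toleration in a pod spec looks like this:

spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"       # tolerate the taint regardless of its value
    effect: "NoSchedule"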

Let’s test some things and allow the master node to run other pods.

First, let’s run 3 replicas of nginx:

kubectl run nginx --image=nginx --replicas=3
deployment.apps/nginx created

kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE ... NODE

nginx... 1/1 Running 0 1m ... elliot-02

nginx... 1/1 Running 0 1m ... elliot-02

nginx... 1/1 Running 0 1m ... elliot-02

Let’s add the NoSchedule taint to the slave nodes as well to see how
they behave:

kubectl taint node elliot-02 key1=value1:NoSchedule


node/elliot-02 tainted

kubectl taint node elliot-03 key1=value1:NoSchedule


node/elliot-03 tainted

kubectl describe node elliot-02 | grep -i taint


Taints: key1=value1:NoSchedule

kubectl describe node elliot-03 | grep -i taint


Taints: key1=value1:NoSchedule
Now let’s increase the number of replicas:

kubectl scale deployment nginx --replicas=5


deployment.extensions/nginx scaled

kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE ... NODE

nginx... 1/1 Running 0 6m ... elliot-02

nginx... 1/1 Running 0 6m ... elliot-02

nginx... 0/1 Pending 0 26s ... <none>

nginx... 0/1 Pending 0 26s ... <none>

nginx... 1/1 Running 0 6m ... elliot-03

As we can see, the new replicas are left Pending because there are no available nodes without the NoSchedule taint.

kubectl scale deployment nginx --replicas=1
deployment.extensions/nginx scaled

kubectl get pods


NAME READY STATUS RESTARTS AGE

nginx-64f497f8fd-glxlf 0/1 Pending 0 40s

Let's remove the NoSchedule taint from the slave nodes:

kubectl taint node elliot-02 key1:NoSchedule-
node/elliot-02 untainted

kubectl taint node elliot-03 key1:NoSchedule-
node/elliot-03 untainted

kubectl get pods


NAME READY STATUS RESTARTS AGE

nginx-64f497f8fd-glxlf 1/1 Running 0 1m

Now we have a node operating normally.

But what if our slave nodes become unavailable? Can we run pods on the master node?

Of course we can. Let's configure our master node so the scheduler can allocate pods to it:

kubectl taint nodes --all node-role.kubernetes.io/master-


node/elliot-01 untainted

kubectl describe node elliot-01 | grep -i taint


Taints: <none>

Now let's increase the number of replicas for our nginx:

kubectl scale deployment nginx --replicas=4
deployment.extensions/nginx scaled

kubectl get pods -o wide

NAME      READY  STATUS   RESTARTS  AGE  ...  NODE
nginx...  1/1    Running  0         40s  ...  elliot-02
nginx...  1/1    Running  0         28m  ...  elliot-02
nginx...  1/1    Running  0         40s  ...  elliot-01
nginx...  1/1    Running  0         40s  ...  elliot-03

Let’s add the NoExecute taint to the slave nodes to see what
happens:

kubectl taint node elliot-02 key1=value1:NoExecute


node/elliot-02 tainted

kubectl taint node elliot-03 key1=value1:NoExecute


node/elliot-03 tainted

kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE ... NODE

nginx... 1/1 Running 0 23s ... elliot-01

nginx... 1/1 Running 0 23s ... elliot-01

nginx... 1/1 Running 0 23s ... elliot-01

nginx... 1/1 Running 0 23s ... elliot-01

kubectl delete deployment nginx


deployment.extensions "nginx" deleted

The scheduler allocated everything to the master node. As we can see, taints can be used to adjust which pods should be allocated to which nodes.

Let’s allow our scheduler to allocate and run pods on all nodes:
kubectl taint node --all key1:NoSchedule-
node/elliot-01 untainted

node/elliot-02 untainted

node/elliot-03 untainted

kubectl taint node --all key1:NoExecute-


node/elliot-01 untainted

node/elliot-02 untainted

node/elliot-03 untainted
