Simplifying Kubernetes Day 2
Kubernetes Components
Kubernetes has the following main components:
Master node
Worker node
Services
Controllers
Pods
Namespaces and quotas
Network and policies
Storage
etcd stores the cluster’s state, network configuration, and other persistent information.
The kubelet talks to the container runtime installed on the node (Docker, in this setup) and ensures that the containers that should be running are indeed running.
The kube-proxy is responsible for managing the network for
containers, including exposing container ports.
A Pod is the smallest unit you will deal with in Kubernetes. You can have more than one container per pod, but keep in mind that they will share the same resources, such as the IP address. One good reason to have multiple containers in a pod is to consolidate logs with a sidecar, as in the sketch below.
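A minimal sketch of that idea, with illustrative names and a busybox sidecar that are not from the original: two containers in the same pod, sharing the pod’s IP and a log volume.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-com-sidecar          # illustrative name
spec:
  volumes:
  - name: logs                     # shared volume where nginx writes its logs
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-sidecar              # second container, same pod IP, reads the logs
    image: busybox
    command: ["sh", "-c", "touch /var/log/nginx/access.log; tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx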
CNI (Container Network Interface) defines how the pod network is configured:
https://github.com/containernetworking/cni
While CNI defines the network for pods, it does not by itself handle communication between pods running on different nodes. The Kubernetes network model requires that:
All pods can communicate with each other across different nodes.
All nodes can communicate with all pods.
No use of NAT.
All pod and node IPs are routed without NAT. This is achieved with software that creates an overlay network between the nodes; a quick check is shown after the list below. Some examples include:
Weave
Flannel
Canal
Calico
Romana
Nuage
Contiv
https://kubernetes.io/docs/concepts/cluster-administration/addons/
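A quick way to see this model in action, assuming pods are already running (the IP below is simply the pod address that appears later in this chapter), is to list the pod IPs and reach one of them directly from any node:
kubectl get pods -o wide    # shows each pod’s IP and the node it is running on
curl 10.44.0.1              # pod IPs are reachable from any node, with no NAT involved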
Services
curl 10.100.186.213
Welcome to nginx!
Now let’s create our ClusterIP service, this time defining it in a YAML file:
vim primeiro-service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-clusterip
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: ClusterIP
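With the manifest saved, a minimal sketch to create the service and inspect it:
kubectl create -f primeiro-service-clusterip.yaml
kubectl describe service nginx-clusterip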
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP: 10.104.244.201
TargetPort: 80/TCP
Endpoints: 10.44.0.1:80
Events: <none>
Now let’s edit the same file and change sessionAffinity to ClientIP:
vim primeiro-service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-clusterip
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: ClientIP
  type: ClusterIP
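Since sessionAffinity changed, a sketch of one way to pick up the change is to recreate the service (which matches the new ClusterIP shown below) and describe it again:
kubectl delete -f primeiro-service-clusterip.yaml
kubectl create -f primeiro-service-clusterip.yaml
kubectl describe service nginx-clusterip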
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP: 10.111.125.37
TargetPort: 80/TCP
Endpoints: 10.44.0.1:80
Events: <none>
With this, session affinity is enabled: requests coming from the same client source IP keep being sent to the same pod.
Now let’s create a NodePort service, this time defining it in a YAML manifest:
vim primeiro-service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31111
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort
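With the manifest saved, a minimal sketch to create and inspect the service:
kubectl create -f primeiro-service-nodeport.yaml
kubectl describe service nginx-nodeport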
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: NodePort
IP: 10.100.250.181
TargetPort: 80/TCP
Endpoints: 10.44.0.1:80
Session Affinity: None
Events: <none>
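The service is now reachable on port 31111 of any node’s IP; the address below is a placeholder for one of your nodes:
curl <NODE_IP>:31111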
Now let’s create a LoadBalancer service, this time defining it in a YAML file:
vim primeiro-service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-loadbalancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31222
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: LoadBalancer
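With the manifest saved, a minimal sketch to create and inspect the service (without a cloud-provider integration, the external IP will simply stay pending):
kubectl create -f primeiro-service-loadbalancer.yaml
kubectl describe service nginx-loadbalancer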
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: LoadBalancer
IP: 10.96.172.176
TargetPort: 80/TCP
Endpoints: 10.44.0.1:80
Events: <none>
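Every Service with a selector is backed by an Endpoints object listing the pod addresses it forwards traffic to. A minimal sketch to list and describe the cluster’s endpoints:
kubectl get endpoints
kubectl describe endpoints kubernetes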
kubernetes 10.142.0.5:6443 4d
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: 10.142.0.5
NotReadyAddresses: <none>
Ports:
Events: <none>
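Once the nginx service and its pods exist, listing and describing the endpoints again also shows the nginx pod addresses; a minimal sketch:
kubectl get endpoints
kubectl describe endpoints nginx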
kubernetes 10.142.0.5:6443 4d
nginx 10.44.0.1:80,10.44.0.2:80,10.44.0.3:80 2m
Namespace: default
Labels: run=nginx
Annotations: <none>
Subsets:
Addresses: 10.44.0.1,10.44.0.2,10.44.0.3
NotReadyAddresses: <none>
Ports:
Name Port Protocol
<unset> 80 TCP
Events: <none>
We can also reach one of the pods directly through its endpoint address:
curl <IP_ENDPOINT>
Limiting Resources
When we create a pod, we can specify the amount of CPU and memory
(RAM) that each container can consume. When a container has
resource limit configurations, the scheduler is responsible for allocating
that container to the best possible node based on available resources.
Let’s create our first deployment with resource limits. We’ll use the
nginx image and copy the deployment’s YAML:
kubectl run nginx --image=nginx --port=80 --replicas=1
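To have a manifest to edit, a sketch (assuming the deployment created by kubectl run above) is to dump its YAML to a file:
kubectl get deployment nginx -o yaml > deployment-limitado.yaml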
vim deployment-limitado.yaml
...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        memory: "256Mi"
        cpu: "200m"
      requests:
        memory: "128Mi"
        cpu: "50m"
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
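A sketch of applying the edited deployment and stressing the container from inside it; the pod name placeholder and the stress installation are assumptions (the nginx image is Debian-based, so apt-get is available):
kubectl apply -f deployment-limitado.yaml
kubectl get pods                              # note the pod name created by the deployment
kubectl exec -ti <POD_NAME> -- /bin/bash

# inside the container:
apt-get update && apt-get install -y stress
stress --vm 1 --vm-bytes 128M --cpu 1         # within the 256Mi memory limit set above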
Here, we’re stressing the container, using 128M of RAM and one CPU
core. Experiment with the limits you set.
If you exceed the configured limit, you’ll receive an error like this, as it
won’t be able to allocate the resources:
stress --vm 1 --vm-bytes 512M --cpu 1
stress: info: [230] dispatching hogs: 1 cpu, 0 io, 1 vm, 0 hdd
Namespaces
In Kubernetes, we have something called Namespaces. A namespace is simply a virtual cluster inside the physical Kubernetes cluster.
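A minimal sketch to create the namespace used in this chapter and then list the namespaces:
kubectl create namespace primeiro-namespace
kubectl get namespaces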
primeiro-namespace Active 3m
Get more information by describing the namespace (kubectl describe namespace primeiro-namespace):
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
vim limitando-recursos.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitando-recursos
spec:
  limits:
  - default:
      cpu: 1
      memory: 100Mi
    defaultRequest:
      cpu: 0.5
      memory: 80Mi
    type: Container
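Assuming the manifest above, a minimal sketch is to create the LimitRange inside primeiro-namespace and then try to list it, forgetting to specify the namespace:
kubectl create -f limitando-recursos.yaml -n primeiro-namespace
kubectl get limitranges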
Oops, we didn’t find it, did we? That’s because we forgot to specify the
namespace when listing:
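Listing again, now pointing at the namespace, should look like this:
kubectl get limitranges -n primeiro-namespace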
limitando-recursos 2018-07-22T05:25:25Z
Or, for more detail, describe it (kubectl describe limitrange limitando-recursos -n primeiro-namespace):
Namespace: primeiro-namespace
As we can see, we’ve defined default memory and CPU limits for the containers created in this namespace. If a container is created in this namespace without its own resource configuration, it inherits these default resource limits from the LimitRange.
vim pod-limitrange.yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-pod
spec:
  containers:
  - name: meu-container
    image: nginx
Now let’s create a pod outside the limited namespace and another inside the limited namespace (primeiro-namespace), as sketched below, and observe the resource limits applied to each container:
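A minimal sketch, assuming the manifest above:
kubectl create -f pod-limitrange.yaml                          # default namespace, no LimitRange applied
kubectl create -f pod-limitrange.yaml -n primeiro-namespace    # limited namespace
kubectl describe pod limit-pod -n primeiro-namespace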
Namespace: primeiro-namespace
Node: elliot-03/10.142.0.6
Labels: <none>
Status: Running
IP: 10.44.0.2
Containers:
meu-container:
Container ID: docker://4085b0c1e716f173378a9352213556f298e2caf3bf750919d9f803151885e4d6
...
Limits:
cpu: 1
memory: 100Mi
Requests:
cpu: 500m
memory: 80Mi
Kubectl taint
A taint is nothing more than a property added to a cluster node to prevent pods from being scheduled onto inappropriate nodes.
For example, every master node in the cluster is marked to not receive
pods that are not related to cluster management. The master node is
marked with the NoSchedule taint, so the Kubernetes scheduler does
not allocate pods to the master node and looks for other nodes without
this mark:
kubectl describe node elliot-01 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
Let’s test a few things so we can eventually allow the master node to run other pods. First, let’s add the NoSchedule taint to the slave nodes as well to see how they behave:
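A sketch of the commands, reusing the key1 taint key that is removed later in this chapter (value1 and the scale step are illustrative):
kubectl taint node elliot-02 key1=value1:NoSchedule
kubectl taint node elliot-03 key1=value1:NoSchedule
kubectl scale deployment nginx --replicas=4    # illustrative: create new replicas to observe scheduling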
As we can see, the new replicas cannot be scheduled and remain Pending, because there is no node left without the NoSchedule taint.
But what if our slave nodes become unavailable? Can we run pods on
the master node?
Let’s add the NoExecute taint to the slave nodes to see what
happens:
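A sketch of adding the NoExecute taint to the slave nodes, with the same illustrative key:
kubectl taint node elliot-02 key1=value1:NoExecute
kubectl taint node elliot-03 key1=value1:NoExecute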
With NoExecute, the pods already running on those nodes are evicted. Now let’s allow our scheduler to allocate and run pods on all nodes again by removing the taints:
kubectl taint node --all key1:NoSchedule-
node/elliot-01 untainted
node/elliot-02 untainted
node/elliot-03 untainted
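The NoExecute taint on the slave nodes also needs to be removed; a sketch of the command:
kubectl taint node elliot-02 elliot-03 key1:NoExecute-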
node/elliot-02 untainted
node/elliot-03 untainted