Kubernetes
• Krushnendu Jena
• Ashraf Shahzad
• Sandeep Kumar D
• Yogesh Sanjay Patil
Agenda
Introduction
Architecture
Key Concepts
Components
Kubernetes resources
Introduction to Kubernetes
• The name of Kubernetes originates from Greek, meaning “helmsman” or “pilot”, and is the root of “governor” and “cybernetic”.
• Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.
• It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
What is Kubernetes? Why do we need K8s?
• Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
• Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
• Secret and configuration management: Kubernetes stores and manages sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
• Batch execution: In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
• Horizontal scaling: Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage (see the example after this list).
• IPv4/IPv6 dual-stack: Allocation of IPv4 and IPv6 addresses to Pods and Services.
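As a small sketch of the horizontal-scaling feature, these are the standard kubectl commands (the deployment name my-app is illustrative):
kubectl scale deployment my-app --replicas=5
kubectl autoscale deployment my-app --cpu-percent=80 --min=2 --max=10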
Kubernetes Architecture
Key Concepts:
Node:
A Node is like a computer in a network.
It's a machine, either a physical computer or a virtual machine, that does the actual work of running your applications.
Cluster:
A Kubernetes cluster is a set of nodes that run containerized applications. It includes the Master node for management
and worker nodes for running applications.
Master Node:
The Master Node is like the "boss" or the "brain" of the Kubernetes cluster.
It's in charge of making decisions and managing the whole show.
Worker Node:
A Worker Node is like a "worker" in a team.
It's where the actual applications run.
Each Worker Node can run many applications in little containers.
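To see the nodes that make up a cluster, the standard commands are (node names depend on your environment):
kubectl get nodes -o wide
kubectl describe node <node-name>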
Master node components
• API Server: exposes the Kubernetes API; the front end of the control plane.
• Scheduler: assigns newly created Pods to nodes.
• Controller Manager: runs the controllers that drive the cluster toward the desired state.
• Etcd: the consistent key-value store that holds all cluster data.
• Cloud Controller Manager (optional): integrates the cluster with the underlying cloud provider.
Worker node components
• Kubelet: the agent that runs on each node and makes sure containers are running in their Pods.
• Kube Proxy: maintains network rules on the node so that Services can reach Pods.
• Container Runtime: the software that actually runs the containers (for example, containerd or CRI-O).
Kubernetes Resources
Core Kubernetes Resources
• Pods
• Services
• Deployments
• ReplicaSets
• ConfigMaps
• Secrets
• Ingress
• PersistentVolumes (PV)
• PersistentVolumeClaims (PVC)
• Namespaces
• StatefulSets
• DaemonSets
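To explore these resource types on a live cluster, two handy commands are:
kubectl api-resources
kubectl explain pod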
What is a Pod?
Pods, Containers and Microservices
Kubernetes Network Model
Pod Networking
Networking: Containers in different Pods
Communication between containers is done using the IP of the Pod and the required port.
All containers on the same host are attached to the same bridge.
Networking: Containers in the same Pod
All containers within the same Pod expose the same IP address and share a common network stack.
Application services from each container are exposed through ports.
Ports cannot be shared between containers in the same Pod.
Containers inside the Pod communicate with each other through “localhost:<port>”.
Namespace
A Namespace provides a way to divide cluster resources among multiple users or teams. It allows for resource isolation and management.
To create a namespace imperatively, you can use the kubectl create namespace command.
Here's an example:
kubectl create namespace <mynamespace>
kubectl create namespace eric-ec-apps
To list and view namespaces:
kubectl get namespace    # or: kubectl get ns
kubectl describe ns eric-ec-apps
kubectl get ns eric-ec-apps -o yaml
To create a namespace using the declarative method with a YAML file, you first need a YAML file that defines the namespace. Here's an example file named mynamespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: eric-ec-apps
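To create the namespace from this file and confirm it exists:
kubectl apply -f mynamespace.yaml
kubectl get ns eric-ec-apps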
Managing Kubernetes Controllers
Managing Kubernetes controllers involves understanding and working with different types of controllers that
automate the management of applications and services in a Kubernetes cluster.
Types of Kubernetes Controllers:
• ReplicationController (RC)
• ReplicaSet (RS)
• Deployment
• StatefulSet
• DaemonSet
ReplicationController (RC)
Purpose
•Ensure Pod Availability: The primary function of a ReplicationController is to ensure that a
specified number of pod replicas are running at any given time.
•Self-Healing: If a pod fails or is deleted, the ReplicationController will automatically create a
new one to replace it.
ReplicationController (RC)
Key Features:
1. Replica Management:
   • We can specify the desired number of replicas.
   • The ReplicationController continuously monitors the current state and makes adjustments to maintain the desired state.
2. Label Selector:
   • Uses label selectors to identify which pods it manages.
   • It can manage multiple pods that match a specified label selector.
3. Scaling:
   • Allows for easy scaling of applications by simply changing the replica count.
   • The ReplicationController will create or delete pods as necessary to match the new replica count.
ReplicationController (RC)
Components of a ReplicationController:
• spec: Specifies the desired state, including the replica count and the pod template.
Example:
#replicationcontroller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-replication-controller
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    app: my-app          # a ReplicationController uses an equality-based selector, not matchLabels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80   # example port to complete the truncated ports list
ReplicationController (RC)
Use Cases
•Legacy Applications: Some older Kubernetes setups might still use ReplicationControllers.
•Simple Applications: For straightforward use cases where advanced features of Deployments are not needed.
Transition to ReplicaSets:
• ReplicaSets: ReplicationControllers have largely been replaced by ReplicaSets in newer versions of Kubernetes.
ReplicaSets offer more powerful and flexible label selectors.
• Deployment: For most use cases, Deployments are recommended over directly using ReplicaSets or
ReplicationControllers. Deployments provide additional features such as rolling updates and rollbacks.
ReplicaSet(RS)
ReplicaSet in Kubernetes ensures that a specified number of pod replicas are running at any
given time. If a pod goes down, the ReplicaSet creates another to replace it. This is essential
for maintaining application availability and fault tolerance.
Key Concepts
•Pod: The smallest deployable unit in Kubernetes, which can contain one or more
containers.
•ReplicaSet: A Kubernetes object that manages a set of identical pods, ensuring a specified
number of replicas are running.
ReplicaSet (RS)
Key Fields:
• replicas: the desired number of Pod replicas.
• selector: a label selector (matchLabels / matchExpressions) identifying the Pods the ReplicaSet manages.
• template: the Pod template used to create new Pods when more replicas are needed.
NB: ReplicaSets are often used indirectly through Deployments, which provide additional
management features like rolling updates and rollbacks.
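A minimal ReplicaSet manifest as a sketch (the name frontend-rs and the nginx image are illustrative):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80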
StatefulSet
• A StatefulSet is a Kubernetes workload API object that is used to manage stateful applications. StatefulSets are designed to handle applications that require persistent storage and stable network identities. This is particularly important for applications like databases and distributed systems, where each instance needs to be uniquely identifiable.
•Ordered, Graceful Deployment and Scaling: Pods in a StatefulSet are created and deleted in a
specific order. When scaling up or down, Kubernetes ensures that operations are performed
sequentially. This ensures that the application maintains its expected order and dependencies.
•Persistent Storage: StatefulSets work closely with PersistentVolumeClaims (PVCs). Each pod in a StatefulSet can have its own persistent storage, which remains even if the pod is deleted. This is crucial for applications that need to retain data between restarts.
•Stable Network Identities: Pods in a StatefulSet get stable network identities. Each pod is
assigned a unique DNS endpoint, which remains consistent even if the pod is rescheduled to a
different node. This helps in maintaining communication within the application and with external
clients.
•Rolling Updates: StatefulSets support rolling updates, allowing you to update your application
without downtime. Kubernetes will update the pods one by one, ensuring that the application
remains available during the update process.
StatefulSet vs Deployment
• Pod interchangeability: In a StatefulSet, Pods are not interchangeable; each Pod is expected to have a specific role, such as always running as a primary or read-only replica for a database application. In a Deployment, all Pods are identical, so they are interchangeable and can be replaced at any time.
• Rollout ordering: In a StatefulSet, Pods are guaranteed to be created and removed in sequence; when you scale down the StatefulSet, Kubernetes terminates the most recently created Pod. A Deployment supports no ordering; when you scale down the Deployment, Kubernetes terminates a random Pod.
• Storage access: In a StatefulSet, each Pod is assigned its own Persistent Volume (PV) and Persistent Volume Claim (PVC). In a Deployment, all Pods share the same PV and PVC.
Deployment vs StatefulSet
1. Assume you deployed a MySQL database in the Kubernetes cluster and scaled it to three replicas, and a frontend application wants to access the MySQL cluster to read and write data. The read requests can be forwarded to all three Pods. However, the write requests will only be forwarded to the first (primary) Pod, and the data will be synced with the other Pods. You can achieve this by using StatefulSets.
2. Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful application. This gives you data safety. If you delete the MySQL Pod or if the MySQL Pod restarts, you still have access to the data in the same volume.
Summary of StatefulSets
In summary, StatefulSets provide the following advantages when compared to Deployment
objects:
•Ordered numbers for each Pod
•The first Pod can be a primary, which makes it a good choice when creating a replicated
database setup, which handles both reading and writing
•Other Pods act as replicas
•New Pods will only be created if the previous Pod is in running state and will clone the
previous Pod’s data
•Deletion of Pods occurs in reverse order.
StatefulSet
In this example:
• apiVersion: Specifies the Kubernetes API version, such as “apps/v1” for StatefulSets.
• kind: Specifies the type of Kubernetes resource, in this case “StatefulSet”.
• metadata: Provides metadata for the StatefulSet, including the name, labels, and annotations.
• spec: Defines the desired state of the StatefulSet, including the number of replicas, the pod template, and any other related specifications. It includes:
  • replicas: Specifies the desired number of identical pod replicas to run.
  • selector: Specifies the labels that the StatefulSet uses to select the pods it should manage.
  • template: Contains the pod template used for creating new pods, including container specifications, image names, and container ports.
StatefulSet example using YAML
#statefulset.yaml
apiVersion: apps/v1          # added to complete the truncated example
kind: StatefulSet
metadata:
  name: web                  # name not shown in the original slide; illustrative
spec:
  serviceName: "nginx"       # assumed headless Service name
  replicas: 3                # assumed replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myclaim
          mountPath: /mnt/data
  volumeClaimTemplates:
  - metadata:
      name: myclaim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi       # example size to complete the truncated request
Results of applying the YAML file
DaemonSet
Deployment: When you create a DaemonSet, Kubernetes ensures that the specified pod is
scheduled on all eligible nodes. If new nodes are added to the cluster, the DaemonSet
automatically schedules the pod on those nodes as well.
Management: You can manage DaemonSets using standard Kubernetes tools like kubectl.
This includes updating, scaling, and deleting DaemonSets.
Selective Deployment: You can restrict a DaemonSet to run on specific nodes by using node
selectors, node affinity, or taints and tolerations.
Use Case: DaemonSets are ideal for applications that need to run on all nodes or a subset of
nodes, like log collection, monitoring, and node management tasks.
DaemonSet
DaemonSet YAML file
In this example, a DaemonSet named example-daemonset runs one nginx container on every node:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: example-container
        image: nginx
DaemonSet
Functionality of DaemonSets:
1. Pod Scheduling:
   • All Nodes: By default, DaemonSets ensure that a copy of the specified pod runs on every node in the cluster.
   • Subset of Nodes: You can use node selectors, node affinity rules, and tolerations to control which nodes the DaemonSet pods are scheduled on.
2. Automatic Updates:
   • When a new node is added to the cluster, the DaemonSet controller automatically schedules the DaemonSet pod on the new node.
   • If a node is removed from the cluster, the DaemonSet controller ensures that the pod on the removed node is cleaned up.
3. Pod Management:
   • DaemonSets handle the creation, scheduling, and deletion of pods to match the desired state specified in the DaemonSet configuration.
   • Rolling updates are supported, allowing you to update the DaemonSet configuration and roll out changes gradually across the cluster.
DaemonSet
Use Cases:
1. Monitoring Agents:
• Deploy agents like Prometheus Node Exporter or Fluentd on all nodes to collect metrics and logs.
2. Network Services:
• Run network services such as CNI (Container Network Interface) plugins that need to be present on all
nodes for networking purposes.
3. Security Agents:
• Deploy security monitoring or compliance agents on all nodes to ensure cluster-wide security.
4. System Upgrades and Maintenance:
• Use DaemonSets to roll out system updates or perform maintenance tasks uniformly across all nodes.
Commands:
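A few standard kubectl operations on a DaemonSet, using the example-daemonset name from the manifest above (the file name is illustrative):
kubectl apply -f daemonset.yaml
kubectl get daemonsets
kubectl describe daemonset example-daemonset
kubectl rollout status daemonset/example-daemonset
kubectl delete daemonset example-daemonset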
Scheduling:
Scheduling refers to the process of assigning pods (containers) to nodes in a cluster. The scheduler is a component of the Kubernetes control plane responsible for making these decisions.
Node Selection:
The scheduler takes into account factors like available resources, node affinities/anti-affinities, taints/tolerations, and other user-defined constraints.
Node Affinity/Anti-Affinity: Allows you to constrain a pod to run on nodes with certain labels.
Taints/Tolerations: Taints are used to repel pods from nodes, while tolerations are used by pods to indicate their willingness to be scheduled onto nodes with certain taints.
Example
#nginx-pod-with-node-name.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  nodeName: your-node-name
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80

#nginx-pod-with-node-selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
Example:
#nginx-pod-with-node-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80

#nginx-pod-with-node-anti-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
Example:
Taint a node:
kubectl taint nodes <node-name> <key>=<value>:<effect>
Example:
kubectl taint nodes node-1 example-key=value:NoSchedule

#node-taint.yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    foo: bar
spec:
  taints:
  - key: example-key
    value: value
    effect: NoSchedule

#nginx-pod-with-toleration.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
  tolerations:
  - key: "example-key"      # matches the taint applied above
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
KUBECONFIG
apiVersion: v1               # standard kubeconfig header fields
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://cluster-api-server-url
    certificate-authority: /path/to/ca.crt
users:
- name: my-user
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: my-namespace
current-context: my-context
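To point kubectl at a kubeconfig file and switch contexts, the usual commands are (the file path is illustrative):
export KUBECONFIG=/path/to/kubeconfig
kubectl config get-contexts
kubectl config use-context my-context
kubectl config current-context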
Storage
Volume:
A volume is a directory that may be backed by storage. It allows data to persist across the lifetime of a pod.
StorageClass:
A StorageClass is an abstraction layer that defines the characteristics and provisioning mechanisms of the underlying storage for Persistent Volumes (PVs). It allows you to dynamically provision storage resources without having to manually create PVs.
Persistent Volume:
A Persistent Volume (PV) is a piece of storage in the cluster that has been manually provisioned or dynamically provisioned using a StorageClass. PVs are used to store data in a way that allows it to persist across pod restarts and rescheduling.
Persistent Volume (PV) and Persistent Volume Claim (PVC):
PV and PVC are abstractions for managing storage in a cluster. A PV is a piece of storage, while a PVC is a request for storage.
Example:
#example-SC.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain

#example-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/example

#example-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

#example-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx:latest
    volumeMounts:
    - name: storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: example-pvc
Security
Cluster Configuration
RBAC concepts in Kubernetes
• The core RBAC objects are Role, ClusterRole, RoleBinding, and ClusterRoleBinding, together with the users and ServiceAccounts they are bound to.
Example:
# Define a Role named "pod-reader" that allows "get", "list", and "watch" actions on pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

# Create a RoleBinding to bind the "pod-reader" Role to a specific User.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: "john" # Replace with the actual username
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Example:
# Create a Role named "pod-reader"
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=default
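The matching imperative command for the binding, plus a quick way to verify the permission (the username john follows the example above):
kubectl create rolebinding read-pods --role=pod-reader --user=john --namespace=default
kubectl auth can-i list pods --as=john --namespace=default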
Network Security
Network Policies
• Network Policies are resources that allow you to control the traffic flow between pods. They provide a way to specify how pods are allowed to communicate with each other and with other network endpoints.
• This helps to isolate pods from each other and to prevent unauthorized and unnecessary access.
• In Kubernetes, network plugins are important: they are responsible for enforcing the network policy rules.
• Some common network plugins that support network policies include:
  Calico
  Cilium
  Weave Net
Example
#create_ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace

#pod_web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod-1
  namespace: mynamespace
  labels:
    app: web               # the NetworkPolicy below selects this label
spec:
  containers:
  - name: web              # container name and image are assumed; the original slide truncates here
    image: nginx

#pod_other.yaml
apiVersion: v1
kind: Pod
metadata:
  name: other-pod-1
  namespace: mynamespace
  labels:
    app: other             # label assumed; not shown in the original slide
spec:
  containers:
  - name: other            # container name and image are assumed
    image: nginx
Example
#network_policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic
  namespace: mynamespace
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web

# Accessing web pod from other pod (allowed)
kubectl exec -it other-pod-1 --namespace=mynamespace -- sh
wget -qO- web-pod-1.mynamespace

# Accessing other pod from web pod (denied)
kubectl exec -it web-pod-1 --namespace=mynamespace -- sh
wget -qO- other-pod-1.mynamespace
Deployment
• A Deployment provides declarative updates for Pods and ReplicaSets.
• You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the
desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing
Deployments and adopt all their resources with new Deployments.
• The following are typical use cases for Deployments:
• Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of
the rollout to see if it succeeds or not.
• Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is
created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate.
Each new ReplicaSet updates the revision of the Deployment.
• Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback
updates the revision of the Deployment.
• Scale up the Deployment to facilitate more load.
• Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
• Use the status of the Deployment as an indicator that a rollout has stuck.
• Clean up older ReplicaSets that you don't need anymore
Deployment
• Deployment - on an imperative approach
• Create deployment:
• kubectl create deployment --image=nginx nginx
• Generate the Deployment YAML file (-o yaml) without creating it (--dry-run=client):
• kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
• Generate a Deployment with 4 replicas:
• kubectl create deployment nginx --image=nginx --replicas=4
• Scaling a deployment:
• kubectl scale deployment nginx --replicas=4
• Update a deployment:
• kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
• Roll back a deployment:
• kubectl rollout undo deployment/nginx-deployment
Deployment
• Deployment - on a declarative approach
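A minimal declarative Deployment manifest as a sketch (the name nginx-deployment matches the commands on the previous slide; the replica count and labels are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
Apply it with: kubectl apply -f nginx-deployment.yaml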
Service
• In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster. In other words, it is an abstract way to expose an application running on a set of Pods as a network service.
• A key aim of Services in Kubernetes is that you don't need to modify your existing application to use an unfamiliar service discovery mechanism. You can run code in Pods, whether it is code designed for a cloud-native world or an older app you've containerized. You use a Service to make that set of Pods available on the network so that clients can interact with it.
• If you use a Deployment to run your app, that Deployment can create and destroy Pods dynamically.
• Each Pod gets its own IP address (Kubernetes expects network plugins to ensure this). For a given Deployment in your cluster, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
• ClusterIP - ClusterIP is the default Kubernetes service type. This service is created inside a cluster and can only be accessed by other pods in that cluster. So basically, we use this type of service when we want to expose a service to other pods within the same cluster.
• NodePort - NodePort opens a specific port on your node/VM, and when that port gets traffic, that traffic is forwarded directly to the service. There are a few limitations, and hence it is not advised to use NodePort:
  - only one service per port
  - you can only use ports 30000-32767
• LoadBalancer - This is the standard way to expose a service to the internet. All the traffic on the port is forwarded to the service. It's designed to assign an external IP to act as a load balancer for the service. There's no filtering, no routing. LoadBalancer uses a cloud service. A few limitations with LoadBalancer:
  - every service exposed will get its own IP address
Service
• ClusterIP Service Definition (a sketch of nginx-svc-ci.yml follows below):
• kubectl create -f nginx-svc-ci.yml
• Run the command below and you can see the internal-service in the list with a static IP address.
• kubectl get svc
• To see all the Endpoints (IP addresses of the Pods which are associated with the service):
• kubectl describe svc internal-service
• Remove one of the pods and monitor the Endpoints.
• To access the application you need to use the IP address of the service with the port number.
• Delete the ClusterIP service:
• kubectl delete svc internal-service
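A sketch of what nginx-svc-ci.yml could look like (the service name internal-service matches the commands above; the selector app: nginx and the port numbers are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: internal-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80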
Service
• LoadBalancer Service Definition (a sketch of nginx-svc-lb.yml follows below):
• Create the LoadBalancer service:
• kubectl create -f nginx-svc-lb.yml
• kubectl get svc
• The external IP is provided by the load balancer (more suitable in a cloud-based environment); if there is no load balancer, the external IP stays in the Pending state.
• kubectl describe svc external-service
• Remove one of the pods and monitor the Endpoints.
• To access the application you need to use the IP address of the service with the port number.
• You can access the application in a browser by providing the IP address of any of your nodes with port number 31869, i.e. IP:31869.
• kubectl delete svc external-service
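A sketch of what nginx-svc-lb.yml could look like (the service name external-service and node port 31869 come from the slide above; the selector app: nginx and the other port numbers are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31869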
Init Containers
• Init containers: specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
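A minimal sketch of a Pod with an init container (the names, the busybox image, and the wait-for-service logic are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # block until the (hypothetical) database service resolves in DNS
    command: ['sh', '-c', 'until nslookup my-db-service; do echo waiting for db; sleep 2; done']
  containers:
  - name: myapp
    image: nginx
    ports:
    - containerPort: 80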
Monitoring, Logging, Debugging, Troubleshooting
• Kubernetes Pod logs
• kubectl logs <<Pod Name>>
• To find the Kubernetes cluster events
• kubectl get events
• Docker logs
• docker logs <<container name>>
• To find the events that occurred in the docker daemon
• docker events --since <<date>>
• Monitoring
• Use the Metrics Server to monitor how much memory and CPU a particular pod or worker node is using.
• Third-party tools such as Prometheus and Grafana are also available for monitoring.
• Debugging and Troubleshooting
• 1. If the Pod is not getting created successfully, run the kubectl describe pod <<podname>> command to check the events that occurred while creating the pod, and also check the properties of the pod in case any attribute has an unexpected value.
• 2. If you want to remove a pod under a ReplicationController, Deployment, or Service, run the kubectl get pods -o wide command, identify the pod that needs to be deleted, and delete it with the kubectl delete pod <<pod name>> command.
• 3. If a Pod is in the Running state but not working as expected, check the log entries of the Pod.
• 4. If a Pod is in the Pending state for a long time, check the worker node statuses by using the kubectl get nodes command and check whether all nodes are in the Ready state.
• 5. If a Pod is not getting created on a particular node, check whether that node is tainted.
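The commands mentioned above, collected in one place (names in angle brackets are placeholders; kubectl top requires the Metrics Server):
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl get events
kubectl get pods -o wide
kubectl delete pod <pod-name>
kubectl get nodes
kubectl top pod
kubectl top node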
QA