All Q

The document outlines a series of Kubernetes tasks including creating ClusterRoles, ServiceAccounts, NetworkPolicies, and PersistentVolumes, along with commands for scaling deployments and monitoring pods. It also includes instructions for upgrading Kubernetes components, managing node states, and creating Ingress resources. Each task is accompanied by specific command-line instructions to achieve the desired configuration and functionality within a Kubernetes cluster.


Q1.

Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:

. Deployment
. StatefulSet
. DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.

Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token,
limited to the namespace app-team1.

Ans:
k create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
k create serviceaccount cicd-token -n app-team1
k create rolebinding deployment-rolebinding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token

-->Q2. Set the node named ek8s-node-0 as unavailable and reschedule all the pods
running on it.

Ans:
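A minimal sketch, assuming the usual drain flags (on older kubectl, --delete-local-data replaces --delete-emptydir-data):

k cordon ek8s-node-0
k drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data --force
k get nodes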

Q4.a: First, create a snapshot of the existing etcd instance running at


https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.

Creating a snapshot of the given instance is expected to complete in seconds. If the
operation seems to hang, something is likely wrong with your command; use ctrl+c to
cancel the operation and try again.

Q4.b: Next, restore an existing, previous snapshot located at /data/backup/etcd-snapshot-previous.db.

The following TLS certificates/key are supplied for connecting to the server with
etcdctl:

CA certificate: /opt/KUIN00601/ca.crt
Client certificate:/opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
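Ans: a minimal sketch using etcdctl v3 (the restore --data-dir value is an assumption; it must match the data directory the etcd static pod manifest points at):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /srv/data/etcd-snapshot.db

ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db \
  --data-dir=/var/lib/etcd-restore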

Q8. Set configuration context:


Student $ kubectl config use-context k8s
Scale the deployment webserver to 6 pods

Ans:
k scale deployment webserver --replicas=6

Q10. Check to see how many nodes are ready (not including nodes tainted NoSchedule)
and write the number to
/opt/KUSC00402/kusc00402.txt.

Ans:
k get nodes
k describe nodes node01 | grep -i taints    (repeat for each Ready node)
echo XX > /opt/KUSC00402/kusc00402.txt      (XX = number of Ready nodes without a NoSchedule taint)
cat /opt/KUSC00402/kusc00402.txt
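A scripted alternative (a sketch; it assumes the default kubectl output format and that any NoSchedule-tainted node also appears in the Ready count):

READY=$(k get nodes --no-headers | grep -cw Ready)
NOSCHED=$(k describe nodes | grep -i 'Taints:' | grep -c NoSchedule)
echo $((READY - NOSCHED)) > /opt/KUSC00402/kusc00402.txt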

-->Q14. Monitor the logs of pod bar and:


. Extract log lines corresponding to error unable-to-access-website
. Write them to /opt/KUTR00101/bar

Ans:
k logs bar | grep -i 'unable-to-access-website' > /opt/KUTR00101/bar
cat /opt/KUTR00101/bar

-->Q16. From the pods with label name=cpu-loader, find the pods running high CPU workloads and
write the name of the pod consuming the most CPU to the file
/opt/KUTR00401/KUTR00401.txt (which already exists).

Hint: write the pod name, not the number.
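Ans: a minimal sketch (assumes the metrics server is running so kubectl top works; -A is used because the namespace is not given in the task):

k top pods -l name=cpu-loader -A --sort-by=cpu
echo <pod-name> > /opt/KUTR00401/KUTR00401.txt
cat /opt/KUTR00401/KUTR00401.txt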

Q17. Student $ kubectl config use-context wk8s

Task: A Kubernetes worker node named wk8s-node-0 is in state NotReady.


Investigate why this is the case, and perform any appropriate steps to bring the
node to a Ready state, ensuring that any changes are made permanent.

You can ssh to the failed node using:


Student $ ssh wk8s-node-0

Ans: Student $ kubectl config use-context wk8s


ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl enable kubelet
systemctl restart kubelet
exit
exit
k get nodes

-->Q3. Given an existing Kubernetes cluster running version 1.18.8, upgrade all of
the Kubernetes control plane and node components on the master node only to version
1.19.0.
You are also expected to upgrade kubelet and kubectl on the master node.

Note: Be sure to drain the master node before upgrading it and uncordon it after
the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI
plugin, the DNS service or any other addons.

hint: need to search kubeadm upgrade
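Ans: a minimal sketch (the master node name and the apt package versions are assumptions for a Debian-based control plane; --etcd-upgrade=false is used because the task says not to upgrade etcd):

k drain <master-node> --ignore-daemonsets
ssh <master-node>
sudo apt-get update && sudo apt-get install -y kubeadm=1.19.0-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
sudo apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet
exit
k uncordon <master-node>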

Q5. Create a new NetworkPolicy named allow-port-from-namespace in the existing


namespace fubar.
Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to
port 9200/tcp of Pods in namespace fubar.
Further ensure that the new NetworkPolicy:
. does not allow access to Pods, which don't listen on port 9200/tcp
. does not allow access from Pods, which are not in namespace internal.
hint: kubectl label ns internal project=internal (need to run this first and remember the label)

Ans:
vi np.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: fubar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: internal
    ports:
    - protocol: TCP
      port: 9200

:wq!
k label ns internal project=internal
k create -f np.yaml

Q6. Reconfigure the existing deployment front-end and add a port specification
named http exposing port 80/tcp of
the existing container nginx.

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the
nodes on which they are scheduled.

hint: for the expose command, use kubectl expose deploy

Ans:
k get deployment
k edit deployment front-end

(add under the existing nginx container)
ports:
- name: http
  containerPort: 80
  protocol: TCP

:wq!

k expose deployment front-end --port=80 --name=front-end-svc --type=NodePort


Q7. Create a new nginx ingress resource as follows:
Name:ping
Namespace:ing-internal
Exposing service hello on path /hello using service port 5678

hint: delete the line "ingressClassName: nginx-example" from the docs example

The availability of service hello can be checked using the following command, which
should return hello.
student $ curl -kL <INTERNAL_IP>/hello
Ans:

vim ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
:wq!
k create -f ingress.yaml -n ing-internal

Q9. Set configuration context:


Student $ kubectl config use-context k8s

Task:
Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=spinning

Ans:

vi nodeselector.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
:wq!
k create -f nodeselector.yaml
k get pods

Q11. Create a pod named kusc4 with a single app container for each of the following
images running inside (there may be between 1 and 4 images specified):
nginx + redis.

Ans:
vi pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kusc4
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: redis
    image: redis
:wq!
k create -f pod.yaml

Q12. Create a persistent volume with name app-config, of capacity 1Gi and access
mode ReadWriteMany. The type of volume is hostPath and its
location is /srv/app-config.

hint: in the kubernetes.io docs, search for volumes; refer to the syntax shown after the nodeAffinity example

Ans:
vi pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  hostPath:
    path: /srv/app-config
:wq!
k create -f pv.yaml

Q13. Create a new PersistentVolumeClaim


. Name: pv-volume
. Class: csi-hostpath-sc
. Capacity: 10Mi
Create a new Pod which mounts the PersistentVolumeClaim as a volume:
. Name : web-server
. Image : nginx
. Mount path : /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a
capacity of 70Mi and record that change.

hint: in the kubernetes.io docs, search for persistent volume; the required syntax is shown there,
or choose the page Configure a Pod to Use a PersistentVolume for Storage

vi pv.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
:wq!
k create -f pv.yaml
vi pod01.yaml

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: pv-volume

:wq!
k create -f pod01.yaml
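For the final step (expanding the claim to 70Mi and recording the change), a minimal sketch, assuming the csi-hostpath-sc StorageClass allows volume expansion:

k edit pvc pv-volume --record
(change spec.resources.requests.storage from 10Mi to 70Mi, then save)
k get pvc pv-volume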
