All Questions
Q1. Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
. Deployment
. StatefulSet
. DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1 and bind the new ClusterRole to it.
Ans:
k create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
k create serviceaccount cicd-token -n app-team1
k create clusterrolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
-->Q2. Set the node named ek8s-node-0 as unavailable and reschedule all the pods
running on it.
Ans:
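A typical way to do this (sketch; the exact drain flags depend on the workloads running on the node):
k cordon ek8s-node-0
k drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data --force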
Q. (etcd snapshot task) The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
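A snapshot-save sketch using the supplied certificates (the endpoint and output path below are placeholders, not values from the task):
ETCDCTL_API=3 etcdctl --endpoints=<ETCD_ENDPOINT> \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save <SNAPSHOT_PATH>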
Q. Scale the deployment webserver to 6 pods.
Ans:
k scale deployment webserver --replicas=6
Q10. Check to see how many nodes are ready (not including nodes tainted NoSchedule)
and write the number to
/opt/KUSC00402/kusc00402.txt.
Ans:
k get nodes                                  # note which nodes are Ready
k describe nodes node01 | grep -i taints     # repeat for each Ready node to check for a NoSchedule taint
echo <count> > /opt/KUSC00402/kusc00402.txt
cat /opt/KUSC00402/kusc00402.txt
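A scripted version of the same count (sketch; it assumes a node's NoSchedule taint, if present, appears on the Taints: line of the describe output):
k get nodes --no-headers | awk '$2=="Ready" {print $1}' | while read n; do
  k describe node "$n" | grep -i '^Taints:' | grep -q NoSchedule || echo "$n"
done | wc -l > /opt/KUSC00402/kusc00402.txt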
Q. Monitor the logs of the pod bar and extract the log lines corresponding to the error unable-to-access-website. Write them to /opt/KUTR00101/bar.
Ans:
k logs bar | grep -i 'unable-to-access-website' > /opt/KUTR00101/bar
cat /opt/KUTR00101/bar
-->Q16. From the pods with label name=cpu-loader, find the pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
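A sketch of the usual commands (the namespace is not given here, so --all-namespaces is assumed):
k top pods -l name=cpu-loader -A --sort-by=cpu
echo <pod-name> > /opt/KUTR00401/KUTR00401.txt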
Q. (cluster upgrade task) Note: Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.
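A typical control-plane-only upgrade sequence (sketch; <TARGET_VERSION> and <master-node> are placeholders, not values from the task):
k drain <master-node> --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=<TARGET_VERSION>
kubeadm upgrade plan
kubeadm upgrade apply v<TARGET_VERSION> --etcd-upgrade=false
apt-get install -y kubelet=<TARGET_VERSION> kubectl=<TARGET_VERSION>
systemctl restart kubelet
k uncordon <master-node>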
Q. Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the namespace internal to connect to port 9200/tcp of Pods in the namespace default.
Ans:
vi np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: internal
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 9200
:wq!
k label ns internal project=internal
k create -f np.yaml
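A quick way to verify the result (sketch):
k describe networkpolicy allow-port-from-namespace -n default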
Q6. Reconfigure the existing deployment front-end and add a port specification
named http exposing port 80/tcp of
the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the
nodes on which they are scheduled.
Ans:
k get deployment
k edit deployment front-end
(add under the nginx container)
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
:wq!
k expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
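To confirm the NodePort that was assigned (sketch):
k get svc front-end-svc -o wide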
Q. Create a new nginx Ingress resource named ping in the namespace ing-internal, exposing the service hello on the path /hello using service port 5678.
The availability of service hello can be checked using the following command, which should return hello.
student $ curl -kL <INTERNAL_IP>/hello
Ans:
vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
:wq!
k create -f ingress.yaml -n ing-internal
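To verify (sketch; the node's internal IP comes from k get nodes -o wide):
k get ingress -n ing-internal
curl -kL <INTERNAL_IP>/hello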
Task:
Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=spinning
Ans:
vi nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
:wq!
k create -f nodeselector.yaml
k get pods
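To confirm placement on a labelled node (sketch):
k get pod nginx-kusc00401 -o wide
k get nodes -l disk=spinning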
Q11. Create a pod named kusc4 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis.
Ans:
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kusc4
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: redis
    image: redis
:wq!
k create -f pod.yaml
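Both containers should come up (sketch; the pod should report 2/2 Running):
k get pod kusc4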
Q12. Create a persistent volume with name app-config, of capacity 1Gi and access
mode ReadWriteMany. The type of volume is hostPath and its
location is /srv/app-config.
Hint: on kubernetes.io, search for "volumes"; the PersistentVolume syntax to copy is shown near the nodeAffinity section of that page.
Ans:
vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  hostPath:
    path: /srv/app-config
:wq!
k create -f pv.yaml
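The volume should show up as Available (sketch):
k get pv app-config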
Q. Create a new PersistentVolumeClaim named pv-volume (storage class csi-hostpath-sc, access mode ReadWriteOnce, 10Mi request) and mount it in a pod named web-server at /usr/share/nginx/html.
Hint: on kubernetes.io, search for "persistent volume"; the PersistentVolumeClaim syntax is shown a little further down the page.
vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
:wq!
k create -f pvc.yaml
vi pod01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: pv-volume
:wq!
k create -f pod01.yaml
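The claim should bind and the pod should start (sketch):
k get pvc pv-volume
k get pod web-server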