6 Observe Metrics Server Data
Resource usage of the nodes is useful, but it is not what we’re looking for. In this
chapter, we’re focused on auto-scaling Pods. Before we get there, though, we
should observe how much memory each of our Pods is using. We’ll start with
those running in the kube-system Namespace.
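One way to do that is with kubectl top (a sketch, assuming the Metrics Server is already deployed in the cluster):

```shell
# Show current CPU and memory usage of every Pod in the
# kube-system Namespace (requires a running Metrics Server)
kubectl top pods \
    --namespace kube-system
```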
We can see resource usage (CPU and memory) for each of the Pods currently
running in kube-system . If we do not find better tools, we could use that
information to adjust the requests of those Pods to be more accurate.
However, there are better ways to get that info, so we’ll skip adjustments for
now. Instead, let’s try to get current resource usage of all the Pods, no matter
the Namespace.
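Dropping the Namespace filter is just a matter of swapping in the --all-namespaces flag (again, assuming the Metrics Server is available):

```shell
# Show current CPU and memory usage of Pods across
# all Namespaces (requires a running Metrics Server)
kubectl top pods \
    --all-namespaces
```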
That output shows the same information as the previous one, only extended
to all Namespaces. There should be no need to comment on it.
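We can go one level deeper and ask for per-container metrics by adding the --containers flag:

```shell
# Break the metrics down by container instead of
# aggregating them per Pod (requires a running Metrics Server)
kubectl top pods \
    --all-namespaces \
    --containers
```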
We can see that this time, the output shows each container separately. We can,
for example, observe metrics of the kube-dns-* Pod separated into three
containers ( kubedns , dnsmasq , sidecar ).
The flow of the data to and from the Metrics Server (arrows show directions of data flow)
kubectl get \
--raw "/apis/metrics.k8s.io/v1beta1" \
| jq '.'
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "metrics.k8s.io/v1beta1",
"resources": [
{
"name": "nodes",
"singularName": "",
"namespaced": false,
"kind": "NodeMetrics",
"verbs": [
"get",
"list"
]
},
{
"name": "pods",
"singularName": "",
"namespaced": true,
"kind": "PodMetrics",
"verbs": [
"get",
"list"
]
}
]
}
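The output tells us that the metrics API exposes two resources: nodes and pods. As a quick sanity check, we could filter that list down to just the resource names with jq. The JSON is inlined here so the example works even without access to a cluster; with one, you could pipe the output of `kubectl get --raw "/apis/metrics.k8s.io/v1beta1"` into the same query instead.

```shell
# Filter the APIResourceList down to resource names only
echo '{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {"name": "nodes", "namespaced": false, "kind": "NodeMetrics"},
    {"name": "pods",  "namespaced": true,  "kind": "PodMetrics"}
  ]
}' | jq -r '.resources[].name'
```

The output should be the two resource names, nodes and pods, one per line.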
Let’s take a closer look at the pods resource of the metrics API.
kubectl get \
--raw "/apis/metrics.k8s.io/v1beta1/pods" \
| jq '.'
The output is too big to display here, so I’ll leave it up to you to explore it.
You’ll notice that the output is the JSON equivalent of what we observed
through the kubectl top pods --all-namespaces --containers command.
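If we want only a slice of that data, we can let jq do the filtering. The query below is a sketch: the field names follow the PodMetrics kind we listed earlier, and it prints the Pod name, the container name, and the container’s memory usage.

```shell
# Print "<pod> <container> <memory>" for every container
# reported by the Metrics Server (requires a running cluster)
kubectl get \
    --raw "/apis/metrics.k8s.io/v1beta1/pods" \
    | jq -r '.items[]
        | .metadata.name as $pod
        | .containers[]
        | "\($pod) \(.name) \(.usage.memory)"'
```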
Now that we’ve explored the Metrics Server, we’ll put it to good use in the next
lesson and learn how to auto-scale our Pods based on resource utilization.