5 Install Metrics Server
In this lesson, we will see what the Metrics Server is and how to install it.
If you have already started observing Kubernetes metrics, you might have used Heapster.
It's been around for a long time, and you likely have it running in your cluster,
even if you don't know what it is. Both the Metrics Server and Heapster serve
the same purpose, but one of them has been deprecated for a while, so let's clarify things
a bit.
The Kubernetes community eventually realized that a new, better, and, more importantly,
more extensible design was required. Hence, the Metrics Server was born. Right now,
Heapster is still in use, but it is considered deprecated, even though the Metrics Server
is itself still in beta.
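For reference, a Helm 2-style installation into a dedicated metrics Namespace might look like the sketch below. The stable/metrics-server chart name and the flags are assumptions that may differ depending on your Helm version and the chart repositories you have configured.

# Install the Metrics Server chart into the metrics Namespace
helm install stable/metrics-server \
    --name metrics-server \
    --namespace metrics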
kubectl -n metrics \
rollout status \
deployment metrics-server
We used Helm to install the Metrics Server, and we waited until it rolled out.
The Scheduler uses that data to make decisions. As you will see soon, the usage of the Metrics Server
goes beyond the Scheduler but, for now, this explanation should give you a
picture of the basic flow of data.
The basic flow of the data to and from the Metrics Server (arrows show directions of data flow)
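If you'd like to see the data the Metrics Server exposes through the Kubernetes API for yourself, you can query the aggregated metrics API directly. The sketch below assumes the v1beta1 group version, which may differ in your cluster.

# Retrieve the raw node metrics exposed through the aggregated API
kubectl get --raw \
    "/apis/metrics.k8s.io/v1beta1/nodes"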
Q: The Metrics Server will periodically fetch metrics from the Kubelets running on ______.
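Once the Deployment has rolled out, we can ask for node metrics. A minimal sketch using the standard kubectl top subcommand follows.

# Show CPU and memory usage of all the nodes in the cluster
kubectl top nodes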
If you were fast, the output should state that metrics are not available yet.
That's normal. It takes a few minutes before the first iteration of metrics
retrieval is executed. The exceptions are GKE and AKS, which already come with
the Metrics Server baked in.
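Regardless of the flavor you're running, one way to confirm that the metrics API has been registered is to list the API services. The sketch below assumes the metrics.k8s.io group; the exact API service name may vary between versions.

# Confirm that the metrics.k8s.io API service is registered
kubectl get apiservices | grep metrics.k8s.io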
In this chapter, I’ll show the outputs from Docker For Desktop.
Depending on the Kubernetes flavor you’re using, your outputs will be
different. Still, the logic is the same and you should not have a problem
following along.
In my case, the output shows one node called docker-for-desktop. It is using 248 CPU
millicores. Since the node has two cores, that's roughly 12% of the total available
CPU. Similarly, 1.2GB of RAM is used, which is around 63% of the total available
memory of 2GB.
In the next lesson, we will observe how much memory each of our Pods is
using.