Docker and Kubernetes
Docker
Docker is a popular open-source platform for developing, deploying, and running
applications. It does this with "containers": standardised units that package your
application's code together with all the dependencies it needs to run. This
eliminates compatibility issues and makes applications more portable.
Here's a breakdown of what Docker offers:
● Standardised units: Applications are packaged into containers, ensuring they
run consistently across different environments.
● Isolation: Containers are isolated from each other and the underlying system,
preventing conflicts.
● Portability: Containers can be easily moved between machines without
worrying about dependency issues.
● Faster deployments: Since containers are lightweight, they start up quicker
than traditional virtual machines.
In simpler terms, imagine Docker as a shipping container for your application. It
bundles everything your application needs to run (code, libraries, settings) into a
single unit. This container can then be shipped and deployed to any system that has
Docker installed, ensuring it runs exactly the way you intended it to.
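To make the shipping-container analogy concrete, here is a minimal sketch of a Dockerfile for a Spring Boot style application. The base image and jar name are hypothetical, not taken from the course:

```dockerfile
# Base image providing a Java runtime (hypothetical choice)
FROM eclipse-temurin:17-jre
# Copy the built application jar into the image (hypothetical jar name)
COPY target/todo-rest-api.jar /app.jar
# Document the port the application listens on
EXPOSE 5000
# Command run when a container starts from this image
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Building this file with docker build produces an image; running that image produces a container.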
Step 1: Checking Docker Version
● docker --version
This command simply displays the installed version of Docker on your system.
Step 2: Running a Docker Image
● docker run in28min/todo-rest-api-h2:1.0.0.RELEASE
This command instructs Docker to:
● docker run: Run a container from a Docker image.
● in28min/todo-rest-api-h2:1.0.0.RELEASE: This specifies the image
to use. It includes:
○ in28min/todo-rest-api-h2: The name of the image repository
(like a folder on a bookshelf) on Docker Hub (a public registry for
Docker images).
○ 1.0.0.RELEASE: The specific version (or "tag") of the image within
the repository.
Step 3: Important Docker Concepts
This section covers key concepts in Docker:
● Registry: A central location (like Docker Hub) that stores Docker images.
● Repository: A collection of related images within a registry (like a folder
within a library).
● Tag: A way to identify a specific version of an image within a repository (like
chapters in a book).
● Image: A blueprint that contains the instructions for creating a Docker
container.
● Container: An isolated instance of a running image.
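These concepts fit together in the image reference used in step 2. A small shell sketch (no Docker needed) splits in28min/todo-rest-api-h2:1.0.0.RELEASE into its repository and tag parts:

```shell
# Full image reference: repository plus tag
ref="in28min/todo-rest-api-h2:1.0.0.RELEASE"

# Everything before the last ':' is the repository name
repo="${ref%:*}"
# Everything after the last ':' is the tag (version)
tag="${ref##*:}"

echo "repository: $repo"   # repository: in28min/todo-rest-api-h2
echo "tag: $tag"           # tag: 1.0.0.RELEASE
```

If the tag is omitted, Docker assumes the tag latest.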
Step 4: Playing with Images and Containers
● docker run -p 5000:5000 in28min/todo-rest-api-h2:1.0.0.RELEASE (This is similar to step 2, but adds the -p port mapping explained below)
● docker logs 04e52ff9270f...: This command retrieves and displays
the logs generated by the container with the specific ID (which is a long
alphanumeric string).
● docker logs c2ba (This is similar to the previous command, but uses a
shorter container ID)
● docker logs -f c2ba: This command shows the logs from container
c2ba in follow mode, meaning it will continuously display new logs as they
are generated.
● docker container ls: This command lists all the currently running
containers.
● docker run -d -p 5000:5000 in28min/todo-rest-api-h2:1.0.0.RELEASE: This is similar to the previous docker run commands, but with two additional options:
○ -p 5000:5000: This flag maps port 5000 on your host machine to port 5000 inside the container. This allows you to access the application running in the container from your host machine's browser.
○ -d: This flag runs the container in detached mode, meaning the container runs in the background and the command prompt returns to you.
Step 5: Understanding Docker Architecture
This section refers to the two main components of Docker:
● Docker Client: The command-line interface (or a graphical user interface) that
you use to interact with Docker.
● Docker Engine: The service that builds, runs, and manages Docker containers.
Step 6: Why Docker is Popular
This section highlights the advantages of Docker containers:
● Easy to run applications
● Cloud neutral (can run on different cloud platforms)
Step 7: Playing with Docker Images
● docker images: This command lists all the Docker images that are
currently available on your local system.
● docker pull mysql (assuming there's no mysql image locally): This
command pulls (downloads) the latest version of the official MySQL image
from Docker Hub.
● docker search mysql: This command searches for Docker images related
to "mysql" on Docker Hub.
● docker image history in28min/hello-world-java:0.0.1.RELEASE: This command displays the history of the specified image, showing the layers involved in building it.
● docker image history 100229ba687e (assuming you know the ID):
This is similar to the previous command, but uses the image ID instead of the
name and tag.
● docker image inspect 100229ba687e: This command provides
detailed information about a specific Docker image, including its layers,
configuration, and environment variables.
● docker image remove mysql: This command removes the locally stored MySQL image, freeing disk space (the image will need to be downloaded again if you use it later).
Step 8: Playing with Docker Containers
● docker run -d -p 5000:5000 in28min/todo-rest-api-h2:0.0.1-SNAPSHOT: This builds on the previous docker run commands; a useful additional option here is:
○ --restart=always (optional): This instructs Docker to automatically restart the container if it stops or crashes (and when the Docker daemon restarts), helping keep the application available.
● docker container rm 3e657ae9bd16: This command removes a stopped container with the specified ID (add -f to force-remove a running one). Use with caution, as data inside the container will be lost.
● docker container ls -a: This command lists all containers, including
both running and stopped ones.
● docker container pause 832: This pauses a running container whose ID starts with 832. Its processes are frozen (using no CPU), but its memory stays allocated.
● docker container unpause 832: This resumes a paused container (with
ID 832) allowing it to continue execution.
● docker container stop 832: This stops the container gracefully using the SIGTERM signal (followed by SIGKILL if it doesn't exit within the grace period, 10 seconds by default).
● docker container kill 832: This forcefully terminates a running
container (with ID 832) using SIGKILL signal. Use with caution as it can lead
to data loss.
● docker container inspect ff521fa58db3: This command examines
a container (with ID ff521fa58db3) and displays detailed information about
its configuration, resources, networking, and more.
● docker container prune: This command removes all stopped containers (unused networks are cleaned up separately, with docker network prune or docker system prune).
Step 9: Playing with Docker Commands - stats, system
● docker events: This command displays a list of events related to Docker,
including container launches, stops, and other actions.
● docker top 9009722eac4d: This command shows the top processes
running within the container with ID 9009722eac4d.
● docker stats: This command displays statistics for all running containers
in a real-time fashion.
● docker stats 9009722eac4d: This is similar to the previous command,
but focuses on showing stats for a specific container (ID: 9009722eac4d).
● docker system: On its own, this command just lists the available system subcommands (df, events, info, prune); the variants below do the actual work.
● docker system df: This command shows disk usage information for
Docker, indicating how much space is used by images, containers, and
networks.
● docker system info: This command provides detailed system
information about the Docker Engine, including its operating system,
resources, and security settings.
● docker system prune -a: This command removes all stopped containers, all unused networks, all unused images, and the build cache. Volumes are only removed if you also pass --volumes. Use with care, as the cleanup can be extensive.
● docker container run -p 5000:5000 -d -m 512m in28min/todo-rest-api-h2:0.0.1-SNAPSHOT: This command combines various options:
○ -m 512m: This sets the memory limit for the container to 512 megabytes.
● docker container run -p 5000:5000 -d -m 512m --cpu-quota=50000 in28min/todo-rest-api-h2:0.0.1-SNAPSHOT: This is similar to the previous command, but additionally includes:
○ --cpu-quota=50000: This limits how much CPU time the container may use within each scheduling period (100,000 microseconds by default), so a quota of 50,000 caps the container at 50% of one CPU.
● docker system events: This command streams real-time events from the Docker daemon (equivalent to docker events).
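The --cpu-quota value is easiest to read as a fraction of the default 100,000 microsecond scheduling period. A quick shell calculation (no Docker required) shows what the quota of 50,000 used above works out to:

```shell
quota=50000    # value passed to --cpu-quota (microseconds of CPU time per period)
period=100000  # default CFS scheduling period, in microseconds

# Percentage of one CPU the container may use in each period
pct=$(( quota * 100 / period ))
echo "${pct}% of one CPU"   # 50% of one CPU
```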
Distributed Tracing
We’ll now add distributed tracing to our microservice application.
Distributed tracing is a technique used in microservices architectures to track the
flow of a request across multiple services. It allows you to monitor how long each
service call takes, identify bottlenecks, and troubleshoot issues more effectively.
Zipkin
Zipkin is an open-source distributed tracing system widely used in Spring Boot
applications. It provides tools for:
● Generating trace IDs and span IDs: These unique identifiers link related
requests throughout their journey.
● Collecting trace data: Spring Boot applications send trace data (spans) to the
Zipkin server.
● Storing and analysing trace data: Zipkin servers store and visualise trace
data, allowing you to examine request flows and pinpoint performance
issues.
Launching Zipkin Container using Docker
docker run -p 9411:9411 openzipkin/zipkin
Zipkin is now running at:
http://localhost:9411
(Screenshot of the Zipkin UI omitted.)
To get the complete trace, you need to add the following dependency to your proxy:
<!-- Enables tracing of REST API calls made using Feign-->
<dependency>
<groupId>io.github.openfeign</groupId>
<artifactId>feign-micrometer</artifactId>
</dependency>
The feign-micrometer dependency provides automatic, Micrometer-based tracing of REST API calls made through Feign, improving observability into your microservices communication. Once it is added to your proxy, you can trace the complete path and logs of a request across your microservices.
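For Feign tracing to produce spans at all, the application also needs a general Micrometer Tracing setup. The dependencies below are the ones commonly paired with feign-micrometer in a Spring Boot 3 application reporting to Zipkin; treat this as a sketch of the usual combination rather than the course's exact pom:

```xml
<!-- Bridges Micrometer's observation API to Brave, the tracer Zipkin uses -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<!-- Reports the collected spans to the Zipkin server -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
```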
You’ll also get such data for your service (here currency-exchange):
[
  {
    "traceId": "9aa54ae3f46c13d512c75dd610772324",
    "id": "97f9f6e7b1f5f2d5",
    "kind": "CLIENT",
    "name": "http get",
    "timestamp": 1718082772489029,
    "duration": 400183,
    "localEndpoint": {
      "serviceName": "currency-exchange",
      "ipv4": "192.168.1.10"
    },
    "tags": {
      "client.name": "localhost",
      "exception": "none",
      "http.url": "http://localhost:8761/eureka/apps/",
      "method": "GET",
      "otel.library.name": "org.springframework.boot",
      "otel.library.version": "3.3.0",
      "otel.scope.name": "org.springframework.boot",
      "otel.scope.version": "3.3.0",
      "outcome": "SUCCESS",
      "status": "200",
      "uri": "/eureka/apps/"
    }
  }
]
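Note that Zipkin reports the timestamp and duration fields in microseconds. A quick shell conversion makes the duration above readable:

```shell
duration_us=400183   # "duration" field from the span above, in microseconds

# Convert to milliseconds (integer division)
echo "$(( duration_us / 1000 )) ms"   # 400 ms
```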
To run the full stack together, a docker-compose.yaml ties the services, the naming server, and the Zipkin server into one network:

services:
  currency-exchange:
    image: repo-name/nrvmsv1-currency-exchange-service:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8000:8000"
    networks:
      - currency-network
    depends_on:
      - naming-server
    environment:
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://naming-server:8761/eureka
      MANAGEMENT.ZIPKIN.TRACING.ENDPOINT: http://zipkin-server:9411/api/v2/spans
  currency-conversion:
    image: repo-name/nrvmsv1-currency-conversion-service:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8100:8100"
    networks:
      - currency-network
    depends_on:
      - naming-server
    environment:
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://naming-server:8761/eureka
      MANAGEMENT.ZIPKIN.TRACING.ENDPOINT: http://zipkin-server:9411/api/v2/spans
  api-gateway:
    image: repo-name/nrvmsv1-api-gateway:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8765:8765"
    networks:
      - currency-network
    depends_on:
      - naming-server
    environment:
      EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://naming-server:8761/eureka
      MANAGEMENT.ZIPKIN.TRACING.ENDPOINT: http://zipkin-server:9411/api/v2/spans
  naming-server:
    image: repo-name/nrvmsv1-naming-server:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8761:8761"
    networks:
      - currency-network
  zipkin-server:
    image: openzipkin/zipkin:2.23
    mem_limit: 300m
    ports:
      - "9411:9411"
    networks:
      - currency-network
    restart: always # Restart if there is a problem starting up
networks:
  currency-network:

Start the whole stack with docker compose up -d (or docker-compose up -d on older installations).
Kubernetes with Spring
Kubernetes is a portable, extensible, open-source platform for managing
containerized workloads and services that facilitates both declarative configuration
and automation. It has a large, rapidly growing ecosystem; Kubernetes services,
support, and tools are widely available.
Kubernetes and microservices go hand-in-hand for building and deploying modern,
complex applications. Here's a breakdown of how Kubernetes benefits
microservices architectures:
Microservices and their challenges:
● Microservices applications are composed of independent, smaller services
that work together. This modularity offers advantages like faster
development cycles, easier scaling, and improved fault tolerance.
● However, managing numerous microservices across different servers can
become complex. You need to ensure they can:
○ Discover each other (talk to each other)
○ Scale up or down based on demand
○ Self-heal in case of failures
How Kubernetes helps:
● Kubernetes is a container orchestration platform designed to automate the
deployment, scaling, and management of containerized applications.
Microservices are perfectly suited to be containerized due to their lightweight
and independent nature.
● Kubernetes offers features that streamline microservices management:
○ Service Discovery: Kubernetes provides built-in service discovery
mechanisms. Services can register themselves with Kubernetes,
making them discoverable by other services within the application.
○ Load Balancing: Kubernetes automatically distributes incoming traffic
across multiple instances of a microservice, ensuring high availability
and efficient resource utilization.
○ Self-healing: Kubernetes can automatically restart failed containers
and replace unhealthy instances with healthy ones. This increases the
resilience of your microservices application.
○ Scaling: Kubernetes allows you to easily scale your microservices up
or down based on traffic or resource requirements. You can define
scaling policies that automatically adjust the number of container
replicas running for each service.
○ Resource Management: Kubernetes allocates resources (CPU,
memory) to containers, ensuring efficient utilization and preventing
resource conflicts.
○ Declarative Configuration: You define the desired state of your
microservices deployment using YAML files. Kubernetes takes care of
achieving and maintaining that state.
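As a sketch of what such declarative configuration looks like, here is a minimal Deployment for the hello-world image used later in these notes; the replica count and labels are illustrative, not taken from the course:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-rest-api
spec:
  replicas: 3                      # desired number of pods; Kubernetes maintains this count
  selector:
    matchLabels:
      app: hello-world-rest-api
  template:
    metadata:
      labels:
        app: hello-world-rest-api
    spec:
      containers:
        - name: hello-world-rest-api
          image: in28min/hello-world-rest-api:0.0.1.RELEASE
          ports:
            - containerPort: 8080  # port the application listens on
```

Applying this file with kubectl apply -f tells Kubernetes the desired state; the cluster then converges to it.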
Benefits of using Kubernetes with microservices:
● Simplified deployment and management: Kubernetes automates many
manual tasks involved in managing microservices, freeing up development
teams to focus on building features.
● Scalability and elasticity: Easily scale your application up or down as needed,
ensuring optimal resource utilization and performance.
● High availability and fault tolerance: Kubernetes helps ensure your
microservices application remains available even if individual containers fail.
● Portability: Deploy your containerized microservices across different
environments (on-premises, cloud) without worrying about infrastructure
specifics.
In essence, Kubernetes provides a robust platform for building, deploying, and
managing complex microservices architectures. It automates many tasks, simplifies
management, and ensures your application remains highly available, scalable, and
fault-tolerant.
Commands:
These steps guide you through deploying a Spring Boot application and exploring
core Kubernetes concepts. Here's a breakdown:
Step 1 - Deploying a Sample Application:
1. Run the application locally:
○ docker run -p 8080:8080 in28min/hello-world-rest-api:0.0.1.RELEASE: This starts a container running the image in28min/hello-world-rest-api:0.0.1.RELEASE and maps container port 8080 to port 8080 on your local machine. You can access the application at http://localhost:8080/ (assuming no conflicts).
2. Deploy to Kubernetes:
○ kubectl create deployment hello-world-rest-api --image=in28min/hello-world-rest-api:0.0.1.RELEASE: This creates a Kubernetes deployment named hello-world-rest-api using the specified image. A deployment manages replica sets to ensure your application runs consistently.
○ kubectl expose deployment hello-world-rest-api --type=LoadBalancer --port=8080: This exposes the deployment as a Kubernetes service. --type=LoadBalancer provisions a cloud load balancer (specific to your platform) to route external traffic to your application pods.
Step 2 - Exploring Key Concepts:
● kubectl get pods: Lists all running pods in the cluster.
● kubectl get replicaset: Lists replica sets (which manage the number of pod
replicas for a deployment).
● kubectl get deployment: Lists deployments in the cluster.
● kubectl get service: Lists services (abstractions for accessing pods).
● kubectl scale deployment hello-world-rest-api --replicas=3: Scales the
deployment to have 3 replicas (pods running the application).
Step 3 - Understanding Pods:
● kubectl get pods -o wide: Shows detailed information about pods, including
IP addresses, status, and container information.
● kubectl explain pods: Explains the pod resource in Kubernetes.
● kubectl describe pod hello-world-rest-api-58ff5dd898-9trh2: Provides
detailed information about a specific pod (identified by its name).
Step 4 - Understanding Replica Sets:
● The commands used here list replica sets (kubectl get replicasets, kubectl
get replicaset, kubectl get rs) and demonstrate deleting a pod (which will be
recreated by the replica set to maintain the desired number of replicas).
● kubectl scale deployment hello-world-rest-api --replicas=3: Scales the
deployment again, demonstrating how the replica set ensures the number of
pods remains at 3.
● kubectl get events: Shows events related to your Kubernetes cluster,
including pod creation and deletion.
● kubectl explain replicaset: Explains the replica set resource in Kubernetes.
Steps 5 & 6 - Understanding Deployments and Review:
These steps cover deployments in more detail and revisit key concepts from
previous steps.
Step 7 - Understanding Services:
This section explains services in Kubernetes in more depth, and their role in exposing applications to external traffic.
Step 8 - Quick Review of GKE:
This step gives a brief overview of Google Kubernetes Engine (GKE), a managed Kubernetes service on Google Cloud Platform.
Step 9 - Understanding Kubernetes Architecture:
This step explains the architecture of a Kubernetes cluster, including the master node (responsible for managing the cluster) and the worker nodes (where pods run). It highlights that pods have different IP addresses within the cluster.
Steps 10 & 11 - Setting Up and Exploring Further Microservices:
These steps cover setting up credentials for interacting with a Kubernetes cluster and introduce the additional microservices of the currency exchange application.
Step 12 - Building and Pushing Docker Images
These commands demonstrate building Docker images for the currency exchange
and conversion microservices and pushing them to a Docker registry.
Step 13 - Deploying Microservices and Service Discovery:
1. Deployment:
○ kubectl create deployment currency-exchange ...: Creates a
deployment named currency-exchange using the built image.
○ Similar commands are used to deploy the currency-conversion service.
2. Expose and View Services:
○ kubectl expose deployment ... --type=LoadBalancer --port=...:
Exposes each deployment as a service with a load balancer.
○ Various kubectl get commands are used to view services, pods, replica
sets, and overall cluster state.
○ kubectl get svc --watch: Monitors changes to services in real-time.
Step 14 - Declarative Configuration with YAML (continued):
● kubectl get deployments: Lists deployments in the cluster.
● kubectl get deployment currency-exchange -o yaml >> deployment.yaml: Retrieves the YAML configuration for the currency-exchange deployment and saves it to a file named deployment.yaml (note that >> appends; use > to overwrite an existing file).
● kubectl get service currency-exchange -o yaml >> service.yaml: Similar to the previous command, but retrieves the YAML configuration for the currency-exchange service.
● kubectl diff -f deployment.yaml: Compares the configuration in deployment.yaml against the live state of the cluster, useful for previewing what would change before applying.
● kubectl apply -f deployment.yaml: Applies the YAML configuration from deployment.yaml to the cluster. This can be used to update or recreate deployments based on the specified configuration.
Understanding YAML and Declarative Management:
● Kubernetes relies heavily on YAML files to define desired cluster state. These
files specify how deployments, services, and other resources should be
configured.
● By applying YAML files, you declaratively tell Kubernetes what you want,
and it takes care of achieving and maintaining that state. This simplifies
infrastructure management and enables version control for your
deployments.
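The exported service.yaml follows the same declarative pattern. Trimmed of the cluster-generated fields (status, timestamps, UIDs), a LoadBalancer service like the one created earlier reduces to roughly this sketch; the label selector here is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: currency-exchange
spec:
  type: LoadBalancer        # provision an external load balancer
  selector:
    app: currency-exchange  # route traffic to pods carrying this label
  ports:
    - port: 8000            # port exposed by the service
      targetPort: 8000      # container port the traffic is forwarded to
```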
Additional Notes:
● Steps 11-13 involve concepts specific to the currency exchange application and introduce additional commands and configurations related to these microservices.
● Make sure to consult the documentation for the specific tools and libraries
used in those steps for a deeper understanding.