
Docker

Monolithic:
It is an architecture: if all services use one server and one database, we call it monolithic

Eg: an e-commerce SaaS application, or take the Paytm app - movie tickets, bookings, etc.; these are called services

If all these services are included in one server, it is called a monolithic architecture
It is tightly coupled, i.e. the services are highly dependent on each other

Drawback:
If one service is down, we have to shut down the entire application to fix that service. So users face problems because it is tightly coupled

Microservice:

If every service has its own individual server, it is called a microservice architecture
Every microservice architecture has its own database for each service

Take the same example above: for every service, if we keep 1 database and 1 server, it is a microservice
It is loosely coupled

Drawback:
It is costly, because we have to maintain so many servers and databases, so maintenance overhead is high

Compared to monolithic, microservices are preferable: if one service is not working, we can fix it without shutting down the application. That's the reason microservices are good and preferable

Why Docker:

Let us assume that we are developing an application; every application has a frontend, a backend and a database

To overcome the drawbacks of the monolithic architecture above, we're using "Docker"

While creating the application we need to install the dependencies to run the code

Say I installed Java 11, ReactJS and MongoDB to run the code. After some time, I need other versions of Java, React and MongoDB for my application to run the code

It's really a hectic situation to maintain multiple versions of the same tool on our system

To overcome this problem we use "Virtualization"

Virtualization:

It is used to create virtual machines inside our machine. In those virtual machines we can host guest OSes
By using these guest OSes we can run multiple applications on the same machine

Virtualization Architecture
Here, Host OS means our Windows machine; Guest OS means the virtual machine
The hypervisor, also known as the Virtual Machine Monitor (VMM), is the component/software used to create the virtual machines

Drawback:

It is an old method
If we use multiple guest OSes (or) virtual machines, the system performance drops

To overcome virtualization's drawbacks, we use "Containerization", i.e. Docker

Containerization:

It is used to pack the application along with the dependencies needed to run it. This process is called containerization

Container:

It's the runtime of the application, created from a Docker image
A container is like a virtual machine, but it doesn't carry a full OS
A container runs with the help of an image

Docker is a tool. It is used to create the containers

Container Architecture
It is similar to the virtualization architecture; instead of a hypervisor we have the Docker Engine
Through the Docker Engine we create the containers
Inside the container we have the application
Docker Engine - the software that hosts the containers
One container is created from one image

Docker Image:
A Docker image is a file used to execute code in a Docker container.
Docker images act as a set of instructions to build a Docker container, like a template.
Docker images also act as the starting point when using Docker.

(or)

It is a template that contains the application, bins/libs, configs, etc., packaged together

Before Docker:

First, get the code from GitHub and integrate it with Jenkins
Integrate Maven with Jenkins, so we get a WAR file
That WAR file has to be deployed into different environments
To deploy the WAR file/application, we have to install the dependencies on each server
This was the process before Docker

After Docker:
First, get the code from GitHub and integrate it with Jenkins
Integrate Maven with Jenkins, so we get a WAR file
Here, we're not going to install dependencies on any server, because we're following the containerization approach
Instead, we create an image, i.e. the image is the combination of the application and its dependencies
image = WAR + Java + Tomcat + MySQL
Now the image contains both the application and the dependencies; this overall process is called containerization
Whenever you want to run your application, run that image in the particular environment. There is no need to install the dependencies again, because they are already present in the image
After running the image, the application and its dependencies are available on that server
When we run an image, a container gets created; inside the container we have the application
Many such images are already prebuilt by Docker

The container is independent of the AMI, i.e. whether we launch the AMI with Ubuntu, CentOS or any other OS, the container will work

So overall, after Docker, there is no need to install dependencies in any environment. We can just run the image in that particular environment. If the container is created, the application is running

Docker

It is an open-source centralized platform designed to create, deploy and run applications
Docker is written in the Go language
Docker uses containers on the host OS to run applications. It allows applications to use the same Linux kernel as the host computer, rather than creating a whole virtual OS
Docker is platform independent, i.e. we can install Docker on any OS, but the "Docker Engine" runs natively only on "Linux" distributions
Docker performs OS-level virtualization, also known as containerization

Before Docker, many users faced the problem that a particular piece of code ran on the developer's system but not on the user's system
Docker is a PaaS offering that uses OS-level virtualization, whereas VMware uses hardware-level virtualization
Containers have OS files, but their size is negligible compared to the original files of that OS

Docker Architecture:

We have 4 components:

1. Docker Client
a. It is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to the Docker daemon, which carries them out
b. The docker commands use the Docker API
c. Overall, this is where we run the commands
2. Docker Host
a. It contains the containers, images, volumes and networks
b. It is the server where Docker is installed
3. Docker Daemon
a. The Docker daemon runs on the host OS
b. It is responsible for running containers and managing Docker services
c. Docker daemons can communicate with other daemons
d. It manages Docker objects such as images, containers, networks and storage
4. Docker Registry
a. A Docker registry is a scalable, open-source storage and distribution system for Docker images
b. It is used for storing and sharing images
c. Eg: as GitHub is to Git, the Docker registry is to Docker

Advantages of Docker:

Caching a cluster of containers
Flexible resource sharing
Scalability - many containers can be placed on a single host
Running your service on a network is much cheaper than standard servers

Note: [points to follow when you're using Docker practically]

1. You can't use Docker directly; you need to start/restart it first (observe the docker version output before and after the restart, just like Jenkins)
2. You need a base image for creating a container
3. You can't enter a container directly; you need to start it first
4. If you run an image, by default one container is created
5. Docker client and host - both are on the same server

Launch a normal server in AWS EC2 - t2.micro, normal-sg, 8GB volume

Install Docker on the server

yum install docker -y
Check the version
docker --version (or) docker -v
If we want client details
docker version (or) docker info
Check whether Docker is running or not
systemctl status docker (or) service docker status
Start Docker
systemctl start docker (or) service docker start

Now, again perform → docker version (or) docker info

Now we get the server details too. When Docker is in the running state we can see the server details, i.e. when the daemon is running we can see the server details; when the daemon is not running we can see only the client details
Create a Docker Image

Check how many images are present
docker images
Create (pull) an image
docker run ubuntu (or) docker pull ubuntu
Here, ubuntu is an image. If we create a container through this image, that container will run the Ubuntu OS
See the list of images
docker images
Get the image count
docker images | wc -l

Create a Docker Container

Check how many containers are present

docker ps -a (or) docker container ls -a

Here, default containers may be present; whenever we run an image, a default container is created automatically
Here, ps means process status

Create your own/custom container

docker run -it --name cont-1 ubuntu
Here, '-it' means interactive terminal, used to execute commands inside the container
Here, cont-1 is the container name and ubuntu is the image name

When you perform the command, you get a 'root@containerID' prompt. That means we are inside the container
If we run 'll', we get the container's default files
If you type exit, you come out of the container
docker ps -a
You will see the list of containers, but all containers are in the exited state
So here, both default and custom containers are present
The main difference between default and custom containers is: even if you start a default container, it stays in the exited state, because it doesn't have '-it'
Going inside the container
docker attach cont-1
We get the container terminal. Now if you type 'exit' we come out of the terminal and the container goes into the exited state
To detach without putting the container into the exited state, do a normal exit:
press ctrl+p,q in the terminal
Start the container
docker start cont-1
docker ps -a
docker attach cont-1
Now, I want to exit without putting the container into the exited state
So perform ctrl+p,q. This detaches you from the container without changing its state
docker ps -a
Stop the container
docker stop cont-1
Note: To move a container from the exited state to the running state, you have to start it
See the running containers
docker ps
Delete a container
docker rm containerName/containerID

Note: We can't delete running containers. First we have to stop the container, then we can delete it

Delete multiple containers at a time

docker rm $(docker ps -a -q)
Here, -a → all and -q → only the IDs of the containers

Stop multiple containers

docker stop $(docker ps -aq)
If we restart Docker, all containers go into the exited state
systemctl restart docker

Delete an image
docker rmi imageName

Delete multiple images
docker rmi $(docker images -q)
Sometimes some images won't delete, because their containers are running. So stop the containers and then delete the images

Delete unused images
docker image prune
Images tagged 'none' are what we simply call unused (dangling) images

Delete unused containers
docker container prune
It is used to remove unused containers, i.e. the unwanted ones in the exited state
All containers in the exited state get deleted

Rename a container
docker rename oldName newName
See the latest created containers
docker container ls -n 2
That shows the latest 2 containers
docker container ls --latest
That shows the latest single container
See the container IDs
docker container ls -a -q (or) docker ps -a -q
See the running container IDs
docker container ls -q (or) docker ps -q
See the container sizes
docker ps -a -s (or) docker container ls -a -s
Delete unwanted/unused images, containers, networks and volumes at a time
docker system prune

Deploying a Web Server in Docker

HTTPD

If you want to deploy a web application, take either the HTTPD (or) NGINX image
If we run (or) pull the image, the image gets downloaded locally
Whenever you run the image, a container is created. Inside the container we have the web application
1. Pull the HTTPD image
docker pull httpd
docker images
2. Now, create the container using the image
docker run -itd --name cont1 -p 8081:80 httpd

Here, 8081 → host port; we can give any number
80 → the container port

The container port depends on the image; if it's Apache Tomcat, for example, it's 8080
We access the application through the host port
Here, -d is used for detached mode, meaning the container keeps running in the background
If we don't give '-d', we enter the container directly
Usually with '-it' we go directly into the container, but with '-itd' we don't enter it

Now, after running the container, copy publicIP:8081 into the browser; you can access the httpd application

Now, if I want to get inside the container, we can't just perform the 'docker attach' command, because we're using '-d'

exec: a command used to run commands inside the container, without going inside it, while it runs in detached mode

So, let's start the container → docker start cont1

Now I want to see all the files inside the container
Syntax: docker exec containerName command

See the list of files in a container → docker exec cont1 ls

Create a file → docker exec cont1 touch file

Above, we're executing commands in the container from outside, without going inside

So, how do we go inside a container in detached mode?

docker exec -it containerName /bin/bash

Here, /bin/bash is the default shell path inside the container
This works by default for Ubuntu-based images

After performing the above command, we're inside the container

apt update -y
If we want to use any tool, we have to install it first
apt install vim -y
vim index.html → it works

But making changes manually inside each running container like this doesn't scale; for that we have to use a Dockerfile

Inspect

Through inspect we can see the container's full information: configuration, network settings, mounts, etc.

docker inspect containerName/containerID

We can check for particular information in the inspect output. For that, we use "grep"

docker inspect containerName | grep -i "wordName"

docker inspect cont1 | grep -i id

Curl

Here, curl means Client URL. Using curl, we check network connections

curl publicIP:8081
That means even if we can't check the app in a browser, we can check it directly from the server
So curl tells us whether the application is running or not

Limits on a container

The CPU and memory we allocate to a container are called container limits

Generally we take t2.micro, i.e. 1 CPU and 1GB RAM. So overall the server has 1 CPU & 1GB RAM

For the containers created on this server, say I want to provide 0.25 CPU and 250 MB of RAM. These are what we call limits

Note: we have to mention the limits while running the container

docker run -itd --name cont1 -p 8084:80 --memory=250m --cpus="0.25" httpd

Now, check whether the limits are applied or not. For that we inspect the container

docker inspect cont1 | grep -i memory

Container to Image Creation

Up to now we created containers from an image. Now we create an image from a container

In the diagram above, we created a container from the nginx image. Suppose we need the same container again; normally we would create another image and then create the container from it

But instead of that process: the container already has our files. So if we create an image from that container, we get the same files in that image

Now, instead of preparing a separate container from scratch, running this new image brings all those files directly into another container,
i.e. the same application gets deployed automatically

docker commit containerName newImageName

Create the image from the container
docker commit cont1 httpd
Check the images
docker images
Create a container from that image
docker run -itd --name cont2 -p 8085:80 httpd

So overall, we can create images in 2 ways:

1. Command (commit)
2. Dockerfile

Dockerfile
It is basically a text file that contains a set of instructions (or) commands
To create Docker images, we use a Dockerfile
We don't have multiple Dockerfiles in one place,
i.e. for a single directory we have a single Dockerfile
In 'Dockerfile' the first letter should be a capital 'D'
A Dockerfile is made up of components (instructions)
The component keywords are also written in capital letters
This is not mandatory, but to keep it official/formal we maintain capital letters

How it works:

First, create a Dockerfile
In this file, we write some instructions
If we build the Dockerfile, we get an image
If we run the image, a container is created,
i.e. the application is started/running

Dockerfile Components

1. FROM
This is the 1st component in the Dockerfile; it defines the base image (HTTPD, NGINX, UBUNTU)
2. LABEL (or) MAINTAINER
We can give author details, i.e. we mention the name of the author who wrote the Dockerfile
3. RUN
It is used to execute commands while we build the image
4. COPY
It is used to copy files from the server into the container
5. ADD
It is also used to copy files from the server into the container, but it can additionally download files from the internet (e.g. tar.gz, zip) and send them to the container
6. EXPOSE
It is used to publish port numbers. It is only used for documentation purposes
7. WORKDIR
It is used to create a directory and switch directly into that particular directory/folder,
i.e. inside the container we have many folders; to work in a particular folder, use WORKDIR. Eg: /folder
8. CMD
It is also used to execute commands
9. ENTRYPOINT
It is also used to execute commands
10. ENV
It is used to assign/declare variables. We can't override these values at runtime
11. ARG
It is also used to assign/declare variables. We can override these values at build time
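As a quick illustration, here is a minimal sketch of a Dockerfile using most of these components (the file names and variable values are only examples, not from the notes):

FROM ubuntu
LABEL maintainer="sandeep"
ARG pkg=git                # example; can be overridden with --build-arg pkg=...
ENV APP_ENV=dev            # fixed in the image; not overridable at runtime
WORKDIR /app               # creates /app and switches into it
COPY index.html .          # copies index.html from the server into /app
RUN apt update -y && apt install -y $pkg
EXPOSE 80                  # documentation only
CMD ["bash"]               # default command when the container starts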

Difference between RUN & CMD

For RUN

When we build the Dockerfile, a 'RUN' instruction executes at build time. We get an image, and this image contains the resulting data

For CMD

When we build the Dockerfile, we get an image, but this image doesn't contain the command's result
When we run that image, we get a container, and the command executes inside the container; there we have the data
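A minimal sketch of the two variants (assuming an Ubuntu base image and git as the package):

Dockerfile with RUN - git is baked into the image at build time:
FROM ubuntu
RUN apt update -y && apt install -y git

Dockerfile with CMD - git is installed only when a container starts:
FROM ubuntu
CMD apt update -y && apt install -y git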

Difference between CMD and ENTRYPOINT

If we give both CMD and ENTRYPOINT in a Dockerfile, the 1st preference goes to ENTRYPOINT, i.e. ENTRYPOINT has the higher priority
ENTRYPOINT values override the commands/values in CMD

Eg: if we give 'git' in CMD and 'maven' in ENTRYPOINT, the Dockerfile chooses maven

Dockerfile Practice

Creating a file from a Dockerfile

Step-1: Create the Dockerfile

vim Dockerfile

Step-2: Write the basic code inside the Dockerfile
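For example, a minimal Dockerfile that creates the file1 we check in Step-4:

FROM ubuntu
RUN touch file1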

Step-3: Build the Dockerfile

docker build -t anyImageName .

docker build -t sandy .
Here '.' represents the path of the Dockerfile, i.e. right now the Dockerfile is in the current directory, so we gave '.'; otherwise we give a different path
After a successful build, you get output like in the image below

Now we have an image named 'sandy'

Step-4: Now, create the container from our created image

docker run -it --name cont-4 sandy

Now, inside the container, perform the 'll' command. You will get file1

This is the way: we create a Dockerfile, through that Dockerfile we create images, and through those images we create containers

Inserting data into a file from a Dockerfile

Using COPY and ADD in a Dockerfile

In COPY, the first name is the file on the server and the 2nd name is the destination path inside the container
So here we're copying a server file into the container

ADD is the same as COPY, but with ADD we can also download files from the internet and copy them into the container
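A short sketch of both instructions (the file names and the URL are placeholders, not from the notes):

FROM ubuntu
RUN echo "hello" > /file2                    # inserts data into a file at build time
COPY app.conf /etc/app.conf                  # server file → path inside the container
ADD https://example.com/app.tar.gz /opt/     # downloaded from the internet into the container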

Perfect Dockerfile
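A representative "perfect" Dockerfile along these lines (reusing the Apache example that appears later in the Swarm section) might be:

FROM ubuntu
LABEL maintainer="sandeep"
RUN apt update -y
RUN apt install apache2 -y
RUN echo "hi this is app" > /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]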

Differences among RUN, CMD and ENTRYPOINT in a Dockerfile

1st difference

When you use RUN, the package is installed directly when you build the Dockerfile, i.e. when you perform
docker build -t image .
git gets installed
Through RUN we can't add extra tools at run time, i.e. this won't work:
docker run image tree httpd

When you use CMD, you have to build the Dockerfile, but the package is not installed yet. You have to run the image; then the git package is installed
docker build -t image .
docker run image

When you use ENTRYPOINT, you have to build the Dockerfile, but the package is not installed yet. You have to run the image; then the git package is installed
docker build -t image .
docker run image
Here, when you use ENTRYPOINT, we can install multiple tools at run time:
docker run image tree httpd
If we write the ENTRYPOINT without giving any package name:
docker run image → it won't work
perform → docker run image java httpd → it works
That means with ENTRYPOINT, even if you don't hard-code any package name, you can install any number of tools at run time
With CMD that's not possible

2nd difference:

docker build -t image . → builds successfully
docker run image
Here git is installed: we didn't give anything at run time, so the default 'git' we gave in CMD is taken
Now I give 'httpd' at run time, i.e.
docker run image httpd
Now only httpd is installed, because the run-time value overrides CMD (ENTRYPOINT has the higher priority)

3rd difference

With RUN, if we build, all the tools are installed at one time
Now try the same code with 'CMD' & 'ENTRYPOINT' instead of 'RUN':
multiple CMDs and ENTRYPOINTs are not supported (only the last one takes effect)
So if we want to install multiple tools at a time, we have to use 'RUN'

Overall, these are the main differences between RUN, CMD & ENTRYPOINT
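A minimal sketch of the ENTRYPOINT + CMD combination behind the 2nd difference (assuming an apt-based Ubuntu base image):

FROM ubuntu
RUN apt update -y
ENTRYPOINT ["apt", "install", "-y"]    # fixed part of the command
CMD ["git"]                            # default argument; replaced by run-time args

docker run image        → installs git (the CMD default)
docker run image httpd  → installs httpd (the run-time argument overrides CMD)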

Difference between ARG & ENV in a Dockerfile

ENV:

Using ENV we can't override the value. When we build the Dockerfile, we can see the output on the command line

ARG:

Using ARG we can override the value at build time,
i.e. only while we build the Dockerfile can we change it:

docker build -t image --build-arg abc=azure .
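A minimal sketch of the two instructions together (the variable name abc and value azure follow the example above; tool is illustrative):

FROM ubuntu
ENV tool=git        # fixed; cannot be overridden
ARG abc=aws         # default; can be overridden at build time
RUN echo "tool=$tool abc=$abc"

docker build -t image .                          → the RUN step prints tool=git abc=aws
docker build -t image --build-arg abc=azure .    → the RUN step prints tool=git abc=azure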
Apply Tags to Images
If we don't want to override an image, i.e. we want to keep the old image and the new image under the same name, without overriding:
If we create an image with the same name, the old image gets overridden, so we lose that data
So if we use tags, our image will not be overridden
We almost always use tags when we build a Dockerfile
The commands are:
docker build -t image:1 .
docker build -t image:2 .

DOCKER VOLUMES

Volumes are a mechanism for storing data outside containers.
Docker volumes provide persistent storage for your containers.
Docker manages the data in your volumes separately from your containers.
All volumes are managed by Docker and stored in a dedicated directory on your host, usually /var/lib/docker/volumes on Linux systems.

If we update some data inside one container and want the same updated data inside another container automatically, we use Docker volumes

Eg:

vim Dockerfile
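For example, a minimal Dockerfile for this exercise could be (the file names are illustrative):

FROM ubuntu
RUN touch file1 file2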

Build the Dockerfile → docker build -t image .
Create the container → docker run -it --name cont image → ll → we got the files
Now, create some files inside the container → docker attach cont → touch a b c
Now, create an image from the container → docker commit cont image1
Again create a container → docker run -it --name cont1 image1 → we got all the files
Now, I again created some files inside cont1, and I want those files in "cont". Usually we won't get them

So if you want that replication, we use the docker volumes concept

DIFFERENCE BETWEEN VOLUMES AND DIRECTORIES

VOLUMES

Any data present in a volume can be shared with other containers
The volume does not get deleted when the container is removed

DIRECTORIES

We can't share this data with another container
The directory gets deleted along with the container

Points to be noted:

When we create a container, the volume is created with it
A volume is simply a directory inside our container
First we have to declare a directory as a volume, and then share the volume
Even if we stop/delete the container, we can still access the volume and the data inside it
You can declare a directory as a volume only while creating the container
We can't create a volume from existing containers
You can share one volume across any number of containers, but only at the time of creating a container, not for existing containers
A volume is not included when you update an image,
i.e. when you update the image, the volume data is not updated; the image just records the volume name
If container-1's volume is shared with container-2, the changes made by container-2 are also available in container-1,
i.e. if two containers have the same volume and we update the data in one container, the data automatically gets updated in the other container too
We can share our volume among different containers
Volumes decouple the container from its storage

We can map a volume in two ways:
a. container → container
b. host → container

We can create volumes in 2 ways:

1. Command
2. Dockerfile

COMMAND ( container → container )

docker run -it --name containerName -v /volumeName imageName
docker run -it --name cont1 -v /sandy ubuntu → ll → cd sandy → touch file{1..5} → ctrl+p,q
Now we have to share the data. Right now I have data in "cont1", and we need to share it with "cont2"
The command is → docker run -it --name newContainerName --privileged=true --volumes-from volumeContainerName imageName
docker run -it --name cont2 --privileged=true --volumes-from cont1 ubuntu
Here, --privileged=true means we are sharing the volume
Now check the files inside the container → ll → cd sandy → we have the files
Now, create files in "cont2" and check in "cont1". You will get the data; that means the data is replicated

How can we access the data if the container is deleted/stopped?

First inspect the container → docker inspect cont1
Check Mounts; there you see the volume path
cd /var/lib/docker/volumes/....../_data
Now, go to cd /var/lib/docker/volumes and see the list of volumes → ll
Here we have some long folder name → cd folderName → cd _data → ll → here we have our files

Note:

Even if we deleted a container, if we create files inside the volume's "_data" directory, the changes automatically appear in any other container that uses the same volume,
i.e. we can access the volume data from the local host as well
CREATING MULTIPLE VOLUMES IN A CONTAINER

docker run -it --name cont3 --privileged=true --volumes-from cont2 -v /sandy2 ubuntu
So here in cont3, we get the volume from cont2 and we also created another volume (/sandy2, so it doesn't clash with the shared /sandy)
Like this we can maintain multiple volumes in a container

Mapping from HOST → CONTAINER

We can create a volume inside a container from the server/local machine

Eg-1:

docker run -it --name cont -v /home/ec2-user:/sandy --privileged=true ubuntu

Here, /home/ec2-user is the server path

Go inside the container → ll → cd sandy → touch file
Now, check on the server: the file appears under /home/ec2-user
It works the other way round too: create a file on the server path, then
docker attach cont → ll → cd sandy → you have the data

This process is called mounting the volumes

CREATING MANUAL VOLUMES THROUGH COMMANDS

Create a manual/own volume → docker volume create volumeName
Check how many volumes exist → docker volume ls

Remove a volume

First, stop the containers → docker stop $(docker ps -a -q)
docker volume rm volumeName

Remove all volumes at a time

docker volume rm $(docker volume ls -q)
Remove unused volumes
docker volume prune → it asks (y/n) → type y

HOW TO ATTACH A MANUAL VOLUME TO A CONTAINER

1. docker volume create sandy
2. docker volume ls
Now we have to attach this volume to a container
docker run -it --name cont1 -v sandy:/volumeName ubuntu
ls → cd volumeName → touch file
exit
3. Go to the Docker default path → cd /var/lib/docker/volumes/sandy/_data → ll → the data is present

So overall we created a volume separately, and we attached that volume to a container

We can't attach the volume to existing containers

HOW TO ATTACH THE MANUAL VOLUMES TO HOST → CONTAINER

docker run -it --name cont -v /home/ec2-user:/volumeName1 -v manualVolume:/volumeName2 ubuntu
docker run -it --name cont -v /home/ec2-user:/vol1 -v sandy:/vol2 ubuntu
ll → so we have 2 volumes, named vol1 and vol2

2. CREATING VOLUMES FROM A DOCKERFILE

vi Dockerfile
FROM ubuntu
VOLUME ["/Sandeep"]
VOLUME ["/Chikkala"]
Save and exit the Dockerfile
Build the Dockerfile
docker build -t image .
Now this image has 2 volumes; when we run this image we get the volumes inside the container
Create the container
docker run -it --name cont image → ll → we have the Sandeep and Chikkala volumes
DOCKER NETWORKS
A Docker network is used to enable communication between multiple containers that are running on the same (or) different Docker hosts

Why networks?

Let's assume we have 2 containers, an APP container and a DB container. The APP container has to communicate with the DB container, so the developer writes code to connect the application to the DB container.
But the IP address of a container is not permanent. If a container is removed due to a hardware failure, a new container is created with a new IP, which can cause connection issues

To resolve this issue, we create our own network, i.e. we use Docker networks to create our custom/own network

Practice - Deep Dive

Now, create a container and inspect it. In the inspect output you can see the full network data, i.e. the IP address and everything

See all the networks → docker network ls

Each container can attach to multiple networks. So we have different types of Docker networks:

Bridge Network:

It is the default network; containers on it communicate with each other within the same host
Create one container, and inspect the container
Usually, the bridge network assigns the container its IP address

Host Network:

When you need your container IP and the EC2 instance IP to be the same, we have to use the host network,
i.e. 172.31.x.x → host/server private IP
Normally we get the default bridge IP, but if I want to get the private IP we use the host network

docker run -it --name cont5 --network host ubuntu

Now, if you perform inspect, you can see the host network
None Network:

When you don't want the container to get exposed to the world, we use the none network.
It does not provide any network to our container,
i.e. no IP address
docker run -it --name cont --network none ubuntu

Overlay Network:

It is used to establish connections between containers that are present on different servers
If we attach multiple networks to our containers, the scope of communication increases

So, these are the Docker networks. The first 3 networks are defaults. Normally, we use the bridge network
Create a custom network → docker network create sandeep
See the list of networks → docker network ls

Now we have to attach the custom network to our container. The command is:

docker run -it --name cont9 --network sandeep ubuntu

docker inspect cont9

So now you get an IP address from the "sandeep" network

Now, if you create more containers inside "sandeep":

If we attach different networks to the same container, each network brings its own subnet:
172.18.0.1, 172.19.0.1, 172.20.0.1, ... (one range per network)
If we use the same network in different containers, the addresses come from one subnet:
172.18.0.2, 172.18.0.3, 172.18.0.4, ... (the next address in the same range)

Attach the custom network to an existing container; the command is:

docker network connect customNetwork containerName

docker network connect sandeep cont1
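One reason the custom network helps: on a user-defined network, Docker's built-in DNS resolves container names, so containers can reach each other by name instead of by a changeable IP. A quick sketch (installing the ping tool is an assumption for a bare ubuntu image):

docker run -itd --name web --network sandeep httpd
docker run -it --name client --network sandeep ubuntu
inside client → apt update -y && apt install -y iputils-ping
ping web → resolves the container by name, no hard-coded IP needed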

Disconnect a network

docker network disconnect networkName containerName

docker network disconnect sandeep cont9

See the network's IP address

To get the IP address of the network:

docker inspect sandeep

Delete the unused networks

If a network is not attached to any container, we simply call it an unused network
To delete the unused networks, the command is:

docker network prune

Delete a custom network

docker network rm sandeep
If the network is attached to a container, we can't delete it
DOCKER HUB / DOCKER REGISTRY
It is used to store images. Docker Hub is the default registry

2 types of registry:

1. Cloud-based registry:
When you want to store your images in the cloud, like
Docker Hub
GCR - Google Container Registry
Amazon ECR - Elastic Container Registry
2. Local registry:
Here we store the images locally, like
Nexus
JFrog
DTR - Docker Trusted Registry

A cloud-based registry is preferable, because with Docker Hub we just create an account and store the images, whereas with a local registry like Nexus we have to take a t2.medium and do the setup, which is a little bit complex.

So we use Docker Hub and ECR

Without Docker Hub, we can't run the application on another server,
i.e. system to system (or) server to server, we can't send an image directly. We need a platform for this,
so we use Docker Hub for that.
What Docker Hub actually does: from one server we push the image to Docker Hub; from another server we pull that image

Note:

First, go to Google → we need to create a Docker Hub account. It asks for username, mail and password, and verification happens. Then log in. That's it

Docker Hub - Practice

First, we need to log in to Docker Hub when we want to upload an image.
The command is → docker login
You have to provide your username and password
After login it shows "Login Succeeded"

Without logging in, you can't push an image to Docker Hub

Step - 1:

If you want to push an image, you have to tag it
For that, just write one sample Dockerfile and build it; you get a custom image. Now push that image to Docker Hub

docker tag imageName dockerUserID/repositoryName

docker tag image chiksand/repo
docker images → we have the tagged image

Step - 2:

Send the image to Docker Hub

docker push chiksand/repo

So here the tagged image goes to Docker Hub

We can't push multiple images at a time; we can push only a single image
Go to Docker Hub and refresh; we have our image there along with its details

Now we need to check whether the image works correctly or not. For that we need another server, so launch a normal server and install Docker on that server

Now perform the command

FYI, the command is shown in Docker Hub, as in the image below

docker pull chiksand/repo:latest
Now, from this image, create the container; you can access the application in the browser

So like this we can pull the image onto servers and do our work

How to Store Multiple Images in Docker Hub

We can avoid overriding the image by tagging like this:

docker tag image:1 chiksand/repo:1

docker push chiksand/repo:1

Now we get a separate image (tag) in Docker Hub. So like this you can store multiple images in Docker Hub

From a Private repo

Create a new private repo, then follow the same first two steps up to the build

docker tag image1 chiksand/privaterepo
docker push chiksand/privaterepo
It works when you are already logged in to Docker Hub on your server

Suppose you're logged out on another server:

If you want to pull from a private repo, you can't
But you can pull from a public repo

OFFICIAL IMAGES in DOCKER HUB

There are many images pre-built in Docker Hub. The process is simply to search for the particular image.

Eg: Usually, the Jenkins setup is a little bit hard. So here we just pull the Jenkins image

On the official image page, they mention how to use that image

DOCKER SWARM

Docker Swarm is an orchestration service (or) group of services
It is similar to the master-slave concept
Within Docker, it allows us to manage and handle multiple containers at the same time
A Docker Swarm is a group of servers that runs the Docker application,
i.e. for running the Docker application, in Docker Swarm we create a group of servers
We use it to manage multiple containers on multiple servers
This is implemented via a "cluster"
The activities of the cluster are controlled by a "Swarm Manager", and the machines that have joined the cluster are called "Swarm Workers"
Here, it is an example of master and slave

Docker Engine helps to create the Docker Swarm,
i.e. if you want to implement Docker Swarm, we have to install Docker on both kinds of servers: Swarm Manager & Swarm Worker
In the cluster we have 2 kinds of nodes:
a. Worker nodes
b. Manager nodes
The worker nodes are connected to the manager nodes
So any scaling, i.e. increasing containers (or) doing updates, first goes to the manager node
From the manager node, everything goes to the worker nodes
Manager nodes are used to divide the work among the worker nodes
Each worker node works on an individual service for better performance,
i.e. 1 worker node, 1 service

COMPONENTS in Docker Swarm

1. SERVICE
It represents a part of the features of an application
2. TASK
A single unit of work that we are running
3. MANAGER
This manages/distributes the work among the different nodes
4. WORKER
Works on a specific part of the service

PRACTICAL

1. Take 1 normal server named manager, and on that server install & start Docker
2. Initialize the Swarm
docker swarm init --advertise-addr privateIP

Here you get all the details on how to connect a worker node,
i.e. a token is generated on the master. If we run this token command on another server, it will work as a worker node
Now, take 2 normal servers named worker-1 and worker-2, and install & start Docker on both
Now, copy the token command from the manager server and paste it on the other servers. Each one joins the swarm as a worker
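The init output looks roughly like this (the token and IP below are placeholders, not real values):

docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxx 172.31.x.x:2377

That is the command you paste on each worker server.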

3. See the list of nodes on the manager

docker node ls → you will get 3 nodes: 1 leader and 2 workers

Now the task is: on the 2 worker servers, create containers at the same time from the manager
Here we create them in the service format. Here, service means container

Create a Service/Container

docker service create --name sandy --replicas 3 --publish 8081:80 httpd
Here, sandy → service name
replicas → duplicates, i.e. if a container is stopped/deleted, another container with the same configuration is created automatically
3 → the number of duplicate containers; give whatever number of containers you need
Now the service is created
See the list of services
docker service ls

This works only on the manager, not on the workers

Now, if you perform → docker ps -a

We asked for 3 replicas, but here it shows only 1 container: the manager also acts as a master & slave, so one replica runs on it

Now check the worker servers; you will find the remaining containers

Now check whether it's working (or) not

Go to the manager → open publicIP:8081 in the browser → it works

Likewise, check with the worker servers' IPs; you will get it there too

This is the basic example of Docker Swarm

Now, create another service with 2 replicas
Go to the manager server → docker service create --name devops --replicas 2 --publish 8082:80 httpd
Now check → docker ps -a → you have devops.2
Go to the worker-1 server → docker ps -a → you have devops.1
Worker-2 has none, because we gave only 2 replicas

Note:

If a worker node holds fewer containers, the manager sends new containers to that worker node; it balances the workload,
i.e. now take another service with 2 replicas: this time the container will be placed on worker-2

Task-2: Create a Dockerfile, and run a service from the image built from that Dockerfile

FROM ubuntu
RUN apt update -y
RUN apt install apache2 -y
RUN echo "hi this is app" > /var/www/html/index.html
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

Build the image → docker build -t image .
Create the service based on the image
docker service create --name sandy --replicas 3 --publish 8084:80 image
docker service ls
Here, all 3 containers are present on the manager server, not on the worker servers, because that is a local image
httpd, nginx, ubuntu are all public images
When the workers look for our image, it is not there; that's why when we run this command we see "no such image" on them
So all 3 containers got created on the manager server
Public images like httpd, nginx, ubuntu are pulled from Docker Hub, so now we push our "image" to Docker Hub too
On the manager server → docker login

docker tag image chiksand/repo
docker push chiksand/repo
docker images → we have the image "chiksand/repo"

Now, from that image we create a service

docker service create --name swarm --replicas 3 --publish 8085:80 chiksand/repo
docker ps -a
Now we have 1 container on the manager, and likewise the workers also have containers

So like this, we can create services in Docker Swarm from a custom image as well

DOCKER SERVICE COMMANDS

1. See the list of services
docker service ls
2. Check how many containers are inside a service
docker service ps serviceName
3. If we stop/delete a container on a worker node, another container is automatically created
docker stop contID
docker ps -a → you can see one container exited and a new one created
This is because of auto/self healing
4. Remove a service
docker service rm serviceName
If we remove the service, its containers are removed too
5. Check the logs
docker service logs serviceName
6. Inspect a service
docker service inspect serviceName
7. See the service history, e.g. how many containers, IP addresses, etc.
docker service ps serviceName
8. Remove the manager from the Docker Swarm
docker swarm leave --force

Update the Image of a Service

1. Update the Dockerfile
2. Build the Dockerfile
3. docker tag image chiksand/repo
4. docker push chiksand/repo
5. Now we update the running service to the new image:
docker service update --image imageName serviceName
docker service update --image chiksand/repo swarm
Check in the browser whether it's working or not
DO
Rollback to the Previous Image

If I want to go back to the previous image: without having updated the image, you can't roll back to a previous image

docker service rollback serviceName

A common question here: we already updated the image, so how can we get the previous image back?
Docker stores the update history in its logs, so we can roll back to the previous image

SCALING in Docker Swarm

If you want to increase/decrease the number of replicas (containers), we use scaling

Scaling is of 2 types:

1. Container scaling → using Docker
2. Server scaling → using AWS; based on the users' requests it increases

On the manager server → docker service ls

Now, I want to increase up to 5 replicas

docker service scale serviceName=5
Now check the containers on the manager & worker servers
SWARM COMMANDS related to NODES

1. See the list of nodes
On the manager → docker node ls
2. If you want to remove a worker from the Swarm cluster, the manager can't remove it directly. For that we have the command:
docker swarm leave → perform this command on the worker servers
3. Check the nodes
docker node ls → here we still have 3 nodes, but the status has changed to Down
4. So now remove the node from the manager:
docker node rm nodeID
docker node ls → now we have 2 nodes

So overall, if you want to remove a node via the manager, first the node has to leave the swarm, then we can perform the command

If you want to add the nodes again, you have to repeat the joining process from the start

DOCKER - COMPOSE

In Docker Swarm, we created 1 container/service across multiple servers using the master & slave (or) manager & worker concept
But in Docker Compose, we deploy multiple containers on a single server
It is the complete opposite of Docker Swarm
Here, multiple containers means the full application, i.e. frontend, backend and database containers
So these 3 containers will be present on a single server. For that, we use Docker Compose
We could do this manually, but here we do it through a "compose file"

Suppose we have 3 apps. To create a container for each manually, we first write the Dockerfile, then build it to get the image, then run the image to get a container: that's the manual process for all 3 apps
But here in Docker Compose, we take the Dockerfiles, and for all the Dockerfiles we write one compose file
In this compose file, the container-related configuration is present,
i.e. container name, ports, volumes, networks
So if we write the container-related requirements in the compose file and execute it, all the containers defined in the compose file get created
Automation happens here, i.e. image build & container creation happen at one time

So overall, in real time, when developers write the code, we write the Dockerfiles for it, and for those Dockerfiles we write the compose file and execute it

Def:

Docker Compose is a tool used to build, run and ship multiple containers for an application
It is used to create multiple containers on a single host/server
It uses a YAML file to manage multiple containers as a single service,
i.e. in the Docker Compose file we write the container configurations in YAML format, and the compose file ends with ".yml"
The compose file provides a way to document and configure all of the application's service dependencies,
like databases, queues, caches, web service APIs, etc.
In one directory, we can write only one docker-compose file

PRACTICAL

1. Install & start Docker on a server
2. Install Docker Compose
Go to Google → search "docker compose install on amazon linux" → go to Stack Overflow
Perform the steps given there; you get the docker-compose binary on your machine:
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose version

3. Write the Docker Compose file

vim docker-compose.yml
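Based on the surrounding notes (service name paytm, port 8081, container root-paytm-1), the file likely looked something like this sketch:

version: "3"
services:
  paytm:
    image: httpd
    ports:
      - "8081:80"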

version → specifies the version of the compose file, commonly 3
services → if you want to create a service, first mention services
paytm → the service name
Here, a service acts as a container
Inside the service we define the containers, and we have to give the image name

4. Execute the compose file

docker-compose up -d → here, -d is for detached mode

After performing this command, the containers present in the docker-compose file get created automatically
docker ps -a
So here a container is created with the name paytm → root-paytm-1

5. Access the application in the browser

publicIP:8081 → you will get the output in the browser

Creating Multiple Services in Docker Compose

ER
CK
DO
LA
KA
IK
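The compose file from the screenshot isn't reproduced; a sketch with two services (the second service name, image and port are illustrative only) could be:

version: "3"
services:
  paytm:
    image: httpd
    ports:
      - "8081:80"
  phonepe:
    image: nginx
    ports:
      - "8082:80"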

AUTOMATE IMAGE BUILD IN DOCKER-COMPOSE

If I update the code in the frontend, i.e. in the Dockerfile, we need to build the image again and place the new image inside the compose file
So whenever the code changes, we need to build that image and update compose.yml
Instead of doing this, whenever a developer writes code, it should be picked up automatically through the docker-compose file. For that we use the build component
If you change data in the Dockerfile and build the image with the existing name, the image gets overridden

vim docker-compose.yml
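A sketch using the build component instead of a fixed image (assuming the Dockerfile sits next to the compose file):

version: "3"
services:
  paytm:
    build: .
    ports:
      - "8081:80"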

Now, build the image through docker-compose

docker-compose build
Now the image is built; then it gets updated into the container
docker-compose up -d

Now execute it and access it in the browser; you will get the output

Here, in docker-compose we have to define 4 components for containers:

PIVN
P → Ports
I → Image
V → Volumes
N → Networks
Now we have to add these components inside the docker-compose file, as shown in the sketch below
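A sketch with all four PIVN components defined (the volume and network names, and the htdocs path, are illustrative):

version: "3"
services:
  paytm:
    image: httpd
    ports:
      - "8081:80"
    volumes:
      - vol1:/usr/local/apache2/htdocs
    networks:
      - net1
volumes:
  vol1:
networks:
  net1: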

Now, execute the docker-compose file

docker-compose up -d
Perform inspect on a container

docker inspect contID

Now you will see everything

For defining volumes in docker-compose, give them like this:


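A sketch of the top-level volumes block that the service-level entries refer to (the name and driver are illustrative):

volumes:
  vol1:
    driver: local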
This is the way we can give all components inside a docker compose file

(FAQ) Suppose I get an issue in one service in docker-compose. How do I resolve it?

Here, we go into the particular Dockerfile and update it, then build through the compose file. The old containers keep running, and the new containers get deployed

DOCKER-COMPOSE COMMANDS

1. Create the containers in docker-compose
docker-compose up -d
2. Stop and remove the composed containers
docker-compose down
3. Stop the containers from the compose file
docker-compose stop
docker ps -a
Start the stopped containers → docker-compose up -d
4. See the list of images in the docker-compose file
docker-compose images
5. See the compose containers
docker-compose ps
docker ps -a → this also shows the manually created containers
6. See the logs in docker-compose; the containers' start & end details are present in the logs
docker-compose logs
7. See the code configuration of the compose file, without the vim editor
docker-compose config
8. Pause & unpause a container
docker-compose pause
i.e. no updates will happen in that container; the container is frozen
docker-compose unpause
Used to unpause the container

Usually we use the standard "docker-compose.yml" file name. If we use another name, we get an error

Eg:

vim docker-compose.yml (this is the standard way)
vim sandeep.yml (docker-compose won't execute this file by default; it shows an error)

For that, the command is → docker-compose -f sandeep.yml up -d

This is how we run compose with other file names

DOCKER STACK
If you want to deploy multiple services on multiple servers, we use Docker Stack
It is the combination of Docker Swarm + Docker Compose

Here, first we have to initialize the Swarm; otherwise the stack doesn't work
Here, we have to write compose files
Docker Stack is used to create multiple services on multiple hosts,
i.e. it creates multiple containers on multiple servers with the help of a compose file
To use Docker Stack we have to initialize Docker Swarm; if we are not using Docker Swarm, Docker Stack will not work
Once we remove the stack, all the containers automatically get deleted
We can spread the containers from manager to workers according to the replicas
In Docker Stack, we use the overlay network

Eg: Let's assume we have 2 servers, a manager and a worker. If we deploy a stack with 4 replicas, 2 are placed on the manager and 2 on the worker

Here, the manager divides the work based on the load on each server

PRACTICAL

Take 3 servers named manager, worker-1 & worker-2

Step-1:

Install & start Docker on all of them

On the manager:

docker swarm init --advertise-addr privateIP

Copy the token command and paste it on worker-1 & worker-2
See the list of nodes → docker node ls

Step-2:

Write the docker-compose file; for that, install Docker Compose

vi docker-compose.yml
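Based on the port 8888 used in Step-4, the compose file for the stack could be sketched as follows (the service name and replica count are assumptions):

version: "3"
services:
  web:
    image: httpd
    ports:
      - "8888:80"
    deploy:
      replicas: 4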
Step-3: Execute this file

docker stack deploy --compose-file docker-compose.yml stackName

docker ps -a → containers will be present on the worker nodes

Step-4: Access the browser

Check in the browser → publicIP:8888

Use-case Scenario for Docker Stack

Suppose we have the Paytm app. Usually 1k people access Paytm daily. During festivals, 100k people use the application. Due to the many requests, one server can't handle the capacity.

That's why I want to run my application on multiple servers; for that we use a cluster

Here, if you need multiple containers, we can use replicas. Replicas are used for high availability, so the application performs well

COMMANDS

1. Create replicas for a service
docker service scale serviceName=2
2. See the list of stacks
docker stack ls
3. Remove a stack
docker stack rm stackName
4. See the stack-related services
docker stack services stackName
5. See the available stack commands
docker stack

Through the Docker Compose file we drive Docker Stack; first execute the compose file with stack:

docker stack deploy --compose-file docker-compose.yml mystack

docker service ls

Creating the replicas takes some time
PORTAINER

It is a Docker GUI. It works with the help of Docker Stack, i.e. we can create a GUI for the entire Docker setup by using the Portainer concept

It is a container organizer, designed to make tasks easier, whether the containers are clustered (or) not
It is able to connect multiple clusters, access the containers and migrate stacks between clusters
It is not a testing environment; it is mainly used for production routines in large companies
Portainer consists of 2 elements:
The Portainer server
The Portainer agent

Change the Container Port

Create a container → docker run -itd --name cont -p 8081:80 httpd

curl publicIP:port

I want to change the port number
First, stop the container
Then go to cd /var/lib/docker → ll
cd containers → cd contID
Here, open hostconfig.json → vi hostconfig.json

Find "HostPort": "8081"; change this port number (e.g. to 8097) and exit

Now, restart Docker
Start the container

So like this we can change the port number of a container
