Docker
Monolithic:
It is an architecture. If we run all services on one server with one database, we can call it monolithic.
Eg: an e-commerce SaaS application, (or) take the Paytm app - movie tickets, bookings, etc.; these are called services.
If all these services are included in one server, then it is called a monolithic architecture.
It is tightly coupled, i.e. the services are highly dependent on each other.
Drawback:
If one service is down, we have to shut down the entire application to fix that service. So the user faces problems, because it is tightly coupled.
Microservice:
If every service has its own individual server, then it is called microservices.
In a microservice architecture, every service has its own database.
Take the same example as above: if we keep 1 database and 1 server for every service, it is a microservice.
It is loosely coupled.
Drawback:
It is costly, because we have to maintain so many servers and databases. So, the maintenance is high.
Compared to monolithic, microservice is the better choice, because if one service is not working, we can work on it without shutting down the application. That's the reason microservices are preferred.
Why Docker:
Let us assume that we are developing an application; every application has a frontend, backend and database.
While creating the application, we need to install the dependencies to run the code.
So, I installed Java 11, ReactJS and MongoDB to run the code. After some time, I need other versions of Java, React and MongoDB for my application to run the code.
So, it's really a hectic situation to maintain multiple versions of the same tool on our system.
Virtualization:
It is used to create virtual machines inside our machine. In those virtual machines we can host guest OSes on our machine.
By using these guest OSes, we can run multiple applications on the same machine.
Virtualization Architecture
Here, Host OS means our Windows machine; Guest OS means the virtual machine.
The Hypervisor, also known as a Virtual Machine Monitor (VMM), is the software component used to create the virtual machines.
Drawback:
It is an old method.
If we use multiple guest OSes (or) virtual machines, the system performance is low.
To overcome this drawback of virtualization, we use "Containerization", i.e. Docker.
Containerization:
It is used to pack the application along with its dependencies so the application can run. This process is called containerization.
Container:
It is the runtime of the application, which is created through a Docker image.
A container is like a virtual machine, but one which doesn't have its own OS.
Docker Image:
A Docker image is a file used to execute code in a Docker container.
Docker images act as a set of instructions to build a Docker container, like a template.
Docker images also act as the starting point when using Docker.
Before Docker:
First, get the code from GitHub and integrate it with Jenkins.
Integrate Maven with Jenkins, so we get a WAR file.
Then, before Docker, we had to install the dependencies (Java, Tomcat, MySQL) on every server where we deploy the WAR.
After Docker:
First, get the code from GitHub and integrate it with Jenkins.
Integrate Maven with Jenkins, so we get a WAR file.
Here, we're not going to install dependencies on any server, because we're following the containerization rule.
Instead, we create an image, i.e. the image is the combination of the application and its dependencies:
image = WAR + Java + Tomcat + MySQL
Now both the application and the dependencies are present in this image. So, overall this process is called containerization.
So, whenever you want to run your application, run that image in the particular environment. No need to install the dependencies again, because they are already present in the image.
So, after running the image, the application and its dependencies are present on that server.
When we run an image, a container gets created; inside the container we have the application.
Many of these images are already prebuilt by Docker.
The container is independent of the AMI, i.e. whether we launch the AMI with Ubuntu (or) CentOS or any other OS, the container will work.
So, overall, after Docker there is no need to install dependencies in any environment. We can just run the images in that particular environment. If the container is created, the application is created.
Docker
It is an open-source centralized platform designed to create, deploy and run applications.
Docker is written in the Go language.
Docker uses containers on the host OS to run applications. It allows applications to use the same Linux kernel as the host computer, rather than creating a whole virtual OS.
Docker is platform independent, i.e. we can install Docker on any OS, but the "Docker engine" runs natively only on Linux distributions.
Before Docker, many users faced the problem that a particular piece of code ran on the developer's system but not on the user's system.
Docker is a PaaS product that uses OS-level virtualization, whereas VMware uses hardware-level virtualization.
A container has OS files, but their size is negligible compared to the original files of that OS.
Docker Architecture:
1. Docker Client
a. It is the primary way many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to the Docker daemon, which carries them out.
b. The docker commands use the Docker API.
c. Overall, this is where we perform the commands.
2. Docker Host
a. It contains the containers, images, volumes and networks.
b. It is also the server where we install Docker on a system.
3. Docker Daemon
a. The Docker daemon runs on the host OS.
b. It is responsible for running containers and managing Docker services.
4. Docker Registry
a. A Docker registry is a scalable open-source storage and distribution system for Docker images.
b. It is used for storing and sharing the images.
c. Eg: For Git we have GitHub; likewise, for Docker we have the Docker registry.
Advantages of Docker:
Running your services like this is much cheaper than running them on standard servers.
1. You can’t use directly, you need to start/restart first (observe the docker version before and after
restart) (Just like Jenkins)
2. You need a base image for creating a container
SA
3. You can’t enter directly into container, you need to start first
4. If you run an image, By default one container will create
5. Docker client and Host - Both are in same server
Now, we get the server details. When Docker is in the running state we can see the server details, i.e. when the daemon is in the running state we can see the server details; when the daemon is not running we can see only the client details.
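A quick way to see this (a minimal sketch; the Server section appears in the output only while the daemon is running):
docker version → shows a Client section and, if the daemon is running, a Server section
sudo systemctl stop docker → stop the daemon
docker version → now only the Client details are shown, with an error for the Server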
Create Docker Image
Checking how many images are present
docker images
Create an Image
docker pull ubuntu (or) docker run ubuntu
Here, ubuntu is an image. If we create a container through this image, that container will run on the Ubuntu OS.
See the list of images
docker images
To get the image count
docker images | wc -l
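To get inside a container, we first create one in interactive mode (a sketch; cont-1 is the container name used in the commands below):
docker run -it --name cont-1 ubuntu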
When you perform the command, you will get a 'root@containerID' terminal. That means we are inside the container.
If we give 'll', we will get the container's default files.
If you type exit, you come out of the container.
docker ps -a
You will see the list of containers, but all the containers are in the exited state.
So, here both default and normal containers are present.
The main difference between default and normal containers is: even if you run the default containers, they stay in the exited state, because they don't have '-it'.
Going inside the container
docker attach cont-1
We got the container terminal. Now if you type 'exit' we come out of the terminal and our container goes into the exited state.
So, how do we leave without putting the container into the exited state? For a normal exit, just press ctrl+p,q in the terminal.
Start the container
docker start cont-1
docker ps -a
docker attach cont-1
Now, I want to exit but not put the container into the exited state.
So perform ctrl+p,q. This exits from that session, but the container does not leave the running state.
docker ps -a
Stop the container
docker stop cont-1
Note: If you want to go from the exited state to the running state, you have to start the container.
See the running containers
docker ps
Delete the container
docker rm containerName/containerID
Note: We can't delete running containers. First, we have to stop the container, then we can delete it.
HTTPD
If you want to deploy a web application, we have to take either the HTTPD (or) NGINX image.
If we run (or) pull the image, that image will be downloaded locally.
So, whenever you run the image a container is created; inside the container we have the web application.
1. Create the HTTPD image
DO
docker pull httpd
docker images
2. Now, create the container by using the image
docker run -itd --name cont1 -p 8081:80 httpd
So, here the container port changes depending on the image: for HTTPD/NGINX it is 80, and for Tomcat it is 8080.
So, here we are accessing the application through the host port.
Here, -d is used for detached mode, meaning the container keeps running in the background.
Now, after running the container, copy publicIP:8081 and you can access the httpd application.
Now, if I want to get into the container, we can't perform the 'docker attach' command, because we're using '-d'.
exec: It is a command used to run commands inside the container without actually going inside it, in detached mode.
So, now I want to see all the files inside the container.
Syntax: docker exec containerName command
Above, we're performing the commands from outside; we are not going inside the container.
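For example (a sketch; cont1 is the httpd container created above, and /usr/local/apache2/htdocs is where the httpd image keeps its web files):
docker exec cont1 ls /usr/local/apache2/htdocs → lists the files without entering the container
docker exec -it cont1 /bin/bash → gives an interactive shell when you do want to go inside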
apt update -y
If we want to use anything, we have to install it first:
apt install vim -y
vim index.html → it works
So, where this approach won't work, we have to use the Dockerfile instead.
Inspect
Through inspect we can see the container's full information: configuration, network, mounts, etc.
docker inspect containerName/containerID
We can also check one particular piece of information from inspect. For that, we can use "grep".
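For example (a sketch):
docker inspect cont1 | grep IPAddress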
Curl
Here, curl means Check URL. Using curl, we check the network connections.
curl publicIP:8081
That means, even when we can't check the app in a browser, we can check it directly from the server.
So, curl tells whether the application is running or not.
Container Limits
The CPU (or) memory allocated to a container is called the container limits.
Generally, we take t2.micro, i.e. 1 CPU, 1 GB RAM. So, in the overall server we have 1 CPU & 1 GB RAM.
Now, for the containers created inside this server, I want to provide 0.25 CPU and 250 MB of RAM. These are what we call limits.
Note: we have to mention these limits while running the container command.
Now, check whether the limits are applied or not. For that, we have to inspect the container.
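A minimal sketch, assuming the standard --cpus and --memory flags (cont2 is an illustrative name):
docker run -itd --name cont2 --cpus=0.25 --memory=250m httpd
docker inspect cont2 | grep -i -E 'memory|nanocpu' → verify the limits are applied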
In the diagram above, we created a container from an image (nginx). Suppose we need the same container again: generally, we would create another image and create the container from it.
But instead of that process: the files are already present inside the container, so if we create an image from that container, we get the same files in that image.
Now, instead of creating a separate container from scratch, if we run that image, all the files come directly into the new container.
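A sketch of that flow (docker commit is the standard command; the names are illustrative):
docker commit cont1 image1 → create image1 from the existing container cont1
docker run -it --name cont2 image1 → the new container comes up with the same files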
There are 2 ways to create Docker images:
1. Command (docker commit)
2. Docker file
Docker file
It is basically a text file which contains a set of instructions (or) commands.
To create Docker images, we use a Docker file.
We don't have multiple Docker files, i.e. for a single directory we have a single Docker file.
In the file name 'Dockerfile', the first letter should be a capital 'D'.
In the Docker file we have components (instructions), and the components also start with capital letters.
This is not mandatory, but it is the official/formal convention, so we maintain capital letters.
How it works:
Docker file → (docker build) → Image → (docker run) → Container, i.e. the application is started/running.
1. FROM
This is the 1st component in the Docker file. It defines the base image (HTTPD, NGINX, UBUNTU, ...).
2. LABEL (or) MAINTAINER
We can give the author details, i.e. we mention the name of the author who wrote the Docker file.
3. RUN
It is used to execute commands while we build the image.
4. COPY
It is used to copy files from the host/server into the container.
5. ADD
It is the same as COPY, but it can also download files from the internet (eg: tar.gz, zip) and send them to the container.
6. EXPOSE
It is used to publish the port numbers. It is only used for documentation purposes.
7. WORKDIR
It is used to create a directory and go directly into that particular directory/folder,
i.e. inside the container we have many folders; to work in one particular folder, we use WORKDIR.
8. CMD
It is also used to execute commands, but these run when the container is created, and they can be overridden at runtime.
9. ENTRYPOINT
It is also used to execute commands, but its values are not overridden by runtime arguments.
10. ENV
It is used to assign/declare variables. Here, we can't override the values at runtime.
11. ARG
It is also used to assign/declare variables. Here, we can override the values at build time.
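A minimal Docker file sketch putting these components together (the package, paths and values are illustrative assumptions):
FROM ubuntu
LABEL maintainer="sandeep"
ARG PKG=git
ENV APP_DIR=/app
RUN apt update -y && apt install -y $PKG
WORKDIR $APP_DIR
COPY index.html .
EXPOSE 80
CMD ["git", "--version"]
Here COPY assumes an index.html exists in the build directory, and PKG can be changed at build time with --build-arg.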
For RUN
When we build the Docker file, through 'RUN' we get an image, and this image already contains the data (the command has executed at build time).
For CMD
When we build the Docker file, we get an image, but this image doesn't contain that data yet.
When we run that image, we get containers, and in those containers we have the data (the command executes at container start).
If we give both CMD and ENTRYPOINT in a Docker file at the same time, the 1st preference goes to ENTRYPOINT, i.e. ENTRYPOINT has the higher priority.
ENTRYPOINT values override the commands/values in CMD.
Eg: If we give 'git' in CMD and 'maven' in ENTRYPOINT, Docker will choose maven.
Now, we get an image named 'sandy'.
This is the way we can create the Docker file; through that Docker file we can create images, and through those images we can create containers.
In COPY, the first file name is the source file on the server and the 2nd file name is the destination inside the container.
So, here we're copying a server file to inside the container.
ADD is also the same as COPY, but here we can additionally download files from the internet and copy them into the container.
1st difference:
When you're using RUN, the package is installed directly when you build the Docker file, i.e. when you perform
docker build -t image .
git gets installed.
Through RUN we can't change or add tools at run time with something like
docker run image tree httpd
When you're using CMD, you still have to build the Docker file, but the package is not installed at build time. You have to run the image; then the git package is installed.
docker build -t image .
When you're using ENTRYPOINT, it is the same: you build the Docker file, the package is not installed, and only when you run the image is the git package installed.
2nd difference:
docker build -t image . → shows successfully built
docker run image
Here, git is installed because we didn't give anything in ENTRYPOINT; the default we gave in CMD is 'git', so it takes git.
Now, I will give 'httpd' at run time, i.e.
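(A sketch; the exact arguments depend on the Docker file used above:)
docker run image httpd
Whatever we pass at run time replaces the CMD value, so httpd is taken instead of git. With ENTRYPOINT this replacement does not happen; run-time values are appended to the ENTRYPOINT command instead.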
3rd difference:
Now, try the same code: instead of 'RUN' use 'CMD' & 'ENTRYPOINT' and compare the behaviour.
Overall, these are the main differences between RUN, CMD & ENTRYPOINT.
ENV:
Using ENV we can't override the value. When we build the Docker file, we can see the value printed in the build output on the command line.
ARG:
With ARG, we can change the value only at build time, when we build the Docker file.
Output :
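A small sketch of the difference (the variable names are illustrative):
FROM ubuntu
ENV NAME=sandy
ARG VERSION=1.0
RUN echo "name=$NAME version=$VERSION"
docker build -t image . → prints the defaults
docker build -t image --build-arg VERSION=2.0 . → only the ARG value changes; ENV cannot be overridden this way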
Apply Tags to Images
Use tags when we don't want to override the image, i.e. we want to keep both the old image and the new image with the same name, without overriding.
Because if we create an image with the same name, that image gets overridden, so we lose that data.
So, if we use tags, our image will not be overridden.
We almost always use tags when we build a Dockerfile.
The command is
docker build -t image:1 .
docker build -t image:2 .
DOCKER VOLUMES
All volumes are managed by Docker and stored in a dedicated directory on your host, usually /var/lib/docker/volumes on Linux systems.
If we update some data inside one container, I want to get the same updated data inside another container automatically.
Eg:
vim Dockerfile
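(The original file contents are not shown; a minimal sketch that would produce some default files could be:)
FROM ubuntu
RUN touch file1 file2 file3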
Create the container → docker run -it --name cont image → ll → we got the files
Now, create some files inside the container → docker attach cont → touch a b c
Now, create an image from the container. So, perform → docker commit cont image1
Again create a container → docker run -it --name cont1 image1 → I got all the files
Now, again I created some files inside cont1, and I want those files in "cont". Usually we won't get them.
So, if you want that replication, we use the Docker volumes concept.
VOLUMES
If there is any data present in the volume, we can share it with any other containers.
The volume will not get deleted (even if the container is).
Points to be noted:
When we create a container, the volume is created along with it.
A volume is simply a directory inside our container.
First, we have to declare the directory as a volume and then share the volume.
Even if we stop/delete the container, we can still access the volume and the data inside the volume.
You can declare a directory as a volume only while creating the container.
We can't create a volume from an existing container.
You can share one volume across any number of containers, but only at the time of creating a container, not for an existing container.
The volume will not be included when you update an image,
i.e. when you update the image, the volume data will not be updated; it will just show the volume name.
If container-1's volume is shared with container-2, the changes made by container-2 will also be available in container-1,
i.e. if two containers have the same volume and we update the data in one container, the data automatically gets updated in the other container too.
We can share our volume among different containers:
a. container → container
b. Host → container
There are 2 ways to create volumes:
1. Command
2. Dockerfile
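The command way (a sketch; /sandy is the volume directory used in the sharing example below):
docker run -it --name cont1 --privileged=true -v /sandy ubuntu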
Now, we have to share the data. Right now, I have data in "cont1"; we need to share it with "cont2".
The command is → docker run -it --name NewContainerName --privileged=true --volumes-from VolumeContainerName imageName
docker run -it --name cont2 --privileged=true --volumes-from cont1 ubuntu
Here, --volumes-from cont1 is what shares the volume (--privileged=true just gives the container extended privileges).
Now check the files inside the container → ll → cd sandy → we have the files
Now, create files in "cont2" and check in "cont1". You will get the data; that means the data is replicated.
Note :
Even if we delete a container, when we create files inside the volume ("/data"), the changes automatically happen in every other container that uses the same volume,
i.e. we can also access the volume data from the local host.
CREATING MULTIPLE VOLUMES IN A CONTAINER
docker run -it --name cont3 --privileged=true --volumes-from cont2 -v /sandy ubuntu
So, here in cont3 we get one volume from cont2 and another volume that we created.
So, like this we can maintain multiple volumes in a container.
Eg-1:
So, here overall we created the volume separately and attached that volume to a container.
We can't attach a volume to existing containers.
docker run -it --name cont -v /home/ec2-user:/volumeName -v manualVolume:/volumeName ubuntu
docker run -it --name cont -v /home/ec2-user:/vol1 -v sandy:/vol2 ubuntu
ll → so, we're having 2 volumes, named vol1 and vol2
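The Dockerfile way (the full file isn't shown; from the result below it declared two volumes, something like):
FROM ubuntu
VOLUME ["/Sandeep"]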
VOLUME ["/Chikkala"]
Save and exit from the Dockerfile.
Build the Dockerfile → docker build -t image .
Creating the container
docker run -it --name cont image → ll → we are having the Sandeep and Chikkala volumes
DOCKER NETWORKS
Docker Network is used to enable communication between multiple containers that are running on a host.
Why Network ?
Let's assume we have 2 containers, an APP and a DB container. The APP container has to communicate with the DB container, so the developer will write code to connect the application to the DB container.
But here, the IP address of a container is not permanent. If a container is removed due to a hardware failure, a new container will be created with a new IP, which can cause connection issues.
To resolve this issue, we create our own network, i.e. we use Docker networks to create our custom/own network.
Now, create a container and do inspect. In the inspect output you can see the full network data, i.e. you can see the IP address and everything.
Each container can have multiple networks, and we have different types of Docker networks:
Bridge Network :
It is the default network, with which containers communicate with each other within the same host.
Create one container, and inspect the container.
Usually, the bridge network is what gives the container its IP address.
Host Network :
When you need your container IP and the EC2 instance IP to be the same, we have to use the host network,
i.e. 172.31.3.321 → host/server private IP
Normally we get the default bridge IP, but when we want the container to use the host's private IP we use the host network.
None Network:
When you don't want the containers to get exposed to the world, we use the none network.
It will not provide any network to our container, i.e. no IP address.
docker run -it --name cont --network none ubuntu
Overlay Network:
It is used when you want to establish a connection between containers that are present on different servers.
(If we attach multiple networks to a container, its communication options increase.)
So, these are the Docker networks. The first 3 (bridge, host, none) exist by default; normally, we use the bridge network.
Create Custom Network → docker network create sandeep
see the list of networks → docker network ls
Now, we have to attach the custom network to our container; the command is sketched below.
If a network is not attached to any container, we simply call it an unused network.
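(A sketch, attaching the custom network created above:)
docker run -it --name cont --network sandeep ubuntu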
If you want to delete networks, the commands are
docker network prune → deletes all unused networks
docker network rm sandeep
If the network is attached to a container, we can't delete it.
DOCKER HUB/ DOCKER REGISTRY
It is used to store images. Docker Hub is the default registry.
2 types of registry:
1. Cloud registry:
Here, we are storing the images in the cloud, like
Docker Hub
2. Local registry:
Here, we are storing the images locally, like
Nexus
JFrog
So, here the cloud-based registry is preferable, because with Docker Hub we just do the account creation and store the images. But in a local registry like Nexus, we have to take a t2.medium and do the setup, which is a little bit complex.
Note:
First, go to Google → we need to create a Docker Hub account. It will ask for a username, mail and password, and verification happens. Then log in; that's it.
When you want to upload an image to Docker Hub, first we need to log in to Docker Hub.
The command is → docker login
You have to provide the username and password.
After login it will show 'Login Succeeded'.
Without logging in, you can't push the image to Docker Hub.
Step - 1 :
If you want to push an image, you have to tag that image first.
For that, just write one sample Docker file and build it; you will get a custom image. Now push that image into Docker Hub, as sketched below.
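(A sketch using the repo name from the pull command below:)
docker build -t image .
docker tag image chiksand/repo
docker push chiksand/repo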
Step - 2 :
Now, we need to check whether the image works correctly or not. For that we need another server; so, launch a normal server and install Docker on that server.
FYI, the pull command will be shown in Docker Hub, like the image below:
docker pull chiksand/repo:latest
Now, through this image, create the container; you can access the application in the browser.
So, like this we can pull the image onto servers and do our work.
Now, we get a separate image in Docker Hub. So, like this you can store multiple images in Docker Hub.
Create a new repo as private, then do the same first two steps up to the build.
docker tag image1 chiksand/private-repo
docker push chiksand/private-repo
This works when you have already logged in to Docker Hub on your server.
There are so many pre-built images in Docker Hub; the process is that we just search for the particular image we need.
Eg: Usually, the Jenkins setup is a little bit hard, so here we just pull the Jenkins image.
DOCKER SWARM
It is a tool within Docker that allows us to manage and handle multiple containers at the same time.
A Docker Swarm is a group of servers that runs the Docker application,
i.e. for running the Docker application, in Docker Swarm we create a group of servers.
We use it to manage multiple containers on multiple servers.
A swarm has 2 types of nodes:
a. Worker nodes
b. Manager nodes
The worker nodes are connected to the manager nodes.
So, any scaling, i.e. increasing containers (or) updates, first goes to the manager node.
From the manager node, everything goes to the worker nodes.
Manager nodes are used to divide the work among the worker nodes.
Each worker node will work on an individual service for better performance,
i.e. 1 worker node, 1 service.
1. SERVICE
It represents a part/feature of an application.
2. TASK
A single unit of work that we are doing.
3. MANAGER
It manages/distributes the work among the different nodes.
4. WORKER
It works for a specific purpose of the service.
PRACTICAL
1. Take 1 normal server named manager, and inside the server install & restart Docker.
2. Initialize the Swarm (see the sketch below).
Now, the task is: from the manager, we need to create a container on the 2 slave servers at the same time.
So, here we create it in the service format. Here, service means container.
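(A sketch of the swarm initialization; the join token is generated by the init command:)
On the manager → docker swarm init
This prints a 'docker swarm join --token <token> <managerIP>:2377' command.
Run that join command on each slave/worker server, then verify on the manager → docker node ls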
Create a Service/Container
docker service create --name sandy --replicas 3 --publish 8081:80 httpd
here, sandy → service name
replicas → duplicates, i.e. if a container is stopped/deleted, another container with the same configuration is automatically created
3 → the number of duplicate containers; give however many containers you need
Now, 1 container is created here.
See the list of services
docker service ls
Actually we get 3 containers, but here it will show you only 1 container, because the manager acts as both master & slave.
Now, check the slave servers; you will get the remaining containers.
Note:
If a worker node contains fewer containers, the manager will send containers to that worker node; it balances the work load,
i.e. now take another service with 2 replicas; this time the containers will be added on Worker-2.
Task -2 : Create Docker file, and we have to run the image from the Docker file
FROM ubuntu
RUN apt update -y
RUN apt install apache2 -y
RUN echo "hi this is app" > /var/www/html/index.html
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
Build the image → docker build -t image .
Create the service based on the image
docker service create --name sandy --replicas 3 --publish 8084:80 image
docker service ls
Here, all 3 containers are present inside the manager, not on the slave servers, because that is a local image.
httpd, nginx and ubuntu are public images.
When you check worker-1 and worker-2, the image is not there; that's why when we run this command we see 'no such image' there.
So, the 3 containers are created inside the manager server.
So, httpd, nginx, ubuntu are public images; they are pulled from Docker Hub.
So, now we push our "image" into Docker Hub.
In the manager server → docker login, then tag & push the image, and create the service again from the pushed image.
docker ps -a
Now, we have 1 container in the manager, and likewise the slaves also have their containers.
So, like this we can also create services in Docker Swarm through a custom image.
1. See the list of services
docker service ls
2. Check how many containers are inside a service
docker service ps serviceName
Updating the image of a service:
1. Update the Dockerfile
2. Build the Dockerfile
3. docker tag image chiksand/repo
4. docker push chiksand/repo
5. Now update the running service to the new image
docker service update --image ImageName ServiceName
docker service update --image chiksand/repo swarm
Check in the browser whether it's working or not.
Rollback to Previous image
Here, I want to go back to the previous image. Note: without ever having updated the image, you can't roll back to a previous image.
So, here the common query is: we already updated the image; how can we get the previous image back?
Docker stores the update history (log files), so we can roll back to the previous image.
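(A sketch; sandy is the service name used above:)
docker service rollback sandy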
Scaling:
If you want to increase/decrease the replicas of the containers, we use scaling:
1. Container scaling → using replicas (see the scale command in the COMMANDS section below)
2. Server scaling → using AWS; based on the users' requests it will increase
So, overall, if you want to remove a node from the manager: first, the node has to leave the swarm, then we can perform the remove command, as sketched below.
So, if you want to add the nodes again, you have to do the joining process from the start.
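(A sketch with the standard commands:)
On the worker → docker swarm leave
On the manager → docker node rm nodeName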
DOCKER - COMPOSE
In Docker Swarm, we created 1 container/service across multiple servers using the master & slave (or) manager & worker concept.
But in Docker Compose, we deploy multiple containers on a single server.
It is the complete opposite of Docker Swarm.
Here, multiple containers means the full application, i.e. the frontend, backend and database containers are present.
So, right now these 3 containers will be present on a single server. For that, we are using Docker Compose.
We could do this manually, but here we do it through a "compose file".
So, suppose we have 3 apps. For each of these, to create a container we would first write the Docker file, then build it to get the image, then run the image to get a container. With a compose file, all the containers we define in it are created in one go.
Here it is automated, i.e. the image build & container creation happen at one time.
So, overall, in real time when developers write the code, we write the Docker files for it, and on top of those Docker files we write the compose file and execute it.
Def:
Docker Compose is a tool used to build, run and ship multiple containers for an application.
It is used to create multiple containers on a single host/server.
It uses a YAML file to manage multiple containers as a single service,
i.e. in the Docker Compose file we write the container configurations, and they should be written in YAML format.
PRACTICAL
Perform the steps below, and you get docker-compose on your machine.
Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose version
vim docker-compose.yml
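(The original file isn't shown; a minimal sketch of a compose file with image-based services could be:)
version: "3"
services:
  frontend:
    image: httpd
    ports:
      - "8081:80"
  backend:
    image: nginx
    ports:
      - "8082:80"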
Inside 'services' we define the containers, and we have to give the image name for each.
If I update the code in the frontend, i.e. in its Docker file, we again need to build the image, and the new image has to be placed inside the compose file.
So, whenever developers change the code, we need to build that image and update it in compose.yml.
Instead of doing this, we can point the compose file at the Docker file itself using 'build'; then whenever the developer writes code, it is picked up automatically when we bring the compose file up.
vim docker-compose.yml
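(A sketch using build instead of image; ./frontend is an assumed path to that Docker file:)
version: "3"
services:
  frontend:
    build: ./frontend
    ports:
      - "8081:80"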
Now, execute it (docker-compose up -d) and access it in the browser; you will get the output.
PIVN
P → ports
I → Image
V → Volume
N → Network
Now, we have to add these components inside the docker-compose file, as sketched below.
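(A sketch with all four PIVN components; the volume and network names are illustrative:)
version: "3"
services:
  web:
    image: httpd
    ports:
      - "8081:80"
    volumes:
      - myvol:/usr/local/apache2/htdocs
    networks:
      - mynet
volumes:
  myvol:
networks:
  mynet: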
(FAQ) Suppose I got an issue in one service in Docker Compose; how to resolve it?
Here, we go into the particular Docker file and update it, then build/run the compose file again. The old containers keep running while the new containers are deployed.
DOCKER-COMPOSE COMMANDS
4. See the compose images
docker-compose images
5. See the compose containers
docker-compose ps
docker ps -a → we see manual created containers
6. See the logs in Docker Compose; the containers' start & end details are present inside the logs
docker-compose logs
7. See the code configuration in compose file, not from vim editor
docker-compose config
8. Pause & unpause a container
docker-compose pause
i.e. no updates will happen in that container; the container is effectively frozen
docker-compose unpause
Usually, we use the file name "docker-compose.yml". If we use another name, we have to pass it explicitly with the -f option.
Eg:
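docker-compose -f custom-compose.yml up -d
(custom-compose.yml is an illustrative name; -f is the standard flag for a non-default compose file.)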
DOCKER STACK
If you want to deploy multiple services across multiple servers, we use Docker Stack.
Eg: Let's assume we have 2 servers, a manager and a worker. If we deploy a stack with 4 replicas, 2 will be present on the manager and 2 on the worker.
Here, the manager divides the work based on the load on each server.
PRACTICAL
Step -1 :
Install & restart the docker
In manager
Step -2 :
Write the docker-compose file; for that, install docker-compose.
vi docker-compose.yml
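(The original file isn't shown; a minimal stack sketch with replicas could be:)
version: "3"
services:
  web:
    image: httpd
    ports:
      - "8081:80"
    deploy:
      replicas: 4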
Step -3: Execute this file
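(The standard deploy command; stackName is whatever name you choose:)
docker stack deploy -c docker-compose.yml stackName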
Suppose we have the Paytm app. Usually, 1k people access Paytm daily; due to festivals, this time 100k people use the application. Due to the multiple requests, one server can't handle the capacity.
So, that's why I want to run my application on multiple servers; for that we are using a cluster.
Here, if you need multiple containers, we can use replicas for this. Replicas are used for high availability, and the application performs well.
COMMANDS
1. Create/adjust the replicas for a service
docker service scale serviceName=2
2. See the list of stacks
docker stack ls
3. Remove the stack
docker stack rm stackName
4. See the stack related services
docker stack services stackName
5. See the commands in stack
docker stack
Through the Docker Compose file we can run a Docker Stack: we first execute this compose file in the stack (docker stack deploy), as shown above.
PORTAINER
It is a Docker GUI. It works with the help of Docker Stack, i.e. we can create a GUI for the entire Docker setup by using the Portainer concept.
It is a container organizer, designed to make tasks easier, whether they're clustered (or) not.
It is able to connect multiple clusters, access the containers, and migrate stacks between clusters.
It is not a testing environment; it is mainly used for production routines in large companies.
Portainer consists of 2 elements:
The Portainer Server
The Portainer Agent
CHANGING THE PORT NUMBER OF A CONTAINER
Create the container → docker run -itd --name cont -p 8081:80 httpd
curl publicIP:port
Now, I want to change the port number.
First, stop the container.
So, go to cd /var/lib/docker → ll
cd containers → cd contID →
Here, go to hostconfig.json → vi hostconfig.json
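(Inside hostconfig.json, the relevant part is the PortBindings section; a sketch changing the host port from 8081 to 8082:)
"PortBindings": {"80/tcp": [{"HostIp": "", "HostPort": "8082"}]}
After editing, restart the Docker service and start the container again; the new host port takes effect.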
So, like this we can change the port number of a container.