Docker
VMs vs Docker: a virtual machine needs a separate operating system for each VM, whereas operating system resources can be shared between containers within Docker.
Q. What is Docker?
• Docker is a containerization platform for building applications, in which a container packages an application's binaries and libraries along with its required dependencies; these containers can then easily be shipped to run on other machines.
• Each and every application runs on separate containers and has its own set of
dependencies & libraries.
• This makes sure that each application is independent of other applications, giving
developers surety that they can build applications that will not interfere with one another.
• Docker is a tool designed to make it easier to create, deploy and run applications by using
containers.
• In simple words, Docker is a tool which is used to automate the deployment of
applications in lightweight containers so that applications can work efficiently in
different environments
Note: Container is a software package that consists of all the dependencies required to run an
application
What is the purpose of Docker?
• The purpose of Docker is to help developers and DevOps teams become more productive and less error-prone.
• Setup and deployment of new projects becomes much easier and more time-efficient with the help of Docker.
• Consider a scenario where Windows is installed on your system and you have to deploy and test your application on different operating systems, say Fedora, CentOS, and Ubuntu. How will you do that? This is where Docker comes to your rescue.
• Again, consider a scenario where you have to test your application with different PHP versions, say PHP 7.1, 7.2, and 7.3, combined with different web servers such as Nginx and Apache. How will you do that? Doesn't that seem complicated? This is where Docker comes to your rescue.
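As a sketch of that PHP scenario, a single compose file can spin up the same app under two PHP versions side by side (the service names and host ports here are illustrative; php:7.1-apache and php:7.3-apache are official image tags):

```yaml
version: '3'
services:
  app-php71:
    image: php:7.1-apache   # app served by PHP 7.1 + Apache
    ports:
      - "8071:80"           # reachable on host port 8071
  app-php73:
    image: php:7.3-apache   # same app served by PHP 7.3 + Apache
    ports:
      - "8073:80"           # reachable on host port 8073
```

Running docker-compose up then serves both versions on different host ports at the same time.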
Let’s use Docker with an example!
Imagine a situation where you plan to rent out a house on Airbnb.
The house has 3 rooms but only one cupboard and one kitchen,
and none of the guests are ready to share the cupboard and the kitchen,
because every individual has different preferences when it comes to cupboard and kitchen usage.
Now map this example onto computers, where three applications need different frameworks.
Problem statement: What if a person wants to run all of his applications, each with its own suitable framework, on one machine?
Solution: Docker lets each application run in its own container, with its own framework and dependencies, while still sharing the underlying machine.
As a result, Docker makes more efficient use of system resources.
Docker Architecture
• The Docker client consists of Docker build, Docker pull, and Docker run.
• The client approaches the Docker daemon that further helps in building, running, and
distributing Docker containers.
• Docker client and Docker daemon can be operated on the same system; otherwise, we can
connect the Docker client to the remote Docker daemon. Both communicate with each
other using the REST API, over UNIX sockets or a network.
The basic architecture in Docker consists of three parts:
• Docker Client
• Docker Host
• Docker Registry
Docker Client
• It is the primary way for many Docker users to interact with Docker.
• It uses command-line utility or other tools that use Docker API to communicate with
the Docker daemon.
• A Docker client can communicate with more than one Docker daemon.
Docker Host
In Docker host, we have Docker daemon and Docker objects such as containers and images. First,
let’s understand the objects on the Docker host, then we will proceed toward the functioning of
the Docker daemon.
• Docker Objects:
o What is a Docker image? A Docker image is a type of recipe/template that can be used for creating Docker containers. It includes the steps for installing and configuring the necessary software.
o What is a Docker container? A running instance created from the instructions found within a Docker image. It is not a full virtual machine — it shares the host kernel — but it consists of the entire package required to run an application.
• Docker Daemon:
o The Docker daemon listens for Docker API requests and manages Docker objects such as images, containers, volumes, etc. The daemon builds an image based on a user's input and can then push it to a registry.
o In case we don't want to create an image, we can simply pull an image from Docker Hub (which might be built by some other user). When we want to create a running instance of our Docker image, we issue a run command, which creates a Docker container.
o A Docker daemon can communicate with other daemons to manage
Docker services.
Docker Registry
• Docker registry is a repository for Docker images which is used for creating Docker
containers.
• We can use a local/private registry or Docker Hub, which is the most popular public example of a Docker registry.
• Suppose Redis is running in a container, but we cannot access it. The reason is that each container is sandboxed. If a service needs to be accessible by a process not running in a container, then the port needs to be exposed via host/Docker port forwarding.
• Once exposed, it is possible to access the process as if it were running on the
host OS itself.
Define the port to be used for the remote connection with Docker port forwarding:
docker run -p [port_number]:6379 -d redis
Define the host name or IP.
For example, to open and bind to a network port on the host you need to provide the parameter -p <host-port>:<container-port>:
$ docker run -d --name redisHostPort -p 6379:6379 redis:latest
4dd56fcecf62eb4b7ce4d651c5c5b60d91e428dfde00975106abef1c98276594
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
4dd56fcecf62 redis:latest "docker-entrypoint.s…" 9 seconds ago Up 9 seconds
0.0.0.0:6379->6379/tcp redisHostPort
By default, the port on the host is mapped to 0.0.0.0, which means all IP addresses. You can
specify a particular IP address when you define the port mapping
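For example, to bind the Redis port only on the host's loopback address, so it is not reachable from other machines, the IP address is prefixed to the mapping (the container name redisLocal is illustrative):

```shell
# Bind container port 6379 only to 127.0.0.1 on the host
docker run -d --name redisLocal -p 127.0.0.1:6379:6379 redis
```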
Port binding
• This method is used for access from outside the container's network.
• To allow communication via the defined ports to containers from outside of the same network, you need to publish the ports by using the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
• You can publish a port in one of the ways below:
1. Expose a port via the Dockerfile EXPOSE instruction (or --expose) and publish it with the -P flag. This binds the exposed port to a random port on the Docker host.
2. Expose a port via the Dockerfile EXPOSE instruction (or --expose) and publish it with the -p 6379:6379 flag. This binds container port 6379 to port 6379 on the Docker host.
We can access the Redis containers using the host computer’s IP address and port
number.
http://ip_address:6379
http://ip_address:6380
• When you want to change the protocol from the default, i.e. TCP, to UDP, use:
$ docker run -d -p 8080:80/udp --name=mycontainername myimagename
$ docker run -d --name MyWebServer -p 8080:80/udp httpd
Let's say you want to publish your image's port on a specific IP address of your Docker host. First, stop and remove the container so that we can reuse the same container name, i.e. MyWebServer.
$ docker stop MyWebServer
MyWebServer
$ docker rm MyWebServer
MyWebServer
• Now, let us start the httpd container with an extra parameter, -P. What this parameter does is "publish all exposed ports to random ports". So in our case, port 80 should get mapped to a random port, which is going to be the public port.
Execute the following command:
$ docker run -d --name MyWebServer -P httpd
60debd0d57bf292b0c3f006e4e52360feaa575e45ae3caea97637bb26b490b10
Next, let us use the port command again to see what has happened:
$ docker port MyWebServer
80/tcp -> 0.0.0.0:32769
We can see that port 80 has been mapped to port 32769. So we can access our web site at
http://<HostIP>:<HostPort>
Specific Port Mapping
So what if we wanted to map it to a port number other than 32769? You can do that via the -p
(note the lowercase) parameter.
$ docker rm MyWebServer
MyWebServer
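A minimal sketch of that specific mapping (host port 8080 is just an example choice):

```shell
# Publish container port 80 on a fixed host port instead of a random one
$ docker run -d --name MyWebServer -p 8080:80 httpd
$ docker port MyWebServer
80/tcp -> 0.0.0.0:8080
```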
Forward everything
If you append -P (or --publish-all=true) to docker run, Docker identifies every port the Dockerfile
exposes (you can see which ones by looking at the EXPOSE lines). Docker also finds ports you
expose with --expose 8080 (assuming you want to expose port 8080). Docker maps each of these
ports to a host port within a given ephemeral port range (usually 32768 to 61000); on Linux, the
configured range can be found in /proc/sys/net/ipv4/ip_local_port_range.
Forward selectively
You can also specify ports. When doing so, you don't need to use ports from the ephemeral port
range. Suppose you want to expose the container's port 8080 on the host's standard HTTP port 80
(assuming that port is not in use). Append -p 80:8080 (or --publish=80:8080) to
your docker run command. For example:
docker run -d -p 80:8080 myimagename
## OR ##
docker run -d --publish=80:8080 myimagename
By default, Docker exposes container ports on the IP address 0.0.0.0 (this matches any IP on the
system). If you prefer, you can tell Docker which IP to bind on. To bind on IP address 10.0.0.3,
host port 80, and container port 8080:
docker run -d -p 10.0.0.3:80:8080 myimagename
12. What are the commands that are available in the Dockerfile?
The following commands (instructions) are available in a Dockerfile:
ADD
CMD
ENTRYPOINT
ENV
EXPOSE
FROM
MAINTAINER
RUN
USER
VOLUME
WORKDIR
Now, let us look at another Dockerfile shown below:
FROM ubuntu
MAINTAINER vijay (vijay15.biradar@gmail.com)
CMD ["date"]
RUN apt-get update
RUN apt-get install -y nginx
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
Here, what we are building is an image that will run the nginx proxy server for us.
These instructions inform Docker that we want to create an image:
FROM – a ubuntu base image with the tag latest
MAINTAINER – the Author field of the generated image is vijay
CMD – defines the command to be run once the container is up, [date]
RUN – runs a package update and then installs nginx on the newly created operating system
ENTRYPOINT – then runs the nginx executable
EXPOSE – opens the mentioned port, 80, on the docker image to allow access from the outside world
The EXPOSE port will be used by default. However, if we want to change the host port then we have
to use the -p parameter.
If you build the image and run the container as follows:
docker build -t webserver-myimage:v1 .
docker images | grep webserver-myimage
webserver-myimage v1 d79c7313bba5 6 minutes ago 16.1MB
docker run -d -p 80:80 --name webserver webserver-myimage:v1
you will find that nginx has started on port 80. And if you visit the page via the host IP, you
will see it served at:
http://localhost:80
Q) Explain how to configure networking in Docker?
Answer:
bridge: The default network driver.
host: For stand-alone containers; removes network isolation between the container and the
Docker host.
overlay: Overlay networks connect multiple Docker daemons.
macvlan: For assigning a MAC address to a container.
none: Disables all networking.
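As a quick sketch, a user-defined bridge network can be created and attached like this (the network and container names here are illustrative):

```shell
# Create a user-defined bridge network
docker network create --driver bridge my_bridge

# Attach a container to it at start time
docker run -d --net=my_bridge --name web nginx

# Inspect the network and its attached containers
docker network inspect my_bridge
```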
version: '3'
services:
  web:
    image: nginx
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=demodb
docker-compose up
docker-compose up -d
docker-compose ps
What is a service?
• A service is a group of containers of the same image:tag.
• Services make it simple to scale your application.
• Services are really just “containers in production.”
• A service only runs one image, but it codifies the way that image runs—what ports it
should use, how many replicas of the container should run so the service has the capacity
it needs, and so on.
• Scaling a service changes the number of container instances running that piece of
software, assigning more computing resources to the service in the process.
• When you create a service, you specify which container image to use and which
commands to execute inside running containers. You also define options for the service,
including:
o the port where the swarm makes the service available outside the swarm
o an overlay network for the service to connect to other services in the swarm
o CPU and memory limits and reservations
o a rolling update policy
o the number of replicas of the image to run in the swarm
• docker service create --replicas 3 -p 80:80 --name hello-app nginx
• docker service scale hello-app=8
Docker services vs docker container
• The docker run command creates and starts a container on the local docker host.
• A docker "service" is one or more containers with the same configuration running under
docker's swarm mode.
• It's similar to docker run in that you spin up a container.
• The difference is that you now have orchestration
• Docker run will start a single container.
• With docker service you manage a group of containers (from the same image). You can
scale them (start multiple containers) or update them.
Services deployment types in Docker
• There are two types of service deployments Replicated services and global services
• For a replicated service, you specify the number of identical tasks you want to run.
• A global service is a service that runs one task on every node.
• There is no pre-specified number of tasks. Each time you add a node to the swarm, the
orchestrator creates a task and the scheduler assigns the task to the new node.
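The two deployment types can be sketched with the following commands (the service names are illustrative):

```shell
# Replicated service: run exactly 3 identical tasks across the swarm
docker service create --replicas 3 --name web nginx

# Global service: run one task on every node in the swarm
docker service create --mode global --name monitor nginx
```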
Figure: how swarm mode accepts service create requests and schedules tasks to worker nodes.
Q. Can we run multiple apps on one server with Docker?
Yes, theoretically we can run multiple apps on one Docker server. But in practice, it is better to
run different components in separate containers.
With this we get a cleaner environment, and the containers can be reused for multiple purposes.
Q. What are the main features of Docker-compose?
Some of the main features of Docker-compose are as follows:
Multiple environments on same Host: We can use it to create multiple environments on the
same host server.
Preserve Volume Data on Container Creation: Docker compose also preserves the volume
data when we create a container.
Recreate the changed Containers: We can also use compose to recreate the changed
containers.
Variables in Compose file: Docker Compose also supports variables in the compose file. In this
way we can create variations of our containers for different environments.
Q. What is the most popular use of Docker?
The most popular use of Docker is in build pipeline.
With the use of Docker it is much easier to automate the development to deployment process in
build pipeline.
We use Docker for the complete build flow from development work, test run and deployment to
production environment.
Q. What is the role of open source development in the popularity of Docker?
Since Linux is an open source operating system, it opened new opportunities for developers
who want to contribute to open source systems.
One of the very good outcomes of open source software is Docker.
It has very powerful features.
Docker has wide acceptance due to its usability as well as its open source approach of integrating
with different systems.
Q. What is Docker Machine?
We can use Docker Machine to install Docker Engine on virtual hosts.
It also provides commands to manage virtual hosts.
Some of the popular Docker machine commands enable us to start, stop, inspect and restart a
managed host.
Docker Machine provides a Command Line Interface (CLI), which is very useful in managing
multiple hosts.
Q. Why do we use Docker Machine?
There are two main uses of Docker Machine:
Old Desktop : If we have an old desktop and we want to run Docker then we use Docker
Machine to run Docker. It is like installing a virtual machine on an old hardware system to run
Docker engine.
Remote Hosts : Docker Machine is also used to provision Docker hosts on remote systems. By
using Docker Machine you can install Docker Engine on remote hosts and configure clients on
them.
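Both uses map onto docker-machine create drivers; a sketch (the host names and the IP address are illustrative):

```shell
# Local VM: create a Docker host inside VirtualBox
docker-machine create --driver virtualbox local-default

# Remote host: install Docker Engine over SSH using the generic driver
docker-machine create --driver generic --generic-ip-address=203.0.113.10 remote-host

# Manage the provisioned hosts
docker-machine ls
docker-machine stop local-default
```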
Q. How will you create a Container in Docker?
To create a Container in Docker we have to create a Docker Image. We can also use an existing
Image from Docker Hub Registry.
We can run an Image to create the container.
Q. Do you think Docker is Application-centric or Machine-centric?
Docker is an Application-centric solution.
It is optimized for deployment of an application.
It does not replace a machine by creating a virtual machine. Rather, it focuses on providing ease
of use features to run an application.
Q. Can we run more than one process in a Docker container?
Yes, a Docker Container can provide process management that can be used to run multiple
processes.
There are process supervisors like runit, s6, daemontools etc that can be used to fork additional
processes in a Docker container.
Q. What are the objects created by Docker Cloud in Amazon Web Services (AWS) EC2?
Docker Cloud creates following objects in AWS EC2 instance:
VPC : Docker Cloud creates a Virtual Private Cloud with the tag name dc-vpc. It also creates a
Classless Inter-Domain Routing (CIDR) block with the range 10.78.0.0/16.
Subnet : Docker Cloud creates a subnet in each Availability Zone (AZ). In Docker Cloud, each
subnet is tagged with dc-subnet.
Internet Gateway : Docker Cloud also creates an internet gateway with name dc-gateway and
attaches it to the VPC created earlier.
Routing Table : Docker Cloud also creates a routing table named dc-route-table in Virtual
Private Cloud. In this Routing Table Docker Cloud associates the subnet with the Internet
Gateway.
Q. How will you take backup of Docker container volumes in AWS S3?
We can use a utility named Dockup provided by Docker Cloud to take backup of Docker
container volumes in S3.
Q. What are the three main steps of Docker Compose?
Three main steps of Docker Compose are as follows:
Environment : We first define the environment of our application with a Dockerfile. It can be
used to recreate the environment at a later point of time.
Services : Then we define the services that make our app in docker-compose.yml. By using this
file we can define how these services can be run together in an environment.
Run : The last step is to run the Docker Container. We use docker-compose up to start and run
the application.
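The three steps can be sketched together (the file contents shown in comments are a minimal illustration, not a real project):

```shell
# Step 1 - Environment: a Dockerfile defines the app image, e.g.
#   FROM python:3.9
#   COPY . /app
#   CMD ["python", "/app/app.py"]

# Step 2 - Services: docker-compose.yml wires the services together, e.g.
#   services:
#     web:
#       build: .

# Step 3 - Run: build and start everything in the background
docker-compose up -d
```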
Q. What is Pluggable Storage Driver architecture in Docker based containers?
Docker storage driver is by default based on a Linux file system. But Docker storage driver also
has provision to plug in any other storage driver that can be used for our environment.
In Pluggable Storage Driver architecture, we can use multiple kinds of file systems in our Docker
Container.
With the docker info command we can see the Storage Driver that is set on a Docker daemon.
We can even plug in shared storage systems with the Pluggable Storage Driver architecture.
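For example, the active driver can be checked like this (the overlay2 value shown is just a typical default, not guaranteed):

```shell
# Print only the storage driver line from the daemon info
docker info | grep -i "storage driver"
# e.g. Storage Driver: overlay2
```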
Q. What are the main security concerns with Docker based containers?
Docker based containers have following security concerns:
Kernel Sharing: In a container-based system, multiple containers share same Kernel. If one
container causes Kernel to go down, it will take down all the containers. In a virtual machine
environment we do not have this issue.
Container Leakage: If a malicious user gains access to one container, it can try to access the
other containers on the same host. If a container has security vulnerabilities it can allow the user
to access other containers on same host machine.
Denial of Service: If one container occupies the resources of a Kernel then other containers will
starve for resources. It can create a Denial of Service attack like situation.
Tampered Images: Sometimes a container image can be tampered. This can lead to further
security concerns. An attacker can try to run a tampered image to exploit the vulnerabilities in
host machines and other containers.
Secret Sharing: Generally one container can access other services. To access a service it
requires a Key or Secret. A malicious user can gain access to this secret. Since multiple
containers share the secret, it may lead to further security concerns.
Q. How can we check the status of a Container in Docker?
We can use the docker ps -a command to get the list of all the containers in Docker. This command
also returns the status of these containers.
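Besides docker ps -a, a single container's state can be queried directly (the container name mycontainer is illustrative):

```shell
# List all containers with their STATUS column
docker ps -a

# Query one container's status field directly
docker inspect -f '{{.State.Status}}' mycontainer
# e.g. running / exited / paused
```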
Q. What are the main benefits of using Docker?
Docker is a very powerful tool. Some of the main benefits of using Docker are as follows:
Utilize Developer Skills : With Docker we maximize the use of Developer skills. With Docker
there is less need of build or release engineers. Same Developer can create software and wrap it
in one single file.
Standard Application Image : Docker based system allows us to bundle the application
software and Operating system files in a single Application Image that can be deployed
independently.
Uniform deployment : With Docker we can create one package of our software and deploy it on
different platforms seamlessly.
Q. How does Docker simplify Software Development process?
Prior to Docker, developers would develop software and pass it to QA for testing, and then it was
sent to the Build & Release team for deployment.
In the Docker workflow, the developer builds an Image after developing and testing the software. This
Image is shipped to a Registry. From the Registry it is available for deployment to any system. The
development process is simpler since steps for QA, deployment, etc. take place before the
Image is built. So the developer gets feedback early.
Q. What is the basic architecture behind Docker?
• Docker is built on client server model.
• Docker server is used to run the images.
• We use Docker client to communicate with Docker server.
• Clients tell Docker server via commands what to do.
• Additionally there is a Registry that stores Docker Images.
• Docker Server can directly contact Registry to download images.
Q. What are the popular tasks that you can do with Docker Command line tool?
• Docker Command Line (DCL) tool is implemented in Go language.
• It can compile and run on most of the common operating systems.
• Some of the tasks that we can do with Docker Command Line tool are as follows:
• We can download images from Registry with DCL.
• We can start, stop or terminate a container on a Docker server by DCL.
• We can retrieve Docker Logs via DCL.
• We can build a Container Image with DCL.
Q. What type of applications- Stateless or Stateful are more suitable for Docker Container?
• Docker was designed for stateless applications and horizontal scalability, with containers
deleted and replaced as needed
• We can create a container out of our application and take out the configurable state
parameters from application.
• Now we can run same container in Production as well as QA environments with different
parameters.
• This helps in reusing the same Image in different scenarios.
• A stateless application is much easier to scale with Docker Containers than a stateful
application.
• Databases are not well suited for this approach, and Docker is evolving to support the needs of
stateful enterprise apps.
• Docker supports a few database services, but it doesn't support all of the database
services that you might expect out of your Docker environment.
Q. How can Docker run on different Linux distributions?
• Docker works directly with Linux kernel-level libraries.
• Every Linux distribution runs on the same Linux kernel.
• Docker containers share the same kernel as the host kernel.
• Since all the distributions share the same kernel, the container can run on any of these
distributions.
Q. Why do we use Docker on top of a virtual machine?
• Generally, we use Docker on top of a virtual machine to ensure isolation of the
application.
• On a virtual machine we can get the advantage of security provided by hypervisor.
• We can implement different security levels on a virtual machine.
• Docker can make use of this to run the application at different security levels.
Q. How can Docker container share resources?
• We can run multiple Docker containers on same host.
• These containers can share Kernel resources.
• Each container has its own user-space and libraries. So, in a way, a Docker container does
not share resources within its own namespace. But the resources that are not in isolated
namespaces are shared between containers. These are the Kernel resources of the host
machine, of which there is just one copy. So in the back-end there is the same set of
resources that Docker Containers share.
Q. What is Docker Entrypoint?
• We use Docker Entrypoint to set the starting point for a command in a Docker Image. We
can use the entrypoint as a command for running an Image in the container.
E.g. we can define the following entrypoint in the Dockerfile and run the image as follows:
ENTRYPOINT ["mycmd"]
• $ docker run <image>
ENTRYPOINT cannot be overridden at run time with normal arguments such as docker
run [args].
• ENTRYPOINT can be overridden with --entrypoint.
• The ENTRYPOINT specifies a command that will always be executed when the
container starts.
• Otherwise, if you want to make an image for general purpose, you can leave
ENTRYPOINT unspecified and use CMD ["/path/dedicated_command"] as you will be
able to override the setting by supplying arguments to docker run
• CMD command mentioned inside Dockerfile file can be overridden via docker run
command while ENTRYPOINT cannot be.
• ENTRYPOINT behaves similarly to CMD and, in addition, allows us to customize the
command executed at startup.
• As with CMD, in case of multiple ENTRYPOINT entries, only the last one is considered.
FROM ubuntu
MAINTAINER vijay
RUN apt-get update
ENTRYPOINT ["echo", "Hello"]
CMD ["World"]
docker build .
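Running the image built above shows how CMD supplies a default argument to ENTRYPOINT (the tag hello-demo is illustrative):

```shell
docker build -t hello-demo .
docker run hello-demo          # runs: echo Hello World  -> prints "Hello World"
docker run hello-demo Docker   # CMD overridden          -> prints "Hello Docker"
```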
Network Drivers
There are mainly 5 network drivers: Bridge, Host, None, Overlay, Macvlan
Docker Networking
Q. How do multiple containers publish on the same port?
• Without ingress, mapping replica ports to the same host port is denied; with ingress, it is
allowed.
• When you create a Docker swarm cluster, it automatically creates an ingress network. The
ingress network has a built-in load balancer that redirects traffic from the published port,
which in this case is port 80.
• The published port is mapped to port 5000 on each container. Since the ingress network is
created automatically, there is no configuration that you have to do.
What is routing mesh under docker swarm mode
• Routing Mesh is a feature which make use of Load Balancer concepts.
• It provides global publish port for a given service.
• The routing mesh uses port-based service discovery and load balancing. So to reach any
service from outside the cluster you need to expose ports and reach them via the
Published Port.
• Docker Engine swarm mode makes it easy to publish ports for services to make them
available to resources outside the swarm.
• All nodes participate in an ingress routing mesh.
• The routing mesh enables each node in the swarm to accept connections on published
ports for any service running in the swarm, even if there’s no task running on the node.
• The routing mesh routes all incoming requests to published ports on available nodes to an
active container.
• To use the ingress network in the swarm, you need to have the following ports open
between the swarm nodes before you enable swarm mode:
o Port 7946 (TCP/UDP) for container network discovery
o Port 4789 (UDP) for the container ingress network
How do you access a service that could be started anywhere in your cluster?
• Docker Swarm has a very useful tool to solve this problem called the Swarm routing
mesh.
• The routing mesh manages ingress into your running containers. By default, Swarm
makes all services accessible via their published port on each Docker host.
• The Swarm routing mesh has its pros and cons. This default configuration has its
limitations, but it is designed to make getting started as easy as possible. As your
applications get more complex, the routing mesh can be configured to behave differently,
and different services can be deployed to use different routing configurations
Goals of Docker Networking
Flexibility – Docker provides flexibility by enabling any number of applications on various
platforms to communicate with each other.
Cross-Platform – Docker can easily be used cross-platform, working across various servers
with the help of Docker Swarm clusters.
Scalability – Docker is a fully distributed network, which enables applications to grow and scale
individually while ensuring performance.
Decentralized – Docker uses a decentralized network, which enables the capability to have the
applications spread out and highly available. In the event that a container or a host is suddenly
missing from your pool of resources, you can either bring up an additional resource or pass over
to services that are still available.
User – Friendly – Docker makes it easy to automate the deployment of services, making them
easy to use in day-to-day life.
Support – Docker offers out-of-the-box support. The ability to use Docker Enterprise Edition
and get all of its functionality easily makes the Docker platform very easy to use.
Docker version
You can check the currently installed Docker version on your system through this command:
$ docker --version
Docker pull
This command pulls images from Docker's hub or repository, hub.docker.com:
$ docker pull ubuntu
Pulled images are cached and stored locally.
Docker run
You can create a container from the image through this command.
$ docker run -it -d ubuntu
Docker ps
To check the running containers or to know that how many containers are running right now, you
can use this command:
$ docker ps
Docker ps -a
To view all the running and exited containers, you can use this command:
$ docker ps -a
Docker exec
To access the running container, you can use this command:
$ docker exec -it <container id> bash
Docker stop
To stop the running container, we can use this command:
$ docker stop <container id>
Docker kill
With the docker stop command, the container gets the full time to shut down gracefully; when
you need to shut down a container immediately, you can kill it with the kill command:
$ docker kill <container id>
Docker commit
To create a new image of the edited container on the local system, you can use this command:
$ docker commit <container id> <username/imagename>
Docker login
To login the docker hub repository, you can use this command:
$ docker login
Docker push
You can push a new image into Docker hub through this command:
$ docker push <username/image name>
Docker images
This command lists all images stored locally:
$ docker images
Docker rm
If you want to delete any stopped container then this command can help you:
$ docker rm <container id>
Docker build
If you want to build an image from a Docker file then you can use this command:
$ docker build <path to docker file>
Apart from the above-listed Docker commands cheat sheet, one can also use other commands for
Docker like ‘docker export’ command that can export a container’s filesystem as an archive file
or ‘docker attach’ that can attach any running container, etc.
Docker network
To view Docker networks, run:
docker network ls
To get further details on a network, run:
docker network inspect <network_name>
Create the overlay network in a similar manner to the bridge network (network
name my_multi_host_network):
docker network create --driver overlay my_multi_host_network
Launch containers on each host; make sure you specify the network name:
docker run -itd --net=my_multi_host_network my_python_app
We can also use the host network directly:
docker run -d --name web1 --net=host <image_name>
docker run -d --name web1 --net=host nginx