
Docker Swarm

Docker Swarm allows for clustering of Docker engines into a swarm. Key concepts include initializing a swarm on a manager node, adding worker nodes, deploying services across the swarm, updating services with rolling updates, and draining nodes for maintenance. The swarm uses Raft consensus for fault tolerance, ensuring services continue running during failures.


Docker Swarm - Basics

Shahzad Masud
Sr. Technology Specialist – NETSOL Technologies
17-Mar-2017
Introduction
Roadmap
 Key concepts
 Initializing a cluster of Docker Engines in swarm mode
 Adding nodes to the swarm
 Deploying services to the swarm
 Rolling updates
 Draining Nodes
Traditional Model
Swarm
What?
• Cluster management and orchestration feature embedded in the Docker Engine
• A cluster of Docker Engines, or nodes
• At least one manager; the remaining nodes are workers
Swarm Features
• Cluster Management
• Decentralized design
• Declarative service model
• Scaling
• Desired state reconciliation
• Multi-host network
• Service discovery
• Load balancing
• Secure by default
• Rolling updates
Pre-requisites
Docker (CE) installed (https://www.docker.com/community-edition/)
Working knowledge of running Docker applications and services
(https://docs.docker.com/engine/getstarted-voting-app/)
Familiarity with the Docker Engine CLI:
◦ docker version
◦ docker run hello-world
◦ docker ps -a
Test Machines
Create three machines using docker-machine create:
1. manager1 (docker-machine create --driver virtualbox manager1)
2. worker1 (docker-machine create --driver virtualbox worker1)
3. worker2 (docker-machine create --driver virtualbox worker2)

docker-machine ip manager1 (192.168.99.100)
docker-machine ip worker1 (192.168.99.101)
docker-machine ip worker2 (192.168.99.102)
Additional Network Checks
The following ports must be available. On some systems, these ports are open by default.
• TCP port 2377 for cluster management communications
• TCP and UDP port 7946 for communication among nodes
• UDP port 4789 for overlay network traffic
If you are planning on creating an overlay network with encryption (--opt encrypted), you will
also need to ensure IP protocol 50 (ESP) traffic is allowed.
Create a Swarm (1/4)
Make sure the Docker Engine daemon is started on the host machines.
1. Open a terminal and ssh into the machine where you want to run your manager node. If you
use Docker Machine, you can connect to it via SSH using the following command:
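The original slide showed this step as a screenshot; with the docker-machine setup from the "Test Machines" slide, the SSH command would likely be:

```shell
# Open a shell on the VM that will become the swarm manager
docker-machine ssh manager1
```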
Create a Swarm (2/4)
2. Run the following command to create a new swarm:
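The command itself is missing from this copy; following the setup above, it is likely the following, with the manager1 IP from the "Test Machines" slide:

```shell
# Initialize swarm mode, advertising the manager's address to other nodes
docker swarm init --advertise-addr 192.168.99.100
```

The output of this command includes a ready-made `docker swarm join` command, with a one-time token, for adding workers.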
Create a Swarm (3/4)
3. Run docker info to view the current state of the swarm:
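The screenshot is missing; on the manager, the relevant excerpt of the docker info output would look roughly like this (node ID illustrative):

```shell
docker info
# Expected excerpt (node ID illustrative):
#   Swarm: active
#    NodeID: dxn1zf6l61qsb1josjja83ngz
#    Is Manager: true
#    Managers: 1
#    Nodes: 1
```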
Create a Swarm (4/4)
4. Run the docker node ls command to view information about nodes:

The * next to the node ID indicates that you're currently connected to this node.

Docker Engine swarm mode automatically names the node after the machine's hostname.
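The output shown on the original slide would look roughly like this (node ID illustrative):

```shell
docker node ls
# ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
# dxn1zf6l61qsb1josjja83ngz *   manager1   Ready    Active         Leader
```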
Add Nodes to Swarm (1/5)
Make sure the Docker Engine daemon is started on the host machines.
1. Open a terminal and ssh into the machine where you want to add node (worker1). If you use
Docker Machine, you can connect to it via SSH using the following command:
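As on the manager, the missing screenshot likely showed:

```shell
# Open a shell on the first worker VM
docker-machine ssh worker1
```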
Add Nodes to Swarm (2/5)
2. Run the join command produced by the docker swarm init output in step 2 of "Create a
Swarm" to join this node to the existing swarm as a worker:
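The join command follows this shape; the token placeholder stands in for the one-time token printed by docker swarm init:

```shell
# Token and address come from the docker swarm init output on manager1
docker swarm join --token SWMTKN-1-<worker-token> 192.168.99.100:2377
```

On success the CLI reports that the node joined the swarm as a worker.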
Add Nodes to Swarm (3/5)
3. If you don’t have the command available, you can run the following command on a manager
node to retrieve the join command for a worker:
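The retrieval command, run on manager1, is likely:

```shell
# Print the full "docker swarm join --token ..." command for workers
docker swarm join-token worker
```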
Add Nodes to Swarm (4/5)
4. Repeat Step 1 and Step 2 for worker2.
Add Nodes to Swarm (5/5)
5. Open a terminal and ssh into the machine where the manager node runs and run the docker
node ls command to see the worker nodes:

The MANAGER column identifies the manager nodes in the swarm. The empty status in this
column for worker1 and worker2 identifies them as worker nodes.

Swarm management commands like docker node ls only work on manager nodes.
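With both workers joined, the node listing on the manager would look roughly like this (IDs illustrative):

```shell
docker node ls
# ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
# 1bcef6utixb0l0ca7gxuivsj0     worker2    Ready    Active
# 38ciaotwjuritcdtn9npbnkuz     worker1    Ready    Active
# e216jshn25ckzbvmwlnh5jr3g *   manager1   Ready    Active         Leader
```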
Deploy a service to Swarm
1. Open a terminal and ssh into the machine where you run your manager node. For example,
use a machine named manager1 (i.e. docker-machine ssh manager1).
2. Run the following command:

3. Run docker service ls to see the list of running services:
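Steps 2 and 3 referenced screenshots; the commands, likely taken from the official swarm tutorial this deck follows, are:

```shell
# Create a service named helloworld with one replica,
# running an alpine container that pings docker.com
docker service create --replicas 1 --name helloworld alpine ping docker.com

# List the services running in the swarm
docker service ls
```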


Inspect the service on the Swarm (1/2)
1. If you haven’t already, open a terminal and ssh into the machine where you run your
manager node. For example, use a machine named manager1.
2. Run docker service inspect --pretty <SERVICE-ID> to display the details about a service in an
easily readable format:
Inspect the service on the Swarm (2/2)
3. Run docker service ps helloworld to see which nodes are running the service:

4. Run docker ps on the node where the task is running to see details about the container for
the task.
Scale the service in the Swarm (1/2)
1. If you haven’t already, open a terminal and ssh into the machine where you run your
manager node. For example, the tutorial uses a machine named manager1.
2. Run the following command to change the desired scale of the service running in the swarm:
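The scale command was shown as a screenshot; it is likely:

```shell
# Set the desired number of helloworld replicas to 5;
# the swarm manager schedules the extra tasks across the nodes
docker service scale helloworld=5
```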

3. Run docker service ps helloworld to see the updated task list:


Scale the service in the Swarm (2/2)
4. Run docker ps to see the containers running on the node where you’re connected.
Delete service running on swarm (1/2)
1. If you haven’t already, open a terminal and ssh into the machine where you run your
manager node. For example, use a machine named manager1.
2. Run docker service rm helloworld to remove the helloworld service.

3. Run docker service inspect <SERVICE-ID> to verify that the swarm manager removed the
service. The CLI returns a message that the service is not found:
Delete service running on Swarm (2/2)
4. Even though the service no longer exists, the task containers take a few seconds to clean up.
You can use docker ps to verify when they are gone.
Rolling Update (1/6)
1. If you haven’t already, open a terminal and ssh into the machine where you run your
manager node. For example, use a machine named manager1.
2. Deploy Redis 3.0.6 to the swarm and configure the swarm with a 10 second update delay:
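The deployment command, missing from this copy, is likely:

```shell
# Create the redis service: 3 replicas, 10s delay between task updates
docker service create --replicas 3 --name redis --update-delay 10s redis:3.0.6
```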
Rolling Update (2/6)
3. Inspect the redis service:
Rolling Update (3/6)
4. Now you can update the container image for redis. The swarm manager applies the update
to nodes:

The scheduler applies rolling updates as follows by default:


• Stop the first task.
• Schedule update for the stopped task.
• Start the container for the updated task.
• If the update to a task returns RUNNING, wait for the specified delay period then start the next task.
• If, at any time during the update, a task returns FAILED, pause the update.
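The update command itself was shown as a screenshot; it is likely:

```shell
# Roll the service forward to a newer image; the scheduler
# applies the default steps listed above, one task at a time
docker service update --image redis:3.0.7 redis
```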
Rolling Update (4/6)
5. Run docker service inspect --pretty redis to see the new image in the desired state:
Rolling Update (5/6)
6. The output of service inspect shows if your update paused due to failure:

7. To restart a paused update run docker service update <SERVICE-ID>. For example:
Rolling Update (6/6)
8. Run docker service ps <SERVICE-ID> to watch the rolling update:
Drain a Node on the Swarm (1/5)
1. If you haven’t already, open a terminal and ssh into the machine where you run your
manager node. For example, use a machine named manager1.
2. Verify that all your nodes are actively available.

3. Create a redis service with 3 replicas and an update delay of 10 seconds


Drain a Node on the Swarm (2/5)
4. Run docker service ps redis to see how the swarm manager assigned the tasks to different
nodes:

In this case the swarm manager distributed one task to each node. You may see the tasks
distributed differently among the nodes in your environment.
5. Run docker node update --availability drain <NODE-ID> to drain a node that had a task
assigned to it:
Drain a Node on the Swarm (3/5)
6. Inspect the node to check its availability:
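The inspection command is likely the following, assuming worker1 was the drained node:

```shell
docker node inspect --pretty worker1
# The Availability field should now read "Drain"
```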
Drain a Node on the Swarm (4/5)
7. Run docker service ps redis to see how the swarm manager updated the task assignments for
the redis service:
Drain a Node on the Swarm (5/5)
8. Run docker node update --availability active <NODE-ID> to return the drained node to an
active state:

9. Inspect the node to see the updated state


When you set the node back to Active availability, it can receive new tasks:
◦ during a service update to scale up
◦ during a rolling update
◦ when you set another node to Drain availability
◦ when a task fails on another active node
On Failure (Node)
On Failure (Node) - Rescheduling
On Failure (Manager)

Node Failure
On Failure (Manager) – Backup Managers
On Failure (Manager) - Failed Manager
On Failure (Manager) – Swap Manager
Raft Consensus Algorithm for Swarm
The implementation of the consensus algorithm in swarm mode means it features the properties
inherent to distributed systems:
1. agreement on values in a fault tolerant system. (Refer to FLP impossibility theorem and the Raft
Consensus Algorithm paper)
2. mutual exclusion through the leader election process
3. cluster membership management
4. globally consistent object sequencing and CAS (compare-and-swap) primitives

Raft tolerates up to (N-1)/2 failures and requires a majority, or quorum, of (N/2)+1 members to agree
on values proposed to the cluster. This means that in a cluster of 5 managers running Raft, if 3 nodes
are unavailable, the system cannot process any more requests to schedule additional tasks. Existing
tasks keep running, but the scheduler cannot rebalance tasks to cope with failures while the
manager set is not healthy.
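The quorum arithmetic above can be sketched directly (integer division matches the (N/2)+1 and (N-1)/2 formulas):

```shell
# Quorum arithmetic for an N-manager Raft cluster
n=5
quorum=$(( n / 2 + 1 ))        # majority needed to agree: 3
max_fail=$(( (n - 1) / 2 ))    # failures tolerated: 2
echo "$quorum $max_fail"       # prints: 3 2
```

With n=3 the same formulas give a quorum of 2 and tolerance for 1 failure, which is why odd manager counts are recommended: adding a 4th manager raises the quorum to 3 without tolerating any additional failures.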
Upcoming Topics
1. Manage sensitive data with Docker Secrets
2. Locking Swarm
3. Attaching services to an overlay network
4. Swarm Administration
5. Raft Consensus in Swarm mode
Useful links
1. http://docs.docker.com
2. http://www.play-with-docker.com
3. https://github.com/boot2docker/boot2docker/
4. http://github.com/docker/swarm
Thank you – Questions

https://www.facebook.com/shahzadmasud

https://www.linkedin.com/in/shahzadmasud/

@shahzadmasud

shahzadmasud@hotmail.com
