Introduction
Docker Swarm Mode is a powerful tool that allows you to create and manage clusters of Docker nodes, providing a scalable and fault-tolerant environment for deploying and orchestrating applications. In this step-by-step guide, we will walk you through the process of installing and configuring Docker Swarm Mode on CentOS 7, enabling you to harness the full potential of containerization for your projects.
Docker Swarm Mode was introduced in Docker 1.12. Some of its key benefits are container self-healing, load balancing, container scale up and scale down, service discovery, and rolling updates.
Table of Contents
- Introduction
- Prerequisites
- Step 1: Update System Packages
- Step 2: Install Docker
- Step 3: Configure the Firewall on the Manager and Workers
- Step 4: Initialize the Swarm or Cluster
- Step 5: Join Worker Nodes
- Step 6: Deploy Services to the Swarm
- Step 7: Test Container Self-Healing
- Step 8: Scale Containers Up and Down for a Service
- High Availability with Swarm Mode
- Conclusion
Prerequisites
Before you begin the installation process, ensure you have the following prerequisites in place:
- A CentOS 7 server with root access or a user with sudo privileges.
- A basic understanding of Docker concepts.
- Basic familiarity with the Linux command line.
Before starting, you will need to configure the /etc/hosts file on each node so that the nodes can reach each other by hostname. Update the /etc/hosts file on each node as shown below:
[samm@docker-mngr ~]$ sudo vi /etc/hosts
172.32.1.104 docker-mngr
172.32.1.105 docker-01
172.32.1.106 docker-02
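To confirm that name resolution works, you can ping each node by hostname, for example:
[samm@docker-mngr ~]$ ping -c 2 docker-01
[samm@docker-mngr ~]$ ping -c 2 docker-02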
Step 1: Update System Packages
The first step is to ensure your system is up to date. On CentOS 7 the packages are typically updated with yum; open a terminal and run the following command on each node:
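[samm@docker-mngr ~]$ sudo yum update -y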
Step 2: Install Docker
If you don’t have Docker installed, you can install Docker on CentOS 7 by following our previous guides linked at the end of this article.
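As a quick summary, a typical Docker CE installation on CentOS 7 from Docker’s official repository looks like this (run on every node; see the linked guides for details):
[samm@docker-mngr ~]$ sudo yum install -y yum-utils
[samm@docker-mngr ~]$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[samm@docker-mngr ~]$ sudo yum install -y docker-ce docker-ce-cli containerd.io
[samm@docker-mngr ~]$ sudo systemctl enable --now docker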
Step 3: Configure the Firewall on the Manager and Workers
Open the following ports in the OS firewall on the Docker manager using the commands below:
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=2376/tcp
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=2377/tcp
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=7946/tcp
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=7946/udp
[samm@docker-mngr ~]$ sudo firewall-cmd --zone=public --permanent --add-port=4789/udp
[samm@docker-mngr ~]$ sudo systemctl restart firewalld
Restart the Docker service on the Docker manager:
[samm@docker-mngr ~]$ sudo systemctl restart docker
Open the same ports on each worker node (shown here for docker-01; repeat the commands on docker-02):
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=2376/tcp
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=2377/tcp
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=7946/tcp
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=7946/udp
[samm@docker-01 ~]$ sudo firewall-cmd --zone=public --permanent --add-port=4789/udp
[samm@docker-01 ~]$ sudo systemctl restart firewalld
Restart the Docker service on each worker node:
[samm@docker-01 ~]$ sudo systemctl restart docker
Step 4: Initialize the Swarm or Cluster
To create a Docker Swarm, you need at least one manager node. Choose one of your CentOS 7 machines as the manager and execute:
[samm@docker-mngr ~]$ docker swarm init --advertise-addr <MANAGER_NODE_IP>
Swarm initialized: current node (xq8qa9ogy6704q6yles2n7bda) is now a manager.
Replace <MANAGER_NODE_IP> with the IP address of your manager node. The command will output a token that you will use to add worker nodes to the swarm.
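For the setup in this guide the manager’s IP address is 172.32.1.104, so the command would be:
[samm@docker-mngr ~]$ docker swarm init --advertise-addr 172.32.1.104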
Run the command below to verify the manager status and to view the list of nodes in your cluster:
[samm@docker-mngr ~]$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
xq8qa9ogy6704q6yles2n7bda * docker-mngr Ready Active Leader 20.10.8
We can also use the “docker info” command to verify the status of the swarm:
[samm@docker-mngr ~]$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 20.10.8
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: xq8qa9ogy6704q6yles2n7bda
Is Manager: true
ClusterID: iz3usdhdx21bjjn0glw02s277
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 172.32.1.104
Manager Addresses:
172.32.1.104:2377
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e25210fe30a0a703442421b0f60afac609f950a3
runc version: v1.0.1-0-g4144b63
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1160.36.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.7GiB
Name: docker-mngr
ID: UEO5:HPFP:KHV4:E4BK:3S7I:VGUV:RPFO:NEGL:S4DT:3AAM:4HAK:YDZL
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Step 5: Join Worker Nodes
To add worker nodes to the swarm, execute the following command on each worker node:
[samm@docker-01 ~]$ docker swarm join --token <TOKEN> <MANAGER_NODE_IP>:2377
Replace <TOKEN> with the token from the manager node’s initialization output and <MANAGER_NODE_IP> with the IP address of the manager node.
To add the worker nodes to the swarm, run the join command that was printed when we initialized the swarm:
[samm@docker-01 ~]$ docker swarm join --token SWMTKN-1-5hvwc2dxn0abdxl9mv8ixtjuh8n43tazv59fbdhox5l0hnpwft-bmhrj4yl7ughiu9x507ncejsu 172.32.1.104:2377
This node joined a swarm as a worker.
[samm@docker-02 ~]$ docker swarm join --token SWMTKN-1-5hvwc2dxn0abdxl9mv8ixtjuh8n43tazv59fbdhox5l0hnpwft-bmhrj4yl7ughiu9x507ncejsu 172.32.1.104:2377
This node joined a swarm as a worker.
Verify the node status using the “docker node ls” command from the Docker manager:
[samm@docker-mngr ~]$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
65c1br9vw1k5qlbq0d4y15dsc docker-01 Ready Active 20.10.8
0xrzfuiih1g4qaurnj4ber5e9 docker-02 Ready Active 20.10.8
xq8qa9ogy6704q6yles2n7bda * docker-mngr Ready Active Leader 20.10.8
At this point our Docker swarm cluster is up and running with two worker nodes. In the next step we will see how to define a service.
If you ever lose your join token, you can retrieve it by running the following on the manager node:
docker swarm join-token manager -q
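To print the full join command for workers (including the token), pass worker instead of manager:
[samm@docker-mngr ~]$ docker swarm join-token worker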
Step 6: Deploy Services to the Swarm
With your swarm up and running, you can start deploying services. A service is a declarative way to run a containerized application across the swarm.
[samm@docker-mngr ~]$ sudo docker service create --replicas <REPLICAS> --name <SERVICE_NAME> <IMAGE_NAME>
Replace <REPLICAS> with the number of replicas you want to create, <SERVICE_NAME> with the desired name for your service, and <IMAGE_NAME> with the Docker image you want to deploy.
In Docker Swarm Mode, containers are referred to as tasks, and tasks (or containers) are launched and deployed as part of a service. Let’s create a service named “webserver” with five containers, so that the desired state of the service is five running containers.
Run the commands below from the Docker manager only.
[samm@docker-mngr ~]$ docker service create -p 80:80 --name webserver --replicas 5 httpd
7hqezhyak8jbt8idkkke8wizi
The above command creates a service named “webserver” with a desired state of five containers (tasks), launched from the Docker image “httpd“. The containers will be deployed across the cluster nodes, i.e. docker-mngr, docker-01 and docker-02.
List the Docker services with the command below:
[samm@docker-mngr ~]$ docker service ls
ID NAME MODE REPLICAS IMAGE
7hqezhyak8jb webserver replicated 5/5 httpd:latest
Execute the command below to view the status of the “webserver” service:
[samm@docker-mngr ~]$ docker service ps webserver
As per the above output, we can see that the containers are deployed across the cluster nodes, including the manager node. We can now access the web page from any worker node or from the Docker manager using the following URLs:
http://172.32.1.104, http://172.32.1.105 or http://172.32.1.106
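You can also test from the command line with curl against any node, for example:
[samm@docker-mngr ~]$ curl http://172.32.1.105
This should return the default httpd “It works!” page.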
Step 7: Test Container Self-Healing
Container self-healing is an important feature of Docker Swarm Mode. As the name suggests, if anything goes wrong with a container, the manager will make sure that five containers keep running for the “webserver” service. Let’s remove a container from docker-02 and see whether a new container is launched.
[samm@docker-02 ~]$ docker ps
[samm@docker-02 ~]$ docker rm -f a9c3d2172670
Now verify the service from the Docker manager and check whether a new container has been launched:
[samm@docker-mngr ~]$ docker service ps webserver
As per the above output, we can see that a new container has been launched on the docker-mngr node because one of the containers on docker-02 was removed.
Step 8: Scale Containers Up and Down for a Service
In Docker Swarm Mode we can scale the containers (tasks) of a service up and down. Let’s scale the “webserver” service up to 7 containers:
[samm@docker-mngr ~]$ docker service scale webserver=7
webserver scaled to 7
Verify the service status again with the following commands:
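[samm@docker-mngr ~]$ docker service ls
[samm@docker-mngr ~]$ docker service ps webserver
The REPLICAS column should now show 7/7, with the new tasks spread across the cluster nodes.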
Let’s scale the “webserver” service down to 4 containers:
[samm@docker-mngr ~]$ docker service scale webserver=4
webserver scaled to 4
Verify the service again with the commands below:
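[samm@docker-mngr ~]$ docker service ls
[samm@docker-mngr ~]$ docker service ps webserver
The REPLICAS column should now show 4/4.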
High Availability with Swarm Mode
Docker Swarm Mode ensures high availability by distributing replicas across nodes. If a node fails, the service continues running on other healthy nodes.
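For example, if you need to take a worker node down for maintenance, you can drain it from the manager and Swarm will reschedule its tasks onto the remaining nodes; set it back to active when you are done:
[samm@docker-mngr ~]$ docker node update --availability drain docker-01
[samm@docker-mngr ~]$ docker service ps webserver
[samm@docker-mngr ~]$ docker node update --availability active docker-01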
Conclusion
Congratulations! You’ve successfully installed and configured Docker Swarm Mode on CentOS 7. You’ve learned how to initialize a swarm, add worker nodes, deploy services, and achieve high availability. With Docker Swarm Mode, you can now efficiently manage and scale your containerized applications with ease. This orchestration tool is a vital asset in simplifying the complexities of container management and is well-suited for various use cases, from small projects to large-scale deployments.
Also Read Our Other Guides :
- How To Install Docker CE on Centos 7
- How To Install and Use Docker Compose on Centos 7
- How To Install Docker CE on Rocky Linux 9
- How To Install and Use Docker CE on Ubuntu 22.04
Finally, you have now learned how to install and configure Docker Swarm Mode on CentOS 7.