Docker Swarm vs Kubernetes


Docker Swarm or Kubernetes — Help me decide


High Availability

Both Docker Swarm and Kubernetes provide high availability, and both support multiple Manager nodes so that the cluster can continue operating should the Leader Manager node fail.

Docker Swarm recommends an odd number of Manager nodes (tolerating the loss of 1 of 3, 2 of 5 and so on), with no more than seven in a given swarm. Each Manager node can be configured to run application services, though this should be applied with caution. Kubernetes Manager nodes run additional containers, known as Pods, that facilitate core Kubernetes functionality. As such, it is considered bad practice to run application services on Kubernetes Manager nodes. Kubernetes Manager nodes are marked with a NoSchedule taint to prevent accidental deployment of services to these nodes.
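
As a sketch, the taint applied to Kubernetes Manager nodes looks something like the following (the node name is illustrative, and newer Kubernetes versions use the key `node-role.kubernetes.io/control-plane` instead of `master`):

```yaml
# Taint carried by a Kubernetes Manager (control-plane) node.
# Pods are not scheduled here unless they declare a matching toleration.
apiVersion: v1
kind: Node
metadata:
  name: manager-1                        # hypothetical node name
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule                 # refuse new Pods without a toleration
```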

Both Docker Swarm and Kubernetes utilise Ingress networking, which allows external traffic to be routed to internal services. However, while Kubernetes Ingress is standardised, flexible and extensible, the Docker Swarm implementation aims for simplicity and ease of use.

Kubernetes can be configured to use a local Ingress load balancer, such as Ingress-Nginx, a simple exposed public port using NodePort, or a third-party load balancer, such as Amazon’s ELB.
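
A minimal Kubernetes Ingress resource looks roughly like this (a sketch with illustrative host and service names, assuming an Ingress controller such as Ingress-Nginx is already installed in the cluster):

```yaml
# Routes external HTTP traffic for example.com to an internal Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # internal Service to expose
                port:
                  number: 80
```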

Docker Swarm uses Raft Consensus to manage cluster state. Each Manager node is kept up-to-date with state information so that, should the Leader Manager fail, another Manager can be delegated as Leader and continue the role without significant changes to application stability. This is the reason for a maximum number of Managers. Maintaining state consistency between large numbers of nodes without a third-party store is problematic.

Kubernetes uses Etcd to manage cluster state. An Etcd Pod is deployed to each Manager node as replicated units. Data stored within one Pod instance is distributed to each of the other instances. As Etcd is an open-source data store, it can be utilised by the deployed application to store custom application configuration data.

Kubernetes provides greater configuration for management of service deployment to nodes when scaling or when nodes fail. As such, the decision to choose Kubernetes over Docker Swarm might be relevant when building applications that require fine tuning in order to ensure optimal use of available resources.

Service Discovery

Both Docker Swarm and Kubernetes provide DNS-based service discovery. Docker Swarm utilises a DNS service integrated within the Docker Engine. Each service host domain name is equivalent to the provided service name. There is no notion of namespaces. In order to separate connectivity between services, each service can belong to one or more overlay networks. Direct calls to containers can be achieved by using the container alias as the domain or by using the container's assigned IP address.

Illustration of Swarm Overlay Networks
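
A sketch of how overlay networks compartmentalize services in a Swarm stack file (all service, image and network names are illustrative):

```yaml
# Services resolve one another by service name, but only within
# a shared overlay network.
version: "3.8"
services:
  web:
    image: nginx:alpine
    networks:
      - frontend
  api:
    image: example/api:latest   # hypothetical image
    networks:
      - frontend                # reachable from "web" as http://api
      - backend
  db:
    image: postgres:15
    networks:
      - backend                 # only "api" can resolve and reach "db"
networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay
```

Here `web` cannot resolve `db` at all, because the two services share no overlay network.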

Kubernetes service domains are maintained by the KubeDNS Pod. Each Pod and service receives an IP address relative to its parent. Thus, Pods within a service can communicate by IP address, too. Kubernetes provides numerous additional means to reference Pods and services through metadata, such as labels and annotations. Like Docker Swarm, Kubernetes services may be referenced by their simple service names. However, Kubernetes also provides namespacing and Fully Qualified Domain Names (FQDNs).

Illustration of Kubernetes Pods and Services
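
The FQDN of a Kubernetes Service follows a fixed pattern, so a Pod in one namespace can address a Service in another. A sketch with illustrative names:

```yaml
# Discovery names for this Service follow the pattern
# <service>.<namespace>.svc.<cluster-domain>, e.g.:
#   orders                          (shorthand within the same namespace)
#   orders.shop                     (service + namespace)
#   orders.shop.svc.cluster.local   (fully qualified domain name)
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: shop
  labels:
    app: orders    # labels allow selection of related assets
spec:
  selector:
    app: orders    # routes traffic to Pods carrying this label
  ports:
    - port: 80
```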

The Kubernetes platform also provides a number of environment variables for use in referencing Pods and services in a cluster.

Communicating between services in Docker Swarm is relatively easy when you are aware of the service relationships during orchestration. However, when dealing with runtime service discovery and cross-cluster service discovery, Kubernetes provides more options.

Deployment

Docker Swarm deployments are managed using the Docker utility via the Stack commands. A Stack is a collection of services that form a deployment unit in Docker Swarm.

A Stack deployment is described using the docker-compose specification in YAML files. These files define the services to be deployed within a stack, their source repositories and their relationships to one another. The YAML files also detail the overlay network configurations and which services to assign to them, allowing for compartmentalization and security. A collection of services can be deployed using a single docker-compose.yaml file and additional files are often created to modify values for other deployments, such as testing, staging and production.
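
A minimal base stack file might look like this (a sketch; the image name is hypothetical):

```yaml
# docker-compose.yaml — base stack definition for Docker Swarm.
version: "3.8"
services:
  web:
    image: example/web:1.0   # hypothetical image
    deploy:
      replicas: 2            # Swarm schedules two tasks for this service
    ports:
      - "80:80"
```

An environment-specific file such as `docker-compose.prod.yaml` can then override selected values, and both files can be merged at deploy time with `docker stack deploy -c docker-compose.yaml -c docker-compose.prod.yaml mystack`.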

Kubernetes deployments are also described using YAML, but can be described using JSON if preferred. The deployment specification is extensive, but minimally requires definitions for Pods and Services. Deployments in Kubernetes are typically much more verbose than those of Docker Swarm, due to the increased configurability.
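
For comparison, a minimal Kubernetes equivalent needs both a Deployment (to manage the Pods) and a Service (to expose them); the names below are illustrative:

```yaml
# A Deployment running two replicas of an nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
---
# A Service routing cluster traffic to those Pods by label.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```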

Both Docker Swarm and Kubernetes provide a means to apply rolling updates and to roll those updates back as required. If an update deployment fails in Docker Swarm, the update is automatically rolled back to the previous version; Docker Swarm provides no endpoints to query an update's status. A failed Kubernetes update will result in both the failed Pods and the original Pods existing side by side. Kubernetes does provide a status endpoint, but once a failure is identified, a rollback must be requested explicitly.
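
In Docker Swarm, the rolling-update and automatic-rollback behaviour is declared per service in the stack file. A sketch with illustrative values:

```yaml
# Update one task at a time; roll back automatically on failure.
version: "3.8"
services:
  web:
    image: example/web:1.1     # hypothetical image
    deploy:
      replicas: 4
      update_config:
        parallelism: 1         # update one task at a time
        delay: 10s             # pause between batches
        failure_action: rollback
      rollback_config:
        parallelism: 2         # roll back two tasks at a time
```

In Kubernetes, by contrast, the status is inspected with `kubectl rollout status deployment/web` and the rollback is requested explicitly with `kubectl rollout undo deployment/web`.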

As Kubernetes provides a means to select Pods, Services and other assets in a deployment using labels and annotations, running staging containers alongside production ones in the wild is easier in Kubernetes. This way, DevOps can roll out a single unit and test how it works in the production environment before committing to a cluster wide update. Performing the same feat in Docker Swarm is not so easy and is not typically done for this reason.

Docker Swarm deployments are considered much more user-friendly than Kubernetes. A misconception of Docker Swarm is that it cannot support large volumes of nodes and containers. It has been proven to support one thousand nodes and tens of thousands of containers in a single cluster. However, it is important to note that Docker Swarm does have a task limit. Kubernetes, on the other hand, has been documented as supporting ten times this number. With a recommended maximum of seven Managers in a cluster, Docker Swarm might not be sufficient for everyone’s needs, but this should only matter in the very largest of projects.

Performance

Comparing Docker Swarm and Kubernetes for performance is interesting, as both platforms utilise Docker containers and, for the most part, only manage the orchestration aspect of applications. Therefore, applications deployed with Docker Swarm and Kubernetes will operate with similar speed and efficiency.

When analysing orchestration performance, it has been documented that Docker Swarm will deploy and start containers as much as five times more quickly than Kubernetes when under heavy load, mainly due to the fewer moving parts within Docker Swarm and because of its tight integration with the Docker Engine.

Graph of Container Deployment Latency

This same investigation also highlights Docker Swarm's ability to respond to API calls much more quickly. While the responsiveness differences are negligible, the investigation does show that Docker Swarm performs without degradation at up to around 90% load, while Kubernetes degrades once it reaches only around 50% load.

Graph of API Responsiveness

In a minimal Kubernetes installation, several containers are deployed to each Manager node, including:

- Etcd manager Pod
- API server Pod
- Kube Controller Pod
- KubeDNS Pod
- A network Pod, such as [Flannel](https://github.com/coreos/flannel) (optional)
- Kube Proxy Pod
- Scheduler Pod

On the worker nodes, Kubernetes will install the Kube Proxy Pod and the Kubelet agent.

Each of these Pods uses resources from the machine it is deployed to. In comparison, Docker Swarm doesn't require the deployment of any containers, as the Swarm functionality is built into the Docker service itself.

For most web applications, this should not pose a problem. The resources used by the default Kubernetes Pods are minimal and don't have much of an impact on the running application. However, this may indeed pose a problem for resource-restricted environments, or environments that run on low power, such as Internet-of-Things (IoT) projects.

By Lee Sylvester

Docker Swarm vs Kubernetes: What are the differences?

What is Docker Swarm? Native clustering for Docker. Turn a pool of Docker hosts into a single, virtual host. Swarm serves the standard Docker API, so any tool which already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts: Dokku, Compose, Krane, Deis, DockerUI, Shipyard, Drone, Jenkins... and, of course, the Docker client itself.

What is Kubernetes? Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops. Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions.

Docker Swarm and Kubernetes can be primarily classified as "Container" tools.

"Docker friendly" is the top reason why over 44 developers like Docker Swarm, while over 134 developers mention "Leading docker container management solution" as the leading cause for choosing Kubernetes.

Docker Swarm and Kubernetes are both open source tools. It seems that Kubernetes with 55K GitHub stars and 19.1K forks on GitHub has more adoption than Docker Swarm with 5.63K GitHub stars and 1.11K GitHub forks.

According to the StackShare community, Kubernetes has broader approval, being mentioned in 1046 company stacks and 1096 developer stacks; compared to Docker Swarm, which is listed in 82 company stacks and 38 developer stacks.



What are some alternatives to Docker Swarm and Kubernetes?
Docker Compose
With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.
Rancher
Rancher is an open source container management platform that includes full distributions of Kubernetes, Apache Mesos and Docker Swarm, and makes it simple to operate container clusters on any cloud or infrastructure platform.
Ansible
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible’s goals are foremost those of simplicity and maximum ease of use.
Apache Mesos
Apache Mesos is a cluster manager that simplifies the complexity of running applications on a shared pool of servers.
CoreOS
It is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, it uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all dependencies are packaged within a container that can be run on one or many machines.
Decisions about Docker Swarm and Kubernetes
Yshay Yaacobi, Software Engineer at Soluto · 28 upvotes · 360.9K views

Tools: Docker Swarm, .NET, F#, C#, JavaScript, TypeScript, Go, Visual Studio Code, Kubernetes

Our first experience with .NET core was when we developed our OSS feature management platform - Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F# , our main service was implemented in C# and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too; it worked well with all our polyglot services, and the .NET Core integration gave a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm, until we decided to adopt Kubernetes with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

Sebastian Gębski, CTO at Shedul/Fresha (Fresha Engineering) · 6 upvotes · 60.1K views

Tools: Docker, Docker Compose, Kubernetes, Terraform, Ansible, Amazon EC2, Amazon EKS, Amazon S3, Amazon RDS

Heroku was a decent choice to start a business, but at some point our platform became too big, too complex and too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and to standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers: all the container configuration and provisioning HAD (from the beginning) to be done in code (Infrastructure as Code) - we used Terraform and Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel and equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 and Amazon RDS.

Emanuel Evans, Senior Architect at Rainforest QA · 12 upvotes · 160.9K views

Tools: Heroku, Kubernetes, Google Kubernetes Engine, Google Cloud SQL for PostgreSQL, PostgreSQL, Google Cloud Memorystore, Redis, CircleCI, Google Cloud Build, Helm, Terraform

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.
