Kubernetes vs Salt: What are the differences?
Developers describe Kubernetes as "Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops". Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. On the other hand, Salt is described as "Fast, scalable and flexible software for data center automation". Salt is a new approach to infrastructure management: easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with them in seconds. Salt delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.
Kubernetes can be classified as a tool in the "Container Tools" category, while Salt is grouped under "Server Configuration and Automation".
Some of the features offered by Kubernetes are:
- Lightweight, simple and accessible
- Built for a multi-cloud world, public, private or hybrid
- Highly modular, designed so that all of its components are easily swappable
On the other hand, Salt provides the following key features:
- Remote execution is the core function of Salt: running pre-defined or arbitrary commands on remote hosts.
- Salt modules are the core of remote execution. They provide functionality such as installing packages, restarting a service, running a remote command, transferring files, and infinitely more
- Building on the remote execution core is a robust and flexible configuration management framework. Execution happens on the minions, allowing effortless, simultaneous configuration of tens of thousands of hosts (see the sketch after this list).
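As a rough illustration of that remote-execution and state model, here is a minimal sketch using Salt's Python LocalClient API, run from the master; the target patterns and package name are made up for the example:

```python
# Minimal sketch of Salt remote execution from the master, using the
# LocalClient API that ships with Salt. Targets and arguments are illustrative.
import salt.client

local = salt.client.LocalClient()

# Run an arbitrary command on every minion whose id starts with "web".
print(local.cmd('web*', 'cmd.run', ['uptime']))

# Use an execution module instead of a raw shell command: install a package.
print(local.cmd('web*', 'pkg.install', ['nginx']))

# Configuration management: apply the highstate on all minions at once.
print(local.cmd('*', 'state.apply'))
```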
"Leading docker container management solution" is the top reason why over 131 developers like Kubernetes, while over 41 developers mention "Flexible" as the leading cause for choosing Salt.
Kubernetes and Salt are both open source tools. It seems that Kubernetes with 54.2K GitHub stars and 18.8K forks on GitHub has more adoption than Salt with 10K GitHub stars and 4.58K GitHub forks.
According to the StackShare community, Kubernetes has broader approval, being mentioned in 1018 company stacks & 1060 developer stacks; compared to Salt, which is listed in 108 company stacks and 19 developer stacks.
By 2014, the DevOps team at Lyft decided to port their infrastructure code from Puppet to Salt. At that point, the Puppet code base included around "10,000 lines of spaghetti code," which was unfamiliar and challenging to the relatively new members of the DevOps team.
“The DevOps team felt that the Puppet infrastructure was too difficult to pick up quickly and would be impossible to introduce to [their] developers as the tool they’d use to manage their own services.”
To determine a path forward, the team assessed both Ansible and Salt, exploring four key areas: simplicity/ease of use, maturity, performance, and community.
They found that “Salt’s execution and state module support is more mature than Ansible’s, overall,” and that “Salt was faster than Ansible for state/playbook runs.” And while both have high levels of community support, Salt exceeded expectations in terms of friendliness and responsiveness to opened issues.
Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier) - in fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.
After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of the popularity it had originally lost to Node.js, and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...
Heroku was a decent choice to start a business, but at some point our platform was too big, too complex & too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move it to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers - all the container configuration & provisioning HAD (from the beginning) to be done in code (Infrastructure as Code) - we used Terraform & Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel & equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 & Amazon RDS.
We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).
We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.
Read the blog post to go more in depth.
Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers, with a local development environment set up with Docker Compose and nginx for container routing. For the production environment we chose to pull code from GitHub and build/push images using Jenkins, and to use Kubernetes to deploy to Amazon EC2.
We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku which allows experts to easily visit more than one instance without having to log in repeatedly. The #Backend was implemented using my favorite #Stack which consists of FeathersJS on top of Node.js and ExpressJS with PostgreSQL as the main database. The #Frontend was implemented using React, Redux.js, Semantic UI React and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.
Kubernetes powers our #backend services as it is very easy in terms of #devops (the managed version). We deploy everything using Helm charts, as it allows us to manage deployments the same way we manage our code on GitHub. On every commit a CircleCI job is triggered to run the tests, build Docker images and deploy them to the registry. Finally, on every master commit, CircleCI also deploys the relevant service using its Helm chart to our Kubernetes cluster.
We began our hosting journey, as many do, on Heroku because they make it easy to deploy your application and automate some of the routine tasks associated with deployments, etc. However, as our team grew and our product matured, our needs have outgrown Heroku. I will dive into the history and reasons for this in a future blog post.
We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we decided to go with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to AWS RDS). We are still in the process of moving our main website workloads to EKS, however we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment. Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon EC2 Container Registry (ECR). An
upgrade-operator process watches the ECR repository for new containers and then uses Helm to rollout updates to the staging environments. All this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in that environment and quickly re-create or restore the environment to start over.
The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.
Our backend consists of two major pools of machines. One pool hosts the systems that run our site, manage jobs, and send notifications. These services are deployed within Docker containers orchestrated in Kubernetes. Due to Kubernetes’ ecosystem and toolchain, it was an obvious choice for our fairly statically-defined processes: the rate of change of job types or how many we may need in our internal stack is relatively low.
The other pool of machines is for running our users’ jobs. Because we cannot dynamically predict demand, what types of jobs our users need to have run, nor the resources required for each of those jobs, we found that Nomad excelled over Kubernetes in this area.
We’re also using Helm to make it easier to deploy new services into Kubernetes. We create a chart (i.e. package) for each service. This lets us easily roll back new software and gives us an audit trail of what was installed or upgraded.
For automating deployment or system admin tasks, Shell/Perl are more than enough, especially Perl one-liners, which I use heavily, even to make changes in XML files. But quite often the need is to just check the state of the system and run scripts without fear. That's where I actually needed a scripting language with a "state mechanism" associated with it. Salt provided me with exactly that kind of experience. I tested Salt first on a small scenario: installing 60 RPMs on a machine. I was pleased that I could achieve that in around 25 lines of code using Salt, and eventually I was also able to keep data and code separate, which was another plus point. Henceforth I was able to use Salt to deploy a large portion of the data center (apps deployment). I am still working towards orchestration and finding it quite promising. The use of pure Python whenever needed to deal with more complex scenarios is awesome.
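As a hedged sketch of what that can look like, Salt states may be written as pure Python via the `py` renderer, with the package list pulled from pillar data so that data stays separate from code; the pillar key below is hypothetical:

```python
#!py
# A pure-Python Salt state (the "py" renderer). The pillar key "rpm_packages"
# is hypothetical; in practice it would live in a pillar file so the package
# data stays separate from this state code.

def run():
    """Install every RPM listed under pillar['rpm_packages']."""
    packages = __pillar__.get('rpm_packages', [])
    return {
        'install_rpm_packages': {
            'pkg.installed': [
                {'pkgs': packages},
            ],
        },
    }
```

Applying a state like this with `state.apply` installs the whole list in one run, which is roughly the experience described above.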
It's a little bit complex to onboard, but once you grasp all the different concepts the platform is really powerful, and infrastructure stops being an issue.
Service discovery, auto-recovery, scaling and orchestration are just a few of the features you get.
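As one hedged example of what that looks like in practice, the official Kubernetes Python client can drive scaling directly; the Deployment name and namespace below are placeholders for illustration:

```python
# Minimal sketch using the official Kubernetes Python client ("kubernetes" package).
# Assumes a working kubeconfig; the "web" Deployment and "default" namespace are
# placeholders, not names from the original article.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Ask for 5 replicas of the Deployment; Kubernetes' control loop then handles
# scheduling, service discovery and auto-recovery of the underlying pods.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```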
Just tinkering with it for personal use at this stage, based on a positive experience using it at work. I plan to use it for high-traffic distributed systems when not using a managed hosting service like Heroku, AWS Lambda, or Google Cloud Functions. The reason for using it instead of these alternatives would be lower cost at higher scale.
When it comes to provisioning tens to hundreds of servers, you need a tool that can handle the load, as well as being extremely customisable. Fortunately, Salt has risen to that challenge for us consistently, through any kind of issue you can throw at it.
Good existential question. Kubernetes is painful in the extreme - especially when combined with Ansible. The layers of indirection are truly mind altering. But hey - containers are kewl!
We've built something using SaltStack and Debian Linux to help us deploy and administer at scale the servers we provide for our part- and fully-managed hosting customers.
Our developer experience system is on Kubernetes (Google Kubernetes Engine at the moment). We would like to expand our Kubernetes clusters to other Kubernetes engines.
Kubernetes is used for managing microclusters within our AWS infrastructure. This allows us to deploy new infrastructure in seconds.
Minor experience with Kubernetes. Helped a client set up a Kubernetes infrastructure. Love the elegance of the system.