Buildkite vs Kubernetes: What are the differences?
What is Buildkite? Fast, secure and scalable CI/CD for all your software projects. CI and build automation tool that combines the power of your own build infrastructure with the convenience of a managed, centralized web UI. Used by Shopify, Basecamp, Digital Ocean, Venmo, Cochlear, Bugsnag and more.
What is Kubernetes? Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops. Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
Buildkite and Kubernetes are primarily classified as "Continuous Integration" and "Container" tools respectively.
Some of the features offered by Buildkite are:
- Fast and stable builds
- Open source agent runs on almost any machine and architecture
- Freedom to use your own internal or pre-release tools and services
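By way of illustration, a minimal Buildkite pipeline (conventionally checked in as .buildkite/pipeline.yml) could look like the sketch below; the step labels, commands and queue name are placeholders, not taken from any particular project.

```yaml
# .buildkite/pipeline.yml — minimal sketch of a Buildkite pipeline.
# Step labels, commands and the agent queue are illustrative placeholders.
steps:
  - label: ":hammer: Build"
    command: "make build"          # any shell command your project uses
    agents:
      queue: "default"             # runs on your own self-hosted agents

  - wait                           # gate the next step on the build passing

  - label: ":test_tube: Test"
    command: "make test"
```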
On the other hand, Kubernetes provides the following key features:
- Lightweight, simple and accessible
- Built for a multi-cloud world, public, private or hybrid
- Highly modular, designed so that all of its components are easily swappable
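To make the "declared intentions" model concrete, here is a minimal Kubernetes Deployment manifest; the names, image and replica count are purely illustrative. Kubernetes continuously reconciles the cluster toward this declared spec, for example keeping three replicas running even if a node dies.

```yaml
# deployment.yaml — minimal sketch of a Kubernetes Deployment.
# Names, image and replica count are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                     # desired state: Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25       # any container image
          ports:
            - containerPort: 80
```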
"Great customer support" is the primary reason why developers consider Buildkite over the competitors, whereas "Leading docker container management solution" was stated as the key factor in picking Kubernetes.
Kubernetes is an open source tool with 55.1K GitHub stars and 19.1K forks; its open source repository is on GitHub at https://github.com/kubernetes/kubernetes.
According to the StackShare community, Kubernetes has broader approval, being mentioned in 1048 company stacks and 1096 developer stacks, compared to Buildkite, which is listed in 38 company stacks and 8 developer stacks.
Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration gave a great cross-platform developer experience (to be fair, F# was a bit trickier) - in fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.
After our positive experience running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of the popularity it had originally lost to Node.js, and other teams have been using it for microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...
I use Buildkite because it's the only CI system I've found that strikes a reasonable balance between flexibility and usability. You can accomplish pretty much any standard CI task out of the box with a simple, easy to understand interface and sane defaults, while the hooks and pipeline systems offer enough configurability to rival Jenkins - but without the misery that using Jenkins is certain to entail.
I use Buildkite because it balances enormous flexibility with easy out of the box configuration. We use it for everything from automating builds (which involves running huge test suites where the parallel builds feature is invaluable) to doing nightly batch jobs for our production environment. Super flexible, amazing tool
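As a hedged illustration of that parallel builds feature, a single Buildkite step can be fanned out with the built-in parallelism attribute; the test-sharding script below is hypothetical, not something from the post.

```yaml
# Sketch of a Buildkite step fanned out with the built-in parallelism feature.
# The command and parallelism count are illustrative.
steps:
  - label: ":rspec: Tests %n"
    command: "bin/split-tests"     # hypothetical script that shards the test suite
    parallelism: 10                # runs 10 copies of this step concurrently
    # Each job can read BUILDKITE_PARALLEL_JOB and BUILDKITE_PARALLEL_JOB_COUNT
    # to decide which slice of the test suite to run.
```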
Heroku was a decent choice to start a business, but at some point our platform was too big, too complex and too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move it to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers - all the container configuration and provisioning HAD (from the beginning) to be done in code (Infrastructure as Code) - we used Terraform and Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel and equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 and Amazon RDS.
We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).
We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.
Read the blog post to go more in depth.
Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers, with a local development environment set up with Docker Compose and nginx for container routing. For the production environment we chose to pull code from GitHub, build/push images using Jenkins, and deploy to Amazon EC2 with Kubernetes.
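A rough sketch of what such a Docker Compose setup with nginx routing might look like; the service names, images, ports and credentials are assumptions for illustration, not the project's actual configuration.

```yaml
# docker-compose.yml — sketch of a local dev setup with nginx routing requests
# to an application container. Service names, images and ports are illustrative.
version: "3.8"
services:
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # routes requests to the app
    depends_on:
      - app
  app:
    build: ./app                  # application image built from local source
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/health
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
```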
We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku which allows experts to easily visit more than one instance without having to login repeatedly. The #Backend was implemented using my favorite #Stack which consists of FeathersJS on top of Node.js and ExpressJS with PostgreSQL as the main database. The #Frontend was implemented using React, Redux.js, Semantic UI React and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.
Kubernetes powers our #backend services, as it is very easy in terms of #devops (the managed version). We deploy everything using Helm charts, as it lets us manage deployments the same way we manage our code on GitHub. On every commit, a CircleCI job is triggered to run the tests, build Docker images and push them to the registry. Finally, on every master commit, CircleCI also deploys the relevant service to our Kubernetes cluster using its Helm chart.
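A hedged sketch of what that commit-triggered flow could look like in a CircleCI config; the job names, registry URL, chart path and release name are assumptions, not taken from the post.

```yaml
# .circleci/config.yml — sketch of the commit-triggered flow described above.
# Job names, image names and the Helm release/chart paths are assumptions.
version: 2.1
jobs:
  test-build-push:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run: make test
      - run: |
          docker build -t registry.example.com/service:$CIRCLE_SHA1 .
          docker push registry.example.com/service:$CIRCLE_SHA1
  deploy:
    docker:
      - image: cimg/base:stable   # assumes helm and cluster credentials are available
    steps:
      - checkout
      - run: helm upgrade --install service ./chart --set image.tag=$CIRCLE_SHA1
workflows:
  build-and-deploy:
    jobs:
      - test-build-push
      - deploy:
          requires: [test-build-push]
          filters:
            branches:
              only: master        # deploy only on master commits
```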
We began our hosting journey, as many do, on Heroku because they make it easy to deploy your application and automate some of the routine tasks associated with deployments, etc. However, as our team grew and our product matured, our needs have outgrown Heroku. I will dive into the history and reasons for this in a future blog post.
We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we decided to go with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to AWS RDS). We are still in the process of moving our main website workloads to EKS, however we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment. Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon Elastic Container Registry (ECR). An
upgrade-operator process watches the ECR repository for new containers and then uses Helm to roll out updates to the staging environments. All this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in that environment and quickly re-create or restore the environment to start over.
The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.
Our backend consists of two major pools of machines. One pool hosts the systems that run our site, manage jobs, and send notifications. These services are deployed within Docker containers orchestrated in Kubernetes. Due to Kubernetes’ ecosystem and toolchain, it was an obvious choice for our fairly statically-defined processes: the rate of change of job types or how many we may need in our internal stack is relatively low.
The other pool of machines is for running our users’ jobs. Because we cannot dynamically predict demand, what types of jobs our users need to have run, nor the resources required for each of those jobs, we found that Nomad excelled over Kubernetes in this area.
We’re also using Helm to make it easier to deploy new services into Kubernetes. We create a chart (i.e. package) for each service. This lets us easily roll back new software and gives us an audit trail of what was installed or upgraded.
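A chart per service is just a small directory of templates plus metadata; a minimal Chart.yaml might look like the sketch below (the name and version values are placeholders). Bumping the chart version on each release is part of what gives Helm its audit trail and makes rollbacks straightforward.

```yaml
# charts/notifications/Chart.yaml — minimal sketch of a per-service Helm chart.
# The name and version values are placeholders.
apiVersion: v2
name: notifications
description: Helm chart for the notifications service
version: 0.3.1        # chart version; bumped on every release, giving an audit trail
appVersion: "1.14.0"  # version of the application image the chart deploys
```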
It's a little bit complex to onboard, but once you grasp all the different concepts the platform is really powerful, and infrastructure stops being an issue.
Service discovery, auto-recovery, scaling and orchestration are just a few of the features you get.
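As a rough illustration, two of those features map directly onto small manifests: a Service gives other workloads a stable DNS name (discovery), and a HorizontalPodAutoscaler handles scaling. The names and thresholds below are placeholders, not from any particular setup.

```yaml
# Sketch of service discovery and autoscaling as Kubernetes manifests.
apiVersion: v1
kind: Service
metadata:
  name: web            # other pods reach this workload via the DNS name "web"
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```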
Just tinkering with it for personal use at this stage, based on a positive experience using it at work. I plan to use it for high-traffic distributed systems when not using a managed hosting service like Heroku, AWS Lambda, or Google Cloud Functions. The reason to use it instead of these alternatives would be lower cost at higher scale.
Buildkite tests the Rails application in a Docker container via Docker Compose. The Buildkite agent runs on a small Digital Ocean droplet, executes the integration and model tests, and marks the GitHub commit so that Heroku can perform its auto-deployment once the build is green.
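A hedged sketch of what such a pipeline could look like; the test commands, queue name and Compose service name are assumptions, since the actual pipeline isn't shown in the post.

```yaml
# .buildkite/pipeline.yml — sketch of the flow described above: run the Rails
# test suites inside Docker Compose on the agent. Commands and service names
# are assumptions.
steps:
  - label: ":rails: Model tests"
    command: "docker-compose run --rm app bin/rails test:models"
    agents:
      queue: "default"            # e.g. the small Digital Ocean droplet running the agent

  - label: ":rails: Integration tests"
    command: "docker-compose run --rm app bin/rails test:system"
    agents:
      queue: "default"
# Buildkite reports the build result back to the GitHub commit status, which is
# what Heroku's auto-deploy checks before deploying a green build.
```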
Good existential question. Kubernetes is painful in the extreme - especially when combined with Ansible. The layers of indirection are truly mind altering. But hey - containers are kewl!
Our developer experience system is on Kubernetes (Google Kubernetes Engine at the moment). We would like to expand our Kubernetes clusters to other Kubernetes engines.
Kubernetes is used for managing microclusters within our AWS infrastructure. This allows us to deploy new infrastructure in seconds.