Kubernetes vs LXD


Kubernetes vs LXD: What are the differences?

Kubernetes: Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops. Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. LXD: Daemon based on liblxc offering a REST API to manage containers. LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It's basically an alternative to LXC's tools and distribution template system, with the added features that come from being controllable over the network.

Kubernetes is primarily classified as a "Container" tool, while LXD falls under "Virtual Machine Platforms & Containers".

"Leading docker container management solution" is the top reason why over 134 developers like Kubernetes, while over 4 developers mention "More simple" as the leading cause for choosing LXD.

Kubernetes and LXD are both open source tools. It seems that Kubernetes with 55.1K GitHub stars and 19.1K forks on GitHub has more adoption than LXD with 2.28K GitHub stars and 531 GitHub forks.

What is Kubernetes?

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
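
To make that concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes a reachable cluster and a local kubeconfig; the deployment name and image are placeholders, not anything from the comparison above.

```python
# Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
# Assumes a local kubeconfig pointing at a reachable cluster; the deployment
# name and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# See where the scheduler has placed pods across the cluster's nodes.
for pod in core.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.spec.node_name)

# Declare a desired state (3 replicas of an nginx pod); Kubernetes then
# actively reconciles the cluster to match this declared intention.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```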

What is LXD?

LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It's basically an alternative to LXC's tools and distribution template system, with the added features that come from being controllable over the network.
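
As a rough illustration of that network-controllable REST API (a sketch, not part of the original comparison), the third-party pylxd Python binding can drive a local or remote LXD daemon; the endpoint, container name and image alias below are placeholders.

```python
# Rough sketch using the pylxd client library (pip install pylxd).
# LXD exposes the same REST API over a local Unix socket or, if enabled,
# over HTTPS on the network; the remote endpoint below is a placeholder.
from pylxd import Client

# Local daemon via the Unix socket:
local = Client()

# Remote daemon over the network (requires `lxc config set core.https_address ...`
# and a trust relationship on the server side):
# remote = Client(endpoint="https://lxd-host:8443", verify=False)

# Declare and launch a container from a public image alias.
container = local.containers.create(
    {
        "name": "demo",
        "source": {
            "type": "image",
            "protocol": "simplestreams",
            "server": "https://images.linuxcontainers.org",
            "alias": "ubuntu/22.04",
        },
    },
    wait=True,
)
container.start(wait=True)
print([c.name for c in local.containers.all()])
```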


    What are some alternatives to Kubernetes and LXD?
    Docker Swarm
    Swarm serves the standard Docker API, so any tool which already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts: Dokku, Compose, Krane, Deis, DockerUI, Shipyard, Drone, Jenkins... and, of course, the Docker client itself (see the sketch after this list).
    Nomad
    Nomad is a cluster manager, designed for both long lived services and short lived batch processing workloads. Developers use a declarative job specification to submit work, and Nomad ensures constraints are satisfied and resource utilization is optimized by efficient task packing. Nomad supports all major operating systems and virtualized, containerized, or standalone applications.
    OpenStack
    OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.
    Rancher
    Rancher is an open source container management platform that includes full distributions of Kubernetes, Apache Mesos and Docker Swarm, and makes it simple to operate container clusters on any cloud or infrastructure platform.
    Docker Compose
    With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.
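    To illustrate the "standard Docker API" point above (a sketch, not from the original text): with the Docker SDK for Python, pointing the client at a Swarm manager instead of the local daemon is the same call from the tool's perspective. The endpoint and image are placeholders.

    ```python
    # Sketch with the Docker SDK for Python (pip install docker): the same client
    # code talks to a local daemon or to a standalone Swarm manager exposing the
    # Docker API; "swarm-manager:2375" is a placeholder endpoint.
    import docker

    local = docker.from_env()                                         # local Docker daemon
    swarm = docker.DockerClient(base_url="tcp://swarm-manager:2375")  # Swarm manager

    # Identical call either way; against Swarm, the manager picks the host to run it on.
    output = swarm.containers.run("alpine", "echo hello from the cluster", remove=True)
    print(output.decode())
    ```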
    Decisions about Kubernetes and LXD
    Yshay Yaacobi, Software Engineer at Soluto · 28 upvotes · 362.2K views
    Tools mentioned: Docker Swarm, .NET, F#, C#, JavaScript, TypeScript, Go, Visual Studio Code, Kubernetes

    Our first experience with .NET Core was when we developed our OSS feature management platform - Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

    Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier) - actually, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

    After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

    Sebastian Gębski, CTO at Shedul/Fresha (Fresha Engineering) · 6 upvotes · 60.2K views
    Tools mentioned: Docker, Docker Compose, Kubernetes, Terraform, Ansible, Amazon EC2, Amazon EKS, Amazon S3, Amazon RDS

    Heroku was a decent choice to start a business, but at some point our platform was too big, too complex & too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and to standardize the environment setup. The first orchestration was done with Docker Compose, but at some point it made sense to move to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers: all container configuration & provisioning HAD (from the beginning) to be done in code (Infrastructure as Code) - we used Terraform & Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel & equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 & Amazon RDS.

    Emanuel Evans, Senior Architect at Rainforest QA · 12 upvotes · 161.4K views
    Tools mentioned: Heroku, Kubernetes, Google Kubernetes Engine, Google Cloud SQL for PostgreSQL, PostgreSQL, Google Cloud Memorystore, Redis, CircleCI, Google Cloud Build, Helm, Terraform

    We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

    We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

    Read the blog post to go more in depth.

    Tools mentioned: Docker, Docker Compose, Jenkins, Kubernetes, Amazon EC2, Heroku, FeathersJS, Node.js, ExpressJS, PostgreSQL, React, Redux, Semantic UI React, AVA, ESLint, nginx, GitHub
    Tags: #Containerized #Containers #Backend #Stack #Frontend

    Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers, with a local development environment set up with Docker Compose and nginx for container routing. For the production environment we pull code from GitHub, build/push images using Jenkins, and use Kubernetes to deploy to Amazon EC2.

    We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku, which allows experts to easily visit more than one instance without having to log in repeatedly. The #Backend was implemented using my favorite #Stack, which consists of FeathersJS on top of Node.js and ExpressJS with PostgreSQL as the main database. The #Frontend was implemented using React, Redux.js, Semantic UI React and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.

    Ido Shamun, at The Elegant Monkeys (Daily) · 6 upvotes · 68.7K views
    Tools mentioned: Kubernetes, GitHub, CircleCI, Docker, Helm

    Kubernetes powers our #backend services, as it is very easy in terms of #devops (the managed version). We deploy everything using Helm charts, as this lets us manage deployments the same way we manage our code on GitHub. On every commit, a CircleCI job is triggered to run the tests, build Docker images and push them to the registry. Finally, on every master commit, CircleCI also deploys the relevant service to our Kubernetes cluster using its Helm chart.
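
    A rough sketch of what such a per-commit "build, push, deploy" step might look like, using the Docker SDK for Python plus a Helm CLI call; the registry, chart path and release name are placeholders, not Daily's actual setup.

    ```python
    # Rough sketch of a "build, push, deploy" CI step (not Daily's actual job).
    # Uses the Docker SDK for Python (pip install docker) and shells out to the
    # Helm CLI; registry, chart path and release name are placeholders.
    import subprocess
    import docker

    commit_sha = "abc1234"                  # in CircleCI this would come from CIRCLE_SHA1
    repo = "registry.example.com/backend"   # placeholder registry/repository

    client = docker.from_env()
    client.images.build(path=".", tag=f"{repo}:{commit_sha}")  # build from the repo's Dockerfile
    client.images.push(repo, tag=commit_sha)                   # push to the registry

    # On master, roll the service forward with its Helm chart.
    subprocess.run(
        ["helm", "upgrade", "--install", "backend", "./chart",
         "--set", f"image.tag={commit_sha}"],
        check=True,
    )
    ```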

    Russel Werner, Lead Engineer at StackShare · 0 upvotes · 3.8K views
    Tools mentioned: Heroku, Kubernetes, Amazon EKS, Google Kubernetes Engine, Slack, Helm, CircleCI, Amazon EC2 Container Service

    We began our hosting journey, as many do, on Heroku because they make it easy to deploy your application and automate some of the routine tasks associated with deployments. However, as our team grew and our product matured, our needs outgrew Heroku. I will dive into the history and reasons for this in a future blog post.

    We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we decided to go with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to AWS RDS). We are still in the process of moving our main website workloads to EKS, but we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment. Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon Elastic Container Registry (ECR). An upgrade-operator process watches the ECR repository for new containers and then uses Helm to roll out updates to the staging environments. All this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in that environment and quickly re-create or restore the environment to start over.
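
    As a rough illustration of that upgrade-operator pattern (not StackShare's actual operator), a small Python loop could poll ECR with boto3 and hand new tags to Helm; the repository, release and chart names below are placeholders.

    ```python
    # Rough illustration of an "upgrade operator" loop (not StackShare's actual code).
    # Polls an ECR repository with boto3 and rolls new tags out via the Helm CLI;
    # repository, release and chart names are placeholders.
    import subprocess
    import time
    import boto3

    ecr = boto3.client("ecr", region_name="us-east-1")
    deployed_tag = None

    while True:
        # Find the most recently pushed, tagged image in the PR-apps repository.
        images = ecr.describe_images(repositoryName="staging-app")["imageDetails"]
        tagged = [i for i in images if i.get("imageTags")]
        latest = max(tagged, key=lambda i: i["imagePushedAt"])
        tag = latest["imageTags"][0]

        if tag != deployed_tag:
            # Ask Helm to roll the staging release forward to the new tag.
            subprocess.run(
                ["helm", "upgrade", "--install", "staging-app", "./chart",
                 "--set", f"image.tag={tag}"],
                check=True,
            )
            deployed_tag = tag

        time.sleep(60)
    ```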

    The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.

    Robert Zuber, CTO at CircleCI · 6 upvotes · 17.5K views
    Tools mentioned: Docker, Kubernetes, Nomad, Helm

    Our backend consists of two major pools of machines. One pool hosts the systems that run our site, manage jobs, and send notifications. These services are deployed within Docker containers orchestrated in Kubernetes. Due to Kubernetes’ ecosystem and toolchain, it was an obvious choice for our fairly statically-defined processes: the rate of change of job types or how many we may need in our internal stack is relatively low.

    The other pool of machines is for running our users’ jobs. Because we cannot dynamically predict demand, what types of jobs our users need to have run, nor the resources required for each of those jobs, we found that Nomad excelled over Kubernetes in this area.

    We’re also using Helm to make it easier to deploy new services into Kubernetes. We create a chart (i.e. package) for each service. This lets us easily roll back new software and gives us an audit trail of what was installed or upgraded.

    Reviews of Kubernetes and LXD
    Review of Kubernetes

    It's a little bit complex to onboard, but once you grasp all the different concepts the platform is really powerful, and infrastructure stops being an issue.

    Service discovery, auto-recovery, scaling and orchestration are just a few of the features you get.

    How developers use Kubernetes and LXD
    Matt Welke uses Kubernetes

    Just tinkering with it for personal use at this stage, based on a positive experience using it at work. Plan to use it for high-traffic distributed systems when not using a managed hosting service like Heroku, AWS Lambda, or Google Cloud Functions. The reason for using it instead of these alternatives would be lower cost at higher scale.

    realcloudratics uses Kubernetes

    Good existential question. Kubernetes is painful in the extreme - especially when combined with Ansible. The layers of indirection are truly mind altering. But hey - containers are kewl!

    Japan Digital Design uses Kubernetes

    Our developer experience system runs on Kubernetes (Google Kubernetes Engine at the moment). We would like to expand our Kubernetes clusters to other Kubernetes engines.

    ShareThis uses Kubernetes

    Kubernetes is used for managing microclusters within our AWS infrastructure. This allows us to deploy new infrastructure in seconds.

    papaver uses Kubernetes

    Minor experience with Kubernetes. Helped a client set up a Kubernetes infrastructure. Love the elegance of the system.

    How much does Kubernetes cost? Pricing unavailable.
    How much does LXD cost? Pricing unavailable.