AWS OpsWorks vs Kubernetes


AWS OpsWorks vs Kubernetes: What are the differences?

Developers describe AWS OpsWorks as "Model and manage your entire application from load balancers to databases using Chef". Start from templates for common technologies like Ruby, Node.js, PHP, and Java, or build your own using Chef recipes to install software packages and perform any task that you can script. AWS OpsWorks can scale your application using automatic load-based or time-based scaling and maintain the health of your application by detecting failed instances and replacing them. You have full control over deployments and the automation of each component. On the other hand, Kubernetes is described as "Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops". Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.

AWS OpsWorks can be classified as a tool in the "Server Configuration and Automation" category, while Kubernetes is grouped under "Container Tools".

Some of the features offered by AWS OpsWorks are:

  • AWS OpsWorks lets you model the different components of your application as layers in a stack, and maps your logical architecture to a physical architecture. You can see all resources associated with your application, and their status, in one place.
  • AWS OpsWorks provides an event-driven configuration system with rich deployment tools that allow you to efficiently manage your applications over their lifetime, including support for customizable deployments, rollback, partial deployments, patch management, automatic instance scaling, and auto healing.
  • AWS OpsWorks lets you define template configurations for your entire environment in a format that you can maintain and version just like your application source code.

On the other hand, Kubernetes provides the following key features:

  • Lightweight, simple and accessible
  • Built for a multi-cloud world, public, private or hybrid
  • Highly modular, designed so that all of its components are easily swappable

"Devops" is the primary reason why developers consider AWS OpsWorks over the competitors, whereas "Leading docker container management solution" was stated as the key factor in picking Kubernetes.

Kubernetes is an open source tool with 54.2K GitHub stars and 18.8K GitHub forks; its source code is hosted on GitHub. AWS OpsWorks, by contrast, has no public GitHub repository.

Slack, Shopify, and Starbucks are some of the popular companies that use Kubernetes, whereas AWS OpsWorks is used by DeveloperTown, Third Iron, and TENDIGI, LLC. Kubernetes has broader approval, being mentioned in 1018 company stacks and 1060 developer stacks, compared to AWS OpsWorks, which is listed in 73 company stacks and 18 developer stacks.


What is AWS OpsWorks?

Start from templates for common technologies like Ruby, Node.js, PHP, and Java, or build your own using Chef recipes to install software packages and perform any task that you can script. AWS OpsWorks can scale your application using automatic load-based or time-based scaling and maintain the health of your application by detecting failed instances and replacing them. You have full control over deployments and the automation of each component.
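
To make the stack-and-layer model above concrete, here is a minimal, hypothetical sketch of creating an OpsWorks stack and a custom layer with boto3, the AWS SDK for Python; the stack name, layer name, and IAM ARNs are placeholders, not values from any real account.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Create a stack: the logical container for layers, instances, and apps.
stack = opsworks.create_stack(
    Name="example-stack",  # placeholder name
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",  # placeholder
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",  # placeholder
    UseCustomCookbooks=True,  # pull Chef recipes from your own cookbook repository
)

# Model one component of the application as a layer within the stack.
layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="custom",
    Name="app-servers",
    Shortname="app",
    AutoAssignPublicIps=True,
)

print("Created stack", stack["StackId"], "with layer", layer["LayerId"])
```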

What is Kubernetes?

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
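
That declarative model is easiest to see in a small example. The sketch below, using the official Kubernetes Python client, declares a Deployment that asks for three replicas of an nginx container; the names and image are illustrative only, and a kubeconfig with cluster access is assumed. Once the object is created, the cluster's controllers work continuously to make the observed state match this declaration.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with access to a cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: three replicas, maintained even across failures
        selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```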

    What are some alternatives to AWS OpsWorks and Kubernetes?
    AWS Elastic Beanstalk
    Once you upload your application, Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
    Chef
    Chef enables you to manage and scale cloud infrastructure with no downtime or interruptions. Freely move applications and configurations from one cloud to another. Chef is integrated with all major cloud providers including Amazon EC2, VMWare, IBM Smartcloud, Rackspace, OpenStack, Windows Azure, HP Cloud, Google Compute Engine, Joyent Cloud and others.
    AWS Config
    AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting.
    AWS CloudFormation
    You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order in which AWS services need to be provisioned or the subtleties of how to make those dependencies work.
    AWS CodeDeploy
    AWS CodeDeploy is a service that automates code deployments to Amazon EC2 instances. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications.
    Decisions about AWS OpsWorks and Kubernetes
    Jake Stein
    CEO at Stitch · 13 upvotes · 94.7K views
    Tools: Go, Clojure, JavaScript, Python, Kubernetes, AWS OpsWorks, Amazon EC2, Amazon Redshift, Amazon S3, Amazon RDS

    Stitch is run entirely on AWS. All of our transactional databases are run with Amazon RDS, and we rely on Amazon S3 for data persistence in various stages of our pipeline. Our product integrates with Amazon Redshift as a data destination, and we also use Redshift as an internal data warehouse (powered by Stitch, of course).

    The majority of our services run on stateless Amazon EC2 instances that are managed by AWS OpsWorks. We recently introduced Kubernetes into our infrastructure to run the scheduled jobs that execute Singer code to extract data from various sources. Although we tend to be wary of shiny new toys, Kubernetes has proven to be a good fit for this problem, and its stability, strong community and helpful tooling have made it easy for us to incorporate into our operations.
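
    For context, a Singer extraction job of the kind described above boils down to a tap process streaming records over stdout into a target process. The sketch below pipes one into the other with Python's subprocess module; the tap, target, and config file names are illustrative, not Stitch's actual jobs.

```python
import subprocess

# Start a Singer tap that writes records to stdout.
tap = subprocess.Popen(
    ["tap-exchangeratesapi", "--config", "tap_config.json"],
    stdout=subprocess.PIPE,
)

# Start a Singer target that reads those records from stdin.
target = subprocess.Popen(
    ["target-csv", "--config", "target_config.json"],
    stdin=tap.stdout,
)

tap.stdout.close()  # let the tap receive SIGPIPE if the target exits early
target.wait()
```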

    While we continue to be happy with Clojure for our internal services, we felt that its relatively narrow adoption could impede Singer's growth. We chose Python both because it is well suited to the task, and it seems to have reached critical mass among data engineers. All that being said, the Singer spec is language agnostic, and integrations and libraries have been developed in JavaScript, Go, and Clojure.

    Yshay Yaacobi
    Software Engineer at Soluto · 27 upvotes · 272K views
    Tools: Docker Swarm, Kubernetes, Visual Studio Code, Go, TypeScript, JavaScript, C#, F#, .NET

    Our first experience with .NET Core was when we developed our OSS feature-management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that could run anywhere (super important for OSS), had excellent performance characteristics, and could fit into a multi-container architecture. We decided to implement our rule-engine processor in F#, our main service in C#, and the other components in JavaScript/TypeScript and Go.

    Visual Studio Code worked really well for us too: it handled all of our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm, until we decided to adopt Kubernetes with an almost seamless migration process.

    After our positive experience running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of the popularity it had originally lost to Node.js, and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects.

    Sebastian Gębski
    CTO at Shedul/Fresha · 6 upvotes · 48.5K views
    At Fresha Engineering
    Tools: Amazon RDS, Amazon S3, Amazon EKS, Amazon EC2, Ansible, Terraform, Kubernetes, Docker Compose, Docker

    Heroku was a decent choice to start a business, but at some point our platform became too big, too complex and too heterogeneous, so Heroku started to be a constraint rather than a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move it to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers: all the container configuration and provisioning had, from the beginning, to be done in code (Infrastructure as Code) - we used Terraform and Ansible for that, respectively. This general trend of containerization was accompanied by another, parallel and equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 and Amazon RDS.

    Emanuel Evans
    Senior Architect at Rainforest QA · 12 upvotes · 123.7K views
    Tools: Terraform, Helm, Google Cloud Build, CircleCI, Redis, Google Cloud Memorystore, PostgreSQL, Google Cloud SQL for PostgreSQL, Google Kubernetes Engine, Kubernetes, Heroku

    We recently moved our main applications from Heroku to Kubernetes. The three main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and cost (GCP is cheaper for raw computing resources).

    We prefer using managed services, so we are using Google Kubernetes Engine, with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.
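
    As a rough illustration of what those managed backing services look like from application code, the sketch below connects to a hypothetical Cloud SQL for PostgreSQL instance with psycopg2 and to a hypothetical Memorystore endpoint with redis-py; the hosts, credentials, and database names are placeholders.

```python
import psycopg2
import redis

# Placeholder private IP of a Cloud SQL for PostgreSQL instance.
pg = psycopg2.connect(
    host="10.0.0.5",
    dbname="app",
    user="app",
    password="example-password",
)

# Placeholder Memorystore for Redis endpoint.
cache = redis.Redis(host="10.0.0.6", port=6379)

# Trivial round trip: read from PostgreSQL, cache the result in Redis.
with pg, pg.cursor() as cur:
    cur.execute("SELECT 1")
    cache.set("healthcheck", cur.fetchone()[0])
```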

    Read the blog post to go more in depth.

    Tools: GitHub, nginx, ESLint, AVA, Semantic UI React, Redux, React, PostgreSQL, ExpressJS, Node.js, FeathersJS, Heroku, Amazon EC2, Kubernetes, Jenkins, Docker Compose, Docker
    Tags: #Frontend #Stack #Backend #Containers #Containerized

    Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers, with a local development environment set up with Docker Compose and nginx for container routing. For the production environment, we pull code from GitHub, build and push images using Jenkins, and deploy to Amazon EC2 using Kubernetes.

    We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku which allows experts to easily visit more than one instance without having to login repeatedly. The #Backend was implemented using my favorite #Stack which consists of FeathersJS on top of Node.js and ExpressJS with PostgreSQL as the main database. The #Frontend was implemented using React, Redux.js, Semantic UI React and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.

    Ido Shamun
    The Elegant Monkeys · 6 upvotes · 44.4K views
    At Daily
    Tools: Helm, Docker, CircleCI, GitHub, Kubernetes

    Kubernetes powers our #backend services, as it is very easy in terms of #devops (the managed version). We deploy everything using Helm charts, which lets us manage deployments the same way we manage our code on GitHub. On every commit, a CircleCI job is triggered to run the tests, build Docker images and push them to the registry. Finally, on every master commit, CircleCI also deploys the relevant service to our Kubernetes cluster using its Helm chart.
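
    As a hedged sketch of the "build Docker images and push them to the registry" step of such a CI job, the snippet below uses the Docker SDK for Python; the registry, image name, and tag are placeholders, and a real job would derive the tag from the commit SHA.

```python
import docker

client = docker.from_env()

# Build an image from the repository root; the tag here is a placeholder.
image, _build_logs = client.images.build(path=".", tag="registry.example.com/api:abc1234")

# Push the image to the registry and print the streamed progress messages.
for line in client.images.push("registry.example.com/api", tag="abc1234", stream=True, decode=True):
    print(line)
```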

    Russel Werner
    Lead Engineer at StackShare · 0 upvotes · 5K views
    Tools: Amazon EC2 Container Service, CircleCI, Helm, Slack, Google Kubernetes Engine, Amazon EKS, Kubernetes, Heroku

    We began our hosting journey, as many do, on Heroku, because they make it easy to deploy your application and automate some of the routine tasks associated with deployments. However, as our team grew and our product matured, our needs outgrew Heroku. I will dive into the history and reasons for this in a future blog post.

    We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we decided to go with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to AWS RDS). We are still in the process of moving our main website workloads to EKS, but we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment. Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon EC2 Container Registry (ECR). An upgrade-operator process watches the ECR repository for new containers and then uses Helm to roll out updates to the staging environments. All this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in those environments and quickly re-create or restore them to start over.
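
    As a simplified sketch (not StackShare's actual operator), the snippet below shows the general shape of that upgrade flow: use boto3 to find the most recently pushed image in an ECR repository, then roll a Helm release forward to that tag. The repository, release, chart, and namespace names are placeholders.

```python
import subprocess
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Find the most recently pushed, tagged image in the repository.
images = ecr.describe_images(repositoryName="staging/web")["imageDetails"]
tagged = [i for i in images if i.get("imageTags")]
latest = max(tagged, key=lambda i: i["imagePushedAt"])
tag = latest["imageTags"][0]

# Roll the staging release forward to that tag.
subprocess.run(
    ["helm", "upgrade", "--install", "web-staging", "./charts/web",
     "--namespace", "staging", "--set", f"image.tag={tag}"],
    check=True,
)
```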

    The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.

    Robert Zuber
    CTO at CircleCI · 6 upvotes · 13.4K views
    Tools: Helm, Nomad, Kubernetes, Docker

    Our backend consists of two major pools of machines. One pool hosts the systems that run our site, manage jobs, and send notifications. These services are deployed within Docker containers orchestrated in Kubernetes. Due to Kubernetes’ ecosystem and toolchain, it was an obvious choice for our fairly statically-defined processes: the rate of change of job types or how many we may need in our internal stack is relatively low.

    The other pool of machines is for running our users’ jobs. Because we cannot dynamically predict demand, what types of jobs our users need to have run, nor the resources required for each of those jobs, we found that Nomad excelled over Kubernetes in this area.

    We’re also using Helm to make it easier to deploy new services into Kubernetes. We create a chart (i.e. package) for each service. This lets us easily roll back new software and gives us an audit trail of what was installed or upgraded.
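
    As a small, hypothetical illustration of that rollback and audit-trail workflow, the commands below list a release's revision history and roll it back to a known-good revision; the release name, namespace, and revision number are placeholders.

```python
import subprocess

# Audit trail: every revision of the release that was installed or upgraded.
subprocess.run(["helm", "history", "api-service", "--namespace", "internal"], check=True)

# Roll the release back to a known-good revision (here, revision 4).
subprocess.run(["helm", "rollback", "api-service", "4", "--namespace", "internal"], check=True)
```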

    Reviews of AWS OpsWorks and Kubernetes
    Review of Kubernetes

    It's a little bit complex to onboard, but once you grasp all the different concepts the platform is really powerful, and infrastructure stops being an issue.

    Service discovery, auto-recovery, scaling and orchestration are just a few of the features you get.

    How developers use AWS OpsWorks and Kubernetes
    Matt Welke uses Kubernetes

    Just tinkering with it for personal use at this stage based on positive experience using it at work. Plan to use it for high traffic distributed systems if not using a managed hosting service like Heroku, AWS Lambda, or Google Cloud Functions. Reasons for using instead of these alternatives would be cheaper cost at higher scale.

    realcloudratics uses Kubernetes

    Good existential question. Kubernetes is painful in the extreme - especially when combined with Ansible. The layers of indirection are truly mind altering. But hey - containers are kewl!

    Japan Digital Design uses Kubernetes

    Our developer experience system runs on Kubernetes (Google Kubernetes Engine at the moment). We would like to expand our Kubernetes clusters to other Kubernetes engines.

    ShareThis uses Kubernetes

    Kubernetes is used for managing microclusters within our AWS infrastructure. This allows us to deploy new infrastructure in seconds.

    papaver uses Kubernetes

    minor experience with kubernetes. helped a client setup a kubernetes infrastructure. love the elegance of the system.

    thanawatsenawat uses AWS OpsWorks

    Automated application deployment without SMTP.

    Hund uses AWS OpsWorks

    Chef server deployments.
