Kubernetes

Senior Architect at Rainforest QA·

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine, with Google Cloud SQL for PostgreSQL for our databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post below for a more in-depth look.

Why Rainforest QA Moved from Heroku to Google Kubernetes Engine (rainforestqa.com)
20 upvotes·1 comment·1.2M views
Dev Suryawanshi · January 19th 2020 at 10:11AM

Great information

Lead Engineer at StackShare·

We began our hosting journey, as many do, on Heroku, because they make it easy to deploy your application and automate some of the routine tasks associated with deployments. However, as our team grew and our product matured, our needs outgrew Heroku. I will dive into the history and reasons for this in a future blog post.

We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we went with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to Amazon RDS).

We are still in the process of moving our main website workloads to EKS, but we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment.

Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon Elastic Container Registry (ECR). An upgrade-operator process watches the ECR repository for new containers and then uses Helm to roll out updates to the staging environments. All of this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in that environment and quickly re-create or restore it to start over.
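The upgrade-operator itself isn't shown in the post, but a minimal sketch of the pattern described above, polling ECR and rolling out via Helm, might look like this (the region, repository name, tag-to-release convention, and chart path are hypothetical; it assumes boto3 credentials and the helm CLI are available):

```python
import subprocess
import time

import boto3  # assumes AWS credentials are configured in the environment

ecr = boto3.client("ecr", region_name="us-east-1")  # hypothetical region
seen_tags = set()

def current_tags(repository="stackshare-web"):  # hypothetical repository name
    """Return the set of image tags currently in the ECR repository."""
    tags = set()
    paginator = ecr.get_paginator("describe_images")
    for page in paginator.paginate(repositoryName=repository):
        for image in page["imageDetails"]:
            tags.update(image.get("imageTags", []))
    return tags

while True:
    for tag in current_tags() - seen_tags:
        # e.g. tag "pr-1234" maps to Helm release "staging-pr-1234" (hypothetical convention)
        release = f"staging-{tag}"
        subprocess.run(
            ["helm", "upgrade", "--install", release, "charts/app",
             "--set", f"image.tag={tag}", "--namespace", "staging"],
            check=True,
        )
        seen_tags.add(tag)
    time.sleep(30)  # poll the registry every 30 seconds
```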

The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.

7 upvotes·299.5K views
Needs advice on Prometheus and Sysdig

We have Prometheus as the monitoring engine in our stack, which contains a Kubernetes cluster, container images, and other open-source tools. I am aware that Sysdig can be integrated with Prometheus, but I really want to know whether Sysdig alone or Sysdig + Prometheus would make the better monitoring solution.

11 upvotes·696.8K views
CEO at Scrayos UG (haftungsbeschränkt)·

We primarily use Prometheus to gather metrics and statistics and display them in Grafana. Aside from that, our orchestration solution "JCOverseer" polls Prometheus to determine which host is least occupied at the moment.
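JCOverseer is their in-house tool, but the core idea of asking Prometheus for the least occupied host can be sketched roughly as follows (the Prometheus URL and the PromQL query are illustrative assumptions, not taken from the post):

```python
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # hypothetical address

# Average non-idle CPU usage per instance over the last 5 minutes (illustrative PromQL).
QUERY = 'avg by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))'

def least_occupied_host():
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=5)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result looks like {"metric": {"instance": ...}, "value": [timestamp, "0.42"]}
    return min(results, key=lambda r: float(r["value"][1]))["metric"]["instance"]

if __name__ == "__main__":
    print("Least occupied host:", least_occupied_host())
```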

While there are existing orchestration suites like Kubernetes, which we also plan to adopt in the future, we're of the opinion that those solutions do not fit our special environment within Minecraft, and that our own solution will outperform them in the limited scope it needs to cover.

1 upvote·156.9K views
Tech Lead, Big Data Platform at Pinterest·

To provide employees with interactive querying, a critical need, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are made up of a fleet of 450 r4.8xl EC2 instances; together they have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.
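Singer is Pinterest-internal and its API isn't shown here, but the query lifecycle events it ships could be approximated with a plain Kafka producer along these lines (the brokers, topic name, and event fields are hypothetical):

```python
import json
import time

from kafka import KafkaProducer  # kafka-python; stands in for the Singer agent

producer = KafkaProducer(
    bootstrap_servers=["kafka.example.internal:9092"],  # hypothetical brokers
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_query_event(query_id, state):
    """Emit a query lifecycle event ("submitted" or "finished") to Kafka."""
    producer.send("presto_query_events", {  # hypothetical topic name
        "query_id": query_id,
        "state": state,
        "timestamp": time.time(),
    })

# A cluster crash leaves "submitted" events without matching "finished" events.
log_query_event("20200101_000000_00001_abcde", "submitted")
log_query_event("20200101_000000_00001_abcde", "finished")
producer.flush()
```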

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
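As a rough illustration of how quickly workers can be added or removed on Kubernetes, resizing a (hypothetical) Presto worker Deployment with the official Kubernetes Python client is a one-call operation:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes access to the cluster running the workers

def scale_workers(replicas, deployment="presto-worker", namespace="presto"):
    """Resize the hypothetical Presto worker Deployment to `replicas` pods."""
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale_workers(50)   # burst up for heavy query load
scale_workers(10)   # shrink back when the cluster is idle
```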

#BigData #AWS #DataScience #DataEngineering

Presto at Pinterest - Pinterest Engineering Blog - Medium (medium.com)
38 upvotes·1 comment·1.3M views
Kaibo Hao · January 28th 2020 at 12:46AM

ECS on AWS could reduce your EC2 and Kubernetes costs. Athena may be another tool for reducing costs by replacing Presto: it uses S3 as the storage layer and provides serverless management of the infrastructure.


We are building a product that runs both on-prem and on our Google Kubernetes Engine clusters, and I am working on building a monitoring solution.

Our app is dockerized and usually deployed using Kubernetes.

I am currently looking into tools for centralized logging, but there is a catch. Some of our customers do not allow exporting the logs to our cloud solution, so basically I am looking for a solution that will work for all three use cases:

  1. Cloud clusters

  2. On-prem which can report to our central cloud logging solution

  3. On-prem which can be only accessed locally

We are currently using GCP Logging since it's pretty easy to get started with, but if it does not answer our use case, we are fine with replacing it.

I was considering ELK, but in my experience, it can be pretty complicated to manage.

Are there other recommended solutions?

5 upvotes·80K views
Head of Community at CTO.ai·
Needs advice on CTO.ai and GitHub Actions

Curious to get feedback from the community on automating developer workflows for DevOps and other admin tasks. Looking to use one of these for everything from delivery metrics to Kubernetes, and everything in between.

2 upvotes·1.1K views
Needs advice on Amazon SageMaker and Kubeflow

Amazon SageMaker restricts you to its own MXNet package and does not offer a strong Kubernetes backbone. At the same time, Kubeflow is still quite buggy and cumbersome to use. Which tool is a better pick for MLOps pipelines (both from the perspective of scalability and depth)?

2 upvotes·175K views
Replies (1)
Recommends Kubeflow

Depends. I think two factors should drive your decision.

Is your core value proposition in the area of ML? If yes, you'll want to customize training, inference, and orchestration, and you'll hit the golden cage of SageMaker fairly quickly. If it isn't, you are probably OK with the reasonable, albeit limiting, defaults of SageMaker.

Secondly, is your organization invested in Kubernetes and open source? Are you single-cloud? If so, how strongly committed are you to AWS?

We used SageMaker for 6 months, then pivoted completely to Kubernetes and Kubeflow. The orchestration layer in KF Pipelines is great, since it allows self-service for data scientists (Python and a fairly simple API) and works well for Ops, since everything is based on reproducible, containerized steps.
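To give a flavor of the "fairly simple API" mentioned above, here is a minimal, illustrative Kubeflow Pipelines sketch using the kfp v1 SDK (the step bodies and the endpoint URL are made up):

```python
import kfp
from kfp import dsl
from kfp.components import func_to_container_op

# Each step runs as its own container, which is what makes runs reproducible.
@func_to_container_op
def preprocess(rows: int) -> int:
    print(f"pretend we cleaned {rows} rows")
    return rows

@func_to_container_op
def train(rows: int):
    print(f"pretend we trained on {rows} rows")

@dsl.pipeline(name="toy-training-pipeline", description="Illustrative two-step pipeline.")
def toy_pipeline(rows: int = 1000):
    cleaned = preprocess(rows)
    train(cleaned.output)

if __name__ == "__main__":
    # Assumes a reachable Kubeflow Pipelines endpoint; the URL is hypothetical.
    client = kfp.Client(host="http://kubeflow.example.internal/pipeline")
    client.create_run_from_pipeline_func(toy_pipeline, arguments={"rows": 5000})
```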

Kineo.ai (kineo.ai)
3 upvotes·521 views
Senior Fullstack Developer at QUANTUSflow Software GmbH·

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for documentation, getting-started guides and how-tos) as our collaborative review and code management tool
  • Git as the underlying revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automating the development process)
  • Prettier / TSLint / ESLint as code linters
  • SonarQube as quality gate
  • Docker for container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for Docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in the production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reasons we have chosen Kubernetes over Docker Swarm are the following:

  • Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load-balancing concepts; it monitors workload health and ensures compensation in the event of failure.
  • Applications: an application can be deployed using a combination of pods, deployments, and services (or microservices); see the sketch after this list.
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: it supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig Cloud integration).
  • Scalability: all-in-one framework for distributed systems.
  • Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open-source, modular tool that works with any OS.
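As referenced in the Applications point above, here is a rough sketch, using the official Kubernetes Python client, of what deploying an application as a Deployment plus a Service can look like (the names, image, and namespace are placeholders, not part of the stack described):

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

app_labels = {"app": "web"}  # placeholder app name

# Deployment: keeps two replicas of the container running.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=app_labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=app_labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

# Service: a stable virtual IP that load-balances across the pods.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector=app_labels,
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```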
30 upvotes·2 comments·5.6M views
Larry Gryziak · April 30th 2020 at 6:34PM

So why is your deployment different for your (Heroku) test/dev and your stage/production?

Simon Reymann · May 1st 2020 at 10:32AM

When it comes to testing our web app, we do not demand great computational resources and need a very simple, convenient, and fast PaaS solution for deploying the app to our testers. In production, though, the demand for computational resources can rise very quickly, and with Amazon we are able to control that in a better way.

Needs advice on NGINX, Traefik and Zuul

Which gateway/reverse proxy should I use? We are using a microservices architecture, and we will also be using Kubernetes.

3 upvotes·479 views
Replies (1)
Recommends Kong

We're using Kong Ingress with K8s for our microservices so far, and it's been super stable and also very easy to set up. The main reason we picked it over other products was the built-in API key auth in their open-source offering.

2 upvotes·380 views