Kubernetes

Decision at Soluto about Docker Swarm, Kubernetes, Visual Studio Code, Go, TypeScript, JavaScript, C#, F#, .NET

Yshayy
Software Engineer
Docker Swarm
Kubernetes
Visual Studio Code
Go
TypeScript
JavaScript
C#
F#
.NET

Our first experience with .NET Core came when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted a solution that could run anywhere (super important for OSS), had excellent performance characteristics, and fit into a multi-container architecture. We implemented our rule-engine processor in F#, our main service in C#, and the remaining components in JavaScript / TypeScript and Go.

Visual Studio Code also worked really well for us: it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran on Docker Swarm for a time, until we decided to adopt Kubernetes; the migration process was almost seamless.

After our positive experience running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of the popularity it had originally lost to Node.js, and other teams have since been using it for microservices, Kubernetes sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions, and other projects.

22 upvotes · 2 comments · 52K views

Decision at Dubsmash about Kubernetes, Amazon EC2, Heroku, Python, ContainerTools, PlatformAsAService

tspecht
Co-Founder and CTO at Dubsmash
Kubernetes
Amazon EC2
Heroku
Python
#ContainerTools
#PlatformAsAService

Since we deployed our very first lines of Python code more than two years ago, we have been happy users of Heroku. It lets us focus on building features rather than maintaining infrastructure, has super-easy scaling capabilities, and its support team is always happy to help (in the rare case that you need them).

We have toyed with the idea of moving our computational workloads over to bare Amazon EC2 instances or a container-management solution like Kubernetes a couple of times, but the added cost of maintaining such an architecture, together with the ease of use of Heroku, has kept us from moving forward so far.

Running independent services for the different needs of our features gives us the flexibility to choose whichever data store is best for the task at hand.

#PlatformAsAService #ContainerTools

14 upvotes · 3.4K views

Decision at Shopify about Memcached, Redis, MySQL, Google Kubernetes Engine, Kubernetes, Docker

kirs
Production Engineer at Shopify
Memcached
Redis
MySQL
Google Kubernetes Engine
Kubernetes
Docker

At Shopify, over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, Memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

As we grew into hundreds of shards and pods, it became clear that we needed a solution to orchestrate those deployments. Today, we use Docker, Kubernetes, and Google Kubernetes Engine to make it easy to bootstrap resources for new Shopify Pods.
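
To make the pod idea concrete, here is a minimal illustrative sketch in Python. It is not Shopify's actual code; the PodConfig structure, endpoints, and shop-to-pod mapping are hypothetical. The point it shows is that each shop is pinned to one fully isolated pod that owns its own MySQL, Redis, and Memcached, so a failure in one pod cannot spread to the rest.

```python
from dataclasses import dataclass

# Hypothetical illustration of the "pod" concept described above;
# not Shopify's actual implementation.
@dataclass(frozen=True)
class PodConfig:
    pod_id: int
    region: str
    mysql_dsn: str       # each pod owns its own datastores
    redis_url: str
    memcached_host: str

PODS = {
    1: PodConfig(1, "us-east1", "mysql://pod1-db/shop", "redis://pod1-redis", "pod1-memcached"),
    2: PodConfig(2, "europe-west1", "mysql://pod2-db/shop", "redis://pod2-redis", "pod2-memcached"),
}

# A shop lives in exactly one pod, so an outage in pod 2
# only affects the shops mapped to it.
SHOP_TO_POD = {"shop-a": 1, "shop-b": 2}

def pod_for_shop(shop_domain: str) -> PodConfig:
    return PODS[SHOP_TO_POD[shop_domain]]

print(pod_for_shop("shop-a").region)  # us-east1
```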

13 upvotes · 6.9K views

Decision at Stitch about Go, Clojure, JavaScript, Python, Kubernetes, AWS OpsWorks, Amazon EC2, Amazon Redshift, Amazon S3, Amazon RDS

jakestein
CEO at Stitch
Go
Clojure
JavaScript
Python
Kubernetes
AWS OpsWorks
Amazon EC2
Amazon Redshift
Amazon S3
Amazon RDS

Stitch is run entirely on AWS. All of our transactional databases are run with Amazon RDS, and we rely on Amazon S3 for data persistence in various stages of our pipeline. Our product integrates with Amazon Redshift as a data destination, and we also use Redshift as an internal data warehouse (powered by Stitch, of course).

The majority of our services run on stateless Amazon EC2 instances that are managed by AWS OpsWorks. We recently introduced Kubernetes into our infrastructure to run the scheduled jobs that execute Singer code to extract data from various sources. Although we tend to be wary of shiny new toys, Kubernetes has proven to be a good fit for this problem, and its stability, strong community and helpful tooling have made it easy for us to incorporate into our operations.

While we continue to be happy with Clojure for our internal services, we felt that its relatively narrow adoption could impede Singer's growth. We chose Python both because it is well suited to the task, and it seems to have reached critical mass among data engineers. All that being said, the Singer spec is language agnostic, and integrations and libraries have been developed in JavaScript, Go, and Clojure.
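
For readers unfamiliar with Singer, here is a minimal Python sketch of what a tap looks like; the stream, fields, and records are made up, but the SCHEMA/RECORD/STATE message shapes follow the Singer spec. A scheduled Kubernetes job can simply run a script like this and pipe its stdout to a target.

```python
import json
import sys

# Minimal, hypothetical Singer tap: emits SCHEMA, RECORD, and STATE
# messages as JSON lines on stdout, per the Singer spec.
def emit(message: dict) -> None:
    sys.stdout.write(json.dumps(message) + "\n")

def main() -> None:
    emit({
        "type": "SCHEMA",
        "stream": "users",
        "schema": {
            "type": "object",
            "properties": {"id": {"type": "integer"}, "email": {"type": "string"}},
        },
        "key_properties": ["id"],
    })
    for user in [{"id": 1, "email": "a@example.com"}, {"id": 2, "email": "b@example.com"}]:
        emit({"type": "RECORD", "stream": "users", "record": user})
    # Bookmark so the next scheduled run can resume incrementally.
    emit({"type": "STATE", "value": {"users": {"last_id": 2}}})

if __name__ == "__main__":
    main()
```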

13 upvotes · 6.6K views

Decision at CodeFactor about Google Cloud Functions, Azure Functions, AWS Lambda, Docker, Google Compute Engine, Microsoft Azure, Amazon EC2, CodeFactor.io, Kubernetes, Devops, AI, Machinelearning, Automation, Startup, Autoscale, Containerization, IAAS, SAAS

kaskas
Entrepreneur & Engineer
Google Cloud Functions
Azure Functions
AWS Lambda
Docker
Google Compute Engine
Microsoft Azure
Amazon EC2
CodeFactor.io
Kubernetes
#Devops
#AI
#Machinelearning
#Automation
#Startup
#Autoscale
#Containerization
#IAAS
#SAAS

As CodeFactor is a #SAAS product, our goal was to run on cloud-native infrastructure from day one. We wanted to stay focused on the product rather than on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical, and the most efficient for our product.

CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability, and compliance management features. We looked at the top three #IAAS providers that hold the majority of the market share: Amazon EC2, Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at massive scale. Azure and GCP are about half AWS's age, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering thanks to its investment in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three have machine learning tools, but GCP appears to be the most developer-friendly, intuitive, and complete when it comes to #Machinelearning and #AI.

Prices between the providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied to the #Startup programs of all three providers, and this is where Azure shined. While the AWS and GCP startup programs would have covered us for about one year of infrastructure costs, the Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific: I felt that they genuinely wanted to work with us, whereas for AWS and GCP we were just another startup.
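
To illustrate the per-minute vs. per-hour billing point, here is a back-of-the-envelope sketch; the $0.10/hour rate is a made-up placeholder, not a quote from any provider.

```python
# Hypothetical comparison of hourly vs. per-minute billing for a
# frequently autoscaled instance; $0.10/hour is a made-up price.
HOURLY_RATE = 0.10

def cost_per_hour_billing(minutes_used: int) -> float:
    hours_billed = -(-minutes_used // 60)  # billed for full hours, rounded up
    return hours_billed * HOURLY_RATE

def cost_per_minute_billing(minutes_used: int) -> float:
    return (minutes_used / 60) * HOURLY_RATE

# A burst instance that only lives 10 minutes:
print(cost_per_hour_billing(10))    # 0.10  (charged a full hour)
print(cost_per_minute_billing(10))  # ~0.017
```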

In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

The bottom line is that the #IAAS offerings from AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and to retain the flexibility to cherry-pick the best products across providers.

10 upvotes · 12K views

Decision at AppAttack about Kubernetes, DigitalOcean, CloudHosting

ctbucha
Founder/CEO at AppAttack
Kubernetes
DigitalOcean
#CloudHosting

I use DigitalOcean because of the simplicity of its basic offerings, such as droplets. At AppAttack, we need low-level control of our infrastructure so we can rapidly deploy a custom training web application on demand for each training session, and building a Kubernetes cluster on top of DigitalOcean droplets allowed us to do exactly that.

#CloudHosting
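
As an illustration of that kind of on-demand provisioning (not AppAttack's actual code), here is a sketch that uses the official Kubernetes Python client to spin up a per-session training deployment on a cluster such as one built on DigitalOcean droplets; the image name, namespace, and labels are hypothetical.

```python
from kubernetes import client, config

# Hypothetical per-session deployment; names and image are made up.
session_id = "demo-session-42"

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=f"training-{session_id}"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"session": session_id}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"session": session_id}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="training-app",
                    image="example/training-app:latest",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# One isolated deployment per training session, torn down when the session ends.
apps.create_namespaced_deployment(namespace="training", body=deployment)
```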

9 upvotes · 35.9K views

Decision at Magalix about Python, Go, Amazon EC2, Google Kubernetes Engine, Microsoft Azure, Kubernetes, Autopilot

mehilba
Co-Founder and COO at Magalix
Python
Go
Amazon EC2
Google Kubernetes Engine
Microsoft Azure
Kubernetes
#Autopilot

We are hardcore Kubernetes users and contributors, and we love the automation it provides. However, as our team grew and added more clusters and microservices, capacity and resource management became a massive pain for us. We started suffering from a lot of outages and unexpected behavior as we promoted our code from dev to production environments. Luckily, we were already working on AI-powered tools to understand dependencies, predict usage, and calculate the right resources and configurations to apply to our infrastructure and microservices. We dogfooded our agent (http://github.com/magalixcorp/magalix-agent) and were able to stabilize things as the #autopilot continuously recovered from miscalculations we made and from unexpected changes in workloads. We are open sourcing our agent in a few days; check it out and let us know what you think! We run workloads on Microsoft Azure, Google Kubernetes Engine, and Amazon EC2, and we're all about Go and Python.
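
For context, this is the kind of adjustment an autopilot-style agent might apply: patching a deployment's CPU and memory requests/limits to values predicted elsewhere. The sketch below uses the Kubernetes Python client with hypothetical names and values; it is not the Magalix agent's actual code.

```python
from kubernetes import client, config

# Hypothetical autopilot-style adjustment: patch a container's resource
# requests/limits to externally predicted values. Names are made up.
config.load_incluster_config()  # assumes this runs as an in-cluster agent
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "api",  # hypothetical container name
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "500m", "memory": "512Mi"},
                    },
                }]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="api", namespace="production", body=patch)
```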

8 upvotes · 2 comments · 16.6K views

Decision at The New York Times about Kubernetes, Google Kubernetes Engine, Google App Engine, Amazon EC2, Migration, Cloudmigration, AWStoGCPmigration, GCP, AWS

nsrockwell
CTO at NY Times
Kubernetes
Google Kubernetes Engine
Google App Engine
Amazon EC2
#Migration
#Cloudmigration
#AWStoGCPmigration
#GCP
#AWS

So, the shift from Amazon EC2 to Google App Engine, and more generally from #AWS to #GCP, was a long decision, and in the end it's one that we've taken with eyes open and that we reserve the right to modify at any time. And to be clear, we continue to do a lot of stuff with AWS. But by default, the content of the decision was: for our consumer-facing products, we're going to use GCP first, and if there's some reason why we don't think that's going to work out great, then we'll happily use AWS. In practice, that hasn't really happened; we've been able to meet almost 100% of our needs in GCP.

So it's basically Google Kubernetes Engine; we're mostly running stuff on Kubernetes right now.

#AWStoGCPmigration #cloudmigration #migration

8 upvotes · 2.5K views

Decision at Uber Technologies about Apache Spark, C#, OpenShift, JavaScript, Kubernetes, C++, Go, Node.js, Java, Python, Jaeger

conor
Tech Brand Mgr, Office of CTO at Uber
Apache Spark
C#
OpenShift
JavaScript
Kubernetes
C++
Go
Node.js
Java
Python
Jaeger

How Uber developed Jaeger, the open source, end-to-end distributed tracing system that is now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
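
As a taste of what instrumenting a service looks like, here is a minimal sketch using the Python Jaeger client binding; the service name, operation names, and tags are made up for illustration.

```python
import time
from jaeger_client import Config

# Minimal sketch: emit a parent span and a child span to a local Jaeger agent.
# Service and operation names are hypothetical.
config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="checkout-service",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("process-order") as parent:
    parent.set_tag("order.id", "1234")
    with tracer.start_span("charge-card", child_of=parent) as child:
        child.log_kv({"event": "payment_authorized"})
        time.sleep(0.01)

time.sleep(2)   # give the reporter time to flush spans to the agent
tracer.close()
```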

7 upvotes · 1 comment · 25.5K views

Decision at Redash about Kubernetes, Amazon EC2 Container Service

arikfr
Kubernetes
Amazon EC2 Container Service

We started using Amazon EC2 Container Service three years ago because it was the easiest container orchestration tool to start with. At the time it was missing a lot of features compared to other tools, but it was still the fastest way to deploy a container on AWS. As with any AWS product, over time they caught up and improved it significantly. Today it is probably one of the best tools in its category. It might not have all the features Kubernetes has, but it also has less complexity, and it definitely has all the features a small company or team needs.
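
For a sense of how little is needed to get a container running on ECS, here is a rough sketch using boto3; the cluster name, task family, and image are hypothetical, and this is not Redash's actual deployment code.

```python
import boto3

# Hypothetical sketch: register a task definition and run it on an
# existing ECS cluster that already has container instances.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="worker-example",
    containerDefinitions=[{
        "name": "worker",
        "image": "example/worker:latest",
        "memory": 512,
        "essential": True,
    }],
)

ecs.run_task(
    cluster="example-cluster",
    taskDefinition="worker-example",
    count=1,
)
```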

6 upvotes · 630 views