Alternatives to Google App Engine

Heroku, DigitalOcean, AWS Lambda, Kubernetes, and AWS Elastic Beanstalk are the most popular alternatives and competitors to Google App Engine.

What is Google App Engine and what are its top alternatives?

Google has a reputation for highly reliable, high-performance infrastructure. With App Engine you can take advantage of the 10 years of knowledge Google has in running massively scalable, performance-driven systems. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow.
Google App Engine is a tool in the Platform as a Service category of a tech stack.

Top Alternatives to Google App Engine

  • Heroku
    Heroku is a cloud application platform – a new way of building and deploying web apps. Heroku lets app developers spend 100% of their time on their application code, not managing servers, deployment, ongoing operations, or scaling. ...

  • DigitalOcean
    We take the complexities out of cloud hosting by offering blazing fast, on-demand SSD cloud servers, straightforward pricing, a simple API, and an easy-to-use control panel. ...

  • AWS Lambda
    AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security. ...

  • Kubernetes
    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...

  • AWS Elastic Beanstalk
    Once you upload your application, Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. ...

  • Amazon EC2
    It is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. ...

  • Azure App Service
    Quickly build, deploy, and scale web apps created with popular frameworks: .NET, .NET Core, Node.js, Java, PHP, Ruby, or Python, in containers or running on any operating system. Meet rigorous, enterprise-grade performance, security, and compliance requirements by using the fully managed platform for your operational and monitoring tasks. ...

  • Red Hat OpenShift
    OpenShift is Red Hat's Cloud Computing Platform as a Service (PaaS) offering. OpenShift is an application platform in the cloud where application developers and teams can build, test, deploy, and run their applications. ...

Google App Engine alternatives & related posts

Heroku

Build, deliver, monitor and scale web apps and APIs with a trailblazing developer experience.
PROS OF HEROKU
  • Easy deployment (703)
  • Free for side projects (459)
  • Huge time-saver (374)
  • Simple scaling (348)
  • Low DevOps skills required (261)
  • Easy setup (190)
  • Add-ons for almost everything (174)
  • Beginner friendly (153)
  • Better for startups (150)
  • Low learning curve (133)
  • Postgres hosting (48)
  • Easy to add collaborators (41)
  • Faster development (30)
  • Awesome documentation (24)
  • Simple rollback (19)
  • Focus on product, not deployment (19)
  • Natural companion for Rails development (15)
  • Easy integration (15)
  • Great customer support (12)
  • GitHub integration (8)
  • Painless & well documented (6)
  • No-ops (6)
  • I love that they make it free to launch a side project (4)
  • Free (4)
  • Great UI (3)
  • Just works (3)
  • PostgreSQL forking and following (2)
  • MySQL extension (2)
  • Security (1)
  • Able to host stuff like a Discord bot (1)
  • Sec (0)

CONS OF HEROKU
  • Super expensive (27)
  • Not a whole lot of flexibility (9)
  • No usable MySQL option (7)
  • Storage (7)
  • Low performance on free tier (5)
  • 24/7 support is $1,000 per month (2)

related Heroku posts

Russel Werner
Lead Engineer at StackShare · 32 upvotes · 1.9M views

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a Server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed, and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku specifically with how processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in-sync with changes.

Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The result of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit
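
A minimal sketch of that rollout step, assuming hypothetical bucket, path, and Redis key names (the real pipeline runs in CircleCI): upload the Webpack bundles to S3, then flip a Redis key so the Rails app knows which release to serve.

```python
# Sketch of the "upload bundles, then flip a Redis key" rollout described above.
# Bucket, prefix, and key names are made up for illustration.
import os
import boto3
import redis

BUILD_DIR = "dist"
BUCKET = "frontend-bundles"                  # hypothetical S3 bucket behind CloudFront
RELEASE = os.environ.get("GIT_SHA", "dev")

s3 = boto3.client("s3")
for name in os.listdir(BUILD_DIR):
    s3.upload_file(
        os.path.join(BUILD_DIR, name),
        BUCKET,
        f"bundles/{RELEASE}/{name}",
        ExtraArgs={"CacheControl": "public, max-age=31536000"},
    )

# Tell the Rails app which release to serve (it reads this key when rendering pages).
r = redis.Redis(host="localhost", port=6379)
r.set("frontend:current_release", RELEASE)
```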

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 8.9M views

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTos) as our collaborative review and code management tool
  • Git as the underlying revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automates the development process)
  • Prettier / TSLint / ESLint as code linters
  • SonarQube as quality gate
  • Docker for container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for Docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as a facade server in the production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm relates to the following factors:

  • Key features: Easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load-balancing concepts; it monitors node health and compensates for failures.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services); see the sketch after this list.
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
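
To illustrate the "pods, deployments, and services" point above, here is a hedged sketch using the official Kubernetes Python client to create a two-replica Deployment and a Service in front of it; the app name and image are placeholders, not part of the original post.

```python
# Minimal Deployment + Service, using the official `kubernetes` Python client.
# The app name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config() inside the cluster
apps, core = client.AppsV1Api(), client.CoreV1Api()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25",
                                   ports=[client.V1ContainerPort(container_port=80)])
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(selector=labels,
                              ports=[client.V1ServicePort(port=80, target_port=80)]),
)
core.create_namespaced_service(namespace="default", body=service)
```
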
DigitalOcean

Deploy an SSD cloud server in less than 55 seconds with a dedicated IP and root access.
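
DigitalOcean's public v2 REST API keeps provisioning simple; as a hedged illustration (the token and Droplet attributes below are placeholders), creating a Droplet is a single authenticated POST:

```python
# Create a Droplet via DigitalOcean's v2 API (https://api.digitalocean.com).
# The token and droplet attributes below are placeholders.
import os
import requests

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {os.environ['DO_TOKEN']}"},
    json={
        "name": "example-droplet",
        "region": "nyc3",
        "size": "s-1vcpu-1gb",
        "image": "ubuntu-22-04-x64",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["droplet"]["id"])    # provisioning usually completes in under a minute
```
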
PROS OF DIGITALOCEAN
  • Great value for money (560)
  • Simple dashboard (364)
  • Good pricing (362)
  • SSDs (300)
  • Nice UI (250)
  • Easy configuration (191)
  • Great documentation (156)
  • SSH access (138)
  • Great community (135)
  • Ubuntu (24)
  • Docker (13)
  • IPv6 support (12)
  • Private networking (10)
  • 99.99% uptime SLA (8)
  • Simple API (7)
  • Great tutorials (7)
  • 55-second provisioning (6)
  • One-click applications (5)
  • Dokku (4)
  • Node.js (4)
  • LAMP (4)
  • Debian (4)
  • CoreOS (4)
  • 1 Gb/sec servers (3)
  • WordPress (3)
  • LEMP (3)
  • Simple control panel (3)
  • MEAN (3)
  • Ghost (3)
  • Runs CoreOS (2)
  • Quick and no-nonsense service (2)
  • Django (2)
  • Good tutorials (2)
  • Speed (2)
  • Ruby on Rails (2)
  • GitLab (2)
  • Hex-core machines with dedicated ECC RAM and RAID SSDs (2)
  • CentOS (1)
  • Spaces (1)
  • KVM virtualization (1)
  • Amazing hardware (1)
  • Transfer Globally (1)
  • Fedora (1)
  • FreeBSD (1)
  • Drupal (1)
  • FreeBSD AMP (1)
  • Magento (1)
  • ownCloud (1)
  • Redmine (1)
  • My go-to server provider (1)
  • Ease and simplicity (1)
  • Nice (1)
  • Find it superfitting with my requirements (SSD, ssh. (1)
  • Easy setup (1)
  • Cheap (1)
  • Static IP (1)
  • It's the easiest to get started for small projects (1)
  • Automatic backup (1)
  • Great support (1)
  • Quick and easy to set up (1)
  • Servers on demand - literally (1)
  • Reliability (1)
  • Variety of services (0)
  • Managed Kubernetes (0)

CONS OF DIGITALOCEAN
  • No live support chat (3)
  • Pricing (3)

related DigitalOcean posts

Hello, I'm currently writing an e-commerce website with Laravel and Laravel Nova (as an admin panel). I want to start deploying the app and have created a DigitalOcean account. After some research into the deployment process, I saw that the setup via DigitalOcean (using Droplets) isn't very easy for beginners. Now I'm not sure how to deploy my app: I'm torn between Laravel Forge and DigitalOcean (App Platform or Droplets?). I've read that Heroku and Laravel Vapor are a bit expensive, which is why I haven't considered them yet. I'd be happy to read your opinions on the topic!


Hi, I'm a beginner at using MySQL. I currently deployed my CRUD app on Heroku using the ClearDB add-on. I didn't see this coming, but the primary key auto-increments by 10 instead of 1, and I cannot find a way to change it. Now I'm considering switching and deploying the full app and MySQL to DigitalOcean. Any advice on that? Will I get the same issue? Thanks in advance!
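
For context on the increment-by-10 behaviour: ClearDB uses multi-master replication and typically sets MySQL's auto_increment_increment to 10, while a self-managed MySQL (for example on a DigitalOcean Droplet or managed database) typically defaults to 1. A small sketch to verify the setting (connection details are placeholders):

```python
# Check MySQL's auto-increment step; ClearDB typically reports 10, a default MySQL install reports 1.
# Connection parameters are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.com", user="app", password="secret", database="crud_app"
)
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'auto_increment_increment'")
print(cur.fetchone())   # e.g. ('auto_increment_increment', '10')
conn.close()
```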

AWS Lambda

Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or...
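
For readers new to Lambda, here is a minimal Python handler for the S3 object-created case mentioned in the description above; the processing is illustrative only.

```python
# Minimal AWS Lambda handler for S3 "ObjectCreated" events.
# The actual processing is illustrative; Lambda provisions and scales the compute for you.
def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"processed": len(records)}
```
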
PROS OF AWS LAMBDA
  • No infrastructure (129)
  • Cheap (83)
  • Quick (70)
  • Stateless (59)
  • No deploy, no server, great sleep (47)
  • AWS Lambda went down taking many sites with it (12)
  • Event-driven governance (6)
  • Extensive API (6)
  • Auto-scale and cost effective (6)
  • Easy to deploy (6)
  • VPC support (5)
  • Integrated with various AWS services (3)

CONS OF AWS LAMBDA
  • Can't execute Ruby or Go (7)
  • Compute time limited (3)
  • Can't execute PHP w/o significant effort (1)

related AWS Lambda posts

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and it was deployed to production in a matter of a few days!
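
A hedged sketch of the SQS-consuming side of that pipeline (the post's real service maps changes onto warehouse tables with SQLAlchemy and is deployed with Zappa; the queue URL and message shape below are assumptions):

```python
# Poll SQS for MongoDB change events and mirror them into the warehouse.
# Queue URL and the message payload format are assumptions for illustration.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"

def apply_change_to_warehouse(change: dict) -> None:
    # In the real service this maps the change onto SQLAlchemy models for the target tables.
    print(change["op"], change.get("collection"))

def poll_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        change = json.loads(msg["Body"])   # e.g. {"op": "insert", "collection": "orders", "doc": {...}}
        apply_change_to_warehouse(change)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```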

Tim Nolet

(Our stack: Heroku, Docker, GitHub, Node.js, hapi, Vue.js, AWS Lambda, Amazon S3, PostgreSQL, Knex.js.) Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focussed on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, Appoptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update the API and back-end services to handle and enforce plan limits.
  • Update the UI to kindly state that plan limits are in effect on some parts of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature-toggling based on plan features (a minimal sketch follows below). It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.

Hope this helps anyone building out their SaaS and is in a similar situation.
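
The plan-limit check in steps 1-4 is essentially a feature-toggle lookup. A language-neutral sketch in Python (the real implementation is a hapi pre-handler in Node.js; the plan names and features below are made up):

```python
# Feature-toggling on plan features, as described in steps 1-4 above.
# Plans and feature names are placeholders; the real check runs in a hapi API hook.
PLAN_FEATURES = {
    "developer": {"api_checks", "sms_alerts"},
    "team":      {"api_checks", "sms_alerts", "team_members", "longer_retention"},
}

def require_feature(plan: str, feature: str) -> None:
    """Raise if the user's plan does not include the requested feature."""
    if feature not in PLAN_FEATURES.get(plan, set()):
        raise PermissionError(f"Plan '{plan}' does not include '{feature}' - please upgrade")

require_feature("developer", "api_checks")        # allowed
# require_feature("developer", "team_members")    # would raise -> UI shows an upgrade prompt
```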

Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
PROS OF KUBERNETES
  • Leading Docker container management solution (164)
  • Simple and powerful (128)
  • Open source (106)
  • Backed by Google (76)
  • The right abstractions (58)
  • Scale services (25)
  • Replication controller (20)
  • Permission management (11)
  • Supports autoscaling (9)
  • Cheap (8)
  • Simple (8)
  • Self-healing (6)
  • No cloud platform lock-in (5)
  • Promotes modern/good infrastructure practice (5)
  • Open, powerful, stable (5)
  • Reliable (5)
  • Scalable (4)
  • Quick cloud setup (4)
  • Cloud-agnostic (3)
  • Captain of Container Ship (3)
  • A self-healing environment with rich metadata (3)
  • Runs on Azure (3)
  • Backed by Red Hat (3)
  • Customization and extensibility (3)
  • Sfg (2)
  • GKE (2)
  • Everything of CaaS (2)
  • Golang (2)
  • Easy setup (2)
  • Expandable (2)

CONS OF KUBERNETES
  • Steep learning curve (16)
  • Poor workflow for development (15)
  • Orchestrates only infrastructure (8)
  • High resource requirements for on-prem clusters (4)
  • Too heavy for simple systems (2)
  • Additional vendor lock-in (Docker) (1)
  • More moving parts to secure (1)
  • Additional technology overhead (1)

related Kubernetes posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 9.5M views

How Uber developed the open source, end-to-end distributed tracing Jaeger , now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages : https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark
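
For a sense of what instrumenting a service with Jaeger looks like, here is a hedged sketch using the jaeger-client Python binding from the list above; the service and span names are placeholders.

```python
# Minimal Jaeger instrumentation via the jaeger-client Python binding.
# Service, span names, and tags are placeholders.
from jaeger_client import Config

config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="checkout-service",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("process-order") as span:
    span.set_tag("order.id", "1234")
    with tracer.start_span("charge-card", child_of=span):
        pass                      # the traced work would go here

tracer.close()                    # flush spans to the local Jaeger agent
```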

Yshay Yaacobi

Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too: it worked well with all our polyglot services, and the .NET Core integration had a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

AWS Elastic Beanstalk

Quickly deploy and manage applications in the AWS cloud.
PROS OF AWS ELASTIC BEANSTALK
  • Integrates with other AWS services (77)
  • Simple deployment (65)
  • Fast (44)
  • Painless (28)
  • Free (16)
  • Well-documented (4)
  • Independent app container (3)
  • Postgres hosting (2)
  • Ability to be customized (2)

CONS OF AWS ELASTIC BEANSTALK
  • Charges appear automatically after exceeding free quota (2)
  • Lots of moving parts and config (1)
  • Slow deployments (0)

related AWS Elastic Beanstalk posts

Julien DeFrance
Principal Software Engineer at Tophatter · 16 upvotes · 3.1M views

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. We've however broken up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API so we'd get independently scalable micro-services from now on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. Combined with Docker, our application would run within its own container, independently from the underlying host configuration.
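
As an illustration of what replacing hand-written Capistrano deploys with Beanstalk-managed ones can look like, here is a hedged boto3 sketch that registers an application version from an S3 bundle and rolls an environment to it (all names are hypothetical):

```python
# Register a new application version and roll an environment forward with Elastic Beanstalk.
# Application, environment, bucket, and version names are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
version = "v2016-05-01-1"

eb.create_application_version(
    ApplicationName="smarttargeting",
    VersionLabel=version,
    SourceBundle={"S3Bucket": "deploy-artifacts", "S3Key": f"smarttargeting/{version}.zip"},
)

# Beanstalk handles provisioning, load balancing, auto-scaling and health checks from here.
eb.update_environment(EnvironmentName="smarttargeting-prod", VersionLabel=version)
```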

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grows, patterns changed, and/or our business needs evolved, my role as Engineering Manager then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.


We initially started out with Heroku as our PaaS provider because our original developer wanted to use it for our Ruby on Rails application/website at the time. We found response times painfully slow, sometimes taking 10 seconds to start loading the main page, and moving up to the next "compute" level was going to be very expensive.

We moved our site over to AWS Elastic Beanstalk; not only did response times on the site become practically instant, our cloud bill for the application was also cut in half.

On the database side, we are currently using Amazon RDS for PostgreSQL, and we also have both MariaDB and Microsoft SQL Server hosted on Amazon RDS. The plan is to migrate all three of those database systems to AWS Aurora Serverless.

Additional services we use for our public applications: AWS Lambda, Python, Redis, Memcached, AWS Elastic Load Balancing (ELB), Amazon Elasticsearch Service, Amazon ElastiCache

Amazon EC2

Scalable, pay-as-you-go compute capacity in the cloud
PROS OF AMAZON EC2
  • Quick and reliable cloud servers (647)
  • Scalability (515)
  • Easy management (393)
  • Low cost (277)
  • Auto-scaling (271)
  • Market leader (89)
  • Backed by Amazon (80)
  • Reliable (79)
  • Free tier (67)
  • Easy management, scalability (58)
  • Flexible (13)
  • Easy to start (10)
  • Elastic (9)
  • Web-scale (9)
  • Widely used (9)
  • Node.js API (7)
  • Industry standard (5)
  • Lots of configuration options (4)
  • GPU instances (2)
  • Simpler to understand and learn (1)
  • Extremely simple to use (1)
  • Amazing for individuals (1)
  • All the open-source CLI tools you could want (1)

CONS OF AMAZON EC2
  • UI could use a lot of work (13)
  • High learning curve when compared to PaaS (6)
  • Extremely poor CPU performance (3)

related Amazon EC2 posts

Ashish Singh
Tech Lead, Big Data Platform at Pinterest · 38 upvotes · 2.8M views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling the cluster, graceful cluster shutdown and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are comprised of a fleet of 450 r4.8xl EC2 instances. Presto clusters together have over 100 TB of memory and 14K vcpu cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency on bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

#BigData #AWS #DataScience #DataEngineering
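
For reference, submitting an interactive query to a Presto coordinator from Python is only a few lines with the presto-python-client package; the host, catalog, and table below are placeholders.

```python
# Run an interactive query against a Presto coordinator using presto-python-client.
# Host, credentials, catalog, and table names are placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT dt, count(*) FROM events GROUP BY dt ORDER BY dt DESC LIMIT 7")
for row in cur.fetchall():
    print(row)
```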

Azure App Service

Build, deploy, and scale web apps on a fully managed platform
PROS OF AZURE APP SERVICE
  • .NET Framework (6)
  • Visual Studio (5)

CONS OF AZURE APP SERVICE
  • None listed yet

related Azure App Service posts

Mehdi Baaboura
Managing Director at Gigadrive · 2 upvotes · 19.3K views

Easier setup and integration for PHP-based applications. Azure App Service requires a lot of extra configuration, while AWS Elastic Beanstalk has most things set up out of the box. On top of this, Azure is much more expensive.

Red Hat OpenShift

Red Hat's free Platform as a Service (PaaS) for hosting Java, PHP, Ruby, Python, Node.js, and Perl apps
PROS OF RED HAT OPENSHIFT
  • Good free plan (99)
  • Open source (63)
  • Easy setup (47)
  • Node.js support (43)
  • Well documented (42)
  • Custom domains (32)
  • MongoDB support (28)
  • Clean and simple architecture (27)
  • PHP support (25)
  • Customizable environments (21)
  • Ability to run cron jobs (11)
  • Easier than Heroku for a WordPress blog (9)
  • Easy deployment (8)
  • PostgreSQL support (7)
  • Autoscaling (7)
  • Good balance between Heroku and AWS for flexibility (7)
  • Free, easy setup, lots of gears or DIY gear (5)
  • Shell access to gears (4)
  • Great support (3)
  • High security (3)
  • Logging & metrics (3)
  • Cloud-agnostic (2)
  • Runs anywhere: AWS, GCP, Azure (2)
  • No credit card needed (2)
  • Because it is easy to manage (2)
  • Secure (2)
  • Meteor support (2)
  • Overly complicated and over engineered in majority of e (2)
  • Golang support (2)
  • It's free and offers custom domain usage (2)
  • Autoscaling at a good price point (1)
  • Easy setup and great customer support (1)
  • Multi-cloud (1)
  • Great free plan with excellent support (1)
  • This is the only free one among the three as of today (1)

CONS OF RED HAT OPENSHIFT
  • Decisions are made for you, limiting your options (2)
  • License cost (2)
  • Behind, sometimes severely, the upstreams (1)

related Red Hat OpenShift posts

Michael Ionita

We use Kubernetes because we decided to migrate to a hosted cluster (not AWS) and still be able to scale our clusters up and down depending on load. By wrapping it with OpenShift we are now able to easily adapt to demand, but also to separate concerns into separate pods depending on the use cases we have.
