Alternatives to Google Compute Engine

Google App Engine, DigitalOcean, Google Cloud Platform, Amazon EC2, and Microsoft Azure are the most popular alternatives and competitors to Google Compute Engine.

What is Google Compute Engine and what are its top alternatives?

Google Compute Engine is a service that provides virtual machines running on Google infrastructure. It offers the scale, performance, and value you need to easily launch large compute clusters on Google's infrastructure. There are no upfront investments, and you can run up to thousands of virtual CPUs on a system designed from the ground up to be fast and to deliver consistently strong performance.
Google Compute Engine is a tool in the Cloud Hosting category of a tech stack.
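
If you are evaluating Compute Engine, launching a VM programmatically is a short exercise. Below is a minimal, illustrative sketch using the google-cloud-compute Python client; the project ID, zone, machine type, and image are placeholder assumptions, not recommendations.

```python
# Minimal sketch: launching a VM with the google-cloud-compute Python client.
# The project ID, zone, instance name, machine type, and image are placeholders.
from google.cloud import compute_v1

def create_instance(project_id: str, zone: str, name: str) -> None:
    instance_client = compute_v1.InstancesClient()

    # Boot disk based on a public Debian image family
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12"
        ),
    )

    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-micro",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    # insert() returns a long-running operation; result() blocks until it completes
    operation = instance_client.insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()

create_instance("my-project", "us-central1-a", "demo-instance")
```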

Google Compute Engine alternatives & related posts

related Google App Engine posts

Nick Rockwell
CTO at NY Times | 11 upvotes 152.3K views

So, the shift from Amazon EC2 to Google App Engine and generally #AWS to #GCP was a long decision and in the end, it's one that we've taken with eyes open and that we reserve the right to modify at any time. And to be clear, we continue to do a lot of stuff with AWS. But, by default, the content of the decision was, for our consumer-facing products, we're going to use GCP first. And if there's some reason why we don't think that's going to work out great, then we'll happily use AWS. In practice, that hasn't really happened. We've been able to meet almost 100% of our needs in GCP.

So it's basically Google Kubernetes Engine; we're mostly running stuff on Kubernetes right now.

#AWStoGCPmigration #cloudmigration #migration

Aliadoc Team
at aliadoc.com | 5 upvotes 411.1K views

In #Aliadoc, we're exploring the crowdfunding option to get traction before launch. We are building a SaaS platform for website design customization.

For the Admin UI and website editor we use React, and we're currently transitioning from a Create React App setup to a custom one because our needs have become more specific. We use CloudFlare as much as possible; it's a great service.

For routing dynamic resources and proxying tasks that feed websites to the editor, we leverage CloudFlare Workers for improved responsiveness. We use Firebase for our hosting and user authentication needs, along with several Cloud Functions for Firebase to interact with other services, plus Google App Engine and Google Cloud Storage; the Realtime Database is also on our radar for collaborative website editing.

We generally hate configuration, but honestly, at this stage of the project we lack the resources for heavy sysops work. So we are basically relying on serverless technologies as much as we can for all server-side processing.

Visual Studio Code definitely makes programming an easier and more enjoyable task; we just love it. We combine it with Bitbucket for our source control needs.

DigitalOcean

Deploy an SSD cloud server in less than 55 seconds with a dedicated IP and root access.

related DigitalOcean posts

I am going to build a backend to serve my React site. It will need to interact with a PostgreSQL database, where it will store and read users, and create and use JSON Web Tokens for authenticating HTTP requests. I know EF Core has good migration tooling; can Go provide the same or better? I am a one-man team, and I'll be hosting this on either Heroku or DigitalOcean.

Rajat Jain
DevOps Engineer at Aurochssoftware | 1 upvote 74.7K views

Building my skill set to become a DevOps engineer. Toolchain: Amazon EC2, Amazon S3, Bitbucket, GitLab, PyCharm, Ubuntu, DigitalOcean, Docker, Git

I am an IT engineer with more than six months of experience in startups, focused on DevOps, cloud infrastructure, and testing (QA). I have set up CI processes, monitoring, and infrastructure on dev/test (lower) environments.

related Google Cloud Platform posts

Jorge Cortell
Founder & CEO at Kanteron Systems | 1 upvote 23K views

We use Google Cloud Platform, Microsoft Azure and Amazon S3 (amongst others) because our platform needs to be cloud-independent to give customers the freedom they need and deserve. But being in the healthcare enterprise space, we believe Azure is the top choice... today (it tends to change often).

Amazon EC2

Scalable, pay-as-you-go compute capacity in the cloud

related Amazon EC2 posts

Ashish Singh
Tech Lead, Big Data Platform at Pinterest | 28 upvotes 343.9K views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
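
For a rough sense of what interactive querying against such a setup looks like from the client side, here is a minimal sketch using the presto-python-client package; the coordinator host, catalog, schema, and table are made-up placeholders, not Pinterest's actual names.

```python
# Minimal sketch with the presto-python-client package (import name: prestodb).
# Host, catalog, schema, and table names are illustrative only.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
# Presto reads the table's files directly from S3 via the Hive catalog,
# so compute (the Presto workers) stays separate from storage.
cur.execute("SELECT dt, count(*) FROM events GROUP BY dt ORDER BY dt DESC LIMIT 7")
for row in cur.fetchall():
    print(row)
```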

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
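
Singer is Pinterest's internal agent, but the underlying pattern, emitting one event at submission and one at completion per query, can be sketched with a plain Kafka producer. The topic name and event fields below are hypothetical.

```python
# Hypothetical sketch of per-query lifecycle events on a Kafka topic
# (Pinterest uses its Singer agent; kafka-python shown purely for illustration).
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_query_event(query_id: str, state: str) -> None:
    producer.send("presto_query_events", {
        "query_id": query_id,
        "state": state,          # "SUBMITTED" or "FINISHED"
        "timestamp": time.time(),
    })

log_query_event("20240101_000000_00001_abcde", "SUBMITTED")
# ... query runs; a cluster crash means the FINISHED event is never emitted ...
log_query_event("20240101_000000_00001_abcde", "FINISHED")
producer.flush()
```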

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
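
Adding or removing workers this quickly amounts to resizing the controller that owns the worker pods. Here is a minimal sketch with the official Kubernetes Python client, assuming a hypothetical presto-worker Deployment in a presto namespace.

```python
# Sketch: scaling a hypothetical "presto-worker" Deployment with the
# official Kubernetes Python client (package: kubernetes).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
apps = client.AppsV1Api()

# Bump the worker count; the scheduler brings up the new pods,
# typically in under a minute when capacity is available.
apps.patch_namespaced_deployment_scale(
    name="presto-worker",
    namespace="presto",
    body={"spec": {"replicas": 20}},
)
```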

#BigData #AWS #DataScience #DataEngineering

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH | 23 upvotes 229.8K views

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo's) as our collaborative review and code management tool
  • Respectively Git as revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automatize development process)
  • Prettier / TSLint / ESLint as code linter
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reasons we have chosen Kubernetes over Docker Swarm are the following:

  • Key features: easy and flexible installation, a clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, and health monitoring that compensates automatically in the event of failure.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services); see the sketch after this list.
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple logging and monitoring options when services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
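
As a small illustration of the pods/deployments/services model referenced in the Applications point above, here is a hedged sketch using the official Kubernetes Python client; the names, image, and namespace are arbitrary examples.

```python
# Minimal sketch of the "deployments + services" model with the official
# Kubernetes Python client; resource names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

# The Deployment manages the pods; the Service load-balances across them.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```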

related Microsoft Azure posts

Omar Mehilba
Co-Founder and COO at Magalix | 18 upvotes 171.5K views

We are hardcore Kubernetes users and contributors. We love the automation it provides. However, as our team grew and added more clusters and microservices, capacity and resource management became a massive pain for us. We started suffering from a lot of outages and unexpected behavior as we promoted our code from dev to production environments. Luckily, we were working on our AI-powered tools to understand different dependencies, predict usage, and calculate the right resources and configurations that should be applied to our infrastructure and microservices. We dogfooded our agent (http://github.com/magalixcorp/magalix-agent) and were able to stabilize as the #autopilot continuously recovered from any miscalculations we made or from unexpected changes in workloads. We are open sourcing our agent in a few days. Check it out and let us know what you think! We run workloads on Microsoft Azure, Google Kubernetes Engine, and Amazon EC2, and we're all about Go and Python!

Kestas Barzdaitis
Entrepreneur & Engineer | 15 upvotes 262.1K views

CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability, and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2, Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half AWS's age, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to its investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive, and complete when it comes to #Machinelearning and #AI.
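
To give a flavor of how lightweight these serverless offerings are, here is a minimal HTTP function in the style of Google Cloud Functions' Python runtime (the AWS Lambda and Azure Functions equivalents are similarly small); the function and parameter names are arbitrary.

```python
# Minimal HTTP function in the style of Google Cloud Functions' Python runtime,
# runnable locally with the functions-framework package; names are arbitrary.
import functions_framework

@functions_framework.http
def hello(request):
    # `request` is a Flask request object in this runtime
    name = request.args.get("name", "world")
    return f"Hello, {name}!", 200
```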

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While the AWS and GCP startup programs would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.
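
To make the per-minute versus per-hour billing point concrete, here is a toy calculation with a made-up hourly rate; the numbers are purely illustrative and not actual provider pricing.

```python
# Toy illustration of per-minute vs per-hour billing for short autoscaling bursts.
# The hourly rate is made up; real prices vary by provider, region, and instance type.
hourly_rate = 0.40       # $/hour for one instance (hypothetical)
burst_minutes = 10       # an autoscaled instance that lives only 10 minutes

per_minute_cost = hourly_rate / 60 * burst_minutes   # billed for 10 minutes
per_hour_cost = hourly_rate * 1                      # billed for a full hour

print(f"per-minute billing: ${per_minute_cost:.3f}")  # $0.067
print(f"per-hour billing:   ${per_hour_cost:.3f}")    # $0.400
```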

In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

related Kubernetes posts

Yshay Yaacobi
Software Engineer | 31 upvotes 1.3M views

Our first experience with .NET core was when we developed our OSS feature management platform - Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F# , our main service was implemented in C# and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us, too; it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier) - in fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience of running .Net core workloads in containers and developing Tweek's .Net services on non-windows machines, C# had gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), cli tools, serverless functions and other projects...

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber | 28 upvotes 2.4M views

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark
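
For a taste of what instrumentation looks like, here is a minimal sketch using the jaeger-client Python binding (spans are reported to a local Jaeger agent); the service and span names are arbitrary.

```python
# Minimal tracing sketch with the jaeger-client Python binding;
# service, span, and tag names are arbitrary examples.
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="example-service",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("handle-request") as span:
    span.set_tag("component", "example")
    with tracer.start_span("db-query", child_of=span) as child:
        child.log_kv({"event": "query", "rows": 42})

tracer.close()  # flush any buffered spans before exit
```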
