
Alternatives to Serverless

AWS Lambda, Terraform, Zappa, Kubernetes, and Azure Functions are the most popular alternatives and competitors to Serverless.

What is Serverless and what are its top alternatives?

Serverless lets you build applications composed of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google Cloud Functions, and more.
Serverless is a tool in the Serverless / Task Processing category of a tech stack.
Serverless is an open source tool with 41.7K GitHub stars and 5.1K GitHub forks; the project's repository is hosted on GitHub.
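To make the event-driven model concrete, here is a minimal sketch, in Python, of the kind of function the Framework packages and deploys to a provider such as AWS Lambda. The file name, function name and response shape are illustrative, not taken from the Serverless documentation.

```python
# handler.py - a minimal event-driven function of the kind the Serverless
# Framework deploys; it only runs (and only bills) when an event invokes it.
import json


def hello(event, context):
    """Entry point the cloud provider calls when the configured event fires."""
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from a serverless function"}),
    }
```

In a typical setup, a serverless.yml file maps this handler to an event source (an HTTP endpoint, a queue, a schedule) and the Framework provisions the rest.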

Top Alternatives to Serverless

  • AWS Lambda

    AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security. ...

  • Terraform

    With Terraform, you describe your complete infrastructure as code, even as it spans multiple service providers. Your servers may come from AWS, your DNS may come from CloudFlare, and your database may come from Heroku. Terraform will build all these resources across all these providers in parallel. ...

  • Zappa

    Zappa makes it super easy to deploy all Python WSGI applications on AWS Lambda + API Gateway. Think of it as "serverless" web hosting for your Python web apps. That means infinite scaling, zero downtime, zero maintenance - and at a fraction of the cost of your current deployments! ...

  • Kubernetes

    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...

  • Azure Functions

    Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or third-party service, as well as on-premises systems. ...

  • Cloud Functions for Firebase

    Cloud Functions for Firebase lets you create functions that are triggered by Firebase products, such as changes to data in the Realtime Database, uploads to Cloud Storage, new user sign ups via Authentication, and conversion events in Analytics. ...

  • Google Cloud Functions

    Construct applications from bite-sized business logic billed to the nearest 100 milliseconds, only while your code is running ...

  • Google Cloud Run

    A managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. It's serverless: all infrastructure management is abstracted away. ...

Serverless alternatives & related posts

AWS Lambda

Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or...
PROS OF AWS LAMBDA
  • No infrastructure (127)
  • Cheap (82)
  • Quick (69)
  • Stateless (57)
  • No deploy, no server, great sleep (47)
  • AWS Lambda went down taking many sites with it (9)
  • Event Driven Governance (6)
  • Auto scale and cost effective (5)
  • Extensive API (5)
  • Easy to deploy (5)
  • VPC Support (4)
  • Integrated with various AWS services (2)
CONS OF AWS LAMBDA
  • Can't execute Ruby or Go (5)
  • Can't execute PHP without significant effort (0)
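As a hedged illustration of the description above ("run code in response to modifications to objects in Amazon S3 buckets"), here is a minimal Python handler sketch; the event shape follows the standard S3 notification format, and the processing step is a placeholder.

```python
# Minimal AWS Lambda handler for S3 object-change notifications (illustrative).
def handler(event, context):
    """Lambda invokes this with an S3 event; each record describes one changed object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: replace with real processing (thumbnailing, indexing, etc.).
        print(f"Object changed: s3://{bucket}/{key}")
```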

related AWS Lambda posts

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database.
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient.

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda function with Zappa. With Zappa, deploying your services as an event-driven, horizontally scalable Lambda service is dead easy.

In the end, we implemented a highly scalable, near real-time Change Data Replication service that "works", and we deployed it to production in a matter of a few days!
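The post above describes the moving parts rather than the code, so here is a rough Python sketch of the SQS-to-warehouse mirroring step, under assumptions of mine: the queue URL, table name, column layout and message format are invented for illustration, and the real service modelled its tables with SQLAlchemy rather than using raw SQL.

```python
# Hypothetical sketch of an SQS -> data warehouse mirroring worker.
import json

import boto3
from sqlalchemy import create_engine, text

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"  # placeholder
engine = create_engine("postgresql://user:password@warehouse-host/warehouse")  # placeholder DSN
sqs = boto3.client("sqs")


def handler(event, context):
    """Poll SQS for change events and mirror them into the warehouse."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        change = json.loads(message["Body"])  # assumed shape: {"op": "insert", "doc": {...}}
        with engine.begin() as conn:
            if change["op"] in ("insert", "update", "replace"):
                conn.execute(
                    text(
                        "INSERT INTO documents_mirror (id, payload) VALUES (:id, :payload) "
                        "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload"
                    ),
                    {"id": str(change["doc"]["_id"]), "payload": json.dumps(change["doc"])},
                )
            elif change["op"] == "delete":
                conn.execute(
                    text("DELETE FROM documents_mirror WHERE id = :id"),
                    {"id": str(change["doc"]["_id"])},
                )
        # Only remove the message once the change has been mirrored.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```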

Tim Nolet

Heroku, Docker, GitHub, Node.js, hapi, Vue.js, AWS Lambda, Amazon S3, PostgreSQL, Knex.js

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub and Appoptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many people were sending. We had to update our alerting daemon (that runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.
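The actual hook is written for hapi in JavaScript; as a language-neutral sketch of the same feature-toggling idea, here is a small Python version. The plan names and feature strings are made up.

```python
# Feature-toggle check of the kind each API endpoint runs before doing work.
PLANS = {
    "developer": {"features": ["api_checks", "sms_alerts"]},  # hypothetical plans
    "business": {"features": ["api_checks", "browser_checks", "sms_alerts", "teams"]},
}


def plan_has_feature(plan_name: str, feature: str) -> bool:
    """Return True if the user's plan includes the given feature toggle."""
    return feature in PLANS.get(plan_name, {}).get("features", [])


def require_feature(plan_name: str, feature: str) -> None:
    """Guard an endpoint: raise if the plan does not include the feature."""
    if not plan_has_feature(plan_name, feature):
        raise PermissionError(f"Plan '{plan_name}' does not include '{feature}'")
```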

Hope this helps anyone building out their SaaS who is in a similar situation.

Terraform

Describe your complete infrastructure as code and build resources across providers
PROS OF TERRAFORM
  • Infrastructure as code (110)
  • Declarative syntax (73)
  • Planning (44)
  • Simple (27)
  • Parallelism (24)
  • Well-documented (7)
  • Cloud agnostic (7)
  • It's like coding your infrastructure in simple English (6)
  • Platform agnostic (5)
  • Immutable infrastructure (4)
  • Automates infrastructure deployments (4)
  • Automation (3)
  • Extendable (3)
  • Portability (3)
  • Lightweight (2)
  • Scales to hundreds of hosts (2)
CONS OF TERRAFORM
  • Doesn't have full support for GKE (1)

related Terraform posts

Context: I wanted to create an end-to-end IoT data pipeline simulation in Google Cloud IoT Core and other GCP services. I had never touched Terraform meaningfully until working on this project, and it's one of the best explorations in my development career. The documentation and syntax are incredibly human-readable and friendly. I'm used to building infrastructure through the Google APIs via Python, but I'm so glad past Sung did not make that decision. I was tempted to use Google Cloud Deployment Manager, but the templates were a bit convoluted on first impression. I'm glad past Sung did not make this decision either.

Solution: Leveraging Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage and Google Compute Engine, along with some other fun tools, I can deploy over 40 GCP resources using Terraform!

Check Out My Architecture: CLICK ME

Check out the GitHub repo attached

Emanuel Evans
Senior Architect at Rainforest QA

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

Zappa

Deploy all Python WSGI applications on AWS Lambda + API Gateway.

related Zappa posts

Jeyabalaji Subramanian

At FundsCorner, we are on a mission to enable fast, accessible credit to India's Kirana stores. We are an early-stage startup with an ultra-small engineering team. All the tech decisions we have made until now are based on our core philosophy: "Build usable products fast".

Based on the above fundamentals, we chose Python as our base language for all our APIs and micro-services. It is ultra easy to start with, yet provides great libraries even for the most complex of use cases. Our entire backend stack runs on Python and we could not be happier with it! If you are looking to deploy your API as serverless, Python provides one of the lowest cold start times.

We build our APIs with Flask. For the backend database, our natural choice was MongoDB. It frees up our time from complex database specifications - we instead use our time in doing sensible data modelling, and once we finalize the data model, we integrate it into Flask using Swagger UI. Mongo supports complex queries to cull out difficult data through its aggregation framework, and we have even built an internal framework called "Poetry" for aggregation queries.

Our web apps are built on Vue.js, Vuetify and vuex. Initially we debated a lot around choosing Vue.js or React, but finally settled on Vue.js, mainly because of the ease of use, fast development cycles and the awesome set of libraries and utilities backing Vue.

You simply cannot go wrong with Vue.js. Great documentation, the library is ultra compact and it is blazing fast. Choosing Vue.js was one of the critical decisions made, which enabled us to launch our web app in under a month (which otherwise would have easily taken 3 months). For those folks who are looking for big names: Adobe, Alibaba and GitLab are using Vue.

By choosing Vuetify, we saved thousands of person-hours in designing the CSS files. Vuetify contains all the key Material components for designing a smooth user experience, and it just works! It's an awesome framework. All of us at FundsCorner are now lifelong fanboys of Vue.js and Vuetify.

On the infrastructure side, all our API services and backend services are deployed as serverless micro-services through Zappa. Zappa makes your life super easy by packaging everything that is required to deploy your code as AWS Lambda. We are now addicted to the single-click deploys / updates through Zappa. Try it out and you will convert!

Also, if you are using Zappa, you can greatly simplify your CI/CD pipelines. Do try it! It's just awesome, and you will be astonished by the savings on your AWS bill at the end of the month.

Our CI/CD pipelines are built using GitLab CI. The documentation is very good and it enables you to go from concept to production in a minimal time frame.

We use Sentry for all crash reporting and resolution. Pro tip: they do have handlers for AWS Lambda, which made our integration super easy.

All our micro-services, including APIs, are event-driven. Our background micro-services are message-oriented and we use Amazon SQS as our message pipe. We have our own in-house workflow manager to orchestrate across micro-services.

We host our static websites on Netlify. One of the cool things about Netlify is the automated CI/CD on git push. You just do a git push to deploy! Again, it is super simple to use and it just works. We were dogmatic about going serverless even on static websites, and you can go serverless on Netlify in a few minutes. It's just a few clicks away.

We use Google Compute Engine, and especially Google Vision, for our AI experiments.

For Ops automation, we use Slack. Slack provides a super-rich API (through Slack Apps) through which you can weave magical automation on boring ops tasks.
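As a hedged illustration of the Zappa workflow described above, here is a minimal Flask (WSGI) app of the kind Zappa wraps for AWS Lambda + API Gateway; the route and module name are invented for this sketch.

```python
# app.py - a minimal Flask API; Zappa packages WSGI apps like this for Lambda.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    """Simple endpoint to verify the Lambda + API Gateway wiring after a deploy."""
    return jsonify(status="ok")


if __name__ == "__main__":
    # Run locally for development; in production Zappa serves the same app from Lambda.
    app.run(debug=True)
```

Roughly, a zappa_settings.json whose app_function points at app.app, plus a single zappa deploy <stage>, is what turns this into a Lambda-backed service.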

Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
PROS OF KUBERNETES
  • Leading Docker container management solution (160)
  • Simple and powerful (125)
  • Open source (102)
  • Backed by Google (75)
  • The right abstractions (56)
  • Scale services (24)
  • Replication controller (18)
  • Permission management (9)
  • Supports autoscaling (7)
  • Simple (7)
  • Cheap (6)
  • Reliable (4)
  • No cloud platform lock-in (4)
  • Self-healing (4)
  • Quick cloud setup (3)
  • Open, powerful, stable (3)
  • Scalable (3)
  • Promotes modern/good infrastructure practice (3)
  • Backed by Red Hat (2)
  • Cloud Agnostic (2)
  • Runs on Azure (2)
  • Customization and extensibility (2)
  • Captain of Container Ship (2)
  • A self-healing environment with rich metadata (2)
  • Golang (1)
  • Easy setup (1)
  • Everything of CaaS (1)
  • Sfg (1)
  • Expandable (1)
  • GKE (1)
CONS OF KUBERNETES
  • Poor workflow for development (13)
  • Steep learning curve (11)
  • Orchestrates only infrastructure (5)
  • High resource requirements for on-prem clusters (2)

related Kubernetes posts

      Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

      Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

      Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

      https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

      Yshay Yaacobi

Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us as well; it worked well with all our polyglot services, and the .NET Core integration had a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

Azure Functions

Listen and react to events across your stack
PROS OF AZURE FUNCTIONS
  • Pay only when invoked (12)
  • Great developer experience for C# (8)
  • Multiple languages supported (6)
  • Great debugging support (5)
  • Can be used as a lightweight HTTPS service (2)
  • Poor developer experience for C# (2)
  • Easy scalability (2)
  • Event driven (1)
  • WebHooks (1)
  • Azure component events for Storage, services etc. (1)
CONS OF AZURE FUNCTIONS
  • No persistent (writable) file system available (1)
  • Poor support for Linux environments (1)
  • Sporadic server & language runtime issues (1)
  • Not suited for long-running applications (1)
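To ground the "code triggered by events" description of Azure Functions, here is a minimal HTTP-triggered function sketch in Python using the azure.functions programming model; the greeting logic and query parameter are illustrative, and the accompanying function.json binding configuration is omitted.

```python
# __init__.py - a minimal HTTP-triggered Azure Function (Python programming model).
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Runs on demand when the HTTP trigger fires; you pay only while it executes."""
    name = req.params.get("name", "world")  # hypothetical query parameter
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```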

related Azure Functions posts

      Kestas Barzdaitis
Entrepreneur & Engineer

      CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2, Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the age of AWS, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

      In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

      Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

      Michal Nowak

In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we can consider serverless a viable option for building apps at all. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.

We did consider AWS Lambda, Firebase from Google, as well as Azure Functions. Eventually we went with AWS Lambda.

      PROS
      • No servers to manage (obviously!)
      • Limited fixed costs – you pay only for used time
      • Automated scaling and balancing
      • Automatic failover (or, at this level of abstraction, no failover problem at all)
      • Security easier to provide and audit
      • Low overhead at the start (with a certain level of knowledge)
      • Short time to market
      • Easy handover - deployment coupled with code
      • Perfect choice for lean startups with fast-paced iterations
      • Augmentation for the classic cloud, server(full) approach
      CONS
      • Not much know-how or best practices available on the market about structuring code and projects
      • Not suitable for complex business logic due to the risk of producing highly coupled code
      • Cost difficult to estimate (helpful tools: serverlesscalc.com)
      • Difficulty in migration to other platforms (vendor lock-in ⚠️)
      • Few engineers with experience in serverless on the job market
      • Steep learning curve for engineers without any cloud experience

      More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions I hope it helps 🙌 & I'm curious about your experiences.

Cloud Functions for Firebase

Run your mobile backend code without managing servers
PROS OF CLOUD FUNCTIONS FOR FIREBASE
  • Up and running (4)
  • Affordable (1)

related Cloud Functions for Firebase posts

Eugene Cheah

For inboxkitten.com, an open source disposable email service:

We migrated our serverless workload from Cloud Functions for Firebase to Cloudflare Workers, taking advantage of the lower cost and faster-performing edge computing of the Cloudflare network. This was made possible by the extremely low CPU and RAM overhead of our serverless functions.

If I were to summarize the limitations of Cloudflare (as opposed to Firebase/GCP functions), they would be:

1. <5ms CPU time limit
2. Incompatible with express.js
3. One-script limitation per domain

These are limitations our workload is able to conform with (YMMV).

For hosting of static files, we migrated from Firebase to CommonsHost.

More details on the trade-offs between both serverless providers are in the article.

Aliadoc Team

In #Aliadoc, we're exploring the crowdfunding option to get traction before launch. We are building a SaaS platform for website design customization.

For the Admin UI and website editor we use React, and we're currently transitioning from a Create React App setup to a custom one because our needs have become more specific. We use CloudFlare as much as possible; it's a great service.

For routing dynamic resources and proxying tasks that feed websites to the editor, we leverage CloudFlare Workers for improved responsiveness. We use Firebase for our hosting needs and user authentication, while also using several Cloud Functions for Firebase to interact with other services, along with Google App Engine and Google Cloud Storage; the Realtime Database is also on the radar for collaborative website editing.

We generally hate configuration, but honestly, at this stage of our project we lack the resources for heavy sysops work. So we are basically relying on serverless technologies as much as we can to do all server-side processing.

Visual Studio Code definitely makes programming a much easier and more enjoyable task; we just love it. We combine it with Bitbucket for our source code control needs.
Google Cloud Functions

A serverless environment to build and connect cloud services
PROS OF GOOGLE CLOUD FUNCTIONS
  • Serverless Applications (6)
  • It's not AWS (4)
  • Simplicity (3)
  • Free tiers and training (2)
  • Typescript Support (1)
  • Customer Support (1)
  • Blaze, pay as you go (1)
  • Simple config with GitLab CI/CD (1)
  • Built-in Webhook trigger (1)
CONS OF GOOGLE CLOUD FUNCTIONS
  • Node.js only (1)
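Despite the older "Node.js only" con above, current Cloud Functions runtimes also include Python. To make the "bite-sized business logic" description concrete, here is a minimal HTTP-triggered function sketch using the Functions Framework for Python; the function name and greeting are illustrative.

```python
# main.py - a minimal HTTP-triggered Google Cloud Function (Python runtime).
import functions_framework


@functions_framework.http
def hello(request):
    """Runs per request; billed only while the code is running."""
    name = request.args.get("name", "world")  # hypothetical query parameter
    return f"Hello, {name}!"
```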

related Google Cloud Functions posts

        Tim Nolet

        AWS Lambda, Serverless, Amazon CloudWatch, Azure Functions, Google Cloud Functions, Node.js

        In the last year or so, I moved all Checkly monitoring workloads to AWS Lambda. Here are some stats:

        • We run three core functions in all AWS regions. They handle API checks, browser checks and setup / teardown scripts. Check our docs to find out what that means.
        • All functions are hooked up to SNS topics but can also be triggered directly through AWS SDK calls.
        • The busiest function is a plumbing function that forwards data to our database. It is invoked anywhere between 7,000 and 10,000 times per hour with an average duration of about 179 ms.
        • We run separate dev and test versions of each function in each region.

        Moving all this to AWS Lambda took some work and considerations. The blog post linked below goes into the following topics:

        • Why Lambda is an almost perfect match for SaaS. Especially when you're small.
        • Why I don't use a "big" framework around it.
        • Why distributed background jobs triggered by queues are Lambda's raison d'être.
        • Why monitoring & logging is still an issue.

        https://blog.checklyhq.com/how-i-made-aws-lambda-work-for-my-saas/

Google Cloud Run

Run stateless HTTP containers on a fully managed environment or in your own GKE cluster
PROS OF GOOGLE CLOUD RUN
  • Pay per use (9)
  • Fully managed (9)
  • HTTPS endpoints (8)
  • Concurrency: multiple requests sent to each container (7)
  • Custom domains with auto SSL (6)
  • Serverless (6)
  • Deploy containers (6)
  • "Invoke IAM permission" to manage authentication (4)

related Google Cloud Run posts

I use Google Cloud Run because it's like bringing your own Docker image to Google Cloud Functions.

I use it for building Dash apps.

It creates a nice URL for web apps, and I see it being the evolution of serverless if GCP can scale this up.

My Real-Time Python App Example
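For a sense of what "stateless HTTP containers" means in practice, here is a minimal Python web server of the kind you could package into a container image for Cloud Run. The only contract assumed here is that the service listens on the port given by the PORT environment variable; the response body is illustrative.

```python
# server.py - a minimal stateless HTTP service suitable for a Cloud Run container.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer any GET request; Cloud Run routes incoming HTTP requests here.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a container\n")


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # Cloud Run injects PORT
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```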


What are the best options to host a Spring Boot application that acts as a receiver and publisher for Google Cloud Pub/Sub? I am using Google App Engine to do that, but Google Cloud Dataflow and Google Cloud Run could also be used. Which is the best option for this purpose, and which can also handle failover scenarios? Thanks!
