Alternatives to Serverless

AWS Lambda, Zappa, Terraform, Cloud Functions for Firebase, and Google Cloud Functions are the most popular alternatives and competitors to Serverless.

What is Serverless and what are its top alternatives?

Build applications composed of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The framework uses new event-driven compute services, like AWS Lambda, Google Cloud Functions, and more.
Serverless is a tool in the Serverless / Task Processing category of a tech stack.
Serverless is an open source tool with 34.6K GitHub stars and 4K GitHub forks. Here’s a link to Serverless's open source repository on GitHub

Serverless alternatives & related posts


AWS Lambda

Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or...

related AWS Lambda posts

Jeyabalaji Subramanian
CTO at FundsCorner · 24 upvotes · 741.8K views
MongoDB, PostgreSQL, MongoDB Stitch, Node.js, Amazon SQS, Python, SQLAlchemy, AWS Lambda, Zappa

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda function with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dead easy.
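To make the shape of that micro-service concrete, here is a minimal, hypothetical sketch (not FundsCorner's actual code) of an AWS Lambda handler that receives change events from SQS and mirrors them into the warehouse with SQLAlchemy; the connection string, table and message fields are placeholders.

```python
# Hypothetical sketch of the replication micro-service described above:
# an AWS Lambda handler that receives change events from Amazon SQS and
# mirrors them into the target warehouse with SQLAlchemy. Table and field
# names are illustrative, not the actual schema.
import json

from sqlalchemy import JSON, Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()
engine = create_engine("postgresql+psycopg2://user:password@warehouse-host/dw")
Session = sessionmaker(bind=engine)


class Customer(Base):
    __tablename__ = "customers"
    id = Column(String, primary_key=True)    # MongoDB _id as text
    doc = Column(JSON)                        # raw / flattened document


def handler(event, context):
    """Entry point invoked by Lambda for each batch of SQS messages."""
    session = Session()
    for record in event["Records"]:           # standard SQS event shape
        change = json.loads(record["body"])   # e.g. {"op": "insert", "_id": ..., "doc": {...}}
        if change["op"] in ("insert", "update", "replace"):
            session.merge(Customer(id=change["_id"], doc=change["doc"]))
        elif change["op"] == "delete":
            session.query(Customer).filter_by(id=change["_id"]).delete()
    session.commit()
    session.close()
```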

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!

Julien DeFrance
Principal Software Engineer at Tophatter · 16 upvotes · 1.1M views
at SmartZip
Rails, Rails API, AWS Elastic Beanstalk, Capistrano, Docker, Amazon S3, Amazon RDS, MySQL, Amazon RDS for Aurora, Amazon ElastiCache, Memcached, Amazon CloudFront, Segment, Zapier, Amazon Redshift, Amazon Quicksight, Superset, Elasticsearch, Amazon Elasticsearch Service, New Relic, AWS Lambda, Node.js, Ruby, Amazon DynamoDB, Algolia

Back in 2014, I was given an opportunity to re-architect SmartZip Analytics' platform and flagship product: SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially; we ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, here you need a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

- Caching: Amazon ElastiCache / Memcached
- CDN: Amazon CloudFront
- Systems integration: Segment / Zapier
- Data warehousing: Amazon Redshift
- BI: Amazon Quicksight / Superset
- Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
- Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into Serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.


Zappa

Deploy all Python WSGI applications on AWS Lambda + API Gateway.

    related Zappa posts

Jeyabalaji Subramanian
CTO at FundsCorner · 16 upvotes · 456K views
Amazon SQS, Python, Flask, MongoDB, Swagger UI, Vue.js, Vuetify, vuex, Zappa, AWS Lambda, Netlify, Google Compute Engine, Slack, GitLab CI, Sentry

At FundsCorner, we are on a mission to enable fast, accessible credit to India's Kirana stores. We are an early-stage startup with an ultra-small engineering team. All the tech decisions we have made until now are based on our core philosophy: "Build usable products fast".

Based on the above fundamentals, we chose Python as our base language for all our APIs and micro-services. It is ultra easy to start with, yet provides great libraries even for the most complex of use cases. Our entire backend stack runs on Python and we could not be happier with it! If you are looking to deploy your API as serverless, Python offers one of the shortest cold start times.

We build our APIs with Flask. For the backend database, our natural choice was MongoDB. It frees up our time from complex database specifications - we instead spend our time doing sensible data modelling, & once we finalize the data model, we integrate it into Flask using Swagger UI. Mongo supports complex queries to cull out difficult data through its aggregation framework, & we have even built an internal framework called "Poetry" for aggregation queries.
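As a hedged illustration of what such an endpoint looks like (the route, collection and fields below are hypothetical, not FundsCorner's actual schema), a minimal Flask + MongoDB handler might be:

```python
# Minimal, hypothetical sketch of a Flask endpoint backed by MongoDB.
# Collection and field names are placeholders, not the real data model.
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["fundscorner_demo"]


@app.route("/stores/<store_id>/credit-limit", methods=["GET"])
def credit_limit(store_id):
    # Fetch only the field we need from the (hypothetical) stores collection.
    store = db.stores.find_one({"_id": store_id}, {"credit_limit": 1})
    if store is None:
        return jsonify({"error": "store not found"}), 404
    return jsonify({"store_id": store_id, "credit_limit": store["credit_limit"]})


if __name__ == "__main__":
    app.run(debug=True)
```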

Our web apps are built on Vue.js, Vuetify and vuex. Initially we debated a lot between Vue.js and React, but finally settled on Vue.js, mainly because of the ease of use, fast development cycles & awesome set of libraries and utilities backing Vue.

You simply cannot go wrong with Vue.js. Great documentation, the library is ultra compact & blazing fast. Choosing Vue.js was one of the critical decisions made, which enabled us to launch our web app in under a month (which otherwise would have easily taken 3 months). For those folks who are looking for big names: Adobe, Alibaba and GitLab are using Vue.

By choosing Vuetify, we saved thousands of person-hours in designing the CSS files. Vuetify contains all the key Material Design components for designing a smooth user experience & it just works! It's an awesome framework. All of us at FundsCorner are now lifelong fanboys of Vue.js and Vuetify.

On the infrastructure side, all our API services and backend services are deployed as serverless micro-services through Zappa. Zappa makes your life super easy by packaging everything that is required to deploy your code as AWS Lambda. We are now addicted to the single-click deploys / updates through Zappa. Try it out & you will convert!
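For readers new to Zappa, deployment is driven by a zappa_settings.json file. The sketch below is a hypothetical example, not our actual settings: the module path, project name, region and S3 bucket are placeholders. With a file like this in place, `zappa deploy dev` packages the Flask app and publishes it as a Lambda function behind API Gateway, and `zappa update dev` pushes subsequent changes.

```json
{
  "dev": {
    "app_function": "app.app",
    "project_name": "credit-api",
    "runtime": "python3.8",
    "aws_region": "ap-south-1",
    "s3_bucket": "zappa-credit-api-dev"
  }
}
```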

Also, if you are using Zappa, you can greatly simplify your CI / CD pipelines. Do try it! It's just awesome! And... you will be astonished by the savings on your AWS bill at the end of the month.

Our CI / CD pipelines are built using GitLab CI. The documentation is very good & it enables you to go from concept to production in a minimal time frame.

We use Sentry for all crash reporting and resolution. Pro tip: they do have handlers for AWS Lambda, which made our integration super easy.
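As a hedged sketch of that wiring (the DSN and handler body are placeholders, not production code), the Sentry Python SDK's AWS Lambda integration is set up roughly like this:

```python
# Hedged sketch: enabling Sentry's AWS Lambda integration for a Python handler.
# The DSN below is a placeholder, not a real project key.
import sentry_sdk
from sentry_sdk.integrations.aws_lambda import AwsLambdaIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    integrations=[AwsLambdaIntegration()],
)


def handler(event, context):
    # Any unhandled exception raised here is reported to Sentry automatically.
    ...
```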

All our micro-services, including APIs, are event-driven. Our background micro-services are message oriented & we use Amazon SQS as our message pipe. We have our own in-house workflow manager to orchestrate across micro-services.

We host our static websites on Netlify. One of the cool things about Netlify is the automated CI / CD on git push. You just do a git push to deploy! Again, it is super simple to use and it just works. We were dogmatic about going serverless even on static websites & you can go serverless on Netlify in a few minutes. It's just a few clicks away.

    We use Google Compute Engine, especially Google Vision for our AI experiments.

    For Ops automation, we use Slack. Slack provides a super-rich API (through Slack App) through which you can weave magical automation on boring ops tasks.


Terraform

related Terraform posts

Google Cloud IoT Core, Terraform, Python, Google Cloud Deployment Manager, Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage, Google Compute Engine, GitHub

Context: I wanted to create an end-to-end IoT data pipeline simulation in Google Cloud IoT Core and other GCP services. I never touched Terraform meaningfully until working on this project, and it's one of the best explorations in my development career. The documentation and syntax are incredibly human-readable and friendly. I'm used to building infrastructure through the Google APIs via Python, but I'm so glad past Sung did not make that decision. I was tempted to use Google Cloud Deployment Manager, but the templates were a bit convoluted at first impression. I'm glad past Sung did not make this decision either.

Solution: Leveraging Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage, and Google Compute Engine along with some other fun tools, I can deploy over 40 GCP resources using Terraform!
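For a flavor of what those definitions look like, here is a minimal, hypothetical Terraform sketch (project ID, names and locations are placeholders; it is not the repo's actual configuration) provisioning two of the GCP resources mentioned above:

```hcl
# Hedged sketch, not the author's actual configuration: a minimal Terraform
# file that provisions a Cloud Storage bucket and a BigQuery dataset.
provider "google" {
  project = "my-iot-demo-project"   # placeholder project ID
  region  = "us-central1"
}

resource "google_storage_bucket" "raw_telemetry" {
  name     = "my-iot-demo-raw-telemetry"   # bucket names must be globally unique
  location = "US"
}

resource "google_bigquery_dataset" "telemetry" {
  dataset_id = "telemetry"
  location   = "US"
}
```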

    Check Out My Architecture: CLICK ME

    Check out the GitHub repo attached

Praveen Mooli
Engineering Manager at Taylor and Francis · 12 upvotes · 548.8K views
MongoDB Atlas, Java, Spring Boot, Node.js, ExpressJS, Python, Flask, Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda, Angular 2, RxJS, GitHub, Travis CI, Terraform, Docker, Serverless, Amazon RDS, Amazon DynamoDB, Amazon S3
#Backend #Microservices #Eventsourcingframework #Webapps #Devops #Data

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with a microservices architecture as we wanted scale. The microservice architecture style is an approach to developing an application as a suite of small, independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices' modularity facilitates independent updates/deployments and helps to avoid single points of failure, which can help prevent large-scale outages. We also decided to use the Event-Driven Architecture pattern, a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:

1. #Microservices - Java with Spring Boot, Node.js with ExpressJS and Python with Flask
2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas
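To make the event-sourcing piece in item 2 above concrete, here is a hedged, hypothetical sketch (not the platform's actual code) of a Python AWS Lambda handler consuming domain events from an Amazon Kinesis stream:

```python
# Hedged sketch: a Python AWS Lambda handler reading records from a Kinesis
# stream. Event names and routing logic are illustrative placeholders.
import base64
import json


def handler(event, context):
    for record in event["Records"]:                      # Kinesis event batch
        payload = base64.b64decode(record["kinesis"]["data"])
        domain_event = json.loads(payload)                # e.g. {"type": "ArticlePublished", ...}
        # Route the domain event to the appropriate processor / data store here.
        print(domain_event.get("type"), domain_event)
```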

To build #Webapps we decided to use Angular 2 with RxJS.

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless


    Cloud Functions for Firebase

    Run your mobile backend code without managing servers

    related Cloud Functions for Firebase posts

Aliadoc Team
at aliadoc.com · 5 upvotes · 308.5K views
React, Create React App, CloudFlare, Firebase, Cloud Functions for Firebase, Google App Engine, Google Cloud Storage, Serverless, Visual Studio Code, Bitbucket
#Aliadoc

    In #Aliadoc, we're exploring the crowdfunding option to get traction before launch. We are building a SaaS platform for website design customization.

For the Admin UI and website editor we use React, and we're currently transitioning from a Create React App setup to a custom one because our needs have become more specific. We use CloudFlare as much as possible; it's a great service.

For routing dynamic resources and proxy tasks to feed websites to the editor, we leverage CloudFlare Workers for improved responsiveness. We use Firebase for our hosting needs and user authentication, along with several Cloud Functions for Firebase to interact with other services, plus Google App Engine and Google Cloud Storage; the Realtime Database is also on the radar for collaborative website editing.

We generally hate configuration, but honestly, because of the stage of our project we lack the resources for doing heavy sysops work. So we are basically just relying on Serverless technologies as much as we can to do all server-side processing.

Visual Studio Code definitely makes programming a much easier and more enjoyable task; we just love it. We combine it with Bitbucket for our source code control needs.


    Google Cloud Functions

    A serverless environment to build and connect cloud services

    related Google Cloud Functions posts

Kestas Barzdaitis
Entrepreneur & Engineer · 14 upvotes · 177.9K views
at CodeFactor
Kubernetes, CodeFactor.io, Amazon EC2, Microsoft Azure, Google Compute Engine, Docker, AWS Lambda, Azure Functions, Google Cloud Functions
#SAAS #IAAS #Containerization #Autoscale #Startup #Automation #Machinelearning #AI #Devops

    CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

    CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2 , Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the age of AWS, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

    In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

    Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 5 upvotes · 55.7K views
at ChecklyHQ
AWS Lambda, Serverless, Amazon CloudWatch, Azure Functions, Google Cloud Functions, Node.js

    In the last year or so, I moved all Checkly monitoring workloads to AWS Lambda. Here are some stats:

• We run three core functions in all AWS regions. They handle API checks, browser checks and setup / teardown scripts. Check our docs to find out what that means.
• All functions are hooked up to SNS topics but can also be triggered directly through AWS SDK calls (see the sketch after this list).
• The busiest function is a plumbing function that forwards data to our database. It is invoked anywhere between 7,000 and 10,000 times per hour with an average duration of about 179 ms.
• We run separate dev and test versions of each function in each region.
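Here is the direct-invocation sketch referenced in the list above. It is a hedged illustration using Python/boto3 rather than the Node.js SDK Checkly actually runs on, and the function name, region and payload are placeholders:

```python
# Hedged sketch: invoking a Lambda function directly through an AWS SDK call.
# Function name, region and payload are hypothetical placeholders.
import json

import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")

response = lambda_client.invoke(
    FunctionName="checkly-api-checker-dev",   # hypothetical function name
    InvocationType="RequestResponse",         # synchronous invocation
    Payload=json.dumps({"checkId": "abc123"}).encode(),
)

print(json.loads(response["Payload"].read()))
```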

    Moving all this to AWS Lambda took some work and considerations. The blog post linked below goes into the following topics:

    • Why Lambda is an almost perfect match for SaaS. Especially when you're small.
    • Why I don't use a "big" framework around it.
    • Why distributed background jobs triggered by queues are Lambda's raison d'être.
    • Why monitoring & logging is still an issue.

    https://blog.checklyhq.com/how-i-made-aws-lambda-work-for-my-saas/


    related Azure Functions posts

Michal Nowak
Co-founder at Evojam · 7 upvotes · 159.5K views
Serverless, AWS Lambda, Firebase, Azure Functions

In a couple of recent projects we had an opportunity to try out the new Serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we could consider serverless a viable option for building apps. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.

We did consider AWS Lambda, Firebase from Google, as well as Azure Functions. Eventually we went with AWS Lambda.

    PROS
    • No servers to manage (obviously!)
    • Limited fixed costs – you pay only for used time
    • Automated scaling and balancing
    • Automatic failover (or, at this level of abstraction, no failover problem at all)
    • Security easier to provide and audit
• Low overhead at the start (with a certain level of knowledge)
    • Short time to market
    • Easy handover - deployment coupled with code
    • Perfect choice for lean startups with fast-paced iterations
    • Augmentation for the classic cloud, server(full) approach
    CONS
    • Not much know-how and best practices available about structuring the code and projects on the market
    • Not suitable for complex business logic due to the risk of producing highly coupled code
    • Cost difficult to estimate (helpful tools: serverlesscalc.com)
    • Difficulty in migration to other platforms (Vendor lock⚠️)
• Few engineers with serverless experience on the job market
    • Steep learning curve for engineers without any cloud experience

More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions I hope it helps 🙌 & I'm curious about your experiences.


    Apex

    Serverless Architecture with AWS Lambda

Google Cloud Run

related Google Cloud Run posts

Google Cloud Run, Google Cloud Functions

I use Google Cloud Run because it's like bringing your own Docker image to Google Cloud Functions.

      I use it for building Dash Apps

It creates a nice URL for web apps, and I see it being the evolution of serverless if GCP can scale this up.
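As a hedged sketch of what such a Dash app looks like when packaged for Cloud Run (the layout and names are placeholders, not the linked example's actual code), the key detail is binding to the port Cloud Run provides via the PORT environment variable:

```python
# Hedged sketch: a minimal Dash app suitable for a Cloud Run container.
# Cloud Run expects the server to listen on the port in the PORT env var;
# the app name and layout here are illustrative placeholders.
import os

from dash import Dash, html

app = Dash(__name__)
app.layout = html.Div([html.H1("Hello from Cloud Run"), html.P("A minimal Dash app.")])

# In the container, a WSGI server such as gunicorn would serve app.server;
# running the built-in server directly also works for a quick test.
if __name__ == "__main__":
    app.run_server(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```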

      My Real-Time Python App Example
