AWS Lambda logo
Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or updates in DynamoDB

What is AWS Lambda?

AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.
AWS Lambda is a tool in the Serverless / Task Processing category of a tech stack.
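In practice, a Lambda function is just a handler that receives an event payload and a context object. A minimal sketch in Python, assuming an S3 object-created trigger (the print-based handling is illustrative only):

```python
# Minimal AWS Lambda handler: invoked with an event describing
# one or more newly created S3 objects (S3 event notification shape).
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")  # custom logic goes here
    return {"status": "ok"}
```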

Who uses AWS Lambda?

Companies
1002 companies use AWS Lambda in their tech stacks, including 9GAG, Asana, and CircleCI.

Developers
585 developers use AWS Lambda.

AWS Lambda Integrations

Amazon API Gateway, Serverless, Buddy, Prisma Cloud, and OpsGenie are some of the popular tools that integrate with AWS Lambda. Here's a list of all 42 tools that integrate with AWS Lambda.

Why do developers like AWS Lambda?

Here's a list of reasons why companies and developers use AWS Lambda.

AWS Lambda Reviews

Here are some stack decisions, common use cases and reviews by companies and developers who chose AWS Lambda in their tech stack.

Jeyabalaji Subramanian
CTO at FundsCorner · 23 upvotes · 59.5K views
at FundsCorner
Zappa
AWS Lambda
SQLAlchemy
Python
Amazon SQS
Node.js
MongoDB Stitch
PostgreSQL
MongoDB

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it must NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
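Their forwarding function runs in Node.js on Stitch; purely as an illustration, here is the same idea sketched in Python with boto3 (the queue URL and message fields are hypothetical):

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/mongo-changes"  # hypothetical

def forward_change(change_event):
    """Forward one MongoDB change event (insert/update/delete/replace) to SQS."""
    message = {
        "operation": change_event["operationType"],
        "document_key": change_event["documentKey"],
        "full_document": change_event.get("fullDocument"),
    }
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message, default=str))
```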

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as AWS Lambda with Zappa. With Zappa, deploying your services as event-driven, horizontally scalable Lambda services is dead easy.
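A minimal sketch of what that consumer might look like as an SQS-triggered Lambda, assuming the message shape sketched above and an invented target table (real table structures would be modelled per entity):

```python
import json
import os

import sqlalchemy as sa

# Hypothetical target table in the Postgres warehouse; the DSN comes from the environment.
engine = sa.create_engine(os.environ["WAREHOUSE_DSN"])
metadata = sa.MetaData()
customers = sa.Table(
    "customers", metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("payload", sa.JSON),
)

def lambda_handler(event, context):
    """Mirror each MongoDB change from the SQS batch onto Postgres."""
    with engine.begin() as conn:
        for record in event["Records"]:
            change = json.loads(record["body"])
            doc_id = str(change["document_key"]["_id"])
            # Delete-then-insert keeps the upsert sketch simple and idempotent.
            conn.execute(customers.delete().where(customers.c.id == doc_id))
            if change["operation"] in ("insert", "update", "replace"):
                conn.execute(customers.insert().values(id=doc_id, payload=change["full_document"]))
    return {"processed": len(event["Records"])}
```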

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and we deployed it to production in a matter of days!

Julien DeFrance
Full Stack Engineering Manager at ValiMail · 16 upvotes · 69.2K views
at SmartZip
Amazon DynamoDB
Ruby
Node.js
AWS Lambda
New Relic
Amazon Elasticsearch Service
Elasticsearch
Superset
Amazon Quicksight
Amazon Redshift
Zapier
Segment
Amazon CloudFront
Memcached
Amazon ElastiCache
Amazon RDS for Aurora
MySQL
Amazon RDS
Amazon S3
Docker
Capistrano
AWS Elastic Beanstalk
Rails API
Rails
Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel and build around managed/scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker, so our application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local and network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems Integration: Segment / Zapier
  • Data-warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
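As a sketch of that last step, a fully serverless endpoint might pair a Lambda handler with DynamoDB through boto3; the table and attributes below are hypothetical, not SmartZip's actual schema:

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("prospects")  # hypothetical table name

def lambda_handler(event, context):
    """Handle an API Gateway proxy request and persist state to DynamoDB."""
    body = json.loads(event["body"])
    table.put_item(Item={"id": body["id"], "score": body["score"]})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```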

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 16 upvotes · 43.6K views
at ChecklyHQ
vuex
Knex.js
PostgreSQL
Amazon S3
AWS Lambda
Vue.js
hapi
Node.js
GitHub
Docker
Heroku

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too, so I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, and AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out?

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update the API and back-end services to handle and enforce plan limits.
  • Update the UI to kindly state when plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume-based pricing to value-based pricing. Here are the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature-toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of any of this and remained unchanged.
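Checkly's API layer is hapi on Node.js, but the toggle check itself is tiny in any language; a hedged sketch of the pattern in Python (plan and feature names are invented):

```python
# Each plan row in Postgres carries an array of feature-toggle string constants.
PLAN_FEATURES = {
    "developer": {"api_checks", "sms_alerts"},            # hypothetical names
    "business": {"api_checks", "sms_alerts", "teams"},
}

def has_feature(plan: str, feature: str) -> bool:
    return feature in PLAN_FEATURES.get(plan, set())

def require_feature(plan: str, feature: str) -> None:
    """Hook run on each relevant endpoint before the request is handled."""
    if not has_feature(plan, feature):
        raise PermissionError(f"Plan '{plan}' does not include '{feature}'; please upgrade.")
```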

Hope this helps anyone building out their SaaS who is in a similar situation.

Tim Specht
Co-Founder and CTO at Dubsmash · 14 upvotes · 7K views
at Dubsmash
Google BigQuery
Amazon SQS
AWS Lambda
Amazon Kinesis
Google Analytics
#BigDataAsAService
#RealTimeDataProcessing
#GeneralAnalytics
#ServerlessTaskProcessing

In order to accurately measure & track user behaviour on our platform, we quickly moved from our initial solution built on Google Analytics to a custom-built one, due to the resource & pricing concerns we had.

While this does sound complicated, it's as easy as clients sending JSON blobs of events to Amazon Kinesis, from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it's available), we can use almost-standard SQL to query the data, while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients aggregate events internally and, once a certain threshold is reached or the app goes to the background, send the events as a JSON blob into the stream.

In the past we had workers that continuously read from the stream, validated and post-processed the data, and then enqueued it for other workers to write to BigQuery. We implemented the Lambda-based approach in such a way that Lambda functions are automatically triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them and persist the events to BigQuery. While this approach had a couple of bumps in the road, like re-triggering functions asynchronously to keep up with the stream and finding the proper batch sizes, we finally managed to get it running reliably and are very happy with this solution today.
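A minimal sketch of that Lambda stage, assuming a Kinesis trigger and an invented queue URL; real pre-aggregation logic would go where the comment indicates:

```python
import base64
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/event-batches"  # hypothetical

def lambda_handler(event, context):
    """Kinesis-triggered: decode the records, pre-aggregate, hand off to SQS."""
    events = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])  # Kinesis data is base64-encoded
        events.extend(json.loads(payload))  # clients send JSON blobs of events
    # Pre-aggregation would happen here; the sketch just forwards one batch.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(events))
```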

#ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService

Tim Specht
Co-Founder and CTO at Dubsmash · 14 upvotes · 2.1K views
at Dubsmash
Amazon SNS
AWS Lambda
#ApplicationHosting

Whenever we need to notify a user of something happening on our platform, whether it's a personal push notification from one user to another, a new Dub, or a notification going out to millions of users at the same time that new content is available, we rely on AWS Lambda to do this task for us. When we started implementing this feature 2 years ago, we were lucky enough to get early access to the Lambda beta, and we are still happy with the way things are running on there, especially given all the easy-to-set-up integrations with other AWS services.

Lambda enables us to quickly send out millions of pushes within a couple of minutes by acting as a multiplexer in front of Amazon SNS. We simply call a first Lambda function with a batch of up to 300 push notifications to be sent, which then calls a subsequent Lambda function with 20 pushes each, which in turn makes the call to SNS to actually send out the push notifications.
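A hedged sketch of that two-tier fan-out in Python (function names, ARNs, and payload fields are hypothetical; the 300/20 batch sizes follow the description above):

```python
import json

import boto3

lambda_client = boto3.client("lambda")
sns = boto3.client("sns")

def multiplexer_handler(event, context):
    """Tier 1: receives up to 300 pushes and fans them out in chunks of 20."""
    pushes = event["pushes"]
    for i in range(0, len(pushes), 20):
        lambda_client.invoke(
            FunctionName="push-sender",     # hypothetical tier-2 function
            InvocationType="Event",         # asynchronous, so fan-out returns quickly
            Payload=json.dumps({"pushes": pushes[i:i + 20]}),
        )

def sender_handler(event, context):
    """Tier 2: publishes each push notification via SNS."""
    for push in event["pushes"]:
        sns.publish(TargetArn=push["endpoint_arn"], Message=push["message"])
```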

This multi-tier process of sending push notifications enables us to quickly adjust our sending volume while keeping costs & maintenance overhead on our side to a bare minimum.

#ApplicationHosting

Jeyabalaji Subramanian
CTO at FundsCorner · 12 upvotes · 47K views
at FundsCorner
Amazon SQS
Sentry
GitLab CI
Slack
Google Compute Engine
Netlify
AWS Lambda
Zappa
vuex
Vuetify
Vue.js
Swagger UI
MongoDB
Flask
Python

At FundsCorner, we are on a mission to enable fast, accessible credit to India's Kirana stores. We are an early-stage startup with an ultra-small engineering team. All the tech decisions we have made until now are based on our core philosophy: "Build usable products fast".

Based on the above fundamentals, we chose Python as the base language for all our APIs and micro-services. It is ultra easy to start with, yet provides great libraries even for the most complex of use cases. Our entire backend stack runs on Python and we could not be happier with it! If you are looking to deploy your API as serverless, Python offers some of the lowest cold-start times.

We build our APIs with Flask. For the backend database, our natural choice was MongoDB. It frees up our time from complex database specifications; we instead spend that time on sensible data modelling, and once we finalize the data model, we integrate it into Flask using Swagger UI. Mongo supports complex queries to cull out difficult data through its aggregation framework, and we have even built an internal framework for aggregation queries called "Poetry".
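As a flavour of that, a minimal Flask endpoint running a Mongo aggregation might look like this sketch (PyMongo assumed; the connection string, collection, and field names are invented):

```python
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["fundscorner"]  # hypothetical connection/db

@app.route("/stores/outstanding")
def outstanding_by_store():
    # Aggregation pipeline: total outstanding credit per store.
    pipeline = [
        {"$match": {"status": "active"}},
        {"$group": {"_id": "$store_id", "outstanding": {"$sum": "$amount"}}},
    ]
    return jsonify(list(db.loans.aggregate(pipeline)))
```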

Our web apps are built on Vue.js, Vuetify and vuex. Initially we debated a lot between Vue.js and React, but finally settled on Vue.js, mainly because of the ease of use, fast development cycles & the awesome set of libraries and utilities backing Vue.

You simply cannot go wrong with Vue.js: great documentation, and the library is ultra compact & blazing fast. Choosing Vue.js was one of the critical decisions we made; it enabled us to launch our web app in under a month (which would otherwise have easily taken 3 months). For those folks who are looking for big names: Adobe, Alibaba and GitLab are using Vue.

By choosing Vuetify, we saved thousands of person-hours in designing CSS files. Vuetify contains all the key Material components for designing a smooth user experience & it just works! It's an awesome framework. All of us at FundsCorner are now lifelong fanboys of Vue.js and Vuetify.

On the infrastructure side, all our API services and backend services are deployed as serverless micro-services through Zappa. Zappa makes your life super easy by packaging everything required to deploy your code as AWS Lambda. We are now addicted to the single-click deploys/updates through Zappa. Try it out & you will convert!
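For a flavour of how little Zappa asks of you, a minimal zappa_settings.json looks roughly like this sketch (the bucket, project, and module names are hypothetical); after that, `zappa deploy production` does the first deploy and `zappa update production` every subsequent one:

```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "ap-south-1",
        "project_name": "fundscorner-api",
        "runtime": "python3.6",
        "s3_bucket": "zappa-deploys-example"
    }
}
```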

Also, if you are using Zappa, you can greatly simplify your CI/CD pipelines. Do try it! It's just awesome! And you will be astonished by the savings on your AWS bill at the end of the month.

Our CI/CD pipelines are built using GitLab CI. The documentation is very good & it enables you to go from concept to production in a minimal time frame.

We use Sentry for all crash reporting and resolution. Pro tip: they have handlers for AWS Lambda, which made our integration super easy.

All our micro-services, including the APIs, are event-driven. Our background micro-services are message-oriented & we use Amazon SQS as our message pipe. We have our own in-house workflow manager to orchestrate across micro-services.

We host our static websites on Netlify. One of the cool things about Netlify is the automated CI/CD on git push: you just do a git push to deploy! Again, it is super simple to use and it just works. We were dogmatic about going serverless even on static websites, & you can go serverless on Netlify in a few minutes. It's just a few clicks away.

We use Google Compute Engine, and especially the Google Vision API, for our AI experiments.

For ops automation, we use Slack. Slack provides a super-rich API (through Slack apps) through which you can weave magical automation over boring ops tasks.


AWS Lambda's features

  • Extend other AWS services with custom logic
  • Build custom back-end services
  • Completely Automated Administration
  • Built-in Fault Tolerance
  • Automatic Scaling
  • Integrated Security Model
  • Bring Your Own Code
  • Pay Per Use
  • Flexible Resource Model

AWS Lambda Alternatives & Comparisons

What are some alternatives to AWS Lambda?
Serverless
Build applications comprised of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google CloudFunctions, and more.
AWS Elastic Beanstalk
Once you upload your application, Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
Azure Functions
Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd party service as well as on-premises systems.
AWS Step Functions
AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly.
Google App Engine
Google has a reputation for highly reliable, high performance infrastructure. With App Engine you can take advantage of the 10 years of knowledge Google has in running massively scalable, performance driven systems. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow.
See all alternatives

AWS Lambda's Stats

- No public GitHub repository available -