
AWS Lambda

Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or updates in DynamoDB

What is AWS Lambda?

AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.
AWS Lambda is a tool in the Serverless / Task Processing category of a tech stack.
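
For a feel of the programming model, here is a minimal, hypothetical Python handler of the kind Lambda invokes in response to an event (the payload shape depends on the triggering service):

```python
import json

def handler(event, context):
    # 'event' carries the triggering payload (e.g. an S3 object notification,
    # a Kinesis record batch, or a DynamoDB stream update); 'context' exposes
    # runtime metadata such as the remaining execution time.
    print(json.dumps(event))  # goes to CloudWatch Logs
    return {"status": "ok"}
```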

Who uses AWS Lambda?

Companies
1450 companies reportedly use AWS Lambda in their tech stacks, including 9GAG, Asana, and CircleCI.

Developers
3449 developers on StackShare have stated that they use AWS Lambda.

AWS Lambda Integrations

Transloadit, Amazon API Gateway, Buddy, AWS Mobile Hub, and Cloudcraft are some of the popular tools that integrate with AWS Lambda. Here's a list of all 51 tools that integrate with AWS Lambda.

Why do developers like AWS Lambda?

Here's a list of reasons why companies and developers use AWS Lambda.

AWS Lambda Reviews

Here are some stack decisions, common use cases and reviews by companies and developers who chose AWS Lambda in their tech stack.

Jeyabalaji Subramanian
CTO at FundsCorner | 24 upvotes · 346.2K views
MongoDB · PostgreSQL · MongoDB Stitch · Node.js · Amazon SQS · Python · SQLAlchemy · AWS Lambda · Zappa

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload and mirror the DB changes onto the target data warehouse. We implemented the source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda function with Zappa. With Zappa, deploying your services as event-driven, horizontally scalable Lambda functions is dead simple.
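
As an illustration, here is a minimal, hypothetical sketch of such a replication handler in Python: an SQS-triggered Lambda that reads each change event and mirrors it onto a warehouse table through SQLAlchemy Core. The DSN, table, and event fields are assumptions for the sketch, not FundsCorner's actual code:

```python
import json
from sqlalchemy import create_engine, MetaData, Table

# Assumed warehouse DSN and target table; in practice these come from config.
engine = create_engine("postgresql://user:pass@warehouse-host/analytics")
metadata = MetaData()
customers = Table("customers", metadata, autoload_with=engine)

def handler(event, context):
    # With an SQS trigger, Lambda delivers messages in event["Records"];
    # successfully processed batches are removed from the queue automatically.
    for record in event["Records"]:
        change = json.loads(record["body"])  # assumed shape: {"op": ..., "doc": {...}}
        doc = change["doc"]
        with engine.begin() as conn:  # one transaction per change event
            if change["op"] == "insert":
                conn.execute(customers.insert().values(**doc))
            elif change["op"] in ("update", "replace"):
                conn.execute(
                    customers.update()
                    .where(customers.c.id == doc["id"])
                    .values(**doc)
                )
            elif change["op"] == "delete":
                conn.execute(customers.delete().where(customers.c.id == doc["id"]))
```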

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and deployed it to production in a matter of days!

Tim Nolet
Founder, Engineer & Dishwasher at Checkly | 19 upvotes · 267.8K views
Heroku · Docker · GitHub · Node.js · hapi · Vue.js · AWS Lambda · Amazon S3 · PostgreSQL · Knex.js · vuex

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out?

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call these the "plan limits".
  • Update the API and back-end services to handle and enforce plan limits.
  • Update the UI to kindly state when plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled or not (see the sketch below).
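
As a rough illustration, here is a minimal, hypothetical sketch of such a plan-feature gate in Python (the original hook lives in the hapi/Node.js API; the feature names here are made up):

```python
# Plans map to sets of feature-toggle string constants, mirroring the
# "features" array stored on each plan in PostgreSQL.
PLAN_FEATURES = {
    "developer": {"api_checks", "sms_alerts"},                 # assumed names
    "business":  {"api_checks", "sms_alerts", "team_members"},
}

class PlanLimitError(Exception):
    """Raised when the current plan does not include a feature."""

def require_feature(plan: str, feature: str) -> None:
    # Called by each relevant endpoint before doing any work; the API layer
    # would translate this exception into a friendly "please upgrade" response.
    if feature not in PLAN_FEATURES.get(plan, set()):
        raise PlanLimitError(f"plan '{plan}' does not include '{feature}'")
```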

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many SMS messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend, which actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres), has no knowledge of any of this and remained unchanged.

Hope this helps anyone who is building out their SaaS and finds themselves in a similar situation.

Julien DeFrance
Principal Software Engineer at Tophatter | 16 upvotes · 485.9K views
at SmartZip
Rails · Rails API · AWS Elastic Beanstalk · Capistrano · Docker · Amazon S3 · Amazon RDS · MySQL · Amazon RDS for Aurora · Amazon ElastiCache · Memcached · Amazon CloudFront · Segment · Zapier · Amazon Redshift · Amazon Quicksight · Superset · Elasticsearch · Amazon Elasticsearch Service · New Relic · AWS Lambda · Node.js · Ruby · Amazon DynamoDB · Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's the most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

For the SaaS product, we kept working with Rails as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so our application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grows, patterns changed, and/or our business needs evolved, my role as Engineering Manager then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.

Tim Specht
Co-Founder and CTO at Dubsmash | 14 upvotes · 86.9K views
Google Analytics · Amazon Kinesis · AWS Lambda · Amazon SQS · Google BigQuery

In order to accurately measure & track user behaviour on our platform, we quickly moved from the initial solution using Google Analytics to a custom-built one, due to resource & pricing concerns we had.

While this does sound complicated, it's as easy as clients sending JSON blobs of events to Amazon Kinesis, from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it's available), we can use almost-standard SQL to simply query for data, while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients aggregate events internally and, once a certain threshold is reached or the app goes to the background, send the events as a JSON blob into the stream.

In the past we had workers running that continuously read from the stream, validated and post-processed the data, and then enqueued it for other workers to write to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions are automatically triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them and persist the events to BigQuery. While this approach had a couple of bumps in the road, like re-triggering functions asynchronously to keep up with the stream and finding the proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.
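
To make the flow concrete, here is a minimal, hypothetical Python sketch of the Kinesis-triggered pre-aggregation step; the queue URL and event fields are illustrative assumptions, not Dubsmash's actual code:

```python
import base64
import json
from collections import defaultdict

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/event-batches"  # assumed

def handler(event, context):
    # Kinesis triggers deliver base64-encoded payloads inside each record.
    counts = defaultdict(int)
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for evt in payload["events"]:  # clients send batched JSON blobs
            counts[evt["name"]] += 1
    # Hand the pre-aggregated events to SQS for the BigQuery writer to persist.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(counts))
```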

#ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService

Kestas Barzdaitis
Entrepreneur & Engineer | 14 upvotes · 80.9K views
at CodeFactor
Kubernetes · CodeFactor.io · Amazon EC2 · Microsoft Azure · Google Compute Engine · Docker · AWS Lambda · Azure Functions · Google Cloud Functions
#SAAS #IAAS #Containerization #Autoscale #Startup #Automation #Machinelearning #AI #Devops

CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon EC2, Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half AWS's age, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, where AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, the Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

The bottom line is that IAAS offerings from AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform-agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

Tim Specht
Co-Founder and CTO at Dubsmash | 14 upvotes · 12.6K views
AWS Lambda · Amazon SNS

Whenever we need to notify a user of something happening on our platform, whether it's a personal push notification from one user to another, a new Dub, or a notification going out to millions of users at the same time that new content is available, we rely on AWS Lambda to do this task for us. When we started implementing this feature two years ago, we were lucky to get early access to the Lambda beta, and we are still happy with the way things are running on there, especially given all the easy-to-set-up integrations with other AWS services.

Lambda enables us to quickly send out millions of pushes within a couple of minutes by acting as a multiplexer in front of Amazon SNS. We simply call a first Lambda function with a batch of up to 300 push notifications to be sent; it then calls a subsequent Lambda function with 20 pushes each, which in turn calls SNS to actually send out the push notifications.
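
A minimal, hypothetical Python sketch of this two-tier fan-out might look as follows; the function name and payload fields are assumptions, not Dubsmash's actual code:

```python
import json
import boto3

lambda_client = boto3.client("lambda")
sns = boto3.client("sns")

def multiplex_handler(event, context):
    # Tier 1: receives a batch of up to ~300 pushes and fans it out in
    # chunks of 20 via asynchronous ("Event") Lambda invocations.
    pushes = event["pushes"]
    for i in range(0, len(pushes), 20):
        lambda_client.invoke(
            FunctionName="send-push-batch",  # assumed name of the tier-2 function
            InvocationType="Event",          # fire-and-forget, don't wait for the result
            Payload=json.dumps({"pushes": pushes[i:i + 20]}),
        )

def send_batch_handler(event, context):
    # Tier 2: publishes each push to its SNS platform endpoint.
    for push in event["pushes"]:
        sns.publish(TargetArn=push["endpoint_arn"], Message=push["message"])
```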

This multi-tier process of sending push notifications enables us to quickly adjust our sending volume while keeping costs & maintenance overhead on our side to a bare minimum.

#ApplicationHosting


AWS Lambda's Features

  • Extend other AWS services with custom logic
  • Build custom back-end services
  • Completely Automated Administration
  • Built-in Fault Tolerance
  • Automatic Scaling
  • Integrated Security Model
  • Bring Your Own Code
  • Pay Per Use
  • Flexible Resource Model

AWS Lambda Alternatives & Comparisons

What are some alternatives to AWS Lambda?
Serverless
Build applications comprised of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google CloudFunctions, and more.
Azure Functions
Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd party service as well as on-premises systems.
AWS Elastic Beanstalk
Once you upload your application, Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
AWS Step Functions
AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly.
Google App Engine
Google has a reputation for highly reliable, high performance infrastructure. With App Engine you can take advantage of the 10 years of knowledge Google has in running massively scalable, performance driven systems. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow.
See all alternatives

AWS Lambda's Followers
3532 developers follow AWS Lambda to keep up with related blogs and decisions.