
Alternatives to Kubeless

Knative, OpenFaaS, AWS Lambda, Fission, and Nuclio are the most popular alternatives and competitors to Kubeless.

What is Kubeless and what are its top alternatives?

Kubeless is a Kubernetes-native serverless framework. It supports both HTTP and event-based function triggers, and it offers a Serverless Framework plugin, a graphical user interface, and multiple runtimes, including Python and Node.js.
Kubeless is a tool in the Serverless / Task Processing category of a tech stack.
Kubeless is an open-source tool; its repository is hosted on GitHub.
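For a sense of what a Kubeless function looks like, here is a minimal Python sketch; the function name, file name, and runtime version are illustrative assumptions, not taken from the page above.

```python
# hello.py -- minimal illustrative Kubeless function in Python.
# Kubeless calls the handler with an event dict and a context object;
# event['data'] carries the HTTP body or the triggering message.

def handler(event, context):
    payload = event.get("data")
    return f"Hello from Kubeless, received: {payload}"

# Assuming the kubeless CLI is installed and pointed at a cluster,
# the function could be deployed with something like:
#   kubeless function deploy hello --runtime python3.7 \
#       --from-file hello.py --handler hello.handler
```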

Top Alternatives to Kubeless

  • Knative

    Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center ...

  • OpenFaaS

    Serverless Functions Made Simple for Docker and Kubernetes

  • AWS Lambda

    AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security. ...

  • Fission

    Write short-lived functions in any language, and map them to HTTP requests (or other event triggers). Deploy functions instantly with one command. There are no containers to build, and no Docker registries to manage. ...

  • Nuclio

    Nuclio is portable across IoT devices, laptops, on-premises datacenters and cloud deployments, eliminating cloud lock-in and enabling hybrid solutions. ...

  • Kubernetes

    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. ...

  • Serverless

    Build applications comprised of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google Cloud Functions, and more. ...

  • Azure Functions

    Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd-party service as well as on-premises systems. ...

Kubeless alternatives & related posts


Knative

Kubernetes-based platform for serverless workloads
PROS OF KNATIVE
  • Portability (5)
  • Autoscaling (4)
  • Open source (3)
  • Eventing (3)
  • Secure Eventing (3)
  • On top of Kubernetes (3)
CONS OF KNATIVE
  • No cons listed yet

    related Knative posts

    We are currently using an older version of OpenFaaS, but the new version now requires payment for things we did on the older version, so we have been looking for alternatives to OpenFaaS that have Kafka integration and scale-to-zero capability.

    We looked at Apache OpenWhisk, but we run on RKE2, and my initial install of OpenWhisk appears to be too out of date to support RKE2 and is missing images from docker.io. So now we are looking at Knative. What are your thoughts? We need to be able to process about 10k function invocations a minute, with execution times that vary from milliseconds to minutes, so we are looking for horizontal scaling that can be controlled by metrics other than just CPU and RAM utilization: for example, if the wait is over 5, scale out. The issue with the older OpenFaaS was that scaling on RKE2 was not working well. I could get it to scale from 5 to 20 pods, but only 12 of them would ever have data, while my backlog had hundreds of thousands of files waiting. So even though it scaled up, it was as if the distribution of work was married to specific pods. If I killed the pods that had no work, they came back up with no work; if I killed one with work, another pod would scale up and start to get work. And on occasion, within hours, it would reset down to the original deployment allotment of pods and never scale up again until I went into Kubernetes and told it to add more pods.

    So we are hoping to find a solution that doesn't require as much triage to keep scaling working, since at some points in time we are at high volume and at other points there may be no volume at all.
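One general pattern behind the backlog-driven scaling asked about above is a small controller that watches queue depth and resizes the worker deployment. The sketch below uses the official Kubernetes Python client; it is a naive illustration only, not how Knative or OpenFaaS autoscale internally, and the deployment name, namespace, and thresholds are made up.

```python
# backlog_scaler.py -- naive, illustrative backlog-driven scaler. This is NOT the
# autoscaler of Knative or OpenFaaS; it only sketches scaling on a queue metric
# instead of CPU/RAM. Names and thresholds are hypothetical.
import time

from kubernetes import client, config

DEPLOYMENT = "worker"          # hypothetical worker deployment
NAMESPACE = "functions"        # hypothetical namespace
MIN_REPLICAS, MAX_REPLICAS = 0, 20   # 0 allows scale-to-zero when idle
ITEMS_PER_POD = 500            # assumed target backlog per pod

def pending_items() -> int:
    """Return the current backlog depth (stub: query Kafka consumer lag, SQS, etc.)."""
    return 0

def desired_replicas(backlog: int) -> int:
    wanted = -(-backlog // ITEMS_PER_POD)  # ceiling division
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

def main():
    config.load_incluster_config()   # or config.load_kube_config() outside the cluster
    apps = client.AppsV1Api()
    while True:
        replicas = desired_replicas(pending_items())
        apps.patch_namespaced_deployment_scale(
            DEPLOYMENT, NAMESPACE, body={"spec": {"replicas": replicas}}
        )
        time.sleep(30)

if __name__ == "__main__":
    main()
```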


    OpenFaaS

    Serverless Functions Made Simple for Kubernetes and Docker
    PROS OF OPENFAAS
      • Open source (5)
      • Ease (4)
      • Autoscaling (3)
      • Community (2)
      • Documentation (2)
      • Async (1)
    CONS OF OPENFAAS
      • No cons listed yet

      related OpenFaaS posts

      We are currently using an older version of OpenFaaS, but the new version now requires payment for things we did on the older version, so we have been looking for alternatives to OpenFaaS that have Kafka integration and scale-to-zero capability.

      We looked at Apache OpenWhisk, but we run on RKE2, and my initial install of OpenWhisk appears to be too out of date to support RKE2 and is missing images from docker.io. So now we are looking at Knative. What are your thoughts? We need to be able to process about 10k function invocations a minute, with execution times that vary from milliseconds to minutes, so we are looking for horizontal scaling that can be controlled by metrics other than just CPU and RAM utilization: for example, if the wait is over 5, scale out. The issue with the older OpenFaaS was that scaling on RKE2 was not working well. I could get it to scale from 5 to 20 pods, but only 12 of them would ever have data, while my backlog had hundreds of thousands of files waiting. So even though it scaled up, it was as if the distribution of work was married to specific pods. If I killed the pods that had no work, they came back up with no work; if I killed one with work, another pod would scale up and start to get work. And on occasion, within hours, it would reset down to the original deployment allotment of pods and never scale up again until I went into Kubernetes and told it to add more pods.

      So we are hoping to find a solution that doesn't require as much triage to keep scaling working, since at some points in time we are at high volume and at other points there may be no volume at all.


      AWS Lambda

      Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or...
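To make that trigger model concrete, here is a minimal sketch of a Python Lambda handler reacting to S3 object events; the processing is a made-up placeholder, not code from any post on this page.

```python
# s3_handler.py -- minimal illustrative AWS Lambda handler for S3 events
# (the processing below is a hypothetical placeholder).
import urllib.parse

def handler(event, context):
    """Invoked by Lambda with a batch of S3 event records."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"].get("size", 0)
        # Placeholder for real work: resize an image, index a document, etc.
        print(f"New object s3://{bucket}/{key} ({size} bytes)")
    return {"processed": len(event.get("Records", []))}
```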
      PROS OF AWS LAMBDA
        • No infrastructure (129)
        • Cheap (83)
        • Quick (70)
        • Stateless (59)
        • No deploy, no server, great sleep (47)
        • AWS Lambda went down taking many sites with it (12)
        • Event Driven Governance (6)
        • Extensive API (6)
        • Auto scale and cost effective (6)
        • Easy to deploy (6)
        • VPC Support (5)
        • Integrated with various AWS services (3)
      CONS OF AWS LAMBDA
        • Can't execute Ruby or Go (7)
        • Compute time limited (3)
        • Can't execute PHP without significant effort (1)

      related AWS Lambda posts

      Jeyabalaji Subramanian

      Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

      We set ourselves the following criteria for the optimal tool that would do this job:
      • The data replication must be near real-time, yet it should NOT impact the production database
      • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

      Based on the above criteria, we selected the following tools to perform the end to end data replication:

      We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

      We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

      In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

      Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

      In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!
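As a rough illustration of the consumer side described above, the Lambda-deployed micro-service might look like the sketch below. The queue wiring, table schema, connection string, and change-event shape are hypothetical assumptions, not Jeyabalaji's actual code.

```python
# sqs_to_warehouse.py -- illustrative sketch of the SQS-to-warehouse mirror
# described above (table, DSN, and change-event shape are hypothetical).
import json

import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import insert as pg_insert

# Target warehouse table modelled with SQLAlchemy (assumed schema).
metadata = sa.MetaData()
customers = sa.Table(
    "customers", metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("doc", sa.JSON),
)
engine = sa.create_engine("postgresql://user:pass@warehouse/analytics")  # placeholder DSN

def handler(event, context):
    """AWS Lambda entry point; 'event' carries a batch of SQS records."""
    with engine.begin() as conn:
        for record in event.get("Records", []):
            change = json.loads(record["body"])      # change event forwarded via Stitch
            op, doc = change["operation"], change["document"]
            if op in ("insert", "update", "replace"):
                stmt = pg_insert(customers).values(id=doc["_id"], doc=doc)
                stmt = stmt.on_conflict_do_update(index_elements=["id"], set_={"doc": doc})
                conn.execute(stmt)
            elif op == "delete":
                conn.execute(sa.delete(customers).where(customers.c.id == doc["_id"]))
```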

      Tim Nolet

      Checkly (built with Heroku, Docker, GitHub, Node.js, hapi, Vue.js, AWS Lambda, Amazon S3, PostgreSQL and Knex.js) is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

      We are focussed on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

      Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, Appoptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

      Enough biz talk, onto tech. The challenges were:

      • Slice of a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits"
      • Update API and back end services to handle and enforce plan limits.
      • Update the UI to kindly state plan limits are in effect on some part of the UI.
      • Update the pricing page to reflect all changes.
      • Keep the actual processing backend, storage and API's as untouched as possible.

      In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

      1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
      2. The Vue.js frontend reads these from the vuex store on login.
      3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
      4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

      Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

      What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.

      Hope this helps anyone building out their SaaS and is in a similar situation.
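The gist of this plan-feature gate can be sketched in a few lines in any backend. Below is a hypothetical Python/Flask version of the idea; Checkly's actual implementation is a hapi hook in Node.js, so the framework, plan names, and feature constants here are illustrative assumptions only.

```python
# plan_limits.py -- illustrative sketch of plan-based feature toggling
# (hypothetical Python/Flask version; the post describes a hapi hook in Node.js).
from functools import wraps

from flask import Flask, abort, g

app = Flask(__name__)

# Plans carry an array of feature-toggle constants, as described above.
PLAN_FEATURES = {
    "developer": ["api_checks", "sms_alerts"],
    "business": ["api_checks", "sms_alerts", "team_members"],
}

def requires_feature(feature):
    """Reject the request unless the current user's plan enables the feature."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            plan = getattr(g, "user_plan", "developer")
            if feature not in PLAN_FEATURES.get(plan, []):
                abort(403, description="Please upgrade your plan to use this feature.")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/team-members")
@requires_feature("team_members")
def team_members():
    return {"members": []}
```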


      Fission

      Serverless Functions as a Service for Kubernetes
      PROS OF FISSION
        • Any language (1)
        • Portability (1)
        • Open source (1)
      CONS OF FISSION
        • No cons listed yet

        related Fission posts


        Nuclio

        Real-time serverless platform
        PROS OF NUCLIO
          • Enterprise grade (1)
          • Air gap friendly (1)
          • Actively maintained and supported (1)
          • Variety of runtimes (1)
          • Variety of triggers (1)
          • Secure image building (1)
          • Scale to zero (1)
          • Autoscaling (1)
          • Parallelism (1)
          • Performance (1)
          • Open source (1)
        CONS OF NUCLIO
          • No cons listed yet

          related Nuclio posts


          Kubernetes

          Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
          PROS OF KUBERNETES
            • Leading Docker container management solution (164)
            • Simple and powerful (128)
            • Open source (106)
            • Backed by Google (76)
            • The right abstractions (58)
            • Scale services (25)
            • Replication controller (20)
            • Permission management (11)
            • Supports autoscaling (9)
            • Cheap (8)
            • Simple (8)
            • Self-healing (6)
            • No cloud platform lock-in (5)
            • Promotes modern/good infrastructure practice (5)
            • Open, powerful, stable (5)
            • Reliable (5)
            • Scalable (4)
            • Quick cloud setup (4)
            • Cloud agnostic (3)
            • Captain of Container Ship (3)
            • A self-healing environment with rich metadata (3)
            • Runs on Azure (3)
            • Backed by Red Hat (3)
            • Custom and extensibility (3)
            • Sfg (2)
            • GKE (2)
            • Everything of CaaS (2)
            • Golang (2)
            • Easy setup (2)
            • Expandable (2)
          CONS OF KUBERNETES
            • Steep learning curve (16)
            • Poor workflow for development (15)
            • Orchestrates only infrastructure (8)
            • High resource requirements for on-prem clusters (4)
            • Too heavy for simple systems (2)
            • Additional vendor lock-in (Docker) (1)
            • More moving parts to secure (1)
            • Additional technology overhead (1)

          related Kubernetes posts

          Conor Myhrvold
          Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 9.5M views

          How Uber developed Jaeger, the open source, end-to-end distributed tracing system that is now a CNCF project:

          Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

          Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

          https://eng.uber.com/distributed-tracing/

          (GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

          Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

          Yshay Yaacobi

          Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit into a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

          Visual Studio Code worked really well for us as well: it handled all our polyglot services, and the .NET Core integration had a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

          After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...


          Serverless

          The most widely-adopted toolkit for building serverless applications
            PROS OF SERVERLESS
              • API integration (14)
              • Supports cloud functions for Google, Azure, and IBM (7)
              • Lower cost (3)
              • Auto scale (1)
              • OpenWhisk (1)
            CONS OF SERVERLESS
              • No cons listed yet

            related Serverless posts

            Praveen Mooli
            Engineering Manager at Taylor and Francis · 18 upvotes · 3.8M views

            We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

            To build our #Backend capabilities we decided to use the following:
            1. #Microservices - Java with Spring Boot, Node.js with ExpressJS and Python with Flask
            2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
            3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

            To build #Webapps we decided to use Angular 2 with RxJS

            #Devops - GitHub, Travis CI, Terraform, Docker, Serverless
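As an illustration of what the event-driven consumers in such a pipeline look like, here is a hedged Python sketch of a Kinesis-triggered AWS Lambda handler. The decoding follows the standard Kinesis-to-Lambda event format, but the stream contents and the processing step are hypothetical, not the actual Taylor and Francis code.

```python
# kinesis_consumer.py -- illustrative sketch of an event-driven consumer
# (hypothetical; the event shape inside the stream and the processing are made up).
import base64
import json

def handler(event, context):
    """AWS Lambda entry point for a Kinesis-triggered function."""
    for record in event.get("Records", []):
        # Kinesis delivers the payload base64-encoded in record['kinesis']['data'].
        payload = base64.b64decode(record["kinesis"]["data"])
        domain_event = json.loads(payload)
        process(domain_event)

def process(domain_event):
    # Placeholder for real business logic (persist to DynamoDB, fan out via SNS, ...).
    print(domain_event.get("type"), domain_event.get("id"))
```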

            Nitzan Shapira

            At Epsagon, we use hundreds of AWS Lambda functions, most of them written in Python, and we use the Serverless Framework to pack and deploy them. One of the issues we've encountered is the difficulty of packaging external libraries into the Lambda environment using the Serverless Framework. This limitation is probably by design, since the external code your Lambda needs can usually be included with a package manager.

            In order to overcome this issue, we've developed a tool, which we also published as open-source (see link below), which automatically packs these libraries using a simple npm package and a YAML configuration file. Support for Node.js, Go, and Java will be available soon.

            The GitHub repository: https://github.com/epsagon/serverless-package-external


            Azure Functions

            Listen and react to events across your stack
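As a point of reference for what such an event-triggered function looks like, here is a minimal sketch using the Azure Functions Python programming model; the route name and response are illustrative assumptions, and the example assumes the azure-functions package is installed.

```python
# function_app.py -- minimal illustrative Azure Functions HTTP trigger
# using the Python v2 programming model (route and message are hypothetical).
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Triggered on HTTP requests to /api/hello; other triggers (queues,
    # timers, Event Grid) are declared with similar decorators.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```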
            PROS OF AZURE FUNCTIONS
              • Pay only when invoked (14)
              • Great developer experience for C# (11)
              • Multiple languages supported (9)
              • Great debugging support (7)
              • Can be used as lightweight HTTPS service (5)
              • Easy scalability (4)
              • WebHooks (3)
              • Cost (3)
              • Event driven (2)
              • Azure component events for Storage, services etc. (2)
              • Poor developer experience for C# (2)
            CONS OF AZURE FUNCTIONS
              • No persistent (writable) file system available (1)
              • Poor support for Linux environments (1)
              • Sporadic server & language runtime issues (1)
              • Not suited for long-running applications (1)

            related Azure Functions posts

            Kestas Barzdaitis
            Entrepreneur & Engineer · 16 upvotes · 762.4K views

            CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

            CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon EC2, Microsoft Azure, and Google Compute Engine.

            AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the age of AWS, but they also satisfied our technical requirements.

            It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

            The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific: I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

            In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

            Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

            Michal Nowak

            In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we could consider serverless a viable option for building apps. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.

            We did consider AWS Lambda, Firebase from Google, as well as Azure Functions. Eventually we went with AWS Lambda.

            PROS
            • No servers to manage (obviously!)
            • Limited fixed costs – you pay only for used time
            • Automated scaling and balancing
            • Automatic failover (or, at this level of abstraction, no failover problem at all)
            • Security easier to provide and audit
            • Low overhead at the start (with a certain level of knowledge)
            • Short time to market
            • Easy handover - deployment coupled with code
            • Perfect choice for lean startups with fast-paced iterations
            • Augmentation for the classic cloud, server(full) approach
            CONS
            • Not much know-how and best practices available about structuring the code and projects on the market
            • Not suitable for complex business logic due to the risk of producing highly coupled code
            • Cost difficult to estimate (helpful tools: serverlesscalc.com)
            • Difficulty in migration to other platforms (Vendor lock⚠️)
            • Few engineers with experience in serverless on the job market
            • Steep learning curve for engineers without any cloud experience

            More details are on our blog (https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions). I hope it helps 🙌 and I'm curious about your experiences.
