
Alternatives to Apache Storm

Apache Spark, Kafka, Amazon Kinesis, Apache Flume, and Apache Flink are the most popular alternatives and competitors to Apache Storm.

What is Apache Storm and what are its top alternatives?

Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.
Apache Storm is a tool in the Stream Processing category of a tech stack.
Apache Storm is an open source tool with 6.4K GitHub stars and 4.1K GitHub forks; its source repository is hosted on GitHub at https://github.com/apache/storm.
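
Storm applications are wired together as topologies of spouts (stream sources) and bolts (processing steps). To make that concrete, here is a minimal, hedged sketch against the Storm 1.x-style Java API: a spout replays a couple of hard-coded sentences and a bolt splits them into word tuples. The class, stream, and topology names are illustrative, and the LocalCluster run is only for local experimentation.

```java
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SplitSentenceTopology {

    // Spout: the source of the stream; here it just replays a couple of hard-coded sentences.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] sentences = {"the cow jumped over the moon", "an apple a day"};
        private int index = 0;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values(sentences[index++ % sentences.length]));
            Utils.sleep(100);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: receives sentence tuples and emits one tuple per word.
    public static class SplitSentenceBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            for (String word : input.getStringByField("sentence").split("\\s+")) {
                collector.emit(new Values(word));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 1);
        builder.setBolt("split", new SplitSentenceBolt(), 2)
               .shuffleGrouping("sentences");

        // Run in an in-process cluster for local experimentation; real deployments use StormSubmitter.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("split-demo", new Config(), builder.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```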

Top Alternatives to Apache Storm

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Amazon Kinesis

    Amazon Kinesis can collect and process hundreds of gigabytes of data per second from hundreds of thousands of sources, allowing you to easily write applications that process information in real-time, from sources such as web site click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data. ...

  • Apache Flume

    It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application. ...

  • Apache Flink

    Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala. ...

  • Kafka Streams

    It is a client library for building applications and microservices, where the input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology. ...

  • Hadoop

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. ...

  • Heron

    Heron is a realtime analytics platform developed by Twitter. It was built as a successor to Apache Storm and is backwards compatible with Storm's topology API, but with a wide array of architectural improvements. ...

Apache Storm alternatives & related posts


Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
  • Open-source (60)
  • Fast and Flexible (48)
  • Great for distributed SQL like applications (8)
  • One platform for every big data problem (8)
  • Easy to install and to use (6)
  • Works well for most data science use cases (3)
  • In memory Computation (2)
  • Interactive Query (2)
  • Machine learning libraries, streaming in real time (2)
CONS OF APACHE SPARK
  • Speed (3)
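
For comparison with Storm's unbounded-stream model, here is a hedged sketch of the same kind of workload in Spark Structured Streaming, following the word-count pattern from Spark's documentation; the socket host and port are placeholders for a real source such as Kafka, and local mode is used only for the sketch.

```java
import java.util.Arrays;

import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("StreamingWordCount")
                .master("local[*]")                       // local mode for the sketch; use YARN/Kubernetes in production
                .getOrCreate();

        // Unbounded input: one row per line arriving on the socket (e.g. started with `nc -lk 9999`).
        Dataset<Row> lines = spark.readStream()
                .format("socket")
                .option("host", "localhost")
                .option("port", 9999)
                .load();

        // Split lines into words and keep a running count per word.
        Dataset<String> words = lines.as(Encoders.STRING())
                .flatMap((FlatMapFunction<String, String>) line ->
                        Arrays.asList(line.split("\\s+")).iterator(), Encoders.STRING());
        Dataset<Row> counts = words.groupBy("value").count();

        // Print the full updated result table to the console after each micro-batch.
        StreamingQuery query = counts.writeStream()
                .outputMode("complete")
                .format("console")
                .start();
        query.awaitTermination();
    }
}
```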

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.7M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (s3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for adhoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.3M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)


Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system
PROS OF KAFKA
  • High-throughput (126)
  • Distributed (119)
  • Scalable (91)
  • High-Performance (85)
  • Durable (65)
  • Publish-Subscribe (37)
  • Simple-to-use (19)
  • Open source (18)
  • Written in Scala and Java; runs on the JVM (11)
  • Message broker + Streaming system (8)
  • KSQL (4)
  • Robust (4)
  • Avro schema integration (4)
  • Supports multiple clients (3)
  • Partitioned, replayable log (2)
  • Flexible (1)
  • Extremely good parallelism constructs (1)
  • Fun (1)
  • Simple publisher / multi-subscriber model (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (31)
  • Needs Zookeeper (28)
  • Operational difficulties (8)
  • Terrible Packaging (3)
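
To illustrate the publish-subscribe model the pros above refer to, here is a minimal producer sketch using the standard Kafka Java client; the broker address, topic name, keys, and values are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Messages are appended to a partition of the topic's replicated log;
        // any number of consumer groups can read and replay them independently.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("clickstream-events", "user-" + i, "page_view"));
            }
            producer.flush();
        }
    }
}
```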

related Kafka posts

Eric Colson, Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.7M views (see the full post under Apache Spark above)

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.


Amazon Kinesis

Store and process terabytes of data each hour from hundreds of thousands of sources
PROS OF AMAZON KINESIS
  • Scalable (9)
CONS OF AMAZON KINESIS
  • Cost (3)
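
As a hedged sketch of the ingestion side, the snippet below writes a single record to a Kinesis stream with the AWS SDK for Java v2; the stream name, partition key, and payload are placeholders, and the region and credentials are assumed to come from the default provider chain.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordResponse;

public class KinesisWriter {
    public static void main(String[] args) {
        // Region and credentials are picked up from the default provider chain.
        try (KinesisClient kinesis = KinesisClient.create()) {
            PutRecordRequest request = PutRecordRequest.builder()
                    .streamName("clickstream")                       // placeholder stream name
                    .partitionKey("user-42")                         // records with the same key land on the same shard
                    .data(SdkBytes.fromUtf8String("{\"event\":\"page_view\"}"))
                    .build();

            PutRecordResponse response = kinesis.putRecord(request);
            System.out.println("Wrote to shard " + response.shardId()
                    + " at sequence " + response.sequenceNumber());
        }
    }
}
```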

related Amazon Kinesis posts

Praveen Mooli
Engineering Manager at Taylor and Francis · 18 upvotes · 2.8M views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:

1. #Microservices - Java with Spring Boot, Node.js with ExpressJS, and Python with Flask
2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless

John Kodumal (see the full post under Kafka above)

Apache Flume

A service for collecting, aggregating, and moving large amounts of log data
PROS OF APACHE FLUME
  • None listed yet
CONS OF APACHE FLUME
  • None listed yet

related Apache Flume posts


Apache Flink

Fast and reliable large-scale data processing engine
PROS OF APACHE FLINK
  • Unified batch and stream processing (16)
  • Easy to use streaming APIs (8)
  • Out-of-the-box connectors to Kinesis, S3, HDFS (8)
  • Open Source (4)
  • Low latency (2)
CONS OF APACHE FLINK
  • None listed yet
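
To illustrate the streaming API mentioned in the pros, here is a minimal sketch of a windowed word count with the Flink 1.x DataStream Java API; the socket source, port, and five-second window are placeholders for a real pipeline.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class WindowedWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source: lines arriving on a local socket (e.g. started with `nc -lk 9999`).
        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        DataStream<Tuple2<String, Integer>> counts = lines
                .flatMap(new Tokenizer())
                .keyBy(value -> value.f0)                                   // partition the stream by word
                .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))  // 5-second tumbling windows
                .sum(1);                                                    // sum the counts within each window

        counts.print();
        env.execute("Windowed Word Count");
    }

    // Splits each line into (word, 1) pairs.
    public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    out.collect(Tuple2.of(word, 1));
                }
            }
        }
    }
}
```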

related Apache Flink posts

Surabhi Bhawsar
Technical Architect at Pepcus · 7 upvotes · 639.9K views
Shared insights on Kafka and Apache Flink

I need to build an Alert & Notification framework with the use of a scheduled program. We will analyze the events from the database table, filter events that fall within a day's timespan, and send these event messages over email. Currently, we are using Kafka Pub/Sub for messaging. The customer wants us to move to Apache Flink, and I am trying to understand how Apache Flink could be a better fit for us.


I have to build a data processing application with an Apache Beam stack and an Apache Flink runner on an Amazon EMR cluster. I saw some instability with the process, and the EMR clusters keep going down. Here, the Apache Beam application gets inputs from Kafka and sends the accumulative data streams to another Kafka topic. Any advice on how to make the process more stable?


Kafka Streams

A client library for building applications and microservices
PROS OF KAFKA STREAMS
  • None listed yet
CONS OF KAFKA STREAMS
  • None listed yet
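
Since no pros or cons have been submitted yet, a short sketch may convey the idea better: the word-count topology below uses the Kafka Streams DSL to read a text topic, count words, and write the counts back to another topic. The application ID, topic names, and broker address are placeholders.

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app");      // also used as the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("text-lines");         // input topic (placeholder)

        // Split each line into words, group by word, and maintain a running count.
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
                .groupBy((key, word) -> word)
                .count();

        // Write the changelog of counts to an output topic.
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```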

related Kafka Streams posts

I have recently started using Confluent/Kafka Cloud. We want to do some stream processing. As I was going through Kafka, I came across Kafka Streams and KSQL. Both seem to be a good fit for stream processing, but I could not understand which one should be used and whether one has any advantage over the other. We will be using a Confluent/Kafka managed cloud instance. In the near future, our producers and consumers will be running on premise, and we will be interacting with Confluent Cloud.

Also, Confluent Cloud Kafka has a primitive interface; is there any better UI to manage a Kafka cloud cluster?

Shared insights on Apache Flink and Kafka Streams

We currently have 2 Kafka Streams topics that have records coming in continuously. We're looking into joining the 2 streams based on a key, with a window of 5 minutes based on their timestamps.

Should I consider a KStream-KStream join or Apache Flink window joins? Or is there any other better way to achieve this?


Hadoop

Open-source software for reliable, scalable, distributed computing
PROS OF HADOOP
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)
CONS OF HADOOP
  • None listed yet
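
For contrast with the streaming tools above, Hadoop's "simple programming model" is MapReduce. Below is the canonical word-count job, essentially as it appears in the Hadoop MapReduce tutorial, with HDFS input and output paths passed on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // the reducer doubles as a combiner to pre-aggregate map output
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not already exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```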

related Hadoop posts

Shared insights on Kafka and Hadoop at Pinterest
The early data ingestion pipeline at Pinterest used Kafka as the central message transporter, with the app servers writing messages directly to Kafka, which then uploaded log files to S3.

For databases, a custom Hadoop streamer pulled database data and wrote it to S3.

Challenges cited for this infrastructure included high operational overhead, as well as potential data loss occurring when Kafka broker outages led to an overflow of in-memory message buffering.

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.3M views (see the full Marmaray post under Apache Spark above)

Heron

Realtime, distributed, fault-tolerant stream processing engine from Twitter
PROS OF HERON
  • Highly Customizable (1)
  • Supports most popular container environments (1)
  • Operation friendly (1)
  • Realtime Stream Processing (1)
CONS OF HERON
  • None listed yet

related Heron posts

Marc Bollinger
Infra & Data Eng Manager at Thumbtack · 5 upvotes · 606.8K views

Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream: first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features supported by existing open-source alternatives: replaying existing messages (also lacking in most message queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and possibly most importantly: the ability to hire someone externally who already had expertise.

We ultimately migrated to Kafka in early- to mid-2016, citing both industry trends in companies we'd talked to with similar durability and throughput needs, and the extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connect.

We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming messages than to building a topology of bolts, etc. Heron also had just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than the older Storm. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

To find out more, read our 2017 engineering blog post about the migration!
