
Alternatives to Apache Spark

Hadoop, Splunk, Cassandra, Apache Beam, and Apache Flume are the most popular alternatives and competitors to Apache Spark.

What is Apache Spark and what are its top alternatives?

Apache Spark is a powerful open-source distributed computing system that provides an easy-to-use, fault-tolerant, and scalable framework for big data processing and analytics. Its key features include in-memory processing, support for various programming languages like Java, Scala, Python, and R, advanced analytics capabilities, and real-time data processing. However, some limitations of Apache Spark include high memory usage, complexity in setting up and tuning, and lack of built-in support for some machine learning algorithms.
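
To make the programming model concrete, here is a minimal PySpark word-count sketch. It runs locally and assumes an input.txt file in the working directory; the file name and local master setting are illustrative, not taken from any post below.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session (app name and master are illustrative).
spark = SparkSession.builder.master("local[*]").appName("wordcount").getOrCreate()

# Read a text file, split each line into words, and count occurrences in memory.
words = (
    spark.read.text("input.txt")  # hypothetical input file
    .select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
)
words.groupBy("word").count().orderBy(F.desc("count")).show()

spark.stop()
```

The same job can be submitted to a YARN or Kubernetes cluster by changing only the master setting, which is where Spark's scalability claims come in.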

  1. Apache Flink: Apache Flink is a powerful and robust stream processing framework that excels in data streaming applications. It offers low-latency and high-throughput processing, support for event-time processing, fault tolerance, and efficient windowing operations. Pros of Apache Flink include its flexible deployment options, support for high availability, and compatibility with different data sources. However, it may have a steeper learning curve than Apache Spark. (A minimal PyFlink sketch appears just after this list.)
  2. Hadoop MapReduce: Hadoop MapReduce is a well-known parallel processing framework for processing large datasets in distributed computing environments. It offers fault tolerance, scalability, and a simple programming model. Pros of Hadoop MapReduce include its stability, wide adoption, and seamless integration with the Hadoop ecosystem. However, it lacks the interactive query processing capabilities and advanced analytics features of Apache Spark.
  3. PrestoDB: PrestoDB is an open-source distributed SQL query engine designed for interactive analytics. It provides fast query execution, support for multiple data sources, and a flexible architecture. Pros of PrestoDB include its ability to query data from various sources like HDFS, relational databases, and cloud storage, as well as its support for federated queries. However, compared to Apache Spark, PrestoDB may not be as suitable for complex data processing workflows.
  4. Databricks: Databricks is a unified analytics platform built on top of Apache Spark, offering a collaborative workspace for data engineers, data scientists, and analysts. It provides features like interactive notebooks, automated cluster management, and integration with popular data sources. Pros of Databricks include its ease of use, seamless integration with cloud services, and support for various machine learning libraries. However, it is a commercial product and may involve additional costs compared to Apache Spark.
  5. Kafka Streams: Kafka Streams is a client library for building real-time streaming applications using Apache Kafka as the underlying data source. It offers fault tolerance, scalability, and simple API for stream processing. Pros of Kafka Streams include its seamless integration with Apache Kafka, support for exactly-once processing semantics, and low-latency data processing. However, it may not provide as broad a range of analytics capabilities as Apache Spark.
  6. Pulsar Functions: Pulsar Functions is a lightweight compute framework for Apache Pulsar, enabling serverless computing and stream processing within the Pulsar ecosystem. It offers seamless integration with Apache Pulsar messaging system, support for event-driven architecture, and scalability. Pros of Pulsar Functions include its ease of use, low latency, and efficient resource utilization. However, it may lack some of the advanced analytics features of Apache Spark.
  7. Hazelcast Jet: Hazelcast Jet is an in-memory data processing engine that provides high-performance real-time stream processing and batch processing capabilities. It offers fault tolerance, distributed processing, and low-latency data processing. Pros of Hazelcast Jet include its easy deployment, near real-time processing, and scalable architecture. However, compared to Apache Spark, it may have limitations in terms of machine learning and graph processing.
  8. Apache Beam: Apache Beam is a unified programming model for both batch and stream processing, providing portability across multiple execution engines like Apache Flink, Apache Spark, Google Cloud Dataflow, and more. It offers a flexible API, support for multiple data sources, and fault tolerance. Pros of Apache Beam include its cross-platform compatibility, scalability, and ease of development. However, its programming model may involve a learning curve compared to Apache Spark.
  9. Apache Samza: Apache Samza is a distributed stream processing framework that provides fault tolerance, stateful processing, and high-throughput data processing capabilities. It offers seamless integration with Apache Kafka, support for data partitioning, and low-latency processing. Pros of Apache Samza include its simplicity in building and deploying stream processing applications, efficient resource utilization, and strong consistency guarantees. However, it may not provide as diverse a set of analytics functionalities as Apache Spark.
  10. Apache Ignite: Apache Ignite is an in-memory computing platform that offers distributed data storage, processing, and real-time analytics capabilities. It provides features like distributed SQL queries, machine learning algorithms, and streaming data processing. Pros of Apache Ignite include its high performance, horizontal scalability, and support for various programming languages. However, compared to Apache Spark, it may have limitations in terms of machine learning model training and interactive query processing.
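
As promised in item 1, here is a minimal PyFlink DataStream sketch of the style of pipeline Flink targets. It is a toy illustration under stated assumptions (PyFlink installed via `pip install apache-flink`; an in-memory source standing in for a real unbounded stream), not production guidance.

```python
from pyflink.datastream import StreamExecutionEnvironment

# Build a local streaming environment; a real job would read from Kafka, files, etc.
env = StreamExecutionEnvironment.get_execution_environment()

# A tiny in-memory collection stands in for an unbounded stream.
words = env.from_collection(["spark", "flink", "flink", "beam"])

# Map each word to a (word, 1) pair and print the results to stdout.
words.map(lambda w: (w, 1)).print()

# Nothing runs until the job graph is submitted for execution.
env.execute("toy_wordcount")
```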

Top Alternatives to Apache Spark

  • Hadoop

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. ...

  • Splunk

    It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Apache Beam

    It implements batch and streaming data processing jobs that run on any execution engine. It executes pipelines on multiple execution environments. ...

  • Apache Flume

    It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application. ...

  • Apache Storm

    Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • PySpark

    It is the collaboration of Apache Spark and Python: a Python API for Spark that lets you harness the simplicity of Python and the power of Apache Spark in order to tame Big Data. ...

Apache Spark alternatives & related posts

Hadoop

Open-source software for reliable, scalable, distributed computing

PROS OF HADOOP
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)
CONS OF HADOOP
  • None listed yet
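
The "simple programming models" mentioned above are easiest to see through Hadoop Streaming, which lets you write a MapReduce job as two plain scripts that read stdin and write stdout. This is a minimal word-count sketch; the script names are arbitrary and the launch command below varies by installation.

```python
#!/usr/bin/env python3
# mapper.py — emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(word + "\t1")
```

```python
#!/usr/bin/env python3
# reducer.py — input arrives sorted by key, so equal words are adjacent
# and can be summed in a single pass.
import sys

current_word, running_total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(current_word + "\t" + str(running_total))
        current_word, running_total = word, 0
    running_total += int(count)
if current_word is not None:
    print(current_word + "\t" + str(running_total))
```

A job is then launched with the streaming jar, along the lines of `hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input in/ -output out/` (jar name and paths are placeholders).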

related Hadoop posts

Shared insights on Kafka and Hadoop at Pinterest:

The early data ingestion pipeline at Pinterest used Kafka as the central message transporter, with the app servers writing messages directly to Kafka, which then uploaded log files to S3.

For databases, a custom Hadoop streamer pulled database data and wrote it to S3.

Challenges cited for this infrastructure included high operational overhead, as well as potential data loss occurring when Kafka broker outages led to an overflow of in-memory message buffering.

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber (7 upvotes · 2.9M views):

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink, leveraging Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Splunk

Search, monitor, analyze and visualize machine data

PROS OF SPLUNK
  • API for searching logs, running reports (3)
  • Alert system based on custom query results (3)
  • Dashboarding on any log contents (2)
  • Custom log parsing as well as automatic parsing (2)
  • Ability to style search results into reports (2)
  • Query engine supports joining, aggregation, stats, etc. (2)
  • Splunk language supports string, date manipulation, math, etc. (2)
  • Rich GUI for searching live logs (2)
  • Query any log as key-value pairs (1)
  • Granular scheduling and time window support (1)
CONS OF SPLUNK
  • The Splunk query language is rich, so there is a lot to learn (1)
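
Several of the pros above concern Splunk's search API. As a rough sketch of what that looks like from Python, here is a one-shot search using the splunk-sdk package; the host, credentials, and query are placeholders, and the exact interface should be checked against the SDK docs.

```python
import splunklib.client as client
import splunklib.results as results

# Connect to a Splunk instance (host and credentials are placeholders).
service = client.connect(
    host="localhost", port=8089, username="admin", password="changeme"
)

# Run a blocking one-shot search and iterate over the result rows.
rs = service.jobs.oneshot("search index=_internal | head 5")
for row in results.ResultsReader(rs):
    print(row)
```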

related Splunk posts

Shared insights on Kibana, Splunk, and Grafana:

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Shared insights on Splunk and Elasticsearch:

We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.

Cassandra

A partitioned row store. Rows are organized into tables with a required primary key.

PROS OF CASSANDRA
  • Distributed (119)
  • High performance (98)
  • High availability (81)
  • Easy scalability (74)
  • Replication (53)
  • Reliable (26)
  • Multi-datacenter deployments (26)
  • Schema optional (10)
  • OLTP (9)
  • Open source (8)
  • Workload separation (via MDC) (2)
  • Fast (1)
CONS OF CASSANDRA
  • Reliability of replication (3)
  • Size (1)
  • Updates (1)
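
Since CQL is "a close relative of SQL", a short sketch with the DataStax cassandra-driver package shows the partitioned row-store model in action; the keyspace, table, and localhost contact point are illustrative.

```python
import uuid

from cassandra.cluster import Cluster

# Connect to a local node (contact points are placeholders).
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Every table requires a primary key; the partition key decides which
# node stores each row, which is what makes scaling transparent.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users (user_id uuid PRIMARY KEY, name text)
""")

session.execute(
    "INSERT INTO demo.users (user_id, name) VALUES (%s, %s)",
    (uuid.uuid4(), "Ada"),
)
for row in session.execute("SELECT user_id, name FROM demo.users"):
    print(row.user_id, row.name)

cluster.shutdown()
```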

related Cassandra posts

Thierry Schellenbach shared insights on Redis, Cassandra, and RocksDB at Stream:

1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle its rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSD. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it's much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It's interesting to note that LinkedIn also uses RocksDB for their feed.

#InMemoryDatabases #DataStores #Databases

Umair Iftikhar, Technical Architect at ERP Studio (3 upvotes · 436.7K views):

Developing a solution that collects telemetry data from different devices: nearly 1,000 devices minimum and 12,000 maximum. Each device sends 2 packets per second. This is time-series data, and the data definitions and different reports, like building information, maintenance records, etc., are saved in PostgreSQL. I want to know about the best solution. This data is required for math and ML to run different algorithms. Also, the data is raw, without definitions, and that information is stored in PostgreSQL. Initially, I went with TimescaleDB due to PostgreSQL support, but as the number of sites increased, I started facing many issues with TimescaleDB in terms of flexibility of storing data.

My major requirement is also the replication of the database for reporting and different purposes. You may also suggest options other than Druid and Cassandra, but an open-source solution is appreciated.

Apache Beam

A unified programming model

PROS OF APACHE BEAM
  • Open-source (5)
  • Cross-platform (5)
  • Portable (2)
  • Unified batch and stream processing (2)
CONS OF APACHE BEAM
  • None listed yet
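
To show what "unified batch and stream processing" means in code, here is a minimal Beam pipeline using the Python SDK's default direct runner. Portability is the point: the same pipeline can be handed to a Flink, Spark, or Dataflow runner by changing pipeline options, not the code. The in-memory input is illustrative.

```python
import apache_beam as beam

# A bounded in-memory source; a streaming source (e.g. Kafka or Pub/Sub)
# would plug into the same pipeline shape.
with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["spark flink", "beam spark"])
        | "Split" >> beam.FlatMap(str.split)
        | "Count" >> beam.combiners.Count.PerElement()
        | "Print" >> beam.Map(print)
    )
```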

related Apache Beam posts

I have to build a data processing application with an Apache Beam stack and an Apache Flink runner on an Amazon EMR cluster. I have seen some instability with the process, and the EMR clusters keep going down. Here, the Apache Beam application gets input from Kafka and sends the accumulated data stream to another Kafka topic. Any advice on how to make the process more stable?

Apache Flume

A service for collecting, aggregating, and moving large amounts of log data

Apache Storm

Distributed and fault-tolerant realtime computation

PROS OF APACHE STORM
  • Flexible (10)
  • Easy setup (6)
  • Event processing (4)
  • Clojure (3)
  • Real time (2)
CONS OF APACHE STORM
  • None listed yet

related Apache Storm posts

Marc Bollinger, Infra & Data Eng Manager at Thumbtack (5 upvotes · 1.8M views):

Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream: first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features supported by existing open-source alternatives: replaying existing messages (also lacking in most message-queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and, possibly most importantly, the ability to hire someone externally who already had expertise.

We ultimately migrated to Kafka in early-to-mid 2016, citing both industry trends among companies we'd talked to with similar durability and throughput needs and the extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connect.

We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming messages than to building a topology of bolts, etc. Heron had also just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than the older Storm's. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

To find out more, read our 2017 engineering blog post about the migration!

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system

PROS OF KAFKA
  • High-throughput (126)
  • Distributed (119)
  • Scalable (92)
  • High-performance (86)
  • Durable (66)
  • Publish-subscribe (38)
  • Simple to use (19)
  • Open source (18)
  • Written in Scala and Java; runs on the JVM (12)
  • Message broker + streaming system (9)
  • KSQL (4)
  • Avro schema integration (4)
  • Robust (4)
  • Supports multiple clients (3)
  • Extremely good parallelism constructs (2)
  • Partitioned, replayable log (2)
  • Simple publisher / multi-subscriber model (1)
  • Fun (1)
  • Flexible (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (32)
  • Needs ZooKeeper (29)
  • Operational difficulties (9)
  • Terrible packaging (5)
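
For a concrete taste of the pub-sub model, here is a minimal sketch using the third-party kafka-python package; the broker address and topic name are placeholders, and a broker is assumed to already be running locally.

```python
from kafka import KafkaConsumer, KafkaProducer

# Produce a few messages to a topic (broker address is a placeholder).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(3):
    producer.send("demo-topic", f"event-{i}".encode("utf-8"))
producer.flush()

# Consume from the beginning of the topic; stop after 5 s without messages.
consumer = KafkaConsumer(
    "demo-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.topic, message.offset, message.value)
```

Because the log is partitioned and replayable (see the pros above), a consumer can rewind to `earliest` and reprocess history, which is what distinguishes Kafka from a classic message queue.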

related Kafka posts

Eric Colson, Chief Algorithms Officer at Stitch Fix (21 upvotes · 6.1M views):

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

John Kodumal:

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data; this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.

PySpark

The Python API for Spark
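
To close, here is a small PySpark DataFrame sketch showing the "simplicity of Python" side of the API; the inline rows are illustrative data, not from any post above.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("pyspark-demo").getOrCreate()

# Build a DataFrame from inline rows (illustrative data).
df = spark.createDataFrame(
    [("alice", 3), ("bob", 5), ("alice", 2)],
    ["user", "clicks"],
)

# Aggregations run on the same JVM engine that Scala and Java Spark use;
# Python is only the front end.
df.groupBy("user").agg(F.sum("clicks").alias("total_clicks")).show()

spark.stop()
```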