
Alternatives to Apache Spark

Hadoop, Splunk, Cassandra, Apache Beam, and Apache Flume are the most popular alternatives and competitors to Apache Spark.

What is Apache Spark and what are its top alternatives?

Apache Spark is a powerful open-source distributed computing system that provides an easy-to-use, fault-tolerant, and scalable framework for big data processing and analytics. Its key features include in-memory processing, support for various programming languages like Java, Scala, Python, and R, advanced analytics capabilities, and real-time data processing. However, some limitations of Apache Spark include high memory usage, complexity in setting up and tuning, and lack of built-in support for some machine learning algorithms.
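
As a quick illustration of the programming model the alternatives below are measured against, here is a minimal PySpark sketch (a hedged example assuming only that the pyspark package is installed; the app name and the tiny in-memory dataset are illustrative):

    # Minimal PySpark word count over an in-memory DataFrame.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

    lines = spark.createDataFrame([("hello spark",), ("hello flink",)], ["text"])

    counts = (
        lines
        .select(F.explode(F.split(F.col("text"), " ")).alias("word"))
        .groupBy("word")
        .count()
    )

    counts.show()  # e.g. hello: 2, spark: 1, flink: 1
    spark.stop()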

  1. Apache Flink: Apache Flink is a powerful and robust stream processing framework that excels in data streaming applications. It offers low-latency and high-throughput processing, support for event time processing, fault tolerance, and efficient windowing operations. Pros of Apache Flink include its flexible deployment options, support for high availability, and compatibility with different data sources. However, it may have a steeper learning curve compared to Apache Spark.
  2. Hadoop MapReduce: Hadoop MapReduce is a well-known parallel processing framework for processing large datasets in distributed computing environments. It offers fault tolerance, scalability, and a simple programming model. Pros of Hadoop MapReduce include its stability, wide adoption, and seamless integration with the Hadoop ecosystem. However, it lacks the interactive query processing capabilities and advanced analytics features of Apache Spark.
  3. PrestoDB: PrestoDB is an open-source distributed SQL query engine designed for interactive analytics. It provides fast query execution, support for multiple data sources, and a flexible architecture. Pros of PrestoDB include its ability to query data from various sources like HDFS, relational databases, and cloud storage, as well as its support for federated queries. However, compared to Apache Spark, PrestoDB may not be as suitable for complex data processing workflows.
  4. Databricks: Databricks is a unified analytics platform built on top of Apache Spark, offering a collaborative workspace for data engineers, data scientists, and analysts. It provides features like interactive notebooks, automated cluster management, and integration with popular data sources. Pros of Databricks include its ease of use, seamless integration with cloud services, and support for various machine learning libraries. However, it is a commercial product and may involve additional costs compared to Apache Spark.
  5. Kafka Streams: Kafka Streams is a client library for building real-time streaming applications using Apache Kafka as the underlying data source. It offers fault tolerance, scalability, and a simple API for stream processing. Pros of Kafka Streams include its seamless integration with Apache Kafka, support for exactly-once processing semantics, and low-latency data processing. However, it may not provide as broad a range of analytics capabilities as Apache Spark.
  6. Pulsar Functions: Pulsar Functions is a lightweight compute framework for Apache Pulsar, enabling serverless computing and stream processing within the Pulsar ecosystem. It offers seamless integration with Apache Pulsar messaging system, support for event-driven architecture, and scalability. Pros of Pulsar Functions include its ease of use, low latency, and efficient resource utilization. However, it may lack some of the advanced analytics features of Apache Spark.
  7. Hazelcast Jet: Hazelcast Jet is an in-memory data processing engine that provides high-performance real-time stream processing and batch processing capabilities. It offers fault tolerance, distributed processing, and low-latency data processing. Pros of Hazelcast Jet include its easy deployment, near real-time processing, and scalable architecture. However, compared to Apache Spark, it may have limitations in terms of machine learning and graph processing.
  8. Beam: Apache Beam is a unified programming model for both batch and stream processing, providing portability across multiple execution engines like Apache Flink, Apache Spark, Google Cloud Dataflow, and more. It offers a flexible API, support for multiple data sources, and fault tolerance. Pros of Apache Beam include its cross-platform compatibility, scalability, and ease of development. However, it may have a learning curve in understanding its programming model compared to Apache Spark. A minimal sketch of this portability appears just after this list.
  9. Samza: Apache Samza is a distributed stream processing framework that provides fault tolerance, stateful processing, and high-throughput data processing capabilities. It offers seamless integration with Apache Kafka, support for data partitioning, and low-latency processing. Pros of Apache Samza include its simplicity in building and deploying stream processing applications, efficient resource utilization, and strong consistency guarantees. However, it may not provide as diverse a set of analytics functionalities as Apache Spark.
  10. Ignite: Apache Ignite is an in-memory computing platform that offers distributed data storage, processing, and real-time analytics capabilities. It provides features like distributed SQL queries, machine learning algorithms, and streaming data processing. Pros of Apache Ignite include its high performance, horizontal scalability, and support for various programming languages. However, compared to Apache Spark, it may have limitations in terms of machine learning model training and interactive query processing.
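
To make item 8 concrete, here is a minimal Apache Beam sketch using the Python SDK (assuming the apache-beam package; the word-count logic is illustrative). The point is that only the runner option changes when retargeting the same pipeline at Flink, Spark, or Dataflow:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # DirectRunner executes locally; FlinkRunner, SparkRunner, or DataflowRunner
    # can be substituted without changing the pipeline code below.
    options = PipelineOptions(runner="DirectRunner")

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Create" >> beam.Create(["hello beam", "hello spark"])
            | "Split" >> beam.FlatMap(str.split)
            | "Pair" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )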

Top Alternatives to Apache Spark

  • Hadoop

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. ...

  • Splunk

    It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Apache Beam

    It implements batch and streaming data processing jobs that run on any execution engine. It executes pipelines on multiple execution environments. ...

  • Apache Flume

    It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application. ...

  • Apache Storm

    Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ... (A minimal produce/consume sketch in Python appears just after this list.)

  • PySpark

    It is the collaboration of Apache Spark and Python. It is a Python API for Spark that lets you harness the simplicity of Python and the power of Apache Spark in order to tame Big Data. ...
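
As referenced in the Kafka entry above, here is a minimal produce/consume sketch in Python (a hedged example assuming the confluent-kafka client, a broker at localhost:9092, and a hypothetical "logs" topic):

    from confluent_kafka import Producer, Consumer

    # Produce one message, then read it back.
    producer = Producer({"bootstrap.servers": "localhost:9092"})
    producer.produce("logs", key="app-1", value="user signed in")
    producer.flush()  # block until delivery completes

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "log-readers",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["logs"])

    msg = consumer.poll(timeout=5.0)  # returns None if nothing arrives in time
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())  # b'app-1' b'user signed in'
    consumer.close()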

Apache Spark alternatives & related posts

Hadoop

Open-source software for reliable, scalable, distributed computing
PROS OF HADOOP
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)
CONS OF HADOOP
  • None listed yet

    related Hadoop posts

    Shared insights on Kafka and Hadoop at Pinterest:

    The early data ingestion pipeline at Pinterest used Kafka as the central message transporter, with the app servers writing messages directly to Kafka, which then uploaded log files to S3.

    For databases, a custom Hadoop streamer pulled database data and wrote it to S3.

    Challenges cited for this infrastructure included high operational overhead, as well as potential data loss when Kafka broker outages caused in-memory message buffers to overflow.

    Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber:

    Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

    Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse it to any sink, leveraging Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

    https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

    (Direct GitHub repo: https://github.com/uber/marmaray)

    Splunk

    Search, monitor, analyze and visualize machine data
    PROS OF SPLUNK
    • API for searching logs, running reports (3)
    • Alert system based on custom query results (3)
    • Splunk language supports string and date manipulation, math, etc. (2)
    • Dashboarding on any log contents (2)
    • Custom log parsing as well as automatic parsing (2)
    • Query engine supports joins, aggregation, stats, etc. (2)
    • Rich GUI for searching live logs (2)
    • Ability to style search results into reports (2)
    • Granular scheduling and time window support (1)
    • Query any log as key-value pairs (1)
    CONS OF SPLUNK
    • Splunk query language is rich, so there is a lot to learn (1)

    related Splunk posts

    Shared insights on Kibana, Splunk, and Grafana:

    I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain as a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

    Shared insights on Splunk and Elasticsearch:

    We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.

    Cassandra

    A partitioned row store. Rows are organized into tables with a required primary key.
    PROS OF CASSANDRA
    • Distributed (119)
    • High performance (98)
    • High availability (81)
    • Easy scalability (74)
    • Replication (53)
    • Reliable (26)
    • Multi-datacenter deployments (26)
    • Schema optional (10)
    • OLTP (9)
    • Open source (8)
    • Workload separation (via MDC) (2)
    • Fast (1)
    CONS OF CASSANDRA
    • Reliability of replication (3)
    • Size (1)
    • Updates (1)

    related Cassandra posts

    Thierry Schellenbach shared insights on Redis, Cassandra, and RocksDB at Stream:

    Stream 1.0 leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle its rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

    Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

    RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project of its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology, it's much simpler than Cassandra.

    This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

    #InMemoryDatabases #DataStores #Databases
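
    Keevo itself is in-house, but the embedded key-value model it builds on is easy to see with RocksDB directly. A minimal sketch, assuming the third-party python-rocksdb bindings (the path and keys are illustrative):

        # RocksDB is embedded: the "database" is just a local directory,
        # not a server you connect to.
        import rocksdb

        db = rocksdb.DB("feeds.db", rocksdb.Options(create_if_missing=True))
        db.put(b"feed:user:42", b'["activity-1", "activity-2"]')
        print(db.get(b"feed:user:42"))  # b'["activity-1", "activity-2"]'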


    Trying to establish a data lake(or maybe puddle) for my org's Data Sharing project. The idea is that outside partners would send cuts of their PHI data, regardless of format/variables/systems, to our Data Team who would then harmonize the data, create data marts, and eventually use it for something. End-to-end, I'm envisioning:

    1. Ingestion->Secure, role-based, self-service portal for users to upload data (1a. bonus points if it can perform basic validations/masking)
    2. Storage->Amazon S3 seems like the cheapest. We probably won't need anything very big, even at full capacity. Our current storage is a secure Box folder that has ~4GB with several batches of test data, code, presentations, and planning docs.
    3. Data Catalog->AWS Glue? Azure Data Factory? Snowplow? Is the main difference basically the vendor? We also will have Data Dictionaries/Codebooks from submitters. Where would they fit in?
    4. Partitions->I've seen Cassandra and YARN mentioned, but have no experience with either
    5. Processing->We want to use SAS if at all possible. What will work with SAS code?
    6. Pipeline/Automation->The check-in and verification processes that have been outlined are rather involved. Some sort of automated messaging or approval workflow would be nice
    7. I have very little guidance on what a "Data Mart" should look like, so I'm going with the idea that it would be another "experimental" partition. Unless there's an actual mart-building paradigm I've missed?
    8. An end user might use the catalog to pull certain de-identified data sets from the marts. Again, role-based access and a self-service GUI would be preferable. I'm the only full-time tech person on this project, but I'm mostly an OOP, HTML, JavaScript, and some SQL programmer. Most of this is out of my repertoire. I've done a lot of research, but I can't be an effective evangelist without hands-on experience. Since we're starting a new year of our grant, they've finally decided to let me try some stuff out. Any pointers would be appreciated!
      Apache Beam

      A unified programming model
      PROS OF APACHE BEAM
      • Open-source (5)
      • Cross-platform (5)
      • Portable (2)
      • Unified batch and stream processing (2)
      CONS OF APACHE BEAM
      • None listed yet

      related Apache Beam posts

      I have to build a data processing application with an Apache Beam stack and the Apache Flink runner on an Amazon EMR cluster. I saw some instability with the process, and the EMR clusters kept going down. Here, the Apache Beam application gets inputs from Kafka and sends the accumulated data streams to another Kafka topic. Any advice on how to make the process more stable?
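
      For reference, here is a hedged sketch of such a Kafka-to-Kafka Beam pipeline on the Flink runner (the broker address, topic names, and checkpoint interval are illustrative, and ReadFromKafka/WriteToKafka are cross-language transforms that need a Java expansion service available). Enabling periodic Flink checkpoints is one common stability lever:

          import apache_beam as beam
          from apache_beam.io.kafka import ReadFromKafka, WriteToKafka
          from apache_beam.options.pipeline_options import PipelineOptions

          options = PipelineOptions(
              runner="FlinkRunner",
              checkpointing_interval=60000,  # ms; periodic checkpoints aid recovery
          )

          with beam.Pipeline(options=options) as p:
              (
                  p
                  | ReadFromKafka(
                      consumer_config={"bootstrap.servers": "broker:9092"},
                      topics=["input-events"],
                  )
                  # Stand-in for the real accumulation logic; records arrive
                  # as (key, value) byte pairs and pass through unchanged.
                  | "Accumulate" >> beam.Map(lambda kv: kv)
                  | WriteToKafka(
                      producer_config={"bootstrap.servers": "broker:9092"},
                      topic="output-events",
                  )
              )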

      Apache Flume

      A service for collecting, aggregating, and moving large amounts of log data
      PROS OF APACHE FLUME
      • None listed yet
      CONS OF APACHE FLUME
      • None listed yet


          Apache Storm

          Distributed and fault-tolerant realtime computation
          PROS OF APACHE STORM
          • Flexible (10)
          • Easy setup (6)
          • Event processing (4)
          • Clojure (3)
          • Real time (2)
          CONS OF APACHE STORM
          • None listed yet

            related Apache Storm posts

            Marc Bollinger, Infra & Data Eng Manager at Thumbtack:

            Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream--first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features supported by existing open-source alternatives: replaying existing messages (also lacking in most message queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and possibly most importantly: the ability to hire someone externally who already had expertise.

            We ultimately migrated to Kafka in early-to-mid 2016, citing both industry trends among companies we'd talked to with similar durability and throughput needs, and Kafka's extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connect.

            We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming message streams than to building a topology of bolts, etc. Heron also had just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than the older Storm. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

            To find out more, read our 2017 engineering blog post about the migration!

            Kafka

            Distributed, fault tolerant, high throughput pub-sub messaging system
            PROS OF KAFKA
            • High-throughput (126)
            • Distributed (119)
            • Scalable (92)
            • High-performance (86)
            • Durable (66)
            • Publish-subscribe (38)
            • Simple to use (19)
            • Open source (18)
            • Written in Scala and Java; runs on the JVM (12)
            • Message broker + streaming system (9)
            • KSQL (4)
            • Avro schema integration (4)
            • Robust (4)
            • Supports multiple clients (3)
            • Extremely good parallelism constructs (2)
            • Partitioned, replayable log (2)
            • Simple publisher / multi-subscriber model (1)
            • Fun (1)
            • Flexible (1)
            CONS OF KAFKA
            • Non-Java clients are second-class citizens (32)
            • Needs ZooKeeper (29)
            • Operational difficulties (9)
            • Terrible packaging (5)

            related Kafka posts

            Nick Rockwell, SVP, Engineering at Fastly:

            When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

            So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

            React is now talking to GraphQL as a primary API. There's also a Node.js back end for the front end, which is mainly for server-side rendering.

            Behind that, the main repository for the GraphQL server is a big-table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

            Ashish Singh, Tech Lead, Big Data Platform at Pinterest:

            To meet employees' critical need for interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

            Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

            We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

            Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.

            Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

            #BigData #AWS #DataScience #DataEngineering
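
            For a sense of what that interactive querying looks like from the client side, here is a hedged sketch using the presto-python-client package (host, catalog, schema, and table names are hypothetical placeholders, not Pinterest's actual setup):

                import prestodb

                # DBAPI-style connection to a Presto coordinator.
                conn = prestodb.dbapi.connect(
                    host="presto-coordinator.example.com",
                    port=8080,
                    user="analyst",
                    catalog="hive",
                    schema="default",
                )
                cur = conn.cursor()
                cur.execute("SELECT dt, count(*) FROM events GROUP BY dt LIMIT 10")
                for row in cur.fetchall():
                    print(row)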

            PySpark

            The Python API for Spark
            PROS OF PYSPARK
            • None listed yet
            CONS OF PYSPARK
            • None listed yet
