
Google Cloud Dataflow vs Hadoop: What are the differences?

Google Cloud Dataflow is a fully managed cloud service and programming model for batch and streaming big data processing. It offers a unified programming model and a managed service for developing and executing a wide range of data processing patterns, including ETL, batch computation, and continuous computation, and it frees you from operational tasks like resource management and performance optimization.

Hadoop is open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

Google Cloud Dataflow belongs to the "Real-time Data Processing" category of the tech stack, while Hadoop is primarily classified under "Databases".

Hadoop is an open-source tool with 9.27K GitHub stars and 5.78K GitHub forks; its source repository is available on GitHub.

According to the StackShare community, Hadoop has broader approval: it is mentioned in 237 company stacks and 127 developer stacks, while Google Cloud Dataflow is listed in 32 company stacks and 8 developer stacks.

Google Cloud Dataflow, as a managed service, has no public GitHub repository.

What is Google Cloud Dataflow?

Google Cloud Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL, batch computation, and continuous computation. Cloud Dataflow frees you from operational tasks like resource management and performance optimization.
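To make the unified model concrete, here is a minimal word-count pipeline written with Apache Beam, the open-source SDK that Dataflow executes. The step names and inline input are illustrative; the same code runs locally on the default DirectRunner or on the Dataflow service depending on the runner passed in the pipeline options.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Pass --runner=DataflowRunner plus a GCP project/region to run on Dataflow;
# with no flags, the pipeline runs locally on the DirectRunner.
options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["the quick brown fox", "the lazy dog"])
        | "Split" >> beam.FlatMap(str.split)          # one element per word
        | "PairWithOne" >> beam.Map(lambda w: (w, 1)) # (word, 1) pairs
        | "CountPerWord" >> beam.CombinePerKey(sum)   # sum counts per word
        | "Print" >> beam.Map(print)
    )
```

Swapping `beam.Create` for a streaming source such as Pub/Sub is what turns the same pipeline into a continuous computation, which is the core of Dataflow's batch/streaming unification.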

What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
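To illustrate the "simple programming models" idea, here is a classic word count written as a Hadoop Streaming job in Python. This is a generic sketch: the file name and the paths in the run comment are placeholders, not part of any real cluster.

```python
#!/usr/bin/env python3
# wordcount.py -- a minimal Hadoop Streaming word count (illustrative sketch).
# Run roughly as:
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#     -input /data/in -output /data/out \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#     -file wordcount.py
import sys
from itertools import groupby

def map_phase():
    # Emit one "word<TAB>1" record per word; Hadoop shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reduce_phase():
    # Reducer input arrives sorted by key, so equal keys are consecutive.
    records = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(records, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    map_phase() if sys.argv[1] == "map" else reduce_phase()
```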
        What are some alternatives to Google Cloud Dataflow and Hadoop?
        Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. (A minimal PySpark sketch follows this list.)
        Kafka
        Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.
        Beam
        A distributed knowledge graph store. Knowledge graphs are suitable for modeling data that is highly interconnected by many types of relationships, like encyclopedic information about the world.
        Apache Beam
        It implements batch and streaming data processing jobs that run on any execution engine. It executes pipelines on multiple execution environments.
        Amazon Kinesis
        Amazon Kinesis can collect and process hundreds of gigabytes of data per second from hundreds of thousands of sources, allowing you to easily write applications that process information in real-time, from sources such as web site click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data.
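Since Apache Spark is the most commonly cited alternative above, here is a minimal PySpark batch sketch of the same word-count pattern. The input path is a placeholder and a local pyspark installation is assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()

# Read a text file (placeholder path); each line lands in a "value" column.
lines = spark.read.text("data/input.txt")

# Split each line into words and count occurrences per word.
words = lines.selectExpr("explode(split(value, ' ')) AS word")
words.groupBy("word").count().show()

spark.stop()
```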
        Decisions about Google Cloud Dataflow and Hadoop
StackShare Editors
Tags: Prometheus, Chef, Consul, Memcached, Hack, Swift, Hadoop, Terraform, Airflow, Apache Spark, Kubernetes, gRPC, HHVM (HipHop Virtual Machine), Presto, Kotlin, Apache Thrift

Cal Henderson has been Slack's CTO since the beginning. Earlier this year, he answered a Quora question with a summary of their current stack.

Apps
• Web: a mix of JavaScript/ES6 and React.
• Desktop: Electron, used to ship the web app as a desktop application.
• Android: a mix of Java and Kotlin.
• iOS: written in a mix of Objective-C and Swift.
Backend
• The core application and the API are written in PHP/Hack and run on HHVM.
• Data is stored in MySQL using Vitess.
• Caching is done with Memcached and MCRouter.
• The search service is built on SolrCloud, with various Java services.
• The messaging system uses WebSockets, with many services in Java and Go.
• Load balancing is done with HAProxy, with Consul for configuration.
• Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
• The voice and video calling service was built in Elixir.
Data warehouse
• Built with open-source tools including Presto, Spark, Airflow, Hadoop, and Kafka. (A sketch of how such a nightly job might be scheduled follows.)
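As an illustration of how a warehouse team might tie those tools together, here is a minimal Airflow DAG that kicks off a nightly Spark job. This is a generic sketch assuming Airflow 2.x; the DAG id and the script name are hypothetical, not Slack's actual pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# A nightly DAG with a single task that submits a Spark job.
with DAG(
    dag_id="nightly_warehouse_load",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit warehouse_load.py",  # placeholder script
    )
```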
        How developers use Google Cloud Dataflow and Hadoop
Pinterest uses Hadoop

The MapReduce workflow starts processing experiment data nightly, when the previous day's data is copied over from Kafka. At that point, all the raw log requests are transformed into meaningful experiment results and in-depth analysis. To populate experiment data for the dashboard, we run around 50 jobs that do all the calculations and data transforms.

Yelp uses Hadoop

In 2009 we open-sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (an issue we've rarely encountered).
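For context, an mrjob job looks like the following minimal word count, in the style of the library's documentation; this is an illustrative sketch, not Yelp's production code.

```python
from mrjob.job import MRJob

class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit (word, 1) for each word in the input line.
        for word in line.split():
            yield word, 1

    def reducer(self, word, counts):
        # Sum the counts shuffled to each word.
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```

The same script runs locally for testing or on a Hadoop cluster or Amazon EMR, depending on the `-r` runner flag mrjob accepts.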

Pinterest uses Hadoop

        The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users, is generated through daily Hadoop jobs...

Robert Brown uses Hadoop

        Importing/Exporting data, interpreting results. Possible integration with SAS

Rohith Nandakumar uses Hadoop

        TBD. Good to have I think. Analytics on loads of data, recommendations?
