
Amazon Redshift vs Hadoop: What are the differences?

What is Amazon Redshift? Fast, fully managed, petabyte-scale data warehouse service. Redshift makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.
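
Because Redshift speaks the PostgreSQL wire protocol, a plain Postgres driver is enough to run warehouse queries against it from code. The snippet below is a minimal sketch only: the cluster endpoint, credentials, and the "events" table are hypothetical placeholders, not anything referenced on this page.

```python
# Minimal sketch: querying Amazon Redshift with a standard PostgreSQL driver.
# The endpoint, credentials, and the "events" table are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,                 # Redshift's default port
    dbname="analytics",
    user="awsuser",
    password="...",            # in practice, use IAM credentials or Secrets Manager
)

with conn, conn.cursor() as cur:
    # A typical warehouse-style aggregation over a large fact table.
    cur.execute(
        """
        SELECT event_type, COUNT(*) AS events
        FROM events
        WHERE event_date >= CURRENT_DATE - 7
        GROUP BY event_type
        ORDER BY events DESC;
        """
    )
    for event_type, count in cur.fetchall():
        print(event_type, count)
```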

What is Hadoop? Open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
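
"Simple programming models" in practice usually means MapReduce. As a rough illustration of that model, here is the classic word-count job written as two Hadoop Streaming scripts in Python; the scripts are a generic sketch, and the submission command afterwards uses placeholder paths.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper: emits "word<TAB>1" for every word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word.lower()}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming reducer: input arrives sorted by key,
# so counts for each word can be summed while streaming through it.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Such a job would typically be submitted with the Hadoop Streaming jar, along the lines of `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /logs -output /wordcounts` (paths are placeholders); Hadoop splits the input, schedules map and reduce tasks across the cluster, and handles the sort/shuffle between the two phases.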

Amazon Redshift belongs to the "Big Data as a Service" category of the tech stack, while Hadoop is primarily classified under "Databases".

"Data Warehousing" is the primary reason why developers consider Amazon Redshift over the competitors, whereas "Great ecosystem" was stated as the key factor in picking Hadoop.

Hadoop is an open source tool with 9.27K GitHub stars and 5.78K GitHub forks; its repository is hosted on GitHub. Amazon Redshift, as a fully managed AWS service, has no public GitHub repository.

Airbnb, Uber Technologies, and Spotify are some of the popular companies that use Hadoop, whereas Amazon Redshift is used by Lyft, Coursera, and 9GAG. Hadoop has broader approval overall, being mentioned in 237 company stacks and 127 developer stacks, compared to Amazon Redshift, which is listed in 270 company stacks and 68 developer stacks.

What are some alternatives to Amazon Redshift and Hadoop?

Google BigQuery
Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease: bulk load your data using Google Cloud Storage or stream it in. Easy access: use BigQuery through a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP, or Python.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run (a minimal boto3 sketch follows this list).

Amazon DynamoDB
With it, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.

Amazon Redshift Spectrum
With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 "data lake", without having to load or transform any data.

Amazon EMR
It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.
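
As a minimal illustration of the "serverless SQL over S3" model that Athena (and, in a similar spirit, Redshift Spectrum) offers, the sketch below submits a query through boto3 and polls for the result. The database, table, and S3 output location are hypothetical placeholders, not values taken from this page.

```python
# Minimal sketch: running a standard-SQL query over data in S3 with Amazon Athena.
# Database, table, and the S3 result location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Athena runs asynchronously, so poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```
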
Decisions about Amazon Redshift and Hadoop

Ankit Sobti, CTO at Postman Inc · 11 upvotes · 104.8K views

Looker, Stitch, Amazon Redshift, dbt

We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics with a lean team.

For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data, rather than managing it.

StackShare Editors

Prometheus, Chef, Consul, Memcached, Hack, Swift, Hadoop, Terraform, Airflow, Apache Spark, Kubernetes, gRPC, HHVM (HipHop Virtual Machine), Presto, Kotlin, Apache Thrift

Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

Apps
• Web: a mix of JavaScript/ES6 and React.
• Desktop: the web client is shipped as a desktop application using Electron.
• Android: a mix of Java and Kotlin.
• iOS: written in a mix of Objective-C and Swift.

Backend
• The core application and the API are written in PHP/Hack and run on HHVM.
• The data is stored in MySQL using Vitess.
• Caching is done using Memcached and MCRouter.
• The search service is built on SolrCloud, with various Java services.
• The messaging system uses WebSockets, with many services in Java and Go.
• Load balancing is done using HAProxy, with Consul for configuration.
• Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
• The voice and video calling service was built in Elixir.

Data warehouse
• Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka (a rough PySpark sketch of this kind of batch job follows below).
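
The summary above includes none of Slack's actual code, but as a rough sketch of the kind of batch job a Presto/Spark/Airflow/Hadoop warehouse runs, here is a small PySpark aggregation over event logs stored in HDFS. The path, column names, and job name are invented for illustration.

```python
# Rough sketch of a warehouse-style batch job in PySpark.
# The HDFS paths and column names are hypothetical; a real pipeline would be
# scheduled by something like Airflow and queried afterwards via Presto.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-message-stats").getOrCreate()

# Read a day's worth of event logs from the Hadoop filesystem.
events = spark.read.parquet("hdfs:///warehouse/events/date=2019-06-01")

daily_stats = (
    events.filter(F.col("event_type") == "message_sent")
    .groupBy("team_id")
    .agg(
        F.count("*").alias("messages"),
        F.countDistinct("user_id").alias("active_users"),
    )
)

# Write the aggregate back to the warehouse for downstream consumers.
daily_stats.write.mode("overwrite").parquet(
    "hdfs:///warehouse/daily_message_stats/date=2019-06-01"
)

spark.stop()
```
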
How developers use Amazon Redshift and Hadoop

Pinterest uses Hadoop

The MapReduce workflow starts to process experiment data nightly when data of the previous day is copied over from Kafka. At this time, all the raw log requests are transformed into meaningful experiment results and in-depth analysis. To populate experiment data for the dashboard, we have around 50 jobs running to do all the calculations and transforms of data.

Yelp uses Hadoop

In 2009 we open sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (an issue we've rarely encountered).
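
mrjob's appeal is that an entire MapReduce job fits in one small Python class and can be run locally, on a Hadoop cluster, or on EMR without code changes. A minimal, hypothetical job in mrjob's style:

```python
# Minimal mrjob example: count words across a set of input files.
# Runs locally by default; the same class can be pointed at Hadoop or EMR.
from mrjob.job import MRJob


class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit (word, 1) for every word in the input line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the counts emitted for each word.
        yield word, sum(counts)


if __name__ == "__main__":
    MRWordCount.run()
```

Locally this runs with something like `python mr_word_count.py input.txt`; adding `-r hadoop` or `-r emr` points the same class at a Hadoop cluster or EMR. The file names here are placeholders.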

Pinterest uses Hadoop

The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users is generated through daily Hadoop jobs...

Olo uses Amazon Redshift

Aggressive archiving of historical data to keep the production database as small as possible. Using our in-house, soon-to-be-open-sourced ETL library, SharpShifter.

Robert Brown uses Hadoop

Importing/exporting data and interpreting results. Possible integration with SAS.

Rohith Nandakumar uses Hadoop

TBD. Good to have I think. Analytics on loads of data, recommendations?

Christian Moeller uses Amazon Redshift

Connected to BI (Pentaho)

Kovid Rathee uses Amazon Redshift

OLAP and BI
