Amazon Redshift vs Hadoop: What are the differences?
What is Amazon Redshift? Fast, fully managed, petabyte-scale data warehouse service. Redshift makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.
What is Hadoop? Open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Amazon Redshift belongs to the "Big Data as a Service" category of the tech stack, while Hadoop is primarily classified under "Databases".
"Data Warehousing" is the primary reason why developers consider Amazon Redshift over the competitors, whereas "Great ecosystem" was stated as the key factor in picking Hadoop.
Hadoop is an open-source tool with 9.27K GitHub stars and 5.78K GitHub forks. Hadoop's open-source repository is available on GitHub at https://github.com/apache/hadoop.
Airbnb, Uber Technologies, and Spotify are some of the popular companies that use Hadoop, whereas Amazon Redshift is used by Lyft, Coursera, and 9GAG. Hadoop is mentioned in 237 company stacks and 127 developer stacks, while Amazon Redshift is listed in 270 company stacks and 68 developer stacks.
We need to perform ETL from several databases into a data warehouse or data lake. We want to:
- keep raw and transformed data available to users so they can draft their own queries efficiently
- give users custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances on a medium-sized data set (16 GB to 32 GB) feeding into Tableau Server or Power BI for reporting and data analysis purposes.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language along with the respective database client.
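As a minimal sketch of that approach, assuming a Postgres-compatible source, the psycopg2 client bundled in the Lambda deployment package, and placeholder environment variable and table names:

```python
import os

import psycopg2  # assumption: bundled in the deployment package or a Lambda layer


def handler(event, context):
    """Invoked on the CloudWatch Events (EventBridge) schedule."""
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],      # placeholder variables, set in the
        dbname=os.environ["DB_NAME"],    # Lambda function's configuration
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            # Hypothetical transform: copy yesterday's rows into a reporting table.
            cur.execute(
                """
                INSERT INTO reporting.daily_orders
                SELECT order_id, customer_id, total
                FROM raw.orders
                WHERE created_at >= CURRENT_DATE - INTERVAL '1 day'
                """
            )
        conn.commit()
    finally:
        conn.close()
    return {"status": "ok"}
```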
But if you orchestrate multiple ETLs, it makes sense to use Apache Airflow. This requires Python knowledge.
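For a rough idea of what that looks like, here is a sketch of a daily ETL DAG, assuming Airflow 2.x; the DAG name, task names, and callables are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from the source databases.
    print("extracting...")


def transform():
    # Placeholder: clean and reshape the extracted data.
    print("transforming...")


def load():
    # Placeholder: write results to the warehouse or data lake.
    print("loading...")


with DAG(
    dag_id="example_etl",  # hypothetical DAG name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3
```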
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender among the open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
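For context, loading staged data into Redshift is typically a COPY from S3, which you can issue through any Postgres client. A minimal sketch using psycopg2, where the cluster endpoint, bucket, table, and IAM role ARN are all placeholders:

```python
import os

import psycopg2  # Redshift speaks the Postgres wire protocol

# Placeholder connection details and object names.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)
with conn, conn.cursor() as cur:
    # Bulk-load staged Parquet files from S3 into a Redshift table.
    cur.execute(
        """
        COPY analytics.events
        FROM 's3://my-bucket/staged/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS PARQUET
        """
    )
```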
You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-premises, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It allows auto-query and caching policies to improve query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VM or on-premises, scaling up and out. It sounds like an ideal solution for your needs.
Pros of Amazon Redshift
- Data Warehousing (41)
- Scalable (27)
- SQL (17)
- Backed by Amazon (14)
- Encryption (5)
- Cheap and reliable (1)
- Isolation (1)
- Best Cloud DW Performance (1)
- Fast columnar storage (1)
Pros of Hadoop
- Great ecosystem (39)
- One stack to rule them all (11)
- Great load balancer (4)
- Amazon AWS (1)
- Java syntax (1)