Amazon Redshift vs Oracle: What are the differences?
Introduction
Amazon Redshift and Oracle are both widely used data warehousing solutions that offer powerful analytical capabilities. However, there are several key differences between the two that users should consider when choosing the right solution for their needs.
Scalability: One major difference between Amazon Redshift and Oracle is their scalability. Amazon Redshift is highly scalable and can easily accommodate large amounts of data, letting users add or remove nodes as needed to handle increased workloads. Oracle's scalability, by contrast, is limited by the capacity of the hardware it is installed on, which may require users to invest in additional hardware to handle growing data volumes. (A minimal Redshift resize sketch appears after this list.)
Cost: When it comes to cost, Amazon Redshift offers a more cost-effective solution for data warehousing. With Amazon Redshift, users pay only for the resources they actually use, allowing for flexible and potentially lower costs. In contrast, Oracle typically requires users to purchase expensive licenses and hardware, leading to higher upfront costs.
Performance: Another key difference between Amazon Redshift and Oracle is their performance. Amazon Redshift is optimized for fast query performance and can efficiently process large-scale analytics tasks. On the other hand, Oracle may struggle with handling large volumes of data and complex queries, particularly without proper performance tuning and optimization.
Maintenance and Management: Amazon Redshift simplifies the maintenance and management of a data warehouse. It automatically handles software upgrades, backups, and monitoring, reducing the need for manual intervention. In contrast, Oracle requires more manual effort and expertise for routine maintenance tasks, potentially requiring dedicated DBA resources.
Data Movement and Integration: When it comes to data movement and integration, Oracle offers a wider range of options. Oracle provides robust tools for data movement, including Extract, Transform, Load (ETL) capabilities and integration with other Oracle products. Amazon Redshift, while powerful for analytics, may require additional tools or services for complex data integration tasks. (A minimal Redshift COPY sketch appears after the summary below.)
Ecosystem and Vendors: Finally, the vendor ecosystem surrounding Amazon Redshift and Oracle differs significantly. Oracle has a long-established ecosystem with many third-party vendors providing support, tools, and services tailored for Oracle databases. Amazon Redshift, while gaining popularity, has a newer ecosystem with a more limited range of third-party options.
In summary, Amazon Redshift offers better scalability, cost-effectiveness, and ease of management, with optimized performance for large-scale analytics, while Oracle provides a wider range of data movement and integration options, and benefits from a more established vendor ecosystem.
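To make the Redshift scalability point concrete, resizing a cluster is a single API call. A minimal sketch using boto3 (the cluster identifier and node count are hypothetical):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Elastic resize: grow the (hypothetical) cluster to 4 nodes to absorb a
# heavier workload; shrink it back later with the same call.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",
    NumberOfNodes=4,
    Classic=False,  # elastic resize rather than the slower classic resize
)
```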
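On the data movement side, the standard way to bulk-load Redshift is a COPY from Amazon S3, here issued through psycopg2 (host, credentials, table, bucket, and IAM role are all placeholders):

```python
import psycopg2

conn = psycopg2.connect(
    host="analytics-cluster.xxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)

# COPY loads files from S3 in parallel across the cluster's slices.
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY public.events
        FROM 's3://my-bucket/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS CSV;
    """)
```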
We need to perform ETL from several databases into a data warehouse or data lake. We want to
- keep raw and transformed data available to users to draft their own queries efficiently
- give users custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments
We want to use only inexpensive Amazon EC2 instances, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
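A minimal sketch of such a scheduled Lambda in Python, assuming a Postgres-compatible source and psycopg2 bundled with the function; the table, environment variables, and S3 key are placeholders:

```python
import csv
import io
import os

import boto3
import psycopg2  # bundled with the function, e.g. via a Lambda layer

def handler(event, context):
    """Runs on a CloudWatch Events schedule: extract recent rows, stage them in S3."""
    conn = psycopg2.connect(os.environ["SOURCE_DSN"])  # placeholder source database
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, amount, created_at FROM orders "
            "WHERE created_at >= now() - interval '1 day'"
        )
        buf = io.StringIO()
        csv.writer(buf).writerows(cur.fetchall())
    boto3.client("s3").put_object(
        Bucket=os.environ["STAGING_BUCKET"],
        Key="extracts/orders.csv",
        Body=buf.getvalue(),
    )
```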
But if you need to orchestrate ETLs, it makes sense to use Apache Airflow. This requires Python knowledge.
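A minimal Airflow DAG sketch for this kind of job (the DAG id and the extract/load callables are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull from the source databases

def load():
    ...  # write into the warehouse or data lake

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```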
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It allows automatic query and caching policies to enhance query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VMs or on-prem, scaling up and out. It sounds like an ideal fit for your needs.
A cloud data warehouse is the centerpiece of a modern data platform, so the choice of the most suitable solution is fundamental.
Our benchmark covered BigQuery and Snowflake. Both solutions seem to match our goals, but they take very different approaches.
BigQuery is notably the only 100% serverless cloud data warehouse, which requires absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires you to set up (paid) reclustering processes, to manage the performance allocated to each profile, etc. We can also mention Redshift, which we eliminated because that technology requires even more operational work.
BigQuery can therefore be run with almost zero human-resource cost. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not used, paying only for the queries you run. But at higher volumes, slots (with monthly or per-minute commitments) drastically reduce the cost of use. We cut the cost of our nightly batches by a factor of 10 using flex slots.
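A back-of-the-envelope illustration of that effect; the prices and volumes below are purely illustrative, not current GCP rates:

```python
# Purely illustrative numbers -- check the current GCP pricing page.
ON_DEMAND_PER_TB = 5.00      # $ per TB scanned (illustrative)
FLEX_SLOT_PER_HOUR = 0.04    # $ per slot per hour (illustrative)

tb_scanned_per_night = 20
on_demand_cost = tb_scanned_per_night * ON_DEMAND_PER_TB   # $100 per night

slots, hours = 100, 2
flex_cost = slots * hours * FLEX_SLOT_PER_HOUR             # $8 per night

print(on_demand_cost / flex_cost)  # roughly an order of magnitude cheaper
```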
Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud functions, Dataflow, Data Studio, etc.
BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow queries to run over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses. Omni will compensate for a weakness of BigQuery: transferring data in near real time from S3 to BigQuery is not easy today; it is simpler to implement with Snowflake's Snowpipe.
We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity only offered by the BigQuery solution.
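As a sketch of what that looks like, BigQuery ML trains models with plain SQL, here submitted through the google-cloud-bigquery client; the dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses default project and credentials

# Train a logistic regression directly where the data already lives.
client.query("""
    CREATE OR REPLACE MODEL analytics.churn_model
    OPTIONS (model_type='logistic_reg', input_label_cols=['churned']) AS
    SELECT churned, tenure_months, monthly_spend
    FROM analytics.customers
""").result()
```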
We have chosen Tibero over Oracle because we want to offer a PL/SQL-as-a-Service that users can deploy in any cloud from our website, without concerns and at a standard cost. With Oracle Database, developers would have to worry about which features they use and the related cost of each one, whereas Tibero's licensing model is a single price with all features included, so neither we nor the developers using our SQLaaS have to worry. PostgreSQL would be the open-source option, but we need to offer an SQLaaS with encryption and other enterprise features in the background, and the best-value option we found was Tibero for PL/SQL-based applications.
We wanted a JSON datastore that could save the state of our bioinformatics visualizations without destructive normalization. As a leading NoSQL data storage technology, MongoDB has been a perfect fit for our needs. Plus it's open source and has an enterprise SLA scale-out path, with support for hosted solutions like Atlas. Mongo has been an absolute champ. So much so that SQL Server and Oracle have begun shipping JSON column types as a new feature for their databases. And when Fast Healthcare Interoperability Resources (FHIR) announced support for JSON, we basically had our FHIR data lake technology.
In the field of bioinformatics, we regularly work with hierarchical and unstructured document data. Unstructured text data from PDFs, image data from radiographs, phylogenetic trees and cladograms, network graphs, streaming ECG data... none of it fits into a traditional SQL database particularly well. As such, we prefer to use document oriented databases.
MongoDB is probably the oldest component in our stack besides JavaScript, having been in it for over 5 years. At the time, we were looking for a technology that could simply cache our data visualization state (stored as JSON) in a database as-is, without any destructive normalization. MongoDB was the perfect tool, and it has been exceeding expectations ever since.
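A minimal pymongo sketch of that pattern, with a placeholder connection string and document shape:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
states = client["viz"]["states"]

# The nested JSON state goes in exactly as the front end produced it --
# no normalization into relational tables.
states.insert_one({
    "user": "alice",
    "chart": {"type": "cladogram", "zoom": 1.5, "selected": ["nodeA", "nodeB"]},
})

# And it comes back out in the same shape.
doc = states.find_one({"user": "alice"})
```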
Trivia fact: some of the earliest electronic medical records (EMRs) used a document-oriented database called MUMPS as early as the 1960s, prior to the invention of SQL. MUMPS is still in use today in systems like Epic and VistA, and stores upwards of 40% of all medical records at hospitals. So we saw MongoDB as something of a 21st-century version of the MUMPS database.
Pros of Amazon Redshift
- Data Warehousing (41)
- Scalable (27)
- SQL (17)
- Backed by Amazon (14)
- Encryption (5)
- Cheap and reliable (1)
- Isolation (1)
- Best Cloud DW Performance (1)
- Fast columnar storage (1)
Pros of Oracle
- Reliable (44)
- Enterprise (33)
- High Availability (15)
- Hard to maintain (5)
- Expensive (5)
- Maintainable (4)
- Hard to use (4)
- High complexity (3)
Cons of Amazon Redshift
Cons of Oracle
- Expensive (14)