
Amazon Redshift vs Oracle: What are the differences?

Introduction

Amazon Redshift and Oracle are both widely used data warehousing solutions that offer powerful analytical capabilities. However, there are several key differences between the two that users should consider when choosing the right solution for their needs.

  1. Scalability: One major difference between Amazon Redshift and Oracle is scalability. Amazon Redshift is highly scalable and can easily accommodate large amounts of data, letting users add or remove nodes as needed to handle increased workloads. A traditional on-premises Oracle deployment, on the other hand, scales only as far as the hardware it is installed on, so growing data volumes may require investing in additional hardware.

  2. Cost: When it comes to cost, Amazon Redshift offers a more cost-effective solution for data warehousing. With Amazon Redshift, users pay only for the resources they actually use, allowing for flexible and potentially lower costs. In contrast, Oracle typically requires users to purchase expensive licenses and hardware, leading to higher upfront costs.

  3. Performance: Another key difference between Amazon Redshift and Oracle is performance. Amazon Redshift uses columnar storage and massively parallel processing, so it is optimized for fast query performance on large-scale analytics workloads. Oracle, on the other hand, may struggle with large data volumes and complex analytical queries, particularly without careful performance tuning and optimization.

  4. Maintenance and Management: Amazon Redshift simplifies the maintenance and management of a data warehouse. It automatically handles software upgrades, backups, and monitoring, reducing the need for manual intervention. In contrast, Oracle requires more manual effort and expertise for routine maintenance tasks, potentially requiring dedicated DBA resources.

  5. Data Movement and Integration: When it comes to data movement and integration, Oracle offers a wider range of options. Oracle provides robust tools for data movement, including Extract, Transform, Load (ETL) capabilities, and integration with other Oracle products. Amazon Redshift, while powerful for analytics, may require additional tools or services for complex data integration tasks.

  6. Ecosystem and Vendors: Finally, the vendor ecosystem surrounding Amazon Redshift and Oracle differs significantly. Oracle has a long-established ecosystem with many third-party vendors providing support, tools, and services tailored for Oracle databases. Amazon Redshift, while gaining popularity, has a newer ecosystem with a more limited range of third-party options.

In summary, Amazon Redshift offers better scalability, cost-effectiveness, and ease of management, with optimized performance for large-scale analytics, while Oracle provides a wider range of data movement and integration options, and benefits from a more established vendor ecosystem.
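
To make the scalability difference above concrete, here is a minimal sketch of an elastic resize of a Redshift cluster using the boto3 Python SDK. The cluster identifier, node type, and target node count are hypothetical placeholders, and this is only one way such a resize could be scripted.

```python
# Sketch: elastically resize an existing Amazon Redshift cluster with boto3.
# The cluster identifier, node type, and node count are hypothetical.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Request an elastic resize to 4 nodes (Classic=False keeps it elastic).
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster name
    NodeType="ra3.xlplus",
    NumberOfNodes=4,
    Classic=False,
)

# Wait until the cluster is available again after the resize.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="analytics-cluster")
print("Resize complete")
```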

Advice on Amazon Redshift and Oracle

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • give users custom permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis.

Replies (3)
John Nguyen
Recommends Airflow and AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
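
A rough sketch of what such a scheduled Lambda handler could look like in Python is shown below; the environment variables, table names, and the psycopg2 dependency are illustrative assumptions, not part of the original recommendation.

```python
# Sketch of a scheduled ETL step as an AWS Lambda handler (Python runtime),
# triggered by a CloudWatch Events / EventBridge schedule rule.
# Connection details and the query are hypothetical placeholders.
import os
import psycopg2  # packaged with the function, e.g. via a Lambda layer

def handler(event, context):
    # Pull source rows and push them into a warehouse staging table.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO staging.orders_daily "
            "SELECT * FROM public.orders WHERE order_date = CURRENT_DATE"
        )
    conn.close()
    return {"status": "ok"}
```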

But if you orchestrate ETLs then it makes sense to use Apache Airflow. This requires Python knowledge.
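
For reference, a minimal Airflow DAG for a nightly ETL might look like the sketch below; the DAG id, schedule, and the extract/load callables are hypothetical placeholders rather than a prescribed setup.

```python
# Minimal sketch of an Airflow DAG that runs a nightly ETL.
# Task names and the extract/load functions are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. read new rows from the source database
    pass

def load():
    # e.g. write the transformed rows into the warehouse
    pass

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```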

Recommends Airflow

Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative when it comes to open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.

Recommends

You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on premises, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It allows auto-query and caching policies to enhance query speed and user experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VM or on premises, scaling up and out. It sounds like the ideal solution for your needs.

Decisions about Amazon Redshift and Oracle
Julien Lafont

The cloud data warehouse is the centerpiece of a modern data platform, so choosing the most suitable solution is fundamental.

Our benchmark compared BigQuery and Snowflake. Both solutions seem to match our goals, but they take very different approaches.

BigQuery is notably the only 100% serverless cloud data warehouse, which requires absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires you to set up (paid) re-clustering processes, to manage the compute allocated to each profile, and so on. We can also mention Redshift, which we eliminated because that technology requires even more ops work.

BigQuery can therefore be set up with almost zero human-resource cost. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not used, and you pay only for the queries you run. As usage grows, slots (with a monthly or per-minute commitment) drastically reduce the cost of use; we cut the cost of our nightly batches by a factor of 10 using flex slots.

Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud Functions, Dataflow, Data Studio, etc.

BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow running queries over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses, and it will compensate for a current weakness of BigQuery: transferring data in near real time from S3 to BigQuery is not easy today, and is simpler to implement with Snowflake's Snowpipe solution.

We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity only BigQuery offers.
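
As a hedged illustration of those built-in ML features: BigQuery ML models are trained with SQL, which can be submitted through the official Python client. The project, dataset, table, and column names below are hypothetical.

```python
# Sketch: train a logistic regression model with BigQuery ML from Python.
# The dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT churned, tenure_months, monthly_spend
FROM `my_dataset.customers`
"""

client.query(sql).result()  # blocks until training finishes
print("Model trained")
```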

Daniel Moya
Data Engineer at Dimensigon

We have chosen Tibero over Oracle because we want to offer a PL/SQL-as-a-Service that users can deploy in any cloud, at a standard cost, from our website. With Oracle Database, developers would have to worry about which features they use and the licensing cost of each one; Tibero's licensing model is a single price with all features included, so neither we nor the developers using our SQLaaS have to worry. PostgreSQL would be the open-source option, but we need to offer an SQLaaS with encryption and other enterprise features behind the scenes, and the best-value option we found for PL/SQL-based applications was Tibero.


We wanted a JSON datastore that could save the state of our bioinformatics visualizations without destructive normalization. As a leading NoSQL data storage technology, MongoDB has been a perfect fit for our needs. Plus it's open source, and it has an enterprise SLA scale-out path, with support for hosted solutions like Atlas. Mongo has been an absolute champ. So much so that SQL Server and Oracle have begun shipping JSON column types as a new feature for their databases. And when Fast Healthcare Interoperability Resources (FHIR) announced support for JSON, we basically had our FHIR data lake technology.


In the field of bioinformatics, we regularly work with hierarchical and unstructured document data. Unstructured text from PDFs, image data from radiographs, phylogenetic trees and cladograms, network graphs, streaming ECG data... none of it fits into a traditional SQL database particularly well. As such, we prefer to use document-oriented databases.

MongoDB is probably the oldest component in our stack besides JavaScript, having been in it for over 5 years. At the time, we were looking for a technology that could simply cache our data visualization state (stored as JSON) in a database as-is, without any destructive normalization. MongoDB was the perfect tool, and it has been exceeding expectations ever since.
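
A small sketch of that pattern with the pymongo driver is shown below; the connection string, collection name, and document fields are hypothetical illustrations of the approach, not the team's actual schema.

```python
# Sketch: persist visualization state as-is in MongoDB with pymongo.
# The connection string, collection, and document fields are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["bioinformatics"]["viz_state"]

# Store the JSON state exactly as produced by the front end,
# with no destructive normalization.
state = {
    "user_id": "u123",
    "chart": "phylogenetic_tree",
    "settings": {"zoom": 2.5, "collapsed_clades": ["Clade_A"]},
}
doc_id = collection.insert_one(state).inserted_id

# Retrieve it later in the same shape it was saved.
restored = collection.find_one({"_id": doc_id})
print(restored["settings"]["zoom"])
```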

Trivia fact: some of the earliest electronic medical records (EMRs) used a document-oriented database called MUMPS as early as the 1960s, prior to the invention of SQL. MUMPS is still in use today in systems like Epic and VistA, and stores upwards of 40% of all medical records at hospitals. So we saw MongoDB as something of a 21st-century version of the MUMPS database.

Pros of Amazon Redshift (upvotes in parentheses)

  • Data Warehousing (41)
  • Scalable (27)
  • SQL (17)
  • Backed by Amazon (14)
  • Encryption (5)
  • Cheap and reliable (1)
  • Isolation (1)
  • Best Cloud DW Performance (1)
  • Fast columnar storage (1)

Pros of Oracle (upvotes in parentheses)

  • Reliable (44)
  • Enterprise (33)
  • High Availability (15)
  • Hard to maintain (5)
  • Expensive (5)
  • Maintainable (4)
  • Hard to use (4)
  • High complexity (3)


Cons of Amazon Redshift

  • None listed yet

Cons of Oracle

  • Expensive (14)


What is Amazon Redshift?

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

What is Oracle?

Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database.



What are some alternatives to Amazon Redshift and Oracle?

Google BigQuery
Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease: bulk load your data using Google Cloud Storage or stream it in. Easy access: access BigQuery using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP, or Python.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Amazon DynamoDB
With Amazon DynamoDB, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.

Amazon Redshift Spectrum
With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 “data lake” -- without having to load or transform any data.

Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.