Apache Flink vs Dremio: What are the differences?
Introduction
Apache Flink and Dremio are both powerful tools in the data processing and analytics space. Below are the key differences between Apache Flink and Dremio.
Architecture: Apache Flink is a stream processing framework that enables high-throughput, low-latency data analytics, while Dremio focuses on data virtualization and querying data across various sources in real time. Flink is designed for real-time streaming and batch processing, while Dremio is more focused on providing a unified view of data from multiple sources.
Use Cases: Apache Flink is commonly used for real-time data processing, streaming analytics, and complex event processing. On the other hand, Dremio is ideal for self-service data exploration, data virtualization, accelerating queries on data lakes, cloud data warehouses, and other data sources.
Programming Language Support: Apache Flink primarily supports Java and Scala for writing data processing applications (a Python API, PyFlink, is also available), while Dremio exposes a SQL engine for querying data across various sources. Dremio also provides REST APIs for programmatic interaction (a Python sketch of the REST path follows after this list).
Data Storage and Management: Apache Flink does not focus on data storage and management but rather on data processing. In contrast, Dremio provides a data reflection engine that optimizes and accelerates queries by creating reflection caches and materialized views for data stored in various sources.
Community and Ecosystem: Apache Flink has a large and active open-source community with a wide range of connectors and integrations with other tools such as Apache Kafka, Apache Hadoop, and more. Dremio also has a growing community but focuses more on its proprietary data virtualization platform.
Deployment: Apache Flink can be deployed in standalone mode, on YARN, Mesos, or Kubernetes, and can also run on cloud platforms like AWS and Azure. Dremio typically runs as a virtualization engine on-premises or in cloud environments like AWS, Azure, or Google Cloud Platform.
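To make the programming-model difference concrete, here is a rough Python sketch of querying Dremio through its SQL REST API. The endpoint paths follow Dremio's documented REST API v3, but verify them against your Dremio version; the host, credentials, and queried table are placeholders.

```python
# Minimal sketch: submit a SQL query to Dremio's REST API and poll for results.
# Host, credentials, and the queried table are placeholders, not prescriptive.
import time

import requests

BASE = "http://localhost:9047"

# Log in to obtain a token (Dremio's v2 login endpoint).
token = requests.post(
    f"{BASE}/apiv2/login",
    json={"userName": "admin", "password": "secret"},
).json()["token"]
headers = {"Authorization": f"_dremio{token}"}

# Submit the query; Dremio returns a job id to poll.
job_id = requests.post(
    f"{BASE}/api/v3/sql",
    headers=headers,
    json={"sql": "SELECT * FROM sales.orders LIMIT 10"},
).json()["id"]

# Wait for the job to finish, then fetch the rows.
while requests.get(f"{BASE}/api/v3/job/{job_id}", headers=headers).json()[
    "jobState"
] not in ("COMPLETED", "FAILED", "CANCELED"):
    time.sleep(1)

rows = requests.get(f"{BASE}/api/v3/job/{job_id}/results", headers=headers).json()["rows"]
print(rows)
```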
Summary
In summary, Apache Flink is a stream processing framework focused on real-time analytics, while Dremio is a data virtualization platform that provides a unified view of data from various sources.
We need to perform ETL from several databases into a data warehouse or data lake. We want to
- keep raw and transformed data available to users to draft their own queries efficiently
- give users custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments
We want to use only inexpensive Amazon EC2 instances on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language along with the respective database client.
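For illustration, a minimal sketch of such a scheduled ETL Lambda in Python, assuming a Postgres source reachable via psycopg2 (packaged as a Lambda layer) and a target S3 bucket. The table, environment variable names, and object key are placeholders, not a prescribed layout.

```python
# Minimal sketch of a scheduled ETL Lambda: pull rows from a source database
# and land them in S3 as CSV. Names and env vars are illustrative.
import csv
import io
import os

import boto3
import psycopg2  # assumed to be provided via a Lambda layer; any DB client works similarly


def handler(event, context):
    conn = psycopg2.connect(os.environ["SOURCE_DSN"])
    buf = io.StringIO()
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, amount, created_at FROM orders")
        writer = csv.writer(buf)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur.fetchall())

    boto3.client("s3").put_object(
        Bucket=os.environ["TARGET_BUCKET"],
        Key="raw/orders.csv",
        Body=buf.getvalue(),
    )
    return {"status": "ok"}
```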
But if you need to orchestrate ETLs, it makes sense to use Apache Airflow. This requires Python knowledge.
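A minimal Airflow DAG sketch (Airflow 2.x style) to show the shape of such an orchestration; the DAG id, schedule, and the extract_orders callable are placeholders.

```python
# Minimal sketch of an Airflow DAG for a nightly extract-and-load step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Pull from the source database and write to the lake/warehouse here.
    pass


with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_orders", python_callable=extract_orders)
```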
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It allows auto-query and caching policies to enhance query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VMs or on-prem, scaling up and out. Sounds like the ideal solution to your needs.
We have a Kafka topic containing events of type A and type B. We need to perform an inner join on both types of events using a common field (the primary key), and the joined events are to be inserted into Elasticsearch.
In the usual case, type A and type B events with the same key are observed within about 15 minutes of each other. But in some cases they may be as far apart as 6 hours, and sometimes an event of one of the types never arrives.
In all cases, we should be able to find joined events immediately after they are joined, and unjoined events within 15 minutes.
The first solution that came to me is to use upsert to update ElasticSearch:
- Use the primary-key as ES document id
- Upsert the records to ES as soon as you receive them. As you are using upsert, the 2nd record of the same primary-key will not overwrite the 1st one, but will be merged with it.
Con: the load on ES will be higher due to upserts (a sketch of this approach follows below).
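A minimal sketch of that upsert approach with the official Elasticsearch Python client; the index name and event handling are illustrative, not prescriptive.

```python
# Minimal sketch: upsert each Kafka record into Elasticsearch keyed by the
# shared primary key, so whichever of type A / type B arrives second
# completes the join. Index and field names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")


def upsert_event(primary_key: str, event: dict) -> None:
    # doc_as_upsert merges `event` into the existing document, if any
    # (client 8.x style; on 7.x pass body={"doc": event, "doc_as_upsert": True}).
    es.update(
        index="joined-events",
        id=primary_key,
        doc=event,
        doc_as_upsert=True,
    )
```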
To use Flink:
- Create a KeyedStream (keyBy) on the primary key
- In the ProcessFunction, save the first record in a State. At the same time, create a Timer for 15 minutes in the future
- When the 2nd record comes, read the 1st record from the State, merge the two, send out the result, and clear the State and the Timer if it has not fired yet
- When the Timer fires, read the 1st record from the State and send out as the output record.
- Have a 2nd Timer of 6 hours (or more) to clean up the State if you are not using Windowing
Pro: this makes sense if you already have Flink ingesting this stream; otherwise, I would just go with the 1st solution. (A rough PyFlink sketch of the timer pattern follows below.)
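A rough PyFlink sketch of this timer pattern, assuming the events arrive as Python dicts and processing-time timers are acceptable. The state name, the dict merge, and the omitted 6-hour cleanup timer are illustrative simplifications.

```python
# Sketch of the timer-based join as a PyFlink KeyedProcessFunction.
from pyflink.common.typeinfo import Types
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor


class JoinAB(KeyedProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        # Buffer for the first record seen for this key.
        self.pending = runtime_context.get_state(
            ValueStateDescriptor("pending", Types.PICKLED_BYTE_ARRAY()))

    def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
        first = self.pending.value()
        if first is None:
            # First record for this key: buffer it and arm a 15-minute timer.
            self.pending.update(value)
            ctx.timer_service().register_processing_time_timer(
                ctx.timer_service().current_processing_time() + 15 * 60 * 1000)
        else:
            # Second record: emit the merged pair and clear the buffered state.
            # The pending timer is simply allowed to fire; on_timer then finds
            # no state and emits nothing.
            self.pending.clear()
            yield {**first, **value}

    def on_timer(self, timestamp, ctx: 'KeyedProcessFunction.OnTimerContext'):
        # 15 minutes passed without a partner: emit the record unjoined.
        # State is kept so a late partner (up to the cleanup horizon) can still join;
        # the 6-hour cleanup timer is omitted here for brevity.
        first = self.pending.value()
        if first is not None:
            yield first
```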
Please refer to the "Structured Streaming" feature of Spark, specifically "Stream-Stream Joins" at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins. In short, you need to define watermark delays on both inputs and define a time constraint across the two inputs.
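A PySpark sketch of what that looks like, assuming both event types arrive on the same Kafka topic as JSON with a "type" and "key" field, and that the Kafka ingest timestamp serves as event time; topic, field names, and the 15-minute/6-hour bounds are illustrative.

```python
# Sketch of a Spark Structured Streaming stream-stream inner join with
# watermarks on both inputs and a time constraint on the join condition.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, expr, get_json_object

spark = SparkSession.builder.appName("ab-join").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()
          .select(
              get_json_object(col("value").cast("string"), "$.type").alias("type"),
              get_json_object(col("value").cast("string"), "$.key").alias("key"),
              col("timestamp"),
              col("value").cast("string").alias("payload")))

# Split the single topic into the two event streams and watermark both.
a = (events.filter("type = 'A'")
     .selectExpr("key AS a_key", "timestamp AS a_time", "payload AS a_payload")
     .withWatermark("a_time", "15 minutes"))
b = (events.filter("type = 'B'")
     .selectExpr("key AS b_key", "timestamp AS b_time", "payload AS b_payload")
     .withWatermark("b_time", "15 minutes"))

# Inner join with a time constraint so Spark can bound the state it keeps.
joined = a.join(
    b,
    expr("a_key = b_key AND "
         "b_time BETWEEN a_time - INTERVAL 6 HOURS AND a_time + INTERVAL 6 HOURS"),
    "inner")

# `joined` would then be written out, e.g. via writeStream/foreachBatch to Elasticsearch.
```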
I am trying to build a data lake by pulling data from multiple data sources (custom-built tools, Excel files, CSV files, etc.) and then use the data lake to generate dashboards.
My question is which is the best tool to do the following:
- Create pipelines to ingest the data from multiple sources into the data lake
- Aggregate and filter data available in the data lake.
- Create new reports by combining different data elements from the data lake.
I need to use only open-source tools for this activity.
I appreciate your valuable inputs and suggestions. Thanks in advance.
Hi Karunakaran. I obviously have an interest here, as I work for the company, but the problem you are describing is one that Zetaris can solve. Talend is a good ETL product, and Dremio is a good data virtualization product, but the problem you are describing best fits a tool that can combine the five styles of data integration (bulk/batch data movement, data replication/data synchronization, message-oriented movement of data, data virtualization, and stream data integration). I may be wrong, but Zetaris is, to the best of my knowledge, the only product in the world that can do this. Zetaris is not a dashboarding tool - you would need to combine us with Tableau or Qlik or Power BI (or whatever) - but Zetaris can consolidate data from any source and any location (structured, unstructured, on-prem or in the cloud) in real time to allow clients a consolidated view of whatever they want whenever they want it. Please take a look at www.zetaris.com for more information. I don't want to do a "hard sell" here, so I'll say no more! Warmest regards, Rod Beecham.
Pros of Dremio
- Nice GUI to enable more people to work with data
- Connect NoSQL databases with RDBMS
- Easier to deploy
- Free
Pros of Apache Flink
- Unified batch and stream processing
- Easy-to-use streaming APIs
- Out-of-the-box connectors to Kinesis, S3, HDFS
- Open source
- Low latency
Cons of Dremio
- Works only on Iceberg structured data