I've been going over the documentation and couldn't find answers to a couple of questions:
Apache Hive is built on top of Hadoop, so if I wanted to scale it I could scale either horizontally or vertically. But if I want to scale OpenRefine to handle more data, how can that be achieved? The only thing I could find was to allocate more memory, say 2 or 4 GB, but with that approach we would eventually run out of memory to allot. Thoughts on this?
Secondly, Hadoop has MapReduce, meaning a task is split across many mappers running in parallel, which in turn increases processing speed. Is there a similar mechanism in OpenRefine, or does it only have a single processing unit (since it runs locally)? Thoughts? To be clear, the sketch below is roughly what I mean by parallel mappers.
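This is just a toy Python illustration of the map/reduce pattern (parallel workers, then a merge step), not actual Hadoop code; my question is whether OpenRefine parallelises work in a comparable way:

```python
from multiprocessing import Pool
from functools import reduce

def mapper(chunk):
    # "Map" step: each worker counts words in its own chunk independently.
    counts = {}
    for word in chunk.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def reducer(total, partial):
    # "Reduce" step: merge the partial counts produced by every worker.
    for word, n in partial.items():
        total[word] = total.get(word, 0) + n
    return total

if __name__ == "__main__":
    chunks = ["to be or not to be", "be quick or be slow", "or not"]
    with Pool(processes=3) as pool:           # several "mappers" running in parallel
        partials = pool.map(mapper, chunks)
    print(reduce(reducer, partials, {}))      # a single reduce over the partial results
```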
From my point of view, OpenRefine and Apache Hive serve completely different purposes. OpenRefine is intended for interactive cleaning of messy data on a local machine. You could work with its libraries to use some of OpenRefine's features as part of your data pipeline (there are pointers in the FAQ), but OpenRefine is generally intended for single-user, local operation.
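As a very rough sketch of what "driving OpenRefine from a pipeline" can look like: a local instance exposes HTTP command endpoints that a script can call. The endpoint and form-field names below are my recollection and may differ between versions, so treat this as an illustration and check the API documentation for your release:

```python
import requests

BASE = "http://127.0.0.1:3333"  # assumes a locally running OpenRefine instance

# Recent OpenRefine versions require a CSRF token for mutating commands.
token = requests.get(f"{BASE}/command/core/get-csrf-token").json()["token"]

# Create a project from a local CSV file (parameter names may vary by version).
with open("messy.csv", "rb") as f:
    resp = requests.post(
        f"{BASE}/command/core/create-project-from-upload",
        params={"csrf_token": token},
        files={"project-file": f},
        data={"project-name": "cleanup-demo"},
    )

print(resp.status_code, resp.url)  # the redirect URL normally carries the new project id
```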
I can't recommend a particular alternative without a better understanding of your use case, but if you are looking for an interactive tool for working with big data at scale, take a look at notebook environments like Jupyter, Databricks, or Deepnote. If you are building a data processing pipeline, consider Apache Spark as well.
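For example, the kind of cleanup OpenRefine is typically used for (trimming whitespace, normalising case, deduplicating) can be expressed in a few lines of PySpark and then scaled across a cluster. This is a minimal sketch; the input file and the "name" column are placeholders for your own data:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim, lower

# Assumes Spark is installed and "input.csv" is your own data file.
spark = SparkSession.builder.appName("cleanup-sketch").getOrCreate()

df = (
    spark.read.option("header", True).csv("input.csv")
    .withColumn("name", trim(lower(col("name"))))   # normalise a text column
    .dropDuplicates(["name"])                       # drop duplicate rows by key
)

df.write.mode("overwrite").parquet("cleaned/")      # runs locally or on a cluster
spark.stop()
```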
Edit: Fixed references from Hadoop to Hive, which is actually closer to Spark.
I don't think OpenRefine and Apache Hive are comparable for tasks like this. If you need to clean up and process huge amounts of data (big data), I would recommend using ClickHouse instead and doing the processing with SQL queries rather than manually.
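To illustrate the SQL-driven approach, here is a rough sketch using the clickhouse-driver Python package against a local ClickHouse server; the raw_events table and its columns are made up for the example:

```python
from clickhouse_driver import Client

# Assumes a ClickHouse server on localhost and a hypothetical raw_events table.
client = Client(host="localhost")

# Typical cleanup steps expressed as SQL instead of manual, row-by-row edits.
client.execute("""
    CREATE TABLE IF NOT EXISTS clean_events
    ENGINE = MergeTree ORDER BY tuple() AS
    SELECT
        event_id,
        lower(trimBoth(user_name)) AS user_name,   -- normalise text
        toDateOrNull(event_date)   AS event_date   -- coerce bad dates to NULL
    FROM raw_events
    WHERE event_id IS NOT NULL
""")

print(client.execute("SELECT count() FROM clean_events"))
```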
OpenRefine is a great tool, but with significant limitations: it doesn't handle big datasets, it doesn't scale, and it doesn't handle JSON documents with nested sub-documents.