
Alternatives to OpenRefine

Trifacta, R Language, Python, Pandas, and Talend are the most popular alternatives and competitors to OpenRefine.

What is OpenRefine and what are its top alternatives?

OpenRefine is a powerful tool for working with messy data: cleaning it, transforming it from one format into another, and extending it with web services and external data.
OpenRefine is a tool in the Big Data Tools category of a tech stack.
OpenRefine is an open source tool with 10.4K GitHub stars and 1.9K GitHub forks; its open source repository is available on GitHub.
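OpenRefine itself is a GUI tool, but the kind of cleanup it performs — trimming whitespace, normalizing case, merging near-duplicate values — can be sketched in a few lines of pandas. The column and values below are hypothetical examples, not OpenRefine's own API:

```python
# A rough pandas equivalent of common OpenRefine cleanup steps
# (trim whitespace, normalize case, deduplicate). The "city" column
# and its values are made-up illustrative data.
import pandas as pd

df = pd.DataFrame({"city": ["  New York", "new york", "Boston ", "Boston"]})

# Trim whitespace and normalize casing, much like OpenRefine's
# common-transforms menu would
df["city"] = df["city"].str.strip().str.title()

# Collapse the now-identical rows, similar to clustering + merging
df = df.drop_duplicates().reset_index(drop=True)
print(df["city"].tolist())  # → ['New York', 'Boston']
```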

Top Alternatives to OpenRefine

  • Trifacta

    It is an intelligent platform that interoperates with your data investments. It sits between the data storage and processing environments and the visualization, statistical, or machine learning tools used downstream ...

  • R Language

    R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. ...

  • Python

    Python is a general-purpose programming language created by Guido van Rossum. Python is most praised for its elegant syntax and readable code; if you are just beginning your programming career, Python suits you well. ...

  • Pandas

    Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more. ...

  • Talend

    It is an open source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms. ...

  • RapidMiner

    It is a software platform for data science teams that unites data prep, machine learning, and predictive model deployment. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Splunk

    It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

OpenRefine alternatives & related posts

Trifacta

Develops data wrangling software for data exploration and self-service data preparation for analysis

related Trifacta posts

We are a young start-up with 2 developers and a team in India looking to choose our next ETL tool. We have a few processes in Azure Data Factory but are looking to switch to a better platform. We were debating Trifacta and Airflow, or even staying with Azure Data Factory. The use case will be to feed data to front-end APIs.
R Language

A language and environment for statistical computing and graphics
PROS OF R LANGUAGE
• Data analysis (84)
• Graphics and data visualization (63)
• Free (54)
• Great community (45)
• Flexible statistical analysis toolkit (38)
• Easy package setup (27)
• Access to powerful, cutting-edge analytics (27)
• Interactive (18)
• RStudio IDE (13)
• Hacky (9)
• Shiny apps (7)
• Shiny interactive plots (6)
• Preferred medium (6)
• Automated data reports (5)
• Cutting-edge machine learning straight from researchers (4)
• Machine learning (3)
• Graphical visualization (2)
• Flexible syntax (1)
CONS OF R LANGUAGE
• Very messy syntax (6)
• Tables must fit in RAM (4)
• Array indices start at 1 (3)
• Messy syntax for string concatenation (2)
• No push command for vectors/lists (2)
• Messy character encoding (1)
• Poor syntax for classes (0)
• Messy syntax for array/vector combination (0)

related R Language posts

Eric Colson
Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in-house and open-sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data
Maged Rafaat Kamal
Shared insights on Python and R Language

I am currently trying to learn R Language for machine learning; I already have a good knowledge of Python. What resources would you recommend to learn from as a beginner in R?
Python

A clear and powerful object-oriented programming language, comparable to Perl, Ruby, Scheme, or Java
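A one-line list comprehension is a fair illustration of the elegant, readable syntax the description praises; the word list below is just an example:

```python
# A small illustration of Python's compact, readable syntax:
# filtering and transforming a list in a single expression.
words = ["data", "cleaning", "is", "messy"]

# List comprehension: keep words longer than 3 characters, uppercased
long_words = [w.upper() for w in words if len(w) > 3]
print(long_words)  # → ['DATA', 'CLEANING', 'MESSY']
```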
PROS OF PYTHON
• Great libraries (1.2K)
• Readable code (959)
• Beautiful code (844)
• Rapid development (785)
• Large community (688)
• Open source (434)
• Elegant (391)
• Great community (280)
• Object oriented (272)
• Dynamic typing (218)
• Great standard library (77)
• Very fast (58)
• Functional programming (54)
• Easy to learn (48)
• Scientific computing (45)
• Great documentation (35)
• Easy to read (28)
• Productivity (28)
• Matlab alternative (28)
• Simple is better than complex (23)
• It's the way I think (20)
• Imperative (19)
• Free (18)
• Very programmer and non-programmer friendly (18)
• Machine learning support (17)
• Powerful language (17)
• Fast and simple (16)
• Scripting (14)
• Explicit is better than implicit (12)
• Ease of development (11)
• Clear and easy and powerful (10)
• Unlimited power (9)
• It's lean and fun to code (8)
• Import antigravity (8)
• Python has great libraries for data processing (7)
• Print "life is short, use python" (7)
• Flat is better than nested (6)
• Readability counts (6)
• Rapid prototyping (6)
• Fast coding and good for competitions (6)
• Now is better than never (6)
• There should be one -- and preferably only one -- obvious way to do it (6)
• Highly documented language (6)
• I love snakes (6)
• Although practicality beats purity (6)
• Great for tooling (6)
• Great for analytics (5)
• Lists, tuples, dictionaries (5)
• Multiple inheritance (4)
• Complex is better than complicated (4)
• Socially engaged community (4)
• Easy to learn and use (4)
• Simple and easy to learn (4)
• Web scraping (4)
• Easy to set up and runs smoothly (4)
• Beautiful is better than ugly (4)
• Plotting (4)
• CG industry needs (4)
• No cruft (3)
• It is very easy and simple, and you will love programming (3)
• Many types of collections (3)
• If the implementation is easy to explain, it may be a good idea (3)
• If the implementation is hard to explain, it's a bad idea (3)
• Special cases aren't special enough to break the rules (3)
• Pip install everything (3)
• List comprehensions (3)
• Generators (3)
• Import this (3)
• Flexible and easy (2)
• Batteries included (2)
• Easily understood by those new to programming (2)
• Powerful language for AI (2)
• Should START with this but not STICK with this (2)
• A-to-Z (2)
• Because of Netflix (2)
• Only one way to do it (2)
• Better outcome (2)
• Good for hacking (2)
• Security (1)
• Slow (1)
• Sexy af (1)
• Ni (0)
• Powerful (0)
CONS OF PYTHON
• Still divided between Python 2 and Python 3 (53)
• Performance impact (28)
• Poor syntax for anonymous functions (26)
• GIL (22)
• Package management is a mess (19)
• Too imperative-oriented (14)
• Hard to understand (12)
• Dynamic typing (12)
• Very slow (12)
• Indentation matters a lot (8)
• Not everything is an expression (8)
• Incredibly slow (7)
• Explicit self parameter in methods (7)
• Requires C functions for dynamic modules (6)
• Poor DSL capabilities (6)
• No anonymous functions (6)
• Fake object-oriented programming (5)
• Threading (5)
• The "lisp style" whitespace (5)
• Official documentation is unclear (5)
• Hard to obfuscate (5)
• Circular imports (5)
• Lack of syntax sugar leads to "the pyramid of doom" (4)
• The benevolent-dictator-for-life quit (4)
• Not suitable for autocomplete (4)
• Metaclasses (2)
• Training wheels (forced indentation) (1)

      related Python posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber

How Uber developed Jaeger, its open source, end-to-end distributed tracing system, now a CNCF project:

      Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

      Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

      https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

Nick Parsons
Building cool things on the internet 🛠️ at Stream

      Winds 2.0 is an open source Podcast/RSS reader developed by Stream with a core goal to enable a wide range of developers to contribute.

We chose JavaScript because nearly every developer knows or can, at the very least, read JavaScript. With ES6 and Node.js v10.x.x, it's become a very capable language. Async/Await is powerful and easy to use (Async/Await vs Promises). Babel allows us to experiment with next-generation JavaScript (features that are not in the official JavaScript spec yet). Yarn allows us to consistently install packages quickly (and is filled with tons of new tricks).

      We’re using JavaScript for everything – both front and backend. Most of our team is experienced with Go and Python, so Node was not an obvious choice for this app.

      Sure... there will be haters who refuse to acknowledge that there is anything remotely positive about JavaScript (there are even rants on Hacker News about Node.js); however, without writing completely in JavaScript, we would not have seen the results we did.

      #FrameworksFullStack #Languages

Pandas

High-performance, easy-to-use data structures and data analysis tools for the Python programming language
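A short sketch of the labeled data structures the description compares to R's data.frame; the names and numbers below are made up for illustration:

```python
# Pandas provides the labeled DataFrame, analogous to R's data.frame,
# plus built-in statistical helpers. The names and numbers are
# made-up illustrative data.
import pandas as pd

df = pd.DataFrame(
    {"height": [1.7, 1.8, 1.6], "weight": [65, 80, 55]},
    index=["ann", "bob", "cat"],  # row labels, like R's row names
)

# Label-based selection and a built-in statistical function
print(df.loc["bob", "height"])        # → 1.8
print(round(df["weight"].mean(), 2))  # → 66.67
```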
PROS OF PANDAS
• Easy data frame management (21)
• Extensive file format compatibility (2)

related Pandas posts

Server side

We decided to use Python for our backend because it is one of the industry-standard languages for data analysis and machine learning. It also has a lot of support due to its large user base.

• Web Server: We chose Flask because we want to keep our machine learning / data analysis and the web server in the same language. Flask is easy to use and we all have experience with it. Postman will be used for creating and testing APIs due to its convenience.

• Machine Learning: We decided to go with PyTorch for machine learning since it is one of the most popular libraries. It is also known to have an easier learning curve than other popular libraries such as TensorFlow. This is important because our team lacks ML experience and learning the tool as fast as possible would increase productivity.

• Data Analysis: Some common Python libraries will be used to analyze our data. These include NumPy, Pandas, and matplotlib. These tools combined will help us learn the properties and characteristics of our data. Jupyter notebooks will be used to help organize the data analysis process and improve code readability.

Client side

• UI: We decided to use React for the UI because it helps organize the data and variables of the application into components, making it very convenient to maintain our dashboard. Since React is one of the most popular front-end frameworks right now, there will be a lot of support for it as well as a lot of potential new hires that are familiar with the framework. CSS 3 and HTML5 will be used for the basic styling and structure of the web app, as they are the most widely used front-end languages.

• State Management: We decided to use Redux to manage the state of the application since it works naturally with React. Our team also already has experience working with Redux, which gave it a slight edge over the other state management libraries.

• Data Visualization: We decided to use the React-based library Victory to visualize the data. It has very user-friendly documentation on its official website, which we find easy to learn from.

Cache

• Caching: We decided between Redis and Memcached because they are two of the most popular open-source cache engines. We ultimately decided to use Redis to improve our web app performance, mainly due to the extra functionality it provides, such as fine-tuning cache contents and durability.

Database

• Database: We decided to use a NoSQL database over a relational database because of the flexibility of not having a predefined schema. The user behavior analytics have to be flexible since the data we plan to store may change frequently. We decided on MongoDB because it is lightweight and we can easily host the database with MongoDB Atlas. Everyone on our team also has experience working with MongoDB.

Infrastructure

• Deployment: We decided to use Heroku over AWS, Azure, and Google Cloud because it is free. Although there are advantages to the other cloud services, Heroku makes the most sense for our team because our primary goal is to build an MVP.

Other Tools

• Communication: Slack will be used as the primary means of communication. It provides all the features needed for basic discussions. For more interactive meetings, Zoom will be used for its video calls and screen-sharing capabilities.

• Source Control: The project will be stored on GitHub and all code changes will be done through pull requests. This will help us keep the codebase clean and make it easy to revert changes when we need to.
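As a rough sketch of the Flask choice described above — the route name and payload below are hypothetical, and Flask's built-in test client stands in for a running server:

```python
# Minimal sketch of a Flask web server like the one described above.
# The /api/summary endpoint and its payload are hypothetical examples.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/summary")
def summary():
    # In the real app this would come from the analytics pipeline
    return jsonify({"users": 3, "status": "ok"})

# Exercise the endpoint without starting a server, via the test client
resp = app.test_client().get("/api/summary")
print(resp.get_json()["users"])  # → 3
```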


Should I continue learning Django or take this Spring opportunity? I have been coding in Python for about 2 years. I am currently learning Django and I am enjoying it. I also have some knowledge of data science libraries (Pandas, NumPy, scikit-learn, PyTorch). I am currently enhancing my web development and software engineering skills and may shift later into data science, since I came from a medical background. The issue is that I am now offered a very trustworthy 9-month program teaching Java/Spring. The graduates of this program work directly in well-known tech companies. Although I have been planning to continue with my Python, the other opportunity makes me hesitant, since it will put me on a specific roadmap with deadlines and mentors. I also found on Glassdoor that Spring jobs are far more numerous than Django jobs. Should I apply for this program or continue my journey?

Talend

A single, unified suite for all integration needs


RapidMiner

Prep data, create predictive models & operationalize analytics within any business process

Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
• Open-source (61)
• Fast and flexible (48)
• One platform for every big data problem (8)
• Great for distributed SQL-like applications (8)
• Easy to install and use (6)
• Works well for most data science use cases (3)
• Interactive query (2)
• Machine learning library, streaming in real time (2)
• In-memory computation (2)

CONS OF APACHE SPARK
• Speed (4)

related Apache Spark posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink, leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink, depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)
Splunk

Search, monitor, analyze and visualize machine data
PROS OF SPLUNK
• API for searching logs, running reports (3)
• Alert system based on custom query results (3)
• Dashboarding on any log contents (2)
• Custom log parsing as well as automatic parsing (2)
• Ability to style search results into reports (2)
• Query engine supports joining, aggregation, stats, etc. (2)
• Splunk language supports string, date manipulation, math, etc. (2)
• Rich GUI for searching live logs (2)
• Query any log as key-value pairs (1)
• Granular scheduling and time window support (1)

CONS OF SPLUNK
• Splunk query language is rich, so there is a lot to learn (1)

related Splunk posts

Shared insights on Kibana, Splunk, and Grafana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Shared insights on Splunk and Elasticsearch

We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.