Alternatives to Apache Oozie

Apache Spark, Airflow, Apache NiFi, Yarn, and Zookeeper are the most popular alternatives and competitors to Apache Oozie.

What is Apache Oozie and what are its top alternatives?

Apache Oozie is a workflow scheduler system for managing Hadoop jobs. It lets users define workflows that schedule jobs, monitor their execution, and manage dependencies between them. Key features include workflow scheduling, coordination of job executions, and integration with Hadoop ecosystem tools such as HDFS and MapReduce. However, Oozie has notable limitations, including its complex XML-based workflow configuration and the lack of a user-friendly interface for creating and managing workflows.

  1. Airflow: Airflow is a platform to programmatically author, schedule, and monitor workflows as Python code. It has a user-friendly interface, support for many integrations, and dynamically generated pipelines. Compared to Oozie, Airflow offers a more flexible and scalable approach with a rich feature set, though new users face a learning curve (see the minimal DAG sketch after this list).

  2. Luigi: Luigi is a Python package for building complex pipelines of batch jobs. It offers a centralized scheduler, dependency resolution, and visualization of workflow status. Luigi is easy to set up and use, but it may lack some of the advanced features available in Oozie (see the task sketch after this list).

  3. Apache NiFi: NiFi is a dataflow automation tool that provides a visual, flow-based programming model. It supports data routing, transformation, and system mediation tasks. NiFi offers real-time data processing capabilities and a user-friendly interface, but it targets continuous data movement rather than batch job scheduling, so its use case differs from Oozie's.

  4. Prefect: Prefect is an open-source workflow automation system that simplifies the orchestration of complex data workflows. It offers a Python-based interface, versioning, and monitoring capabilities. Prefect provides a modern and intuitive approach to workflow management but may require additional setup compared to Oozie (see the flow sketch after this list).

  5. Azkaban: Azkaban is a batch workflow job scheduler created at LinkedIn. It provides an easy-to-use web interface, project-based scheduling, and email notifications. Azkaban is well-suited for organizations handling large-scale workflow orchestration but may not offer as many integrations compared to Oozie.

  6. Camunda: Camunda is an open-source workflow and decision automation platform. It supports BPMN for defining workflows, CMMN for case management, and DMN for decision tables. Camunda offers a comprehensive set of features for process automation but may require additional development effort compared to Oozie.

  7. Pinball: Pinball is a scalable workflow manager developed at Pinterest. It supports job scheduling, dependency management, and fault tolerance. Pinball is designed for large-scale workflow orchestration but may have a steeper learning curve compared to Oozie.

  8. dagster: dagster is a data orchestrator for machine learning, analytics, and ETL pipelines. It provides a unified programming model for defining pipelines, data dependencies, and asset management. dagster offers a modern approach to data orchestration but has a different focus than Oozie (see the op/job sketch after this list).

  9. JobScheduler: JobScheduler is a cross-platform workload automation system for enterprise IT environments. It offers schedule automation, event-based job triggering, and advanced monitoring capabilities. JobScheduler is suitable for complex IT workflows but may not have the same level of integration with Hadoop ecosystem tools as Oozie.

  10. Conductor: Conductor is a microservices orchestration engine developed at Netflix. It supports workflow execution, task routing, and external service integrations. Conductor is well-suited for cloud-native environments but may have a narrower focus compared to Oozie.
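
The sketches below illustrate the programming models of the Python-native alternatives above. Each is a minimal, hedged example: DAG names, schedules, file paths, and commands are hypothetical, and exact APIs vary by version.

In Airflow (assuming Airflow 2.x), a workflow is plain Python: operators become tasks, and dependencies are declared with the >> operator.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A minimal two-task DAG; dag_id, schedule, and commands are illustrative.
    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # "schedule_interval" on releases before Airflow 2.4
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extracting")
        load = BashOperator(task_id="load", bash_command="echo loading")

        extract >> load  # "load" runs only after "extract" succeeds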
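
In Luigi, a task declares its dependencies via requires() and its outputs via output(); the scheduler runs whatever is missing. A minimal sketch (file names are hypothetical):

    import luigi

    class Extract(luigi.Task):
        def output(self):
            return luigi.LocalTarget("raw.txt")

        def run(self):
            with self.output().open("w") as f:
                f.write("raw data\n")

    class Load(luigi.Task):
        def requires(self):
            return Extract()  # Load depends on Extract

        def output(self):
            return luigi.LocalTarget("loaded.txt")

        def run(self):
            with self.input().open() as src, self.output().open("w") as dst:
                dst.write(src.read().upper())

    if __name__ == "__main__":
        luigi.build([Load()], local_scheduler=True)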
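
In Prefect (assuming the Prefect 2.x API), flows and tasks are ordinary decorated functions, so dependencies follow from normal function calls:

    from prefect import flow, task

    @task
    def extract():
        return [1, 2, 3]

    @task
    def load(rows):
        print(f"loaded {len(rows)} rows")

    @flow
    def etl():
        # passing one task's result to another creates the dependency
        load(extract())

    if __name__ == "__main__":
        etl()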
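
In dagster, ops are wired together inside a job, and that wiring doubles as the data dependency graph (a sketch against the 1.x op/job API):

    from dagster import job, op

    @op
    def extract():
        return [1, 2, 3]

    @op
    def load(rows):
        print(f"loaded {len(rows)} rows")

    @job
    def etl():
        load(extract())

    if __name__ == "__main__":
        etl.execute_in_process()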

Top Alternatives to Apache Oozie

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Airflow

    Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. ...

  • Apache NiFi

    An easy-to-use, powerful, and reliable system to process and distribute data. It supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. ...

  • Yarn

    Yarn caches every package it downloads, so it never needs to download the same package again. It also parallelizes operations to maximize resource utilization, so install times are faster than ever. ...

  • Zookeeper

    A centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. ...

  • Apache Beam

    It implements batch and streaming data processing jobs that run on any supported execution engine, and it can execute the same pipeline on multiple execution environments. ...

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • PostgreSQL

    PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions. ...

Apache Oozie alternatives & related posts

Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
  • Open-source (61)
  • Fast and flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (8)
  • Easy to install and use (6)
  • Works well for most data science use cases (3)
  • Interactive query (2)
  • Machine learning library, streaming in real time (2)
  • In-memory computation (2)
CONS OF APACHE SPARK
  • Speed (4)

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
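
As a rough illustration of that decoupling, a PySpark job reads from and writes back to S3 directly, so compute can scale independently of storage. Bucket and paths here are hypothetical:

    from pyspark.sql import SparkSession

    # Assumes S3 credentials and the s3a connector are configured on the cluster.
    spark = SparkSession.builder.appName("s3_etl_sketch").getOrCreate()

    # Read a partitioned dataset from the S3-backed warehouse
    events = spark.read.parquet("s3a://example-warehouse/events/")

    # Aggregate and write the result back to S3
    daily = events.groupBy("event_date").count()
    daily.write.mode("overwrite").parquet("s3a://example-warehouse/aggregates/daily_counts/")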

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Patrick Sun
Software Engineer at Stitch Fix

As a frontend engineer on the Algorithms & Analytics team at Stitch Fix, I work with data scientists to develop applications and visualizations to help our internal business partners make data-driven decisions. I envisioned a platform that would assist data scientists in the data exploration process, allowing them to visually explore and rapidly iterate through their assumptions, then share their insights with others. This would align with our team's philosophy of having engineers "deploy platforms, services, abstractions, and frameworks that allow the data scientists to conceive of, develop, and deploy their ideas with autonomy", and solve the pain of data exploration.

The final product, code-named Dora, is built with React, Redux.js and Victory, backed by Elasticsearch to enable fast and iterative data exploration, and uses Apache Spark to move data from our Amazon S3 data warehouse into the Elasticsearch cluster.

Airflow

A platform to programmatically author, schedule, and monitor data pipelines, by Airbnb
PROS OF AIRFLOW
  • Features (53)
  • Task dependency management (14)
  • Beautiful UI (12)
  • Cluster of workers (12)
  • Extensibility (10)
  • Open source (6)
  • Complex workflows (5)
  • Python (5)
  • Good API (3)
  • Apache project (3)
  • Custom operators (3)
  • Dashboard (2)
CONS OF AIRFLOW
  • Observability is not great when DAGs exceed 250 (2)
  • Running it on a Kubernetes cluster is relatively complex (2)
  • Open source - provides minimal or no support (2)
  • Logical separation of DAGs is not straightforward (1)

related Airflow posts

Data science and engineering teams at Lyft maintain several big data pipelines that serve as the foundation for various types of analysis throughout the business.

Apache Airflow sits at the center of this big data infrastructure, allowing users to “programmatically author, schedule, and monitor data pipelines.” Airflow is an open source tool, and “Lyft is the very first Airflow adopter in production since the project was open sourced around three years ago.”

There are several key components of the architecture. A web UI allows users to view the status of their queries, along with an audit trail of any modifications to the query. A metadata database stores things like job status and task instance status. A multi-process scheduler handles job requests and triggers the executor to execute those tasks.

Airflow supports several executors, though Lyft uses CeleryExecutor to scale task execution in production. Airflow is deployed to three Amazon Auto Scaling Groups, each associated with a Celery queue.

Audit logs supplied to the web UI are powered by the existing Airflow audit logs as well as Flask signals.

Datadog, Statsd, Grafana, and PagerDuty are all used to monitor the Airflow system.

We are a young start-up with 2 developers and a team in India looking to choose our next ETL tool. We have a few processes in Azure Data Factory but are looking to switch to a better platform. We were debating Trifacta and Airflow. Or even staying with Azure Data Factory. The use case will be to feed data to front-end APIs.

Apache NiFi

A reliable system to process and distribute data
PROS OF APACHE NIFI
  • Visual data flows using directed acyclic graphs (DAGs) (17)
  • Free (open source) (8)
  • Simple to use (7)
  • Scalable horizontally as well as vertically (5)
  • Reactive with back-pressure (5)
  • Fast prototyping (4)
  • Bi-directional channels (3)
  • End-to-end security between all nodes (3)
  • Built-in graphical user interface (2)
  • Can handle messages up to gigabytes in size (2)
  • Data provenance (2)
  • Lots of documentation (1)
  • HBase support (1)
  • Support for custom processors in Java (1)
  • Hive support (1)
  • Kudu support (1)
  • Slack integration (1)
  • Lots of articles (1)
CONS OF APACHE NIFI
  • HA support is not full-fledged (2)
  • Memory-intensive (2)

related Apache NiFi posts

John Calandra
Data Manager at The Garrett Group

There is a question coming... I am using Oracle VirtualBox to spawn 3 Ubuntu Linux virtual machines (VMs). VM1 is being used as a data lake - just a place to store flat files. VM2 hosts Apache NiFi. VM3 hosts PostgreSQL. I have built a NiFi pipeline that reads flat files on VM1, then pipes the data over to VM3 and inserts it into the PostgreSQL database. I left this setup alone for a while, and then something hiccupped on VM3, and I had to rebuild it. Now I cannot make a remote connection to PostgreSQL on VM3. I was using pgAdmin3 on VM3, but it kept throwing errors - I found out it went end-of-life in 2018 and uninstalled it. pgAdmin4 is out, but for some reason, I cannot get the APT utility to find/install it. I am trying to figure out the pgAdmin4 install problem and am looking for a good alternative to pgAdmin4 that I can use to diagnose the remote database connection problem. Does anyone have any suggestions? Thanks in advance.

I am looking for the best tool to orchestrate #ETL workflows in non-Hadoop environments, mainly for regression testing use cases. Would Airflow or Apache NiFi be a good fit for this purpose?

For example, I want to run an Informatica ETL job and then run an SQL task as a dependency, followed by another task from Jira. What tool is best suited to set up such a pipeline?

Yarn

A new package manager for JavaScript
PROS OF YARN
  • Incredibly fast (85)
  • Easy to use (22)
  • Open source (13)
  • Can install any npm package (11)
  • Works where npm fails (8)
  • Workspaces (7)
  • Incomplete to run tasks (3)
  • Fast (2)
CONS OF YARN
  • Facebook (16)
  • Sends data to Facebook (7)
  • Should be installed separately (4)
  • Cannot publish to a registry other than npm (3)

related Yarn posts

Nick Parsons
Building cool things on the internet 🛠️ at Stream

Winds 2.0 is an open source Podcast/RSS reader developed by Stream with a core goal to enable a wide range of developers to contribute.

We chose JavaScript because nearly every developer knows or can, at the very least, read JavaScript. With ES6 and Node.js v10.x.x, it’s become a very capable language. Async/Await is powerful and easy to use (Async/Await vs Promises). Babel allows us to experiment with next-generation JavaScript (features that are not in the official JavaScript spec yet). Yarn allows us to consistently install packages quickly (and is filled with tons of new tricks)

We’re using JavaScript for everything – both front and backend. Most of our team is experienced with Go and Python, so Node was not an obvious choice for this app.

Sure... there will be haters who refuse to acknowledge that there is anything remotely positive about JavaScript (there are even rants on Hacker News about Node.js); however, without writing completely in JavaScript, we would not have seen the results we did.

#FrameworksFullStack #Languages

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH

Our whole Node.js backend stack consists of the following tools:

  • Lerna as a tool for multi package and multi repository management
  • npm as package manager
  • NestJS as Node.js framework
  • TypeScript as programming language
  • ExpressJS as web server
  • Swagger UI for visualizing and interacting with the API’s resources
  • Postman as a tool for API development
  • TypeORM as object relational mapping layer
  • JSON Web Token for access token management

The main reasons we have chosen Node.js over PHP are related to the following artifacts:

  • Made for the web and widely in use: Node.js is a software platform for developing server-side network services. Well-known projects that rely on Node.js include the blogging software Ghost, the project management tool Trello and the operating system WebOS. Node.js requires the JavaScript runtime environment V8, which was specially developed by Google for the popular Chrome browser. This guarantees a very resource-saving architecture, which qualifies Node.js especially for the operation of a web server. Ryan Dahl, the developer of Node.js, released the first stable version on May 27, 2009. He developed Node.js out of dissatisfaction with the possibilities that JavaScript offered at the time. The basic functionality of Node.js has been mapped with JavaScript since the first version, which can be expanded with a large number of different modules. The current package managers (npm or Yarn) for Node.js know more than 1,000,000 of these modules.
  • Fast server-side solutions: Node.js adopts the JavaScript "event-loop" to create non-blocking I/O applications that conveniently serve simultaneous events. With the standard available asynchronous processing within JavaScript/TypeScript, highly scalable, server-side solutions can be realized. The efficient use of the CPU and the RAM is maximized and more simultaneous requests can be processed than with conventional multi-thread servers.
  • A language along the entire stack: Widely used frameworks such as React or AngularJS or Vue.js, which we prefer, are written in JavaScript/TypeScript. If Node.js is now used on the server side, you can use all the advantages of a uniform script language throughout the entire application development. The same language in the back- and frontend simplifies the maintenance of the application and also the coordination within the development team.
  • Flexibility: Node.js sets very few strict dependencies, rules and guidelines and thus grants a high degree of flexibility in application development. There are no strict conventions so that the appropriate architecture, design structures, modules and features can be freely selected for the development.

Zookeeper

Because coordinating distributed systems is a Zoo
PROS OF ZOOKEEPER
  • High performance, easy to generate node-specific config (11)
  • Java (8)
  • Kafka support (8)
  • Spring Boot support (5)
  • Supports extensive distributed IPC (3)
  • Curator (2)
  • Used in ClickHouse (2)
  • Supports DC/OS (2)
  • Used in Hadoop (1)
  • Embeddable in a Java service (1)

related Zookeeper posts

In early 2013, Airbnb tackled the problem of service discovery and load balancing in the context of a service-oriented architecture (SOA) by building and releasing an open source tool called SmartStack. SmartStack is built on two other open source tools created by Airbnb called Nerve and Synapse.

Nerve is a service registration daemon that performs health checks and “creates ephemeral nodes in Zookeeper which contain information about the address/port combos for a backend available to serve requests for a particular service.”

Synapse is a transparent service discovery framework for connecting an SOA that reads the information in Zookeeper for available backends, and then uses that information to configure a local HAProxy process, which then routes requests between clients and services.
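
A rough sketch of the ephemeral-node pattern Nerve relies on, using the kazoo Python client (path and payload are hypothetical, not Nerve's actual code):

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    payload = json.dumps({"host": "10.0.0.12", "port": 8080}).encode()

    # Ephemeral + sequential: the node vanishes automatically when this
    # process's session dies, so stale backends deregister themselves.
    zk.create(
        "/services/example-service/backend-",
        payload,
        ephemeral=True,
        sequence=True,
        makepath=True,
    )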

Apache Beam

A unified programming model
PROS OF APACHE BEAM
  • Open-source (5)
  • Cross-platform (5)
  • Portable (2)
  • Unified batch and stream processing (2)

related Apache Beam posts

I have to build a data processing application with an Apache Beam stack and an Apache Flink runner on an Amazon EMR cluster. I saw some instability with the process, and the EMR clusters kept going down. Here, the Apache Beam application gets inputs from Kafka and sends the accumulated data streams to another Kafka topic. Any advice on how to make the process more stable?
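
For reference, a minimal Beam pipeline in Python looks like the sketch below. The runner is chosen through pipeline options, so the same code can target DirectRunner locally or FlinkRunner on the cluster (options are illustrative, and the Kafka IO connectors are omitted):

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Swap in "--runner=FlinkRunner" (plus Flink options) for cluster runs.
    opts = PipelineOptions(["--runner=DirectRunner"])

    with beam.Pipeline(options=opts) as p:
        (
            p
            | "Create" >> beam.Create(["a", "b", "a"])
            | "PairWithOne" >> beam.Map(lambda w: (w, 1))
            | "CountPerKey" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )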

MySQL

The world's most popular open source database
PROS OF MYSQL
  • SQL (800)
  • Free (679)
  • Easy (562)
  • Widely used (528)
  • Open source (490)
  • High availability (180)
  • Cross-platform support (160)
  • Great community (104)
  • Secure (79)
  • Full-text indexing and searching (75)
  • Fast, open, available (26)
  • Reliable (16)
  • SSL support (16)
  • Robust (15)
  • Enterprise version (9)
  • Easy to set up on all platforms (7)
  • NoSQL access to JSON data type (3)
  • Relational database (1)
  • Easy, light, scalable (1)
  • Sequel Pro (best SQL GUI) (1)
  • Replica support (1)
CONS OF MYSQL
  • Owned by a company with their own agenda (16)
  • Can't roll back schema changes (3)

related MySQL posts

Nick Rockwell
SVP, Engineering at Fastly

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.

Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.
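
Partial indexes are easy to try out. The sketch below uses Python's built-in sqlite3 module (SQLite supports partial indexes too) with a hypothetical schema modeled on the "unread messages" case, not Zulip's actual tables:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER, unread INTEGER)"
    )

    # Index only the rows the hot query touches: unread messages.
    # The index stays small even when most messages have been read.
    conn.execute("CREATE INDEX idx_unread ON messages (user_id) WHERE unread = 1")

    conn.executemany(
        "INSERT INTO messages (user_id, unread) VALUES (?, ?)",
        [(1, 1), (1, 0), (2, 1)],
    )

    # This lookup matches the index predicate, so it can use the partial index.
    rows = conn.execute(
        "SELECT id FROM messages WHERE user_id = ? AND unread = 1", (1,)
    ).fetchall()
    print(rows)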

PostgreSQL

A powerful, open source object-relational database system
PROS OF POSTGRESQL
  • Relational database (763)
  • High availability (510)
  • Enterprise class database (439)
  • SQL (383)
  • SQL + NoSQL (304)
  • Great community (173)
  • Easy to set up (147)
  • Heroku (131)
  • Secure by default (130)
  • PostGIS (113)
  • Supports key-value (50)
  • Great JSON support (48)
  • Cross-platform (34)
  • Extensible (33)
  • Replication (28)
  • Triggers (26)
  • Multiversion concurrency control (23)
  • Rollback (23)
  • Open source (21)
  • Heroku add-on (18)
  • Stable, simple, and good performance (17)
  • Powerful (15)
  • Let's be serious, what other SQL DB would you go for? (13)
  • Good documentation (11)
  • Scalable (9)
  • Free (8)
  • Reliable (8)
  • Intelligent optimizer (8)
  • Transactional DDL (7)
  • Modern (7)
  • One-stop solution for all things SQL, no matter the OS (6)
  • Relational database with MVCC (5)
  • Faster development (5)
  • Full-text search (4)
  • Developer friendly (4)
  • Excellent source code (3)
  • Free version (3)
  • Great DB for transactional systems or applications (3)
  • Relational database (3)
  • Search (3)
  • Open-source (3)
  • Text (2)
  • Full-text (2)
  • Can handle up to petabytes worth of data (1)
  • Composability (1)
  • Multiple procedural languages supported (1)
  • Native (0)
CONS OF POSTGRESQL
  • Table/index bloating (10)

related PostgreSQL posts

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo's) as collaborative review and code management tool
  • Respectively Git as revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automating the development process)
  • Prettier / TSLint / ESLint as code linters
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for Docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reasons we have chosen Kubernetes over Docker Swarm are related to the following artifacts:

  • Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, and monitoring of conditions with compensation in the event of failure.
  • Applications: an application can be deployed using a combination of pods, deployments, and services (or micro-services).
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: it supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: all-in-one framework for distributed systems.
  • Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open source and modular tool that works with any OS.

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database.
  • The data replication must be horizontally scalable (based on the load), asynchronous, and crash-resilient.

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-data-to-target-data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dumb-easy.
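
A rough sketch of such a consumer loop with boto3 (the queue URL and handler are hypothetical, not the service described above):

    import json

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-cdc-queue"

    def apply_change(event: dict) -> None:
        # Placeholder: translate the MongoDB change event and mirror it
        # into the target warehouse (e.g. through SQLAlchemy models).
        print(event.get("operationType"), event.get("documentKey"))

    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling
        )
        for msg in resp.get("Messages", []):
            apply_change(json.loads(msg["Body"]))
            # Delete only after the change is applied, so failures get retried.
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])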

In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works" and deployed it to production in a matter of a few days!
