Alternatives to Digdag

Airflow, Jenkins, Luigi, MySQL, and PostgreSQL are the most popular alternatives and competitors to Digdag.

What is Digdag and what are its top alternatives?

Digdag is an open-source workflow automation tool that allows users to define workflows as a series of tasks with dependencies, making it easy to schedule and monitor data processing jobs. It supports various data workflow languages and has features like retrying failed tasks, parallelism, and notifications. However, Digdag lacks some advanced monitoring and alerting features, and the documentation can be a bit lacking compared to other tools.
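
For context, Digdag workflows are defined in YAML-based .dig files. Below is a minimal, hypothetical sketch (task names, scripts, and schedule are invented, not from any real project) of the scheduling, retry, and parallelism features mentioned above:

```yaml
# daily_etl.dig: a minimal Digdag workflow sketch (all names invented)
timezone: UTC

schedule:
  daily>: 07:00:00            # run every day at 07:00 UTC

+extract:
  sh>: ./scripts/extract.sh   # shell-command operator
  _retry: 3                   # retry this task up to 3 times on failure

+load:
  _parallel: true             # run the child tasks concurrently
  +users:
    sh>: ./scripts/load_users.sh
  +events:
    sh>: ./scripts/load_events.sh
```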

  1. Airflow: Airflow is a popular open-source workflow management platform with a rich set of features like scheduling, monitoring, and retrying tasks. Pros include a large, active community and extensibility through plugins, but it can have a steeper learning curve than Digdag (a minimal DAG sketch appears after this list).
  2. Luigi: Luigi is a Python package that helps users build complex pipelines of batch jobs. Key features include task dependency management, workflow visualization, and strong integration with Python, but it may lack some advanced features present in Digdag.
  3. Azkaban: Azkaban is a batch workflow job scheduler created at LinkedIn. It supports job dependency management, a web-based user interface, and a plugin system. However, it may require more manual setup compared to Digdag.
  4. Prefect: Prefect is an open-source workflow orchestration tool with a focus on data engineering and ETL tasks. It offers features like versioning, monitoring, and a user-friendly interface. However, it may involve a higher level of complexity compared to Digdag.
  5. Apache NiFi: Apache NiFi is a powerful data ingestion and distribution system that can also handle workflow orchestration tasks. It offers a visual interface for designing data flows, extensive data routing capabilities, and strong security features. However, it may be overkill for users seeking a lightweight workflow automation tool like Digdag.
  6. Kubeflow: Kubeflow is a machine learning workflow automation platform built on top of Kubernetes. It offers features like versioning, artifact tracking, and scalable training pipelines. However, it is tailored towards ML tasks and may require Kubernetes expertise that Digdag does not.
  7. Oozie: Oozie is a workflow scheduler system for managing Apache Hadoop jobs. It supports coordination of various Hadoop jobs, workflow execution, and custom actions. However, it may have a steeper learning curve and lack some modern features found in Digdag.
  8. Conductor: Conductor is an open-source workflow orchestration engine created by Netflix. It offers features like task dependency resolution, dynamic execution plans, and observability. However, it may not be as widely adopted as some other alternatives to Digdag.
  9. Pinball: Pinball is a scalable workflow execution engine developed at Pinterest. It features distributed task scheduling, service orchestration, and a web-based UI. However, it may require more manual intervention for setting up and maintaining workflows compared to Digdag.
  10. Celery: Celery is a distributed task queue that supports both real-time processing and task scheduling. It offers features like task routing, monitoring, and support for multiple brokers. However, it may require more effort to set up workflows compared to Digdag's more streamlined approach.
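
As a point of comparison for item 1, here is a minimal Airflow DAG sketch (assumes Airflow 2.x; the DAG id, task names, and scripts are invented for illustration):

```python
# A minimal Airflow DAG sketch (assumes Airflow 2.x; all names invented).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="./scripts/extract.sh")
    load = BashOperator(task_id="load", bash_command="./scripts/load.sh")

    extract >> load  # the dependency edge that makes this a DAG
```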

Top Alternatives to Digdag

  • Airflow

    Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress and troubleshoot issues when needed. ...

  • Jenkins

    In a nutshell Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project. ...

  • Luigi

    It is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, etc. It also comes with Hadoop support built in (a minimal task sketch appears after this list). ...

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • PostgreSQL

    PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions. ...

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Redis

    Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...

  • Amazon S3

    Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web ...
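
As referenced in the Luigi entry above, here is a minimal Luigi pipeline sketch (file paths and task logic are invented) showing the dependency resolution it is known for:

```python
# A minimal Luigi pipeline sketch (all paths and data invented).
import luigi


class Extract(luigi.Task):
    def output(self):
        return luigi.LocalTarget("data/raw.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("id,value\n1,42\n")


class Transform(luigi.Task):
    def requires(self):
        return Extract()  # dependency resolution happens on this edge

    def output(self):
        return luigi.LocalTarget("data/clean.csv")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            dst.write(src.read().upper())


if __name__ == "__main__":
    luigi.build([Transform()], local_scheduler=True)
```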

Digdag alternatives & related posts

Airflow

A platform to programmatically author, schedule and monitor data pipelines, by Airbnb

PROS OF AIRFLOW
  • 53
    Features
  • 14
    Task Dependency Management
  • 12
    Beautiful UI
  • 12
    Cluster of workers
  • 10
    Extensibility
  • 6
    Open source
  • 5
    Complex workflows
  • 5
    Python
  • 3
    Good api
  • 3
    Apache project
  • 3
    Custom operators
  • 2
    Dashboard
CONS OF AIRFLOW
  • 2
    Observability is not great when the DAGs exceed 250
  • 2
    Running it on a Kubernetes cluster is relatively complex
  • 2
    Open source - minimal or no official support
  • 1
    Logical separation of DAGs is not straightforward

related Airflow posts

Data science and engineering teams at Lyft maintain several big data pipelines that serve as the foundation for various types of analysis throughout the business.

Apache Airflow sits at the center of this big data infrastructure, allowing users to “programmatically author, schedule, and monitor data pipelines.” Airflow is an open source tool, and “Lyft is the very first Airflow adopter in production since the project was open sourced around three years ago.”

There are several key components of the architecture. A web UI allows users to view the status of their queries, along with an audit trail of any modifications to the query. A metadata database stores things like job status and task instance status. A multi-process scheduler handles job requests and triggers the executor to execute those tasks.

Airflow supports several executors, though Lyft uses CeleryExecutor to scale task execution in production. Airflow is deployed to three Amazon Auto Scaling Groups, each associated with a Celery queue.
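
As a hedged illustration of that setup (not Lyft's actual code; the DAG id, task names, and queue names are invented), the operator-level queue argument is what pins a task to a particular Celery queue:

```python
# Sketch: route tasks to named Celery queues (assumes CeleryExecutor is configured).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="queue_routing_demo", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    light = BashOperator(
        task_id="light_aggregation",
        bash_command="./light.sh",
        queue="default",      # served by workers started with: airflow celery worker -q default
    )
    heavy = BashOperator(
        task_id="heavy_backfill",
        bash_command="./heavy.sh",
        queue="high_memory",  # served by a dedicated high-memory worker pool
    )
```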

Audit logs supplied to the web UI are powered by the existing Airflow audit logs as well as Flask signal.

Datadog, Statsd, Grafana, and PagerDuty are all used to monitor the Airflow system.


We are a young start-up with two developers and a team in India looking to choose our next ETL tool. We have a few processes in Azure Data Factory but are looking to switch to a better platform. We are debating between Trifacta and Airflow, or even staying with Azure Data Factory. The use case will be to feed data to front-end APIs.

Jenkins

An extendable open source continuous integration server

PROS OF JENKINS
  • 523
    Hosted internally
  • 469
    Free open source
  • 318
    Great to build, deploy or launch anything async
  • 243
    Tons of integrations
  • 211
    Rich set of plugins with good documentation
  • 111
    Has support for build pipelines
  • 68
    Easy setup
  • 66
    It is open-source
  • 53
    Workflow plugin
  • 13
    Configuration as code
  • 12
    Very powerful tool
  • 11
    Many Plugins
  • 10
    Continuous Integration
  • 10
    Great flexibility
  • 9
    Git and Maven integration is better
  • 8
    100% free and open source
  • 7
    Github integration
  • 7
    Slack Integration (plugin)
  • 6
    Easy customisation
  • 6
    Self-hosted GitLab Integration (plugin)
  • 5
    Docker support
  • 5
    Pipeline API
  • 4
    Fast builds
  • 4
    Platform independence
  • 4
    Hosted Externally
  • 4
    Excellent docker integration
  • 3
    It's worked
  • 3
    Customizable
  • 3
    Can be run as a Docker container
  • 3
    It's Everywhere
  • 3
    JOBDSL
  • 3
    AWS Integration
  • 2
    Easily extendable with seamless integration
  • 2
    PHP Support
  • 2
    Build PR Branch Only
  • 2
    NodeJS Support
  • 2
    Ruby/Rails Support
  • 2
    Universal controller
  • 2
    Loose Coupling
CONS OF JENKINS
  • 13
    Workarounds needed for basic requirements
  • 10
    Groovy with cumbersome syntax
  • 8
    Plugins compatibility issues
  • 7
    Lack of support
  • 7
    Limited abilities with declarative pipelines
  • 5
    No YAML syntax
  • 4
    Too tied to plugins versions
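
The "Configuration as code" pro and the "Groovy with cumbersome syntax" / "No YAML syntax" cons above all refer to the Jenkinsfile format. A minimal declarative-pipeline sketch (stage layout and commands are invented for illustration):

```groovy
// Jenkinsfile: minimal declarative pipeline sketch (commands invented)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
    post {
        failure { echo 'build failed' }  // hook for notification plugins
    }
}
```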

related Jenkins posts

Tymoteusz Paul
Devops guy at X20X Development LTD · | 23 upvotes · 9.8M views

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing dev environments to produce a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, docker, LXC, in open net, behind vpn - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:

1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets-management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and into bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, much the same way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

Thierry Schellenbach

Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.

Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.

Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.

#ContinuousIntegration #CodeCollaborationVersionControl

Luigi

ETL and data flow management library

PROS OF LUIGI
  • 5
    Hadoop Support
  • 3
    Python
  • 1
    Open source

MySQL

The world's most popular open source database

PROS OF MYSQL
  • 800
    Sql
  • 679
    Free
  • 562
    Easy
  • 528
    Widely used
  • 490
    Open source
  • 180
    High availability
  • 160
    Cross-platform support
  • 104
    Great community
  • 79
    Secure
  • 75
    Full-text indexing and searching
  • 26
    Fast, open, available
  • 16
    Reliable
  • 16
    SSL support
  • 15
    Robust
  • 9
    Enterprise Version
  • 7
    Easy to set up on all platforms
  • 3
    NoSQL access to JSON data type
  • 1
    Relational database
  • 1
    Easy, light, scalable
  • 1
    Sequel Pro (best SQL GUI)
  • 1
    Replica Support
CONS OF MYSQL
  • 16
    Owned by a company with their own agenda
  • 3
    Can't roll back schema changes

related MySQL posts

Nick Rockwell
SVP, Engineering at Fastly · | 46 upvotes · 4.1M views

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end to the front end, which is mainly for server-side rendering as well.

Behind there, the main repository for the GraphQL server is a big table repository that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans which required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.
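
To make the PostgreSQL features the post praises concrete, here is a hedged SQL sketch (table and column names are invented) of the built-in full-text search and a partial index along the lines of the "unread messages" case:

```sql
-- Built-in full-text search: index a text column, then query it (names invented).
CREATE INDEX messages_fts_idx
    ON messages USING gin (to_tsvector('english', body));

SELECT id
  FROM messages
 WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'database migration');

-- Partial index: index only the rows the hot query touches, e.g. unread messages.
CREATE INDEX unread_by_user_idx
    ON user_messages (user_id)
 WHERE NOT read;
```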

PostgreSQL

A powerful, open source object-relational database system

PROS OF POSTGRESQL
  • 763
    Relational database
  • 510
    High availability
  • 439
    Enterprise class database
  • 383
    Sql
  • 304
    Sql + nosql
  • 173
    Great community
  • 147
    Easy to setup
  • 131
    Heroku
  • 130
    Secure by default
  • 113
    Postgis
  • 50
    Supports Key-Value
  • 48
    Great JSON support
  • 34
    Cross platform
  • 33
    Extensible
  • 28
    Replication
  • 26
    Triggers
  • 23
    Multiversion concurrency control
  • 23
    Rollback
  • 21
    Open source
  • 18
    Heroku Add-on
  • 17
    Stable, Simple and Good Performance
  • 15
    Powerful
  • 13
    Let's be serious, what other SQL DB would you go for?
  • 11
    Good documentation
  • 9
    Scalable
  • 8
    Free
  • 8
    Reliable
  • 8
    Intelligent optimizer
  • 7
    Transactional DDL
  • 7
    Modern
  • 6
    One stop solution for all things sql no matter the os
  • 5
    Relational database with MVCC
  • 5
    Faster Development
  • 4
    Full-Text Search
  • 4
    Developer friendly
  • 3
    Excellent source code
  • 3
    Free version
  • 3
    Great DB for Transactional system or Application
  • 3
    Relational database
  • 3
    search
  • 3
    Open-source
  • 2
    Text
  • 2
    Full-text
  • 1
    Can handle up to petabytes worth of size
  • 1
    Composability
  • 1
    Multiple procedural languages supported
  • 0
    Native
CONS OF POSTGRESQL
  • 10
    Table/index bloat

related PostgreSQL posts

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · | 30 upvotes · 11.2M views

Our whole DevOps stack consists of the following tools:

• GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) as collaborative review and code management tool
• Respectively Git as revision control system
• SourceTree as Git GUI
• Visual Studio Code as IDE
• CircleCI for continuous integration (automating the development process)
• Prettier / TSLint / ESLint as code linters
• SonarQube as quality gate
• Docker as container management (incl. Docker Compose for multi-container application management)
• VirtualBox for operating system simulation tests
• Kubernetes as cluster management for docker containers
• Heroku for deploying in test environments
• nginx as web server (preferably used as facade server in production environment)
• SSLMate (using OpenSSL) for certificate management
• Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
• PostgreSQL as preferred database system
• Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

• Key features: Easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, monitors the condition and ensures compensation in the event of failure.
• Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
• Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
• Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
• Scalability: All-in-one framework for distributed systems.
• Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open source and modular tool that works with any OS.

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

• The data replication must be near real-time, yet it should NOT impact the production database
• The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-data-to-target-data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works" and deployed it to production in a matter of a few days!
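
A hedged sketch of the kind of SQS-to-warehouse micro-service the post describes (not the author's actual code; the queue URL, DSN, table, and event shape are all invented):

```python
# Sketch: poll SQS for change events and mirror them into the warehouse.
import json

import boto3
from sqlalchemy import MetaData, Table, create_engine

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"  # invented
sqs = boto3.client("sqs")

engine = create_engine("postgresql://warehouse")  # invented DSN
users = Table("users", MetaData(), autoload_with=engine)  # target table modelled via SQLAlchemy


def poll_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])  # e.g. {"op": "insert", "doc": {...}} (invented shape)
        with engine.begin() as conn:
            if event["op"] == "insert":
                conn.execute(users.insert().values(**event["doc"]))
            elif event["op"] == "delete":
                conn.execute(users.delete().where(users.c.id == event["doc_id"]))
        # Only acknowledge the message once the mirror write has committed.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```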

MongoDB

The database for giant ideas

PROS OF MONGODB
  • 828
    Document-oriented storage
  • 593
    No sql
  • 553
    Ease of use
  • 464
    Fast
  • 410
    High performance
  • 255
    Free
  • 218
    Open source
  • 180
    Flexible
  • 145
    Replication & high availability
  • 112
    Easy to maintain
  • 42
    Querying
  • 39
    Easy scalability
  • 38
    Auto-sharding
  • 37
    High availability
  • 31
    Map/reduce
  • 27
    Document database
  • 25
    Easy setup
  • 25
    Full index support
  • 16
    Reliable
  • 15
    Fast in-place updates
  • 14
    Agile programming, flexible, fast
  • 12
    No database migrations
  • 8
    Easy integration with Node.Js
  • 8
    Enterprise
  • 6
    Enterprise Support
  • 5
    Great NoSQL DB
  • 4
    Support for many languages through different drivers
  • 3
    Schemaless
  • 3
    Aggregation Framework
  • 3
    Drivers support is good
  • 2
    Fast
  • 2
    Managed service
  • 2
    Easy to Scale
  • 2
    Awesome
  • 2
    Consistent
  • 1
    Good GUI
  • 1
    Acid Compliant
CONS OF MONGODB
  • 6
    Very slow for connected models that require joins
  • 3
    Not acid compliant
  • 2
    Proprietary query language

related MongoDB posts

Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

Redis

Open source (BSD licensed), in-memory data structure store

PROS OF REDIS
  • 886
    Performance
  • 542
    Super fast
  • 513
    Ease of use
  • 444
    In-memory cache
  • 324
    Advanced key-value cache
  • 194
    Open source
  • 182
    Easy to deploy
  • 164
    Stable
  • 155
    Free
  • 121
    Fast
  • 42
    High-Performance
  • 40
    High Availability
  • 35
    Data Structures
  • 32
    Very Scalable
  • 24
    Replication
  • 22
    Great community
  • 22
    Pub/Sub
  • 19
    "NoSQL" key-value data store
  • 16
    Hashes
  • 13
    Sets
  • 11
    Sorted Sets
  • 10
    NoSQL
  • 10
    Lists
  • 9
    Async replication
  • 9
    BSD licensed
  • 8
    Bitmaps
  • 8
    Integrates super easy with Sidekiq for Rails background
  • 7
    Keys with a limited time-to-live
  • 7
    Open Source
  • 6
    Lua scripting
  • 6
    Strings
  • 5
    Awesomeness for Free
  • 5
    Hyperloglogs
  • 4
    Transactions
  • 4
    Outstanding performance
  • 4
    Runs server-side Lua
  • 4
    LRU eviction of keys
  • 4
    Feature Rich
  • 4
    Written in ANSI C
  • 4
    Networked
  • 3
    Data structure server
  • 3
    Performance & ease of use
  • 2
    Don't save data if no subscribers are found
  • 2
    Automatic failover
  • 2
    Easy to use
  • 2
    Temporarily kept on disk
  • 2
    Scalable
  • 2
    Existing Laravel Integration
  • 2
    Channels concept
  • 2
    Object [key/value] size each 500 MB
  • 2
    Simple
CONS OF REDIS
  • 15
    Cannot query objects directly
  • 3
    No secondary indexes for non-numeric data types
  • 1
    No WAL
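
Several of the pros above (keys with a limited time-to-live, sorted sets with range queries) come down to a few lines of redis-py; a hedged sketch with invented key names:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# "Keys with a limited time-to-live": cache a session for one hour.
r.set("session:42", "alice", ex=3600)

# "Sorted sets with range queries": a small leaderboard.
r.zadd("scores", {"alice": 120, "bob": 95, "carol": 87})
top_three = r.zrevrange("scores", 0, 2, withscores=True)
print(top_three)  # [('alice', 120.0), ('bob', 95.0), ('carol', 87.0)]
```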

related Redis posts

Russel Werner
Lead Engineer at StackShare · | 32 upvotes · 2.8M views

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a Server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed, and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku, specifically with how processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The result of these efforts was significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

Amazon S3

Store and retrieve any amount of data, at any time, from anywhere on the web

PROS OF AMAZON S3
  • 590
    Reliable
  • 492
    Scalable
  • 456
    Cheap
  • 329
    Simple & easy
  • 83
    Many sdks
  • 30
    Logical
  • 13
    Easy Setup
  • 11
    REST API
  • 11
    1000+ POPs
  • 6
    Secure
  • 4
    Easy
  • 4
    Plug and play
  • 3
    Web UI for uploading files
  • 2
    Faster on response
  • 2
    Flexible
  • 2
    GDPR ready
  • 1
    Easy to use
  • 1
    Pluggable
  • 1
    Easy integration with CloudFront
CONS OF AMAZON S3
  • 7
    Permissions take some time to get right
  • 6
    Requires a credit card
  • 6
    Takes time/work to organize buckets & folders properly
  • 3
    Complex to set up
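
The "Many sdks" and "REST API" pros above boil down to calls like these; a hedged boto3 sketch with an invented bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Store an object ("any amount of data, at any time").
s3.upload_file("report.csv", "my-example-bucket", "reports/2024/report.csv")

# Retrieve it from anywhere with credentials and the key.
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2024/report.csv")
body = obj["Body"].read()
```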

related Amazon S3 posts

Ashish Singh
Tech Lead, Big Data Platform at Pinterest · | 38 upvotes · 3.3M views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are comprised of a fleet of 450 r4.8xl EC2 instances. Presto clusters together have over 100 TBs of memory and 14K vcpu cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency on bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

#BigData #AWS #DataScience #DataEngineering