Luigi vs Github Actions: What are the differences?
**Luigi:** *ETL and data flow management library*. It is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, etc. It also comes with Hadoop support built in. **Github Actions:** *Automate your workflow from idea to production*. It makes it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want.
Luigi and Github Actions can be categorized as "Workflow Manager" tools.
Some of the features offered by Luigi are (a minimal pipeline sketch follows this list):
- dependency resolution
- workflow management
- visualization
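As a rough illustration of those features, here is a minimal Luigi pipeline; it is only a sketch, and the task names, file paths, and sample data are made up. Each task declares its upstream dependency with `requires()` and its result with `output()`, which is what drives dependency resolution (and the DAG shown in the central scheduler's visualizer):

```python
import datetime
import luigi


class Extract(luigi.Task):
    """Writes raw data to a local file target (stands in for a real extract step)."""
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"data/raw_{self.date}.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("id,value\n1,42\n")


class Transform(luigi.Task):
    """Depends on Extract; Luigi runs Extract first and skips tasks whose output already exists."""
    date = luigi.DateParameter()

    def requires(self):
        return Extract(date=self.date)

    def output(self):
        return luigi.LocalTarget(f"data/clean_{self.date}.csv")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            dst.write(src.read().upper())


if __name__ == "__main__":
    # local_scheduler=True avoids needing luigid; drop it to use the central scheduler UI.
    luigi.build([Transform(date=datetime.date.today())], local_scheduler=True)
```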
On the other hand, Github Actions provides the following key features:
- Multiple workflow files support
- Free and open source
- Workflow run interface
Luigi is an open source tool with 13.4K GitHub stars and 2.13K GitHub forks. Here's a link to Luigi's open source repository on GitHub: https://github.com/spotify/luigi.
According to the StackShare community, Github Actions has broader approval, being mentioned in 31 company stacks and 30 developer stacks, compared to Luigi, which is listed in 9 company stacks and 34 developer stacks.
I am so confused. I need a tool that will let me hit about 10 different URLs to get a list of objects; those object lists will be hundreds or thousands of items long. I then need to fetch detailed data for each object, and those detail lists can have hundreds of elements that could be map/reduced somehow. My batch process sometimes dies halfway through, which means hours of processing are wasted.

I need something like a directed graph that keeps the results of successful data collection and lets me, either programmatically or manually, retry the failed ones any number of times (from zero to forever). I then want it to process everything that has succeeded or been deliberately skipped, and load the data store with the aggregation of a couple thousand data points. I know hitting this many endpoints is not good practice, but I can't put collectors on the endpoints or anything like that; it is pretty much the only way to get the data.
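Since this page compares Luigi, one way to read that requirement is Luigi's own model: one task per URL, a file or database target per task so completed work is never redone, and a retry count so the scheduler re-queues failures. The following is only a sketch under those assumptions; the endpoint URLs, paths, and the final aggregation are placeholders:

```python
import json
import luigi
import requests

URLS = [f"https://example.com/api/source/{i}/objects" for i in range(10)]  # placeholder endpoints


class FetchObjectList(luigi.Task):
    """One task per source URL; once its target exists it is never fetched again."""
    source = luigi.IntParameter()
    retry_count = 3  # ask the scheduler to re-queue this task on failure (scheduler settings may also apply)

    def output(self):
        return luigi.LocalTarget(f"data/objects_{self.source}.json")

    def run(self):
        resp = requests.get(URLS[self.source], timeout=60)
        resp.raise_for_status()
        with self.output().open("w") as f:
            f.write(resp.text)


class Aggregate(luigi.Task):
    """Only runs once every fetch target exists; a rerun picks up where the batch died."""

    def requires(self):
        return [FetchObjectList(source=i) for i in range(len(URLS))]

    def output(self):
        return luigi.LocalTarget("data/aggregate.json")

    def run(self):
        records = []
        for target in self.input():
            with target.open() as f:
                records.extend(json.load(f))
        with self.output().open("w") as f:
            json.dump({"object_count": len(records)}, f)  # placeholder aggregation


if __name__ == "__main__":
    luigi.build([Aggregate()], local_scheduler=True, workers=4)
```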
For a non-streaming approach:
You could consider using more checkpoints throughout your Spark jobs. Furthermore, you could separate your workload into multiple jobs with an intermediate data store (Cassandra is one suggestion; choose based on your needs and availability) to store results, perform aggregations, and store the results of those as well.
- Spark Job 1 - Fetch data from the 10 URLs and store the data and metadata in a data store (Cassandra)
- Spark Jobs 2..n - Check the data store for unprocessed items and continue the aggregation
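A rough PySpark sketch of that two-job pattern follows. It assumes the spark-cassandra-connector package is available, a `pipeline.objects` table already exists, and the endpoint URLs and aggregation step are placeholders:

```python
import json
import requests
from pyspark.sql import Row, SparkSession

spark = (SparkSession.builder
         .appName("fetch-object-lists")
         .config("spark.cassandra.connection.host", "cassandra-host")  # assumed host
         .getOrCreate())

# --- Job 1: fetch each object list and persist it with a processed flag ---
urls = [f"https://example.com/api/source/{i}/objects" for i in range(10)]  # placeholders
rows = []
for url in urls:
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    for obj in resp.json():
        rows.append(Row(object_id=str(obj["id"]), source=url,
                        payload=json.dumps(obj), processed=False))

(spark.createDataFrame(rows)
      .write.format("org.apache.spark.sql.cassandra")
      .options(table="objects", keyspace="pipeline")
      .mode("append")
      .save())

# --- Jobs 2..n: pick up only the unprocessed items and continue the aggregation ---
unprocessed = (spark.read.format("org.apache.spark.sql.cassandra")
               .options(table="objects", keyspace="pipeline")
               .load()
               .filter("processed = false"))
print(unprocessed.count())  # stand-in for the real aggregation step
```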
Alternatively, for a streaming approach: treating your data as a stream might also be useful. Spark Streaming allows you to use a checkpoint interval - https://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing
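For completeness, here is a sketch of a checkpointed `StreamingContext` in the style of that section of the guide (DStream API); the checkpoint directory, source host/port, and windowed count are placeholders:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

CHECKPOINT_DIR = "hdfs:///checkpoints/object-pipeline"  # assumed fault-tolerant location


def create_context():
    sc = SparkContext(appName="object-stream")
    ssc = StreamingContext(sc, batchDuration=60)          # 60-second micro-batches
    ssc.checkpoint(CHECKPOINT_DIR)                        # enables metadata + data checkpointing
    lines = ssc.socketTextStream("collector-host", 9999)  # placeholder source of object ids
    lines.countByWindow(600, 60).pprint()                 # placeholder windowed aggregation
    return ssc


# On restart the context (and in-flight state) is rebuilt from the checkpoint,
# so a crash no longer costs hours of reprocessing.
ssc = StreamingContext.getOrCreate(CHECKPOINT_DIR, create_context)
ssc.start()
ssc.awaitTermination()
```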
Pros of GitHub Actions
- Integration with GitHub (8)
- Free (5)
- Easy to duplicate a workflow (3)
- Ready actions in Marketplace (3)
- Configs stored in .github (2)
- Docker Support (2)
- Active Development Roadmap (1)
- Fast (1)
Pros of Luigi
- Hadoop Support (5)
- Python (3)
- Open source (1)
Cons of GitHub Actions
- Lacking [skip ci] (5)
- Lacking allow failure (4)
- Lacking job specific badges (3)
- No ssh login to servers (2)
- No Deployment Projects (1)
- No manual launch (1)