AWS Glue: A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

Cloudflow: Enables you to quickly develop, orchestrate, and operate distributed streaming applications on Kubernetes. With Cloudflow, streaming applications are composed of small, composable components wired together with schema-based contracts. It can dramatically accelerate streaming application development, reducing the time required to create, package, and deploy from weeks to hours.
AWS Glue:
- Easy: AWS Glue automates much of the effort in building, maintaining, and running ETL jobs. It crawls your data sources, identifies data formats, and suggests schemas and transformations, then automatically generates the code to execute your data transformations and loading processes.
- Integrated: AWS Glue is integrated across a wide range of AWS services.
- Serverless: There is no infrastructure to provision or manage. AWS Glue handles provisioning, configuration, and scaling of the resources required to run your ETL jobs on a fully managed, scale-out Apache Spark environment. You pay only for the resources used while your jobs are running.
- Developer friendly: AWS Glue generates ETL code that is customizable, reusable, and portable, using familiar technology: Scala, Python, and Apache Spark (see the sketch after this list). You can also import custom readers, writers, and transformations into your Glue ETL code. Since the generated code is based on open frameworks, there is no lock-in; you can use it anywhere.

Cloudflow:
- Works with Apache Spark, Apache Flink, and Akka Streams.
- Focus only on business logic and leave the boilerplate to Cloudflow.
- Provides all the tooling for going from business logic to a deployable Docker image.
- Provides Kubernetes tooling to deploy your distributed system with a single command and manage durable connections between processing stages.
- With a Lightbend subscription, you get all the tools you need for insights, observability, and lifecycle management as your distributed streaming application evolves.
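To make the "generated ETL code" point on the AWS Glue side concrete, here is a minimal sketch of what a Glue job script looks like in Python. It follows the general shape of a Glue job (read from the Data Catalog, apply a mapping, write to S3); the catalog database, table, mappings, and S3 path are hypothetical placeholders, not values from this page.

```python
# Minimal AWS Glue job sketch (Python / PySpark). Database, table, and S3 paths
# below are hypothetical placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a Glue crawler discovered and registered in the Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",        # hypothetical catalog database
    table_name="raw_orders",    # hypothetical catalog table
)

# Apply editable column mappings of the kind Glue suggests:
# (source column, source type, target column, target type).
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write the result to S3 in a columnar format for analytics.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```

Because a script like this is plain PySpark plus the awsglue library, it can be edited, versioned, and run like any other Spark code, which is what the "no lock-in" claim refers to.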
Statistics | AWS Glue | Cloudflow
GitHub Stars | - | 323
GitHub Forks | - | 89
Stacks | 461 | 5
Followers | 819 | 13
Votes | 9 | 0
Pros & Cons
Pros: No community feedback yet.
Integrations

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
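As a small illustration of that "declared intentions" model, the sketch below uses the official Kubernetes Python client to compare a Deployment's desired replica count with what is actually running. It assumes a reachable cluster and a local kubeconfig; the "default" namespace is a hypothetical choice.

```python
# Sketch: Kubernetes stores the desired state (spec) and the observed state
# (status) for each workload and continuously reconciles the two.
# Assumes a reachable cluster; the "default" namespace is an assumption.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment(namespace="default").items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: desired={desired} ready={ready}")
```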

Rancher is an open source container management platform that includes full distributions of Kubernetes, Apache Mesos and Docker Swarm, and makes it simple to operate container clusters on any cloud or infrastructure platform.

With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running.

Swarm serves the standard Docker API, so any tool which already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts: Dokku, Compose, Krane, Deis, DockerUI, Shipyard, Drone, Jenkins... and, of course, the Docker client itself.
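Because Swarm serves the standard Docker API, any Docker client library can point at a Swarm manager instead of a single daemon. Here is a minimal sketch using the Docker SDK for Python; the manager address is a hypothetical placeholder.

```python
# Minimal sketch: pointing a standard Docker client at a Swarm manager instead
# of a local daemon. The manager address below is a hypothetical placeholder.
import docker

# The same SDK call works against a single Docker daemon or a Swarm manager,
# because Swarm exposes the standard Docker API.
client = docker.DockerClient(base_url="tcp://swarm-manager.example.com:2375")

# Swarm decides which node actually runs the container; the client code is unchanged.
container = client.containers.run("alpine", "echo hello-from-swarm", detach=True)
print(container.id)
```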

Tutum lets developers easily manage and run lightweight, portable, self-sufficient containers from any application. AWS-like control, Heroku-like ease. The same container that a developer builds and tests on a laptop can run at scale in Tutum.

It is a universal container management tool. It works with Kubernetes, Docker, Docker Swarm and Azure ACI. It allows you to manage containers without needing to know platform-specific code.

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
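To make the "batch and streaming on one engine" point concrete, here is a small PySpark sketch; the input path, column names, and socket source are hypothetical placeholders.

```python
# Small PySpark sketch: the same engine covers batch reads and incremental
# streaming queries. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-sketch").getOrCreate()

# Batch: read Parquet (e.g. from HDFS or S3) and aggregate, MapReduce-style.
orders = spark.read.parquet("hdfs:///data/orders")
daily = orders.groupBy("order_date").agg(F.sum("amount").alias("total"))
daily.show()

# Streaming: a similar query over a socket source, computed incrementally.
lines = (
    spark.readStream.format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)
counts = lines.groupBy("value").count()
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```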

Distributed SQL Query Engine for Big Data

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
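As a rough illustration of querying data in S3 with standard SQL and no infrastructure to manage, here is a boto3 sketch; the database name, query, and result bucket are hypothetical placeholders.

```python
# Minimal Amazon Athena sketch with boto3. Database, query, and the S3 result
# location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a standard SQL query against data that lives in S3.
execution = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```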

Automate and parallelize testing. Codefresh allows teams to spin up on-demand compositions to run unit and integration tests as part of the continuous integration process. Jenkins integration allows more complex pipelines.