Azure Synapse Analytics: an analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. It brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs. Key features: complete T-SQL-based analytics (generally available); deeply integrated Apache Spark; hybrid data integration; a unified user experience.

Cloudflow: a framework for quickly developing, orchestrating, and operating distributed streaming applications on Kubernetes. With Cloudflow, streaming applications are built from small, composable components wired together with schema-based contracts. It can dramatically accelerate streaming application development, reducing the time required to create, package, and deploy from weeks to hours. Key features: runs on Apache Spark, Apache Flink, and Akka Streams; lets you focus on business logic and leave the boilerplate to the framework; provides the tooling to go from business logic to a deployable Docker image; provides Kubernetes tooling to deploy your distributed system with a single command and to manage durable connections between processing stages; a Lightbend subscription adds the tools for insights, observability, and lifecycle management as your distributed streaming application evolves. (A minimal sketch of composing such streaming stages follows below.)
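Cloudflow pipelines run on stream runtimes such as Akka Streams, where an application is assembled from small, typed stages. The sketch below uses plain Akka Streams rather than the Cloudflow streamlet API itself, purely to illustrate the idea of wiring independent processing stages together; the SensorReading type, stage names, and threshold are illustrative assumptions.

```scala
// Minimal sketch: composing small, typed streaming stages with Akka Streams,
// one of the runtimes Cloudflow supports. This is plain Akka Streams, not the
// Cloudflow streamlet API; the types and names below are illustrative assumptions.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

object ComposablePipeline extends App {
  implicit val system: ActorSystem = ActorSystem("pipeline")

  // Hypothetical event type carried between stages.
  final case class SensorReading(deviceId: String, value: Double)

  // Each stage is a small, reusable component with a typed contract.
  val ingest: Source[SensorReading, _] =
    Source(List(
      SensorReading("dev-1", 17.5),
      SensorReading("dev-2", 42.0),
      SensorReading("dev-1", 99.9)
    ))

  val filterHigh: Flow[SensorReading, SensorReading, _] =
    Flow[SensorReading].filter(_.value > 40.0) // keep only "interesting" readings

  val publish: Sink[SensorReading, _] =
    Sink.foreach[SensorReading](r => println(s"alert: $r"))

  // Wire the stages together into a runnable pipeline.
  ingest.via(filterHigh).runWith(publish)
}
```

In Cloudflow proper, each stage would be packaged as a streamlet with schema-based inlets and outlets, and the framework handles the Docker image build and Kubernetes deployment wiring.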
Statistics
Azure Synapse Analytics: GitHub Stars -, GitHub Forks -, Stacks 104, Followers 230, Votes 10
Cloudflow: GitHub Stars 323, GitHub Forks 89, Stacks 5, Followers 13, Votes 0
Pros & Cons
No community feedback yet.
Integrations
No integrations available.

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.

Rancher is an open source container management platform that includes full distributions of Kubernetes, Apache Mesos and Docker Swarm, and makes it simple to operate container clusters on any cloud or infrastructure platform.

With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running.

Swarm serves the standard Docker API, so any tool which already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts: Dokku, Compose, Krane, Deis, DockerUI, Shipyard, Drone, Jenkins... and, of course, the Docker client itself.

It is an easy way to generate charts and dashboards, run simple ad hoc queries without using SQL, and see detailed information about rows in your database. You can set it up in under 5 minutes, and then give yourself and others a place to ask simple questions and understand the data your application is generating.

Tutum lets developers easily manage and run lightweight, portable, self-sufficient containers from any application. AWS-like control, Heroku-like ease. The same container that a developer builds and tests on a laptop can run at scale in Tutum.

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.
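As one example of client-library access, the Java client can be used from Scala to run a query. This is a rough sketch, assuming the google-cloud-bigquery library and application-default credentials; the public dataset and the query are illustrative.

```scala
// Rough sketch of querying BigQuery from Scala via the Java client library
// (google-cloud-bigquery). Assumes application-default credentials are configured;
// the dataset and query below are illustrative.
import com.google.cloud.bigquery.{BigQueryOptions, QueryJobConfiguration}
import scala.jdk.CollectionConverters._

object BigQueryExample extends App {
  val bigquery = BigQueryOptions.getDefaultInstance.getService

  val queryConfig = QueryJobConfiguration
    .newBuilder(
      "SELECT name, SUM(number) AS total " +
        "FROM `bigquery-public-data.usa_names.usa_1910_2013` " +
        "GROUP BY name ORDER BY total DESC LIMIT 10")
    .build()

  // Run the query and iterate over the result rows.
  for (row <- bigquery.query(queryConfig).iterateAll().asScala) {
    println(s"${row.get("name").getStringValue}: ${row.get("total").getLongValue}")
  }
}
```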

It is a universal container management tool. It works with Kubernetes, Docker, Docker Swarm and Azure ACI. It allows you to manage containers without needing to know platform-specific code.

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
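A brief sketch of the batch side in Scala follows, assuming a local SparkSession and a hypothetical events.json input file; the same DataFrame API carries over to streaming and interactive use.

```scala
// Minimal batch-processing sketch using the Spark DataFrame API.
// Assumes a local SparkSession and a hypothetical events.json input file.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SparkBatchExample extends App {
  val spark = SparkSession.builder()
    .appName("batch-example")
    .master("local[*]") // run locally; on a cluster this comes from YARN or standalone config
    .getOrCreate()

  // Read a (hypothetical) JSON dataset and run a simple aggregation.
  val events = spark.read.json("events.json")
  events
    .groupBy("userId")
    .agg(count("*").as("eventCount"))
    .orderBy(desc("eventCount"))
    .show(10)

  spark.stop()
}
```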

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.