Git platform for web and software developers with Docker-based tools for Continuous Integration and Deployment. Key features:

- Automatic deployments on push to branch
- Docker-based builds and tests
- 10-minute setup of a complete environment
- Integrates with GitHub, Bitbucket & GitLab
- DevOps and website monitoring actions
- Clear and telling UI/UX
- Supports all popular languages and frameworks, including PHP/Laravel, Node.js, Rails, Python, Java, .NET Core, Elixir, and Go

Vespa is an engine for low-latency computation over large data sets. It stores and indexes your data so that queries, selection, and processing over the data can be performed at serving time. No features are listed for Vespa.
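
To make the Vespa description concrete, here is a minimal query sketch against Vespa's HTTP search API. The localhost endpoint and the "music" document type are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch: querying a running Vespa instance over its HTTP search API.
# Assumes Vespa listens on localhost:8080 and a hypothetical "music" document
# type with a "title" field has already been deployed and fed.
import requests

response = requests.post(
    "http://localhost:8080/search/",
    json={
        "yql": 'select * from sources music where title contains "bohemian"',
        "hits": 5,  # return at most five matching documents
    },
    timeout=10,
)
response.raise_for_status()

# Vespa returns ranked hits under root.children, each with a relevance score.
for hit in response.json().get("root", {}).get("children", []):
    print(hit["relevance"], hit["fields"].get("title"))
```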

Statistics

|              | Git platform | Vespa |
| ------------ | ------------ | ----- |
| GitHub Stars | -            | 6.5K  |
| GitHub Forks | -            | 675   |
| Stacks       | 293          | 12    |
| Followers    | 348          | 29    |
| Votes        | 606          | 0     |

Pros & Cons

|      | Git platform | Vespa                     |
| ---- | ------------ | ------------------------- |
| Pros |              | No community feedback yet |
| Cons |              |                           |

Integrations

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
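
As a rough illustration of the batch side of that description, the sketch below uses PySpark to aggregate JSON files stored in HDFS. The path and column names are hypothetical placeholders, not details from this page.

```python
# Minimal PySpark sketch: a batch aggregation over files stored in HDFS.
# The hdfs:// path and the column names are made up for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-example").getOrCreate()

# Any Hadoop-readable source (HDFS, HBase, Cassandra, Hive, ...) could back this.
events = spark.read.json("hdfs:///data/events/*.json")

daily = (
    events.groupBy("event_date")
          .agg(F.count("*").alias("events"),
               F.countDistinct("user_id").alias("unique_users"))
)
daily.show()

spark.stop()
```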

Cloud 66 gives you everything you need to build, deploy and maintain your applications on any cloud, without the headache of dealing with "server stuff". Frameworks: Ruby on Rails, Node.js, Jamstack, Laravel, GoLang, and more.

DeployBot makes it simple to deploy your work anywhere. You can compile or process your code in a Docker container on our infrastructure, and we'll copy it to your servers once everything has been successfully built.

Distributed SQL Query Engine for Big Data

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
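
A minimal boto3 sketch of that workflow, submitting a SQL query and reading the results; the database, table, and S3 output location are hypothetical placeholders.

```python
# Minimal sketch: run a SQL query with Amazon Athena via boto3 and print rows.
# The database, table, and S3 output bucket are made up for illustration.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS requests FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```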

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
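
Flink also ships a Python Table API (PyFlink). The sketch below uses it rather than the Java/Scala APIs mentioned above, running one small SQL aggregation in batch mode over made-up in-memory data.

```python
# Minimal PyFlink sketch: the same Table API program can run in batch or
# streaming mode; data, table, and column names are made up for illustration.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())

orders = t_env.from_elements(
    [("alice", 12.5), ("bob", 7.0), ("alice", 3.5)],
    ["customer", "amount"],
)
t_env.create_temporary_view("orders", orders)

result = t_env.sql_query(
    "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer"
)
result.execute().print()
```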

An open-source data version control system for data lakes, it provides a “Git for data” platform enabling you to implement best practices from software engineering on your data lake, including branching and merging, CI/CD, and production-like dev/test environments.

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.
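
As an illustration of those fast aggregate queries, the sketch below sends a SQL query to Druid's HTTP SQL endpoint; the broker address and the "pageviews" datasource are assumptions, not details from this page.

```python
# Minimal sketch: issuing a SQL query against Druid's /druid/v2/sql endpoint.
# The broker address and the "pageviews" datasource are made up for illustration.
import requests

sql = """
SELECT channel, COUNT(*) AS views
FROM pageviews
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY channel
ORDER BY views DESC
LIMIT 10
"""

response = requests.post(
    "http://localhost:8082/druid/v2/sql",
    json={"query": sql},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # a list of result rows as JSON objects
```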

CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
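
A minimal boto3 sketch of working with such a pipeline, starting a run on demand and inspecting its stages; the pipeline name is a hypothetical placeholder.

```python
# Minimal sketch: trigger a CodePipeline run and inspect its stage states.
# The pipeline name "web-app-release" is made up for illustration.
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")

# Pipelines normally start automatically on a code change; this starts one on demand.
run = codepipeline.start_pipeline_execution(name="web-app-release")
print("Started execution:", run["pipelineExecutionId"])

# Inspect each stage of the release process (e.g. Source, Build, Deploy).
state = codepipeline.get_pipeline_state(name="web-app-release")
for stage in state["stageStates"]:
    print(stage["stageName"], stage.get("latestExecution", {}).get("status"))
```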

Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark, supporting extremely large datasets. It was originally contributed by eBay Inc.
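
A minimal sketch of querying Kylin over its REST API; the host, credentials, project, and table are taken from Kylin's documented defaults and sample project, and should be treated as assumptions here.

```python
# Minimal sketch: run a SQL query against Kylin's REST API.
# Host, port, ADMIN/KYLIN credentials, and the learn_kylin sample project are
# Kylin's documented defaults, used here only as illustrative assumptions.
import requests

response = requests.post(
    "http://localhost:7070/kylin/api/query",
    auth=("ADMIN", "KYLIN"),
    json={
        "sql": "SELECT part_dt, SUM(price) AS revenue FROM kylin_sales GROUP BY part_dt",
        "project": "learn_kylin",
        "limit": 100,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json().get("results"))
```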