Docker: The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation. It lets organizations seamlessly build and share any application, from legacy to what comes next, and run it securely anywhere.

The compared tool: An open-source data integration platform that syncs data from applications, APIs, and databases to data warehouses, lakes, and databases.
Docker features: Integrated developer tools; open, portable images; shareable, reusable apps; framework-aware builds; standardized templates; multi-environment support; remote registry management; simple setup for Docker and Kubernetes; certified Kubernetes; application templates; enterprise controls; secure software supply chain; industry-leading container runtime; image scanning; access controls; image signing; caching and mirroring; image lifecycle; policy-based image promotion (see the code sketch after this list).

Compared tool features: Scheduled updates; manual full refresh; real-time monitoring; debugging autonomy; optional normalized schemas; full control over the data; benefit from the long tail of connectors and adapt them to your needs; build connectors in the language of your choice, as they run in Docker containers.
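As a concrete illustration of the container workflow these features describe, here is a minimal sketch using the Docker SDK for Python (the "docker" package). It assumes a local Docker daemon is running; the image and command are arbitrary placeholders.

# Minimal sketch: pull a public image and run a one-off container
# with the Docker SDK for Python. Assumes the "docker" package is
# installed and a local Docker daemon is reachable.
import docker

client = docker.from_env()            # connect to the local Docker daemon

client.images.pull("alpine:3.19")     # pull a small public image

# Run a throwaway container, capture its stdout, and remove it afterwards.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())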
Statistics | Docker | Compared tool
GitHub Stars | - | 20.0K
GitHub Forks | - | 4.9K
Stacks | 194.2K | 105
Followers | 143.8K | 112
Votes | 3.9K | 5
Integrations

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease: bulk load your data from Google Cloud Storage or stream it in. Easy access: query BigQuery through a browser tool, a command-line tool, or the BigQuery REST API via client libraries for Java, PHP, or Python.
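For instance, a query through the official Python client library might look like the following minimal sketch; it assumes the google-cloud-bigquery package is installed and Google Cloud credentials are already configured, and the SQL targets a public sample dataset purely as a placeholder.

# Minimal sketch: run a query with the google-cloud-bigquery client library.
from google.cloud import bigquery

client = bigquery.Client()   # picks up the default project and credentials

# Placeholder query against a public dataset; replace with your own SQL.
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(sql).result():   # result() waits for the job to finish
    print(row.name, row.total)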

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage containers. It is essentially an alternative to LXC's tools and distribution template system, with the added features that come from being controllable over the network.
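As a rough sketch of that network-controllable API, the snippet below talks to a local LXD daemon through the third-party pylxd client library; the library and its calls are an assumption here, not something the description above prescribes.

# Rough sketch: listing containers through LXD's REST API via pylxd.
# Assumes pylxd is installed and an LXD daemon is running locally.
from pylxd import Client

client = Client()   # connects to the local LXD unix socket by default

# List existing containers with their current status.
for container in client.containers.all():
    print(container.name, container.status)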

Qubole is a cloud-based service that makes big data easy for analysts and data engineers.

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

We run Apache Hadoop for you. We not only deploy Hadoop; we monitor, manage, fix, and update it for you. Then we take it a step further: we monitor your jobs, notify you when something's wrong with them, and can help with tuning.

Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn.
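To illustrate the "no infrastructure to manage" model, here is a minimal sketch using the Snowflake Connector for Python; the account identifier, credentials, and warehouse name are placeholders.

# Minimal sketch: querying Snowflake with the official Python connector.
# Assumes snowflake-connector-python is installed; all credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account_identifier",   # placeholder
    user="your_user",                    # placeholder
    password="your_password",            # placeholder
    warehouse="COMPUTE_WH",              # placeholder virtual warehouse
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")   # simple sanity-check query
    print(cur.fetchone()[0])
finally:
    conn.close()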

LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.
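That API is also exposed through language bindings; the sketch below uses the Python binding (python3-lxc), with the container name and image details as placeholders, and assumes sufficient privileges to create and start containers.

# Rough sketch: creating and starting a container with the LXC Python binding.
import lxc

container = lxc.Container("demo")   # placeholder container name

if not container.defined:
    # Create a container from the "download" template; dist/release/arch are placeholders.
    container.create("download", 0,
                     {"dist": "ubuntu", "release": "jammy", "arch": "amd64"})

container.start()
print(container.state)   # e.g. "RUNNING"
container.stop()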

Stitch is a simple, powerful ETL service built for software developers. Stitch evolved out of RJMetrics, a widely used business intelligence platform. When RJMetrics was acquired by Magento in 2016, Stitch was launched as its own company.

It is an analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. It brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.
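For example, the serverless on-demand endpoint can be queried like any other SQL Server-compatible endpoint. The sketch below uses pyodbc; the workspace name, credentials, and query are placeholders, and the endpoint format is an assumption based on the usual Synapse serverless naming rather than something stated above.

# Rough sketch: querying a Synapse serverless SQL endpoint with pyodbc.
# Assumes the "ODBC Driver 18 for SQL Server" is installed locally.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace-name>-ondemand.sql.azuresynapse.net;"   # placeholder serverless endpoint
    "Database=master;"
    "Uid=<user>;Pwd=<password>;"
    "Encrypt=yes;"
)

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")   # simple sanity-check query
print(cursor.fetchone()[0])
conn.close()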