An open source project to pack, ship and run any application as a lightweight container

What is Docker?

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
Docker is a tool in the Virtual Machine Platforms & Containers category of a tech stack.
Docker is an open source tool with 53.5K GitHub stars and 15.4K GitHub forks. Here's a link to Docker's open source repository on GitHub.

Who uses Docker?

Companies
3413 companies use Docker in their tech stacks, including Spotify, Pinterest, and Twitter.

Developers
2738 developers use Docker.

Docker Integrations

Bitbucket, VirtualBox, Kubernetes, Ansible, and Vagrant are some of the popular tools that integrate with Docker. Here's a list of all 190 tools that integrate with Docker.

Why do developers like Docker?

Here's a list of reasons why companies and developers use Docker.
Docker Reviews

Here are some stack decisions, common use cases and reviews by companies and developers who chose Docker in their tech stack.

Eric Colson
Chief Algorithms Officer at Stitch Fix · 19 upvotes · 34.6K views
at Stitch Fix
Amazon EC2 Container Service
Docker
PyTorch
R
Python
Presto
Apache Spark
Amazon S3
PostgreSQL
Kafka
#Data
#DataStack
#DataScience
#ML
#Etl
#AWS

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan gives our data scientists the ability to quickly productionize the models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Julien DeFrance
Full Stack Engineering Manager at ValiMail · 16 upvotes · 45.2K views
at SmartZip
Amazon DynamoDB
Ruby
Node.js
AWS Lambda
New Relic
Amazon Elasticsearch Service
Elasticsearch
Superset
Amazon Quicksight
Amazon Redshift
Zapier
Segment
Amazon CloudFront
Memcached
Amazon ElastiCache
Amazon RDS for Aurora
MySQL
Amazon RDS
Amazon S3
Docker
Capistrano
AWS Elastic Beanstalk
Rails API
Rails
Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend using Rails API, so we would get independently scalable microservices from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrating to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service that your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 16 upvotes · 26.2K views
at ChecklyHQ
vuex
Knex.js
PostgreSQL
Amazon S3
AWS Lambda
Vue.js
hapi
Node.js
GitHub
Docker
Heroku

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub and AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.

Hope this helps anyone building out their SaaS and is in a similar situation.

Ganesa Vijayakumar
Full Stack Coder | Module Lead · 14 upvotes · 10.6K views
SonarQube
Codacy
Docker
Git
Apache Maven
Amazon EC2 Container Service
Microsoft Azure
Amazon Route 53
Elasticsearch
Solr
Amazon RDS
Amazon S3
Heroku
Hibernate
MySQL
Node.js
Java
Bootstrap
jQuery Mobile
jQuery UI
jQuery
JavaScript
React Native
React Router
React

I'm planning to create a web application and also a mobile application to provide a very good shopping experience to end customers. In short, my application will aggregate product details from different sources and give the user a clear picture of when and where to buy a product with the best quality and cost.

I have planned to develop this in many milestones, adding any number of features over time, and I have picked the core part first (aggregating the product details from different sources).

Based on my work experience and knowledge, I have chosen the following stacks for this mission.

UI: I would like to develop this application using React, React Router and React Native, since I'm a little bit familiar with them and, most importantly, they will help in developing both web and mobile apps. In addition, I'm going to use JavaScript, jQuery, jQuery UI, jQuery Mobile and Bootstrap wherever required.

Service: I have planned to use Java as the main business-layer language. As I have 7+ years of experience with it, I believe I can do better work using Java than other languages. In addition, I'm thinking of using Node.js.

Database and ORM: I'm going to pick MySQL as the DB and Hibernate as the ORM, since I have good knowledge of and work experience with this combination.

Search Engine: I need to deal with a large amount of product data and its detailed info to provide enough detail to the end user, and at the same time I need to focus on performance. So I have decided to use Solr as the search engine for product search and suggestions. In addition, I'm thinking of replacing Solr with Elasticsearch once I have explored/reviewed Elasticsearch enough.

Host: As of now, my plan is to complete the application with decent features first and deploy it in a free hosting environment such as Docker and Heroku, and then, once it is stable, to use the AWS products Amazon S3, EC2, Amazon RDS and Amazon Route 53. I'm not sure what Microsoft Azure offers over Heroku and Amazon EC2 Container Service. Anyhow, I will explore these once again and pick the best-suited one for my requirements once I reach that level.

Build and Repositories: I have decided to choose Apache Maven and Git, as these are my favorites and also very popular for builds and repositories, respectively.

Additional Utilities :) - I would like to choose Codacy for code review, as their Startup plan will be very helpful for this application. I'm already experienced with Google CheckStyle and SonarQube, but I'm still looking into Codacy.

Happy Coding! Suggestions are welcome! :)

Thanks, Ganesa

Zach Holman
at Zach Holman · 14 upvotes · 5.5K views
Docker Compose
Docker
Home Assistant

I've been recently getting really into home automation- you know, making my house Smart™, which basically means half the time my lights don't turn on and the other half of the time apparently my kitchen faucet needs a static IP address.

But it's been a blast! It's a fun way to write code for yourself, outside of work, to have an impact in the real world. It's a nice way of falling in love with a different side of programming again.

I've used Apple's HomeKit for a while, since we're pretty all-in on Apple devices at home, but the rough edges have been grating at me more and more. HomeKit is so opaque: you can't see what's wrong, why a device is unresponsive, and most importantly, the compatibility isn't there. HomeKit has a limited selection of more expensive accessories, and as you go beyond just simple LED lights, you want a bit more power. Also, we're programmers, dammit, gimme all the things.

Anyway, I've switched to Home Assistant over the last few months, and I'm kicking myself I didn't make the switch earlier. As a programmer, it's great: you get more capability than pretty much any other smart home platform (integrations have been written for most devices and technologies out there today), it's easier to debug, and when you want to go bigger than just simple lights on/off, HA has some really powerful stuff behind it.

I use Home Assistant in conjunction with Docker and Docker Compose; since the config is extracted out, upgrades are usually as easy as a pull of the latest version. I've just started digging into writing integrations for a lesser-used device that I have at home, and HA makes it pretty straightforward to just magically add it to the home network.
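As a rough sketch of that kind of setup: the compose file below is illustrative only, not the actual configuration described above; the image tag, volume path and timezone are assumptions. It shows the point being made, that keeping the config in a host-mounted volume makes an upgrade little more than pulling a newer image.

  # Illustrative docker-compose.yml for running Home Assistant in a container.
  # Image tag, paths and timezone are assumptions, not the author's real setup.
  version: "3"
  services:
    homeassistant:
      image: homeassistant/home-assistant:stable   # pin a specific release if preferred
      volumes:
        - ./config:/config        # config lives on the host, so it survives container upgrades
      environment:
        - TZ=Europe/Amsterdam     # example timezone
      network_mode: host          # simplifies device discovery on the home network
      restart: unless-stopped
  # Upgrading is then roughly: docker-compose pull && docker-compose up -d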

It plays well with others, too: we require a VPN connection into the home network to access our Home Assistant install, and HA has a few tricks to help with that (ignoring the VPN route if you're on a local network, etc.). There's nice client support for iOS and Android, too.

Anyway, big fan of Home Assistant if you want to go beyond simple home automations and setup. Wish I would have done it a lot earlier. Also, big fan of jumping into all this if you have the time and interest to do so- it's been tickling a different part of my code brain than I've had access to in awhile, and that's been fun in and of itself.

Daniel Quinn
Senior Developer at Founders4Schools · 14 upvotes · 2.1K views
at The Paperless Project
Docker

We use Docker because Paperless is a stand-alone application with some complicated dependencies. Rather than expecting our users to figure out how to install and set up Tesseract on their local systems (if said systems even support it), we can just tell them to run docker-compose up and everything else is just magic!
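To make the idea concrete, here is a generic sketch of that pattern. It is not the Paperless project's actual docker-compose.yml; the service name, port and volume path are placeholders. The point is that dependencies such as Tesseract are baked into the image, so the user-facing setup collapses to one file and one command.

  # Generic sketch only; not the Paperless project's real compose file.
  version: "3"
  services:
    app:
      build: .                    # the image bundles the app plus its OCR dependencies
      ports:
        - "8000:8000"             # placeholder port
      volumes:
        - ./data:/usr/src/data    # placeholder path for documents and the index
      restart: unless-stopped
  # Users then only need to run: docker-compose up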

See more

Docker's features

  • Automating the packaging and deployment of applications
  • Creation of lightweight, private PAAS environments
  • Automated testing and continuous integration/deployment
  • Deploying and scaling web apps, databases and backend services
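As a rough illustration of the last two points, a single docker-compose.yml can describe a small, self-contained stack (a web app plus its database) that is brought up with one command. The service names, images, ports and credentials below are purely illustrative and not taken from any project mentioned above.

  # Illustrative only; names, images and credentials are placeholders.
  version: "3"
  services:
    web:
      image: myorg/webapp:1.0          # hypothetical application image
      ports:
        - "8080:8080"
      depends_on:
        - db
      environment:
        - DATABASE_URL=postgres://app:secret@db:5432/app
    db:
      image: postgres:13               # official PostgreSQL image
      environment:
        - POSTGRES_USER=app
        - POSTGRES_PASSWORD=secret
        - POSTGRES_DB=app
      volumes:
        - db-data:/var/lib/postgresql/data
  volumes:
    db-data:
  # Bring the whole stack up with: docker-compose up -d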

Docker Alternatives & Comparisons

What are some alternatives to Docker?
LXC
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.
rkt
Rocket is a CLI for running App Containers. The goal of Rocket is to be composable, secure, and fast.
Cloud Foundry
Cloud Foundry is an open platform as a service (PaaS) that provides a choice of clouds, developer frameworks, and application services. Cloud Foundry makes it faster and easier to build, test, deploy, and scale applications.
containerd
An industry-standard container runtime with an emphasis on simplicity, robustness, and portability
LXD
LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.

Docker's Stats