Docker vs ZeroVM: What are the differences?
What is Docker? Enterprise Container Platform for High-Velocity Innovation. The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation, enabling organizations to build and share any application, from legacy to what comes next, and to run it securely anywhere.
What is ZeroVM? Open-source lightweight virtualization platform. ZeroVM is an open-source virtualization technology based on the Chromium Native Client (NaCl) project. ZeroVM creates a secure, isolated execution environment that can run a single thread or application. It is designed to be lightweight and portable, and can easily be embedded inside existing storage systems.
Docker and ZeroVM can be primarily classified as "Virtual Machine Platforms & Containers" tools.
Some of the features offered by Docker are:
- Integrated developer tools
- open, portable images
- shareable, reusable apps
On the other hand, ZeroVM provides the following key features:
- Small, Light, Fast - ZeroVM is extremely small, lightweight, and fast. An execution environment can start in as little as 5 milliseconds.
- Secure - ZeroVM security is derived from the Chromium Native Client (NaCl) project and is based on the concept of software fault isolation.
- Hyper-Scalable - ZeroVM makes it easy to create large clusters of instances, aggregating the compute power of many individual physical servers into a single execution environment.
Docker and ZeroVM are both open source tools. It seems that Docker with 54K GitHub stars and 15.6K forks on GitHub has more adoption than ZeroVM with 738 GitHub stars and 71 GitHub forks.
Docker is the new kid on the block disrupting virtualization. By switching to Docker you can save up to 70% of your development cost on AWS (or any other cloud): instead of paying for many small VMs, you can spin up one large VM and run many Docker containers on it to drastically lower your cost. That alone is only one of the reasons Docker is the future, and it's not even the best feature: isolation, testability, reproducibility, standardization, security, and easy upgrading/downgrading of application versions, to name a few. You can spin up thousands of Docker containers on an ordinary laptop, but you would have trouble spinning up hundreds of VMs. If you haven't already checked out Docker, you're missing out on a huge opportunity to join the movement that will change development and production environments forever.
The macOS support is not genuine. I can't work with Docker on macOS because networking and communication with the containers don't work correctly.
Currently experimenting. The idea is to isolate any services where I'm not yet confident in their security/quality. The hope is that if there is an exploit in a given service, an attacker won't be able to break out of the Docker container and cause damage to my systems.
An example of a service I would isolate in a Docker container is a Minecraft browser map application I use. I don't know who wrote it, who's vetting it, or what's in the source code. I would feel a lot better putting it in a container before I expose it to the internet.
I plan to follow this process for anything that's not properly maintained (not in a trusted apt repo or backed by some other source of confidence).
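One way to sandbox an untrusted service like that is to strip the container down to the minimum it needs. A sketch of a Compose file for such a service; the service name, image, and port are placeholders for illustration, not details from the post:

```yaml
# docker-compose.yml - hardened sandbox for an untrusted web app
# (service name, image, and port are hypothetical)
services:
  map:
    image: some-minecraft-map:latest   # untrusted third-party image
    read_only: true                    # no writes to the container filesystem
    cap_drop:
      - ALL                            # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true         # block privilege escalation (setuid etc.)
    ports:
      - "127.0.0.1:8123:8123"          # bind to localhost; expose via a reverse proxy
    mem_limit: 512m                    # cap memory so a compromise can't starve the host
    restart: unless-stopped
```

`read_only`, `cap_drop`, and `no-new-privileges` won't stop every exploit, but they sharply limit what a compromised process inside the container can do.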
We are testing out Docker at the moment, building images from successful staging builds for all our APIs. Since we operate in an SOA (not quite microservices), developers have a Dockerfile they can run to build our entire API infrastructure on their machines. We use the successful builds from staging to power these instances, allowing them to do more manual integration testing across systems.
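A minimal Dockerfile for one such API service might look like the following; the base image, paths, port, and server command are assumptions for illustration, not details from the post:

```dockerfile
# Dockerfile - package one API service from a tested build
# (base image, paths, port, and command are hypothetical)
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code produced by the staging build
COPY . .

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:application"]
```

Tagging each image with the staging build number would then let developers pull exactly the build that passed staging.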
Each component of the app was launched in a separate container, so that they wouldn't have to share resources: the front end in one, the back end in another, a third for celery, a fourth for celery-beat, and a fifth for RabbitMQ. Actually, we ended up running four front-end containers and eight back-end, due to load constraints.
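A setup like that maps naturally onto a Compose file, with one service per component. A sketch under assumed image and project names (the originals aren't given in the post):

```yaml
# docker-compose.yml - one container per component
# (image names and the "myapp" Celery app name are hypothetical)
services:
  frontend:
    image: myapp-frontend:latest
    ports:
      - "80:80"
  backend:
    image: myapp-backend:latest
    depends_on:
      - rabbitmq
  celery:
    image: myapp-backend:latest
    command: celery -A myapp worker   # same image, worker entrypoint
    depends_on:
      - rabbitmq
  celery-beat:
    image: myapp-backend:latest
    command: celery -A myapp beat     # scheduler runs in its own container
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management
```

Running four front-end and eight back-end replicas, as described above, could then be done with `docker compose up --scale frontend=4 --scale backend=8` (the published ports would need adjusting so replicas don't collide).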
Linux containers are so much more lightweight than VMs, which is quite important for my limited budget. However, Docker has much more support and tooling than LXC, which is why I use it. rkt is interesting, although I will probably stick with Docker since it is more widespread.