What is NixOS and what are its top alternatives?
NixOS is a Linux distribution built on top of the Nix package manager. Its declarative configuration model makes system upgrades reproducible and allows rolling back to previous system states. ...
Top Alternatives to NixOS
- CoreOS
It is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, it uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all dependencies are packaged within a container that can be run on one or many machines. ...
- Ubuntu
Ubuntu is an ancient African word meaning ‘humanity to others’. It also means ‘I am what I am because of who we all are’. The Ubuntu operating system brings the spirit of Ubuntu to the world of computers. ...
- Docker
The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation, enabling organizations to seamlessly build and share any application — from legacy to what comes next — and securely run them anywhere ...
- Manjaro
It is an accessible, friendly, open-source Linux distribution and community. Based on Arch Linux, it provides all the benefits of cutting-edge software combined with a focus on getting started quickly, automated tools to require less manual intervention, and help readily available when needed. ...
- Debian
Debian systems currently use the Linux kernel or the FreeBSD kernel. Linux is a piece of software started by Linus Torvalds and supported by thousands of programmers worldwide. FreeBSD is an operating system including a kernel and other software. ...
- Kubernetes
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...
- Ansible
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible’s goals are foremost those of simplicity and maximum ease of use. ...
- CentOS
The CentOS Project is a community-driven free software effort focused on delivering a robust open source ecosystem. For users, we offer a consistent manageable platform that suits a wide variety of deployments. For open source communities, we offer a solid, predictable base to build upon, along with extensive resources to build, test, release, and maintain their code. ...
NixOS alternatives & related posts
CoreOS
Pros of CoreOS:
- Container management (20)
- Lightweight (15)
- Systemd (11)
Cons of CoreOS:
- End-of-lifed (3)
related CoreOS posts
Ubuntu
Pros of Ubuntu:
- Free to use (228)
- Easy setup for testing discord bot (96)
- Gateway Linux Distro (57)
- Simple interface (53)
- Don't need driver installation in most cases (8)
- Many active communities (5)
- Open Source (4)
- Easy to custom (3)
- Great OotB Linux Shell Experience (1)
- Many flavors/distros based on Ubuntu (1)
- Software Availability (1)
- Lightweight container base OS (1)
Cons of Ubuntu:
- Demanding system requirements (4)
- Adds overhead and unnecessary complexity over Debian (3)
- Systemd (1)
- Snapd installed by default (1)
related Ubuntu posts
We use Debian and its derivative Ubuntu because the apt ecosystem and toolchain for Debian packages is far superior to the yum-based system used by Fedora and RHEL. This is in large part due to the huge amount of investment in tools like debhelper/dh over the years by the Debian community. I haven't dealt with RPM in the last couple of years, but every experience I've had with RPM is that the RPM tools are slower, have less useful options, and it's more work to package software for them (and one makes more compromises in doing so).
I think everyone has seen the better experience with Ubuntu in the shift from RHEL to Ubuntu in what most new companies deploy on their servers, and I expect that trend to continue as long as Red Hat sticks with the RPM system (and I don't really see them having a path to migrate).
The experience with Ubuntu and Debian stable releases is pretty similar: a solid release every two years that's supported for a few years. (While Ubuntu in theory releases every six months, their non-LTS releases are effectively betas: they're often unstable, only have nine months of support, etc. I wouldn't recommend them to anyone not actively participating in the Ubuntu development community.) Ubuntu has better integration of non-free drivers, which may be important if you have hardware that requires them. But it's also the case that most bugs I experience when using Ubuntu are Ubuntu-specific issues, especially on servers (in part because Ubuntu ships a bunch of "cloud management" stuff pre-installed that is a clear regression if you're not using Canonical's cloud management products).
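To illustrate the debhelper/dh investment the post mentions: in a modern Debian package, the debian/rules makefile can delegate the entire build to the dh sequencer. A minimal sketch, not a complete package (debian/control, debian/changelog, etc. are still required):

```bash
mkdir -p debian
cat > debian/rules <<'EOF'
#!/usr/bin/make -f
# The single dh rule drives the whole debhelper sequence
# (configure, build, install, package); the recipe line
# below must be indented with a tab, as in any makefile.
%:
	dh $@
EOF
chmod +x debian/rules
# With the rest of the debian/ metadata in place, build with:
# dpkg-buildpackage -us -uc
```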
There is a question coming... I am using Oracle VirtualBox to spawn three Ubuntu Linux virtual machines (VMs). VM1 is used as a data lake - just a place to store flat files. VM2 hosts Apache NiFi. VM3 hosts PostgreSQL. I built a NiFi pipeline that reads flat files on VM1 and pipes the data over to, and inserts it into, the PostgreSQL database on VM3. I left this setup alone for a while, then something hiccupped on VM3 and I had to rebuild it. Now I cannot make a remote connection to PostgreSQL on VM3. I was using pgAdmin 3 on VM3, but it kept throwing errors - I found out it went end-of-life in 2018 and uninstalled it. pgAdmin 4 is out, but for some reason I cannot get APT to find/install it. I am trying to figure out the pgAdmin 4 install problem and looking for a good alternative to pgAdmin 4 that I can use to diagnose the remote database connection problem. Does anyone have any suggestions? Thanks in advance.
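Not an answer from the thread, but a hedged sketch of the two usual fixes: pgAdmin 4 lives in its own apt repository (it is not in Ubuntu's default archives), and a rebuilt PostgreSQL host won't accept remote connections until it is told to listen externally. The PostgreSQL version in the paths, the subnet, and the package variant below are assumptions to adapt:

```bash
# On VM3: allow remote connections (adjust version/paths/subnet to your setup)
sudo sed -i "s/^#\?listen_addresses.*/listen_addresses = '*'/" /etc/postgresql/14/main/postgresql.conf
echo "host all all 192.168.56.0/24 scram-sha-256" | sudo tee -a /etc/postgresql/14/main/pg_hba.conf
sudo systemctl restart postgresql

# Install pgAdmin 4 from the project's own repository (per its documented apt setup)
curl -fsS https://www.pgadmin.org/static/packages_pgadmin_org.pub \
  | sudo gpg --dearmor -o /usr/share/keyrings/packages-pgadmin-org.gpg
echo "deb [signed-by=/usr/share/keyrings/packages-pgadmin-org.gpg] https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main" \
  | sudo tee /etc/apt/sources.list.d/pgadmin4.list
sudo apt update && sudo apt install -y pgadmin4-desktop

# Quick alternative check from another VM, no GUI needed:
# psql -h <VM3-ip> -U postgres -d yourdb
```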
Docker
Pros of Docker:
- Rapid integration and build up (824)
- Isolation (692)
- Open source (521)
- Testability and reproducibility (505)
- Lightweight (460)
- Standardization (218)
- Scalable (185)
- Upgrading / downgrading / application versions (106)
- Security (88)
- Private PaaS environments (85)
- Portability (34)
- Limit resource usage (26)
- Game changer (17)
- I love the way Docker has changed virtualization (16)
- Fast (14)
- Concurrency (12)
- Docker's Compose tools (8)
- Easy setup (6)
- Fast and portable (6)
- Because it's fun (5)
- Makes shipping to production very simple (4)
- Highly useful (3)
- It's dope (3)
- It's cool (2)
- MacOS support FAKE (2)
- Simplicity, isolation, resource effective (2)
- Open source and highly configurable (2)
- Does a nice job hogging memory (2)
- Package the environment with the application (2)
- Very easy to set up, integrate and build (2)
- High throughput (2)
- Docker Hub for the FTW (2)
- Super (2)
Cons of Docker:
- New versions == broken features (8)
- Unreliable networking (6)
- Documentation not always in sync (6)
- Moves quickly (4)
- Not secure (3)
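To ground the Docker description at the top of this page, here is a minimal sketch of packaging an application and its dependencies into an image that runs anywhere Docker runs. The file and image names are hypothetical:

```bash
# A trivial "application" and a Dockerfile that packages it with its runtime
echo 'print("hello from a container")' > app.py
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF

docker build -t myapp:latest .   # build a self-contained image
docker run --rm myapp:latest     # run it, on this or any other machine
```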
related Docker posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo's) as collaborative review and code management tool
- Git as the underlying revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automating the development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm comes down to the following aspects:
- Key features: easy and flexible installation, clear dashboard, great scaling operations (see the sketch after this list), monitoring as an integral part, great load-balancing concepts, and monitoring of node condition with compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
- Functionality: Kubernetes has a more complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open-source, modular tool that works with any OS.
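By way of illustration, a minimal hedged sketch of the declare-then-reconcile workflow the list refers to (the deployment name and image are hypothetical, not from the post):

```bash
# Declare the desired state; the control plane reconciles toward it
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:1.0
EOF

# The "great scaling operations" are then one-liners:
kubectl scale deployment myapp --replicas=5
kubectl rollout status deployment/myapp   # watch reconciliation happen
```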
Often enough I have to explain my way of setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of explaining the same thing every single time, I've decided to write it up and share it with the world, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).
It always starts with an app, whatever it may be, and reading the available readmes while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean "vagrant up" or "vagrant reload" gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing dev environments into a proper, production-grade product. (A minimal sketch of the Vagrant-to-Ansible hand-off follows.)
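For illustration, the hand-off described above might look like this, assuming the converted playbook is called site.yml (the box name and playbook name are assumptions):

```bash
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  # Run the converted playbook on every provision
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"
  end
end
EOF

vagrant up                     # build and provision from scratch
# vagrant reload --provision   # re-apply after playbook changes
```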
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no extra work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as "vagrant up", but the meat of our product lies in Ansible, which does the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While there may be better solutions for particular use cases, this stack is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and easy to maintain through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elasticsearch and Kibana is generally a breeze, including some quite complex aggregations.
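For a flavor of how small those Logstash rules can be, here is a minimal pipeline sketch (the port, grok pattern, and Elasticsearch host are assumptions, not the author's configuration):

```bash
cat > pipeline.conf <<'EOF'
input  { beats { port => 5044 } }                                     # receive shipped logs
filter { grok  { match => { "message" => "%{COMBINEDAPACHELOG}" } } } # parse access-log lines
output { elasticsearch { hosts => ["localhost:9200"] } }              # index for Kibana
EOF
# logstash -f pipeline.conf
```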
If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the commonly handy plugins, like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing, so appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep things simple, but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the load and asking whether we get the value of what we are paying for. Often enough the usage pattern is not constantly erratic, but rather has a firm baseline that could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker-and-CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both nicely), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
Manjaro
Pros of Manjaro:
- Good for beginners (9)
- AUR is huge (8)
- Very stable (6)
- Friendly community (5)
- Pacman is very fast (3)
- Highly customizable (2)
- Nice-looking bootloader (2)
Cons of Manjaro:
- Would you give your grandma Linux? (6)
- Occasional freezes if wrongly configured (3)
- Not highly stable (2)
- High data requirement frequently (1)
related Manjaro posts
At labinator.com, we use HTML5, CSS 3, Sass, Vanilla.JS and PHP when building our premium WordPress themes and plugins. When writing our code, we use Sublime Text or Visual Studio Code, depending on the project. We run the Manjaro and Debian operating systems in our office. Manjaro is a great desktop operating system for a wide range of tasks, while Debian is a solid choice for servers.
WordPress has become a very popular choice for content management systems and building websites. It is easy to learn and has a great community behind it. The large number of plugins available for WordPress also allows any user to customize it to his/her needs.
For development, HTML5 with Sass is our go-to choice when building our themes.
Main Advantages Of Sass:
- It's CSS-syntax friendly
- It offers variables
- It uses a nested syntax
- It includes mixins
- Great community and online support
- Great documentation that is easy to read and follow
As for PHP, we always strive to use PHP 7.3+. After the introduction of PHP 7, the WordPress development process became more stable and reliable than before. If you are a developer considering PHP 7.3+ for your project, it is good to note the following benefits.
The Benefits Of Using PHP:
- Open source
- Highly extensible
- Easy to learn and read
- Platform independent
- Compatible with Apache
- Low development and maintenance cost
- Great community and support
- Detailed documentation that has everything you need
Why PHP 7.3+?
- Flexible heredoc & nowdoc syntaxes - two key methods of defining strings within PHP became easier to read and more reliable (see the sketch below).
- A good boost in performance, which is extremely important when it comes to WordPress development.
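A quick sketch of the flexible heredoc syntax mentioned above: since PHP 7.3, the closing marker may be indented, and that indentation is stripped from every line of the string body. Everything here beyond the feature name is our illustration, not the post's code:

```bash
php <<'EOF'
<?php
$name = 'WordPress';
// PHP 7.3+: the closing HTML marker can be indented; its
// indentation is removed from each line of the string.
$html = <<<HTML
    <div>
      Hello, $name!
    </div>
    HTML;
echo $html;
EOF
```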
Debian
Pros of Debian:
- Massively supported (53)
- Stable (48)
- Reliable (19)
- TurnKey Linux uses it (8)
- Aptitude (8)
- Customizable (7)
- It is free (7)
- Works on all architectures (5)
Cons of Debian:
- Old versions of software (9)
- Can be difficult to set up on vanilla Debian (2)
related Debian posts
We use Debian and its derivative Ubuntu because the apt ecosystem and toolchain for Debian packages is far superior to the yum-based system used by Fedora and RHEL. This is in large part due to the huge amount of investment in tools like debhelper/dh over the years by the Debian community. I haven't dealt with RPM in the last couple of years, but every experience I've had with RPM is that the RPM tools are slower, have less useful options, and it's more work to package software for them (and one makes more compromises in doing so).
I think everyone has seen the better experience with Ubuntu in the shift from RHEL to Ubuntu in what most new companies deploy on their servers, and I expect that trend to continue as long as Red Hat sticks with the RPM system (and I don't really see them having a path to migrate).
The experience with Ubuntu and Debian stable releases is pretty similar: a solid release every two years that's supported for a few years. (While Ubuntu in theory releases every six months, their non-LTS releases are effectively betas: they're often unstable, only have nine months of support, etc. I wouldn't recommend them to anyone not actively participating in the Ubuntu development community.) Ubuntu has better integration of non-free drivers, which may be important if you have hardware that requires them. But it's also the case that most bugs I experience when using Ubuntu are Ubuntu-specific issues, especially on servers (in part because Ubuntu ships a bunch of "cloud management" stuff pre-installed that is a clear regression if you're not using Canonical's cloud management products).
Kubernetes
Pros of Kubernetes:
- Leading Docker container management solution (164)
- Simple and powerful (128)
- Open source (106)
- Backed by Google (76)
- The right abstractions (58)
- Scale services (25)
- Replication controller (20)
- Permission management (11)
- Cheap (8)
- Supports autoscaling (8)
- Simple (8)
- No cloud platform lock-in (5)
- Reliable (5)
- Self-healing (5)
- Quick cloud setup (4)
- Promotes modern/good infrastructure practice (4)
- Scalable (4)
- Open, powerful, stable (4)
- Runs on Azure (3)
- Captain of Container Ship (3)
- Cloud-agnostic (3)
- Custom and extensibility (3)
- Backed by Red Hat (3)
- A self-healing environment with rich metadata (3)
- GKE (2)
- Everything of CaaS (2)
- Expandable (2)
- Golang (2)
- Easy setup (2)
Cons of Kubernetes:
- Poor workflow for development (15)
- Steep learning curve (15)
- Orchestrates only infrastructure (8)
- High resource requirements for on-prem clusters (4)
- Too heavy for simple systems (2)
- Additional vendor lock-in (Docker) (1)
- More moving parts to secure (1)
- Additional technology overhead (1)
related Kubernetes posts
How Uber developed Jaeger, the open source, end-to-end distributed tracing system that is now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.
Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration gave a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm, until we decided to adopt Kubernetes, with an almost seamless migration process.
After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...
Ansible
Pros of Ansible:
- Agentless (284)
- Great configuration (210)
- Simple (199)
- Powerful (176)
- Easy to learn (155)
- Flexible (69)
- Doesn't get in the way of getting s--- done (55)
- Makes sense (35)
- Super efficient and flexible (30)
- Powerful (27)
- Dynamic inventory (11)
- Backed by Red Hat (9)
- Works with AWS (7)
- Cloud oriented (6)
- Easy to maintain (6)
- Vagrant provisioner (4)
- Simple and powerful (4)
- Multi language (4)
- Simple (4)
- Because SSH (4)
- Procedural or declarative, or both (4)
- Easy (4)
- Consistency (3)
- Well-documented (2)
- Masterless (2)
- Debugging is simple (2)
- Merge hash to get final configuration similar to hiera (2)
- Fast as hell (2)
- Manage any OS (1)
- Works on Windows, but difficult to manage (1)
- Certified Content (1)
Cons of Ansible:
- Dangerous (8)
- Hard to install (5)
- Doesn't run on Windows (3)
- Bloated (3)
- Backward compatibility (3)
- No immutable infrastructure (2)
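Circling back to the Ansible description near the top of the page ("configure systems, deploy software"), a minimal sketch of what that looks like as a playbook. The host group, inventory file, and package choice are assumptions:

```bash
cat > site.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
EOF

# Apply the declared state to every host in the "webservers" group
ansible-playbook -i inventory.ini site.yml
```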
related Ansible posts
Heroku was a decent choice to start a business, but at some point our platform was too big, too complex and too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome and standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers: all the container configuration and provisioning HAD (from the beginning) to be done in code (Infrastructure as Code) - we used Terraform and Ansible for that (respectively). This general trend of containerization was accompanied by another, parallel and equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 and Amazon RDS.
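For a minimal flavor of the "everything in code" rule with Terraform (the region, AMI id, and naming below are placeholders for illustration, not the company's actual configuration):

```bash
cat > main.tf <<'EOF'
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "eu-west-1" # placeholder region
}

# One EC2 node, declared in code rather than hand-provisioned
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"
  tags = { Name = "app-node" }
}
EOF

terraform init && terraform plan   # review the declared changes before applying
```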
CentOS
Pros of CentOS:
- Stable (16)
- Free to use (9)
- Reliable (9)
- Has EPEL packages (6)
- Good support (6)
- Great community (5)
- I've moved from Gentoo to CentOS (2)
Cons of CentOS:
- Yum is a horrible package manager (1)
related CentOS posts
Since #ATComputing is a vendor-independent Linux and open source specialist, we do not have a favorite Linux distribution. We mainly use Ubuntu, CentOS, Debian, Red Hat Enterprise Linux and Fedora in our daily work. These are also the distributions we see most often in our customers' environments.
For our #ci/cd training, we use an open source pipeline that is built around Visual Studio Code, Jenkins, VirtualBox, GitHub, Docker, Kubernetes and Google Compute Engine.
For #ServerConfigurationAndAutomation, we have embraced and contributed to Ansible, mainly because it is not only flexible and powerful, but also straightforward and easier to learn than some other (open source) solutions. On the other hand, we are not afraid of Puppet Labs and Chef either.
Currently, our most popular #programming #Language course is Python. The reason Python is so popular has to do with its versatility, but also with its low complexity. This helps sysadmins write scripts or simple programs to make their jobs less repetitive and automating things more fun. Python is also widely used to communicate with (REST) APIs and for data analysis.
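To illustrate the REST-scripting point with nothing but Python's standard library (the endpoint is our example choice, not the poster's):

```bash
python3 - <<'EOF'
import json, urllib.request

# Query a public REST API and pull a couple of fields from the JSON response
with urllib.request.urlopen("https://api.github.com/repos/ansible/ansible") as resp:
    repo = json.load(resp)
print(repo["full_name"], "has", repo["stargazers_count"], "stars")
EOF
```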
Hello guys,
I am confused between choosing CentOS 7 or CentOS 8 for an OpenStack TripleO undercloud deployment. Which one should I use? Another option is to run OpenStack on Ubuntu, or to use MicroStack.
We want to use this deployment to build our home cloud or private cloud infrastructure. Through a little research I heard that CentOS is usually the best choice, but I'm still not sure. As CentOS 8 from Red Hat is no longer supported for OpenStack TripleO deployments, I had to upgrade to CentOS Stream.