Jenkins vs Notepad++: What are the differences?
What is Jenkins? An extendable open source continuous integration server. In a nutshell, Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project.
What is Notepad++? A free source code editor and Notepad replacement. Notepad++ is a free (as in "free speech" and also as in "free beer") source code editor and Notepad replacement that supports several languages. Running in the MS Windows environment, its use is governed by the GPL license.
Jenkins belongs to the "Continuous Integration" category of the tech stack, while Notepad++ falls primarily under "Text Editor".
Some of the features offered by Jenkins are:
- Easy installation
- Easy configuration
- Change set support
On the other hand, Notepad++ provides the following key features:
- Syntax Highlighting and Syntax Folding
- User Defined Syntax Highlighting and Folding
- PCRE (Perl Compatible Regular Expression) Search/Replace
"Hosted internally", "Free open source" and "Great to build, deploy or launch anything async" are the key factors why developers consider Jenkins; whereas "Syntax for all languages that i use", "Tabbed ui" and "Great code editor" are the primary reasons why Notepad++ is favored.
Jenkins is an open source tool with 13.3K stars and 5.48K forks on GitHub.
Facebook, Netflix, and Instacart are some of the popular companies that use Jenkins, whereas Notepad++ is used by Implisit, Adsia, and Dashlane. Jenkins has broader approval, being mentioned in 1774 company stacks & 1526 developer stacks, compared to Notepad++, which is listed in 187 company stacks and 499 developer stacks.
Often enough I have to explain my way of setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of repeating the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).
It always starts with an app, whatever it may be, and reading the available readmes while Vagrant and VirtualBox install and update. Following that is the first hurdle to get over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean `vagrant up` or `vagrant reload` gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid or too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing dev environment into a proper, production-grade product.
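To make that concrete, here is a minimal sketch of wiring Ansible into Vagrant; the box name and the playbook path (`provisioning/site.yml`) are assumptions, not taken from any particular setup:

```ruby
# Vagrantfile -- minimal sketch; box name and playbook path are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  # Provision the dev box with the same playbook(s) production will use
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/site.yml"
  end
end
```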
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no extra work for the developers. That's why we start with Vagrant: developer boxes should be as easy as `vagrant up`. But the meat of our product lies in Ansible, which will do the bulk of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN, you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While there may be better solutions for particular use cases, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers, reports, and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
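As an illustration of how simple those Logstash rules can be, here is a minimal pipeline sketch; the Beats input, the grok pattern, and the index name are assumptions:

```
# logstash.conf -- minimal sketch; port, grok pattern and index name are assumptions
input {
  beats { port => 5044 }
}

filter {
  # Parse standard Apache-style access logs into structured fields
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```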
If we are happy with the state of the Ansible work, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and it doesn't limit your ways to deploy, test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the commonly handy plugins, like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible to a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing, so appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if an issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
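For rules 3 and 4, a TeamCity build can be described in its Kotlin DSL. This is a minimal sketch; the project name, artifact paths, and build command are assumptions:

```kotlin
// .teamcity/settings.kts -- minimal sketch; names, paths and commands are assumptions
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildSteps.script

version = "2023.05"

project {
    buildType(Build)
}

object Build : BuildType({
    name = "Build"

    // Rule 3: every build produces artifacts
    artifactRules = "build/libs/*.jar"

    // Rule 4: tie the build to a specific VCS root/branch
    vcs { root(DslContext.settingsRoot) }

    steps {
        script {
            name = "Package"
            scriptContent = "./gradlew build"
        }
    }
})
```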
Speaking of deployments, I generally try to keep things simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the load and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another area where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into cloud providers and getting out is expensive. To embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox with LXC. After that, all you must write are Ansible scripts to manage the Proxmox hosts, much the same way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
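A sketch of what "Ansible supports both" looks like in practice; the hostnames, credentials, container template, and AMI ID below are placeholders/assumptions:

```yaml
# provision.yml -- minimal sketch; hosts, credentials, template and AMI are placeholders
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create an LXC container on Proxmox
      community.general.proxmox:
        api_host: proxmox.example.com
        api_user: root@pam
        api_password: "{{ proxmox_password }}"
        node: pve01
        hostname: app-01
        ostemplate: "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"
        state: present

    - name: Create an equivalent EC2 instance
      amazon.aws.ec2_instance:
        name: app-01
        instance_type: t3.small
        image_id: ami-0123456789abcdef0   # placeholder AMI
        state: running
```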
We use GitLab CI because of the great native integration as part of the GitLab framework and the linting capabilities it offers. The visualization of complex pipelines and the embedding within the project overview make GitLab CI even more convenient. We use it for all projects, all deployments and as part of GitLab Pages.
While we initially used the shell executor, we quickly switched to the Docker executor and use it exclusively now.
We formerly used Jenkins but preferred to handle everything within GitLab. Aside from the unification of our infrastructure, another motivation was the "configuration-in-file" approach that GitLab CI offers, whereas Jenkins' support of this concept was very limited and users had to resort to the web interface. Since the file is included within the repository, it is also version controlled, which was a huge plus for us.
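A minimal sketch of that "configuration-in-file" approach; the image, scripts, and stage names are assumptions, not an actual pipeline from this setup:

```yaml
# .gitlab-ci.yml -- minimal sketch; image and scripts are assumptions
image: node:20   # runs under the Docker executor

stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm ci
    - npm test

# Publishes to GitLab Pages from the default branch
pages:
  stage: deploy
  script:
    - npm run build
    - mv dist public
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```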
Regarding continuous integration, we started with something very easy to set up, CircleCI, but over time we have been adding more & more complex pipelines, and we use Jenkins to configure & run those. It's much more effort, but at some point we had to pay for the flexibility we expected. Our source code version control is Git (which probably doesn't require a rationale these days) and we have kept our repos in GitHub since the very beginning & never considered moving out. Our primary monitoring these days is in New Relic (Ruby & SPA apps) and AppSignal (Elixir apps); we're considering unifying it in New Relic, but this will require some improvements in Elixir app observability. For error reporting we use Sentry (a very popular choice in this class) & we collect our distributed logs using Logentries (to avoid semi-manual handling here).
I'd recommend going with Jenkins.
It allows a lot of flexibility, and its additional plugins provide extra features that are quite often hard to find elsewhere, unless you want to spend time building them yourself.
One of its key features is pipelines, which let you easily chain different jobs, even across different repos/projects (see the sketch below).
The only downside is that you have to deploy it yourself.
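A minimal sketch of that chaining in a declarative Jenkinsfile; the job names and build command here are hypothetical:

```groovy
// Jenkinsfile -- minimal sketch; job names and commands are hypothetical
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
        stage('Trigger downstream') {
            steps {
                // Chain a job that lives in another repo/project
                build job: 'other-project/integration-tests', wait: true
            }
        }
    }
}
```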
I chose Visual Studio Code after testing a lot of other editors like Atom, Sublime Text (with a legal license), Vim and even Notepad++, because it is the sum of all their virtues and none of their defects. It's fast, it has all the tools and plugins I need to work, and it's pretty and very well optimized. It has what I need to work and nothing more. And the main plugins work like a charm. Developing for React or Flutter is amazing. Even the TypeScript plugin works great. I like how IntelliSense works, and all the extra tools to code remotely using #ssh, access a #RESTfulAPI or even manage projects or collaborate remotely. Thanks #Microsoft for Visual Studio Code.
All of our pull requests are automatically tested using Jenkins' integration with GitHub, and we provision and deploy our servers using Jenkins' interface. This is integrated with HipChat, immediately notifying us if anything goes wrong with a deployment.
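A minimal sketch of what such a setup can look like in a declarative Jenkinsfile; the test command is hypothetical, and the `hipchatSend` step assumes the HipChat plugin is installed:

```groovy
// Jenkinsfile -- minimal sketch; test command is hypothetical,
// hipchatSend assumes the HipChat plugin is installed
pipeline {
    agent any
    stages {
        stage('Test PR') {
            steps {
                sh './run-tests.sh'
            }
        }
    }
    post {
        failure {
            // Ping the team room immediately on a broken build/deploy
            hipchatSend color: 'RED', message: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```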
Jenkins is our go-to devops automation tool. We use it for automated test builds, all the way up to server updates and deploys. It really helps maintain our homegrown continuous-integration suite. It even does our blue/green deploys.
- Continuous deploy:
- Dev stage: autodeploy triggered by pushes to the 'develop' branch on GitLab
- Staging and production stages: build and roll back quickly with an Ansistrano playbook (sketched below)
- Send messages with job results to Chatwork.
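A minimal sketch of such an Ansistrano deploy playbook; the repo URL, deploy path, and host group are assumptions:

```yaml
# deploy.yml -- minimal sketch; repo, paths and host group are assumptions
- hosts: production
  vars:
    ansistrano_deploy_to: /var/www/app
    ansistrano_git_repo: git@gitlab.example.com:team/app.git
    ansistrano_git_branch: main
    ansistrano_deploy_via: git
    ansistrano_keep_releases: 5   # kept releases enable quick rollback via ansistrano.rollback
  roles:
    - ansistrano.deploy
```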
Currently it serves as the place where our QA team builds various automated testing jobs.
At one point we were using it for builds, but we ended up migrating away from it to Code Pipelines.
We use Jenkins to schedule our browser-based and API-based regression and acceptance tests on a regular basis. In addition to Jenkins, we use GitLab CI for unit and component testing.
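A minimal sketch of such a scheduled run in a declarative Jenkinsfile; the cron spec and test command are assumptions:

```groovy
// Jenkinsfile -- minimal sketch; schedule and command are assumptions
pipeline {
    agent any
    // 'H' spreads the start time to avoid load spikes; here: nightly on weekdays
    triggers { cron('H 2 * * 1-5') }
    stages {
        stage('Regression suite') {
            steps {
                sh './run-browser-and-api-regression.sh'
            }
        }
    }
}
```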
When a regex problem or some hard-to-script, pattern-type problem comes up, I always go to Notepad++. It is also nice for file inspection (like image metadata).
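For example, a hypothetical date reshuffle using capture groups in Notepad++'s regex search/replace:

```
Find what:     (\d{4})-(\d{2})-(\d{2})
Replace with:  $3/$2/$1
Result:        2024-01-31  ->  31/01/2024
```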
Development tool, code editor, open source: supports a wide selection of programming languages (C/C++, C#, Java, PHP, Python, or .NET).