
Stephen Badger | Vital Beats

Senior DevOps Engineer at Vital Beats

I can't speak for the NestJS vs ExpressJS discussion, but I can give a viewpoint on databases.

The main thing to consider around database choice is what "shape" the data will be in, and the kind of read/write patterns you expect of that data. The blog example shows up so much for a DBMS like MongoDB because it showcases what NoSQL / document storage is very scalable and performant at: mostly isolated documents, with a few views / ways to order and filter them.

In your case, I can imagine a number of "relations" already, which suggests a more traditional SQL solution would work well. You have restaurants, which have maybe a few menus (regular, gluten-free etc.), with menu items in them, which have different prices over time (25% discount on Christmas food just after Christmas, 50% off pizzas on Wednesdays). Then there's a whole different set of "relations" for people ordering, like showing them past orders, which need to refer back to the restaurant, plus credit card transaction information for refunds and so on. That to me suggests PostgreSQL, which will scale quite well if your database design is okay.
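
To make that concrete, here is a minimal sketch of what such a relational layout could look like. The table and column names are purely illustrative (my own invention, not a prescribed design), and it assumes Python with the psycopg2 driver talking to a local PostgreSQL server.

import psycopg2  # assumes the psycopg2 driver and a reachable PostgreSQL server

SCHEMA = """
CREATE TABLE restaurant (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE menu (
    id            serial PRIMARY KEY,
    restaurant_id integer NOT NULL REFERENCES restaurant (id),
    name          text NOT NULL  -- e.g. 'regular', 'gluten-free'
);

CREATE TABLE menu_item (
    id      serial PRIMARY KEY,
    menu_id integer NOT NULL REFERENCES menu (id),
    name    text NOT NULL
);

-- Prices live in their own table, so discounts over time are just extra rows.
CREATE TABLE menu_item_price (
    menu_item_id integer NOT NULL REFERENCES menu_item (id),
    valid_from   timestamptz NOT NULL,
    valid_to     timestamptz,
    price_pence  integer NOT NULL
);

CREATE TABLE customer_order (
    id            serial PRIMARY KEY,
    restaurant_id integer NOT NULL REFERENCES restaurant (id),
    placed_at     timestamptz NOT NULL DEFAULT now()
);
"""

with psycopg2.connect("dbname=takeaway user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(SCHEMA)  # one transaction; committed when the block exits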

PostgreSQL also offers you some extensions which are just amazing for your use-case. https://postgis.net/ for example will let you query for restaurants based on location, without the big cost that comes from constantly using something like the Google Maps API to work out which restaurants are near to someone ordering. Partitioning and window functions will be great for your own use internally too, for answering questions like "What types of takeaways perform the best for us: Italian, Mexican?" or, in combination with PostGIS, "What kinds of takeaways do we need to market to, to improve our selection?".
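
As a rough illustration of the PostGIS point, a "restaurants near this customer" query can stay entirely inside the database. This sketch assumes the restaurant table above has gained a location column of type geography(Point, 4326); the coordinates and radius are made up.

import psycopg2

NEARBY_SQL = """
SELECT r.id, r.name
FROM restaurant AS r
WHERE ST_DWithin(
    r.location,  -- geography(Point, 4326) column
    ST_SetSRID(ST_MakePoint(%(lon)s, %(lat)s), 4326)::geography,
    %(radius_m)s  -- metres, because both arguments are geography
)
ORDER BY r.name;
"""

with psycopg2.connect("dbname=takeaway user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(NEARBY_SQL, {"lon": -0.1276, "lat": 51.5072, "radius_m": 3000})
        for restaurant_id, name in cur.fetchall():
            print(restaurant_id, name)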

While these things can all be implemented in MongoDB, you tend to lose some of the convenience of ACID, or have to deal with things like eventual consistency, which requires more thinking on the part of your engineers. PostgreSQL offers decent (if more complex) scalability and redundancy solutions, is honestly very well proven, and has plenty of documentation on optimising queries.
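
To show what that ACID convenience buys you in practice, here is a sketch (psycopg2 again, with the made-up tables from above plus a hypothetical card_transaction table) where recording an order and its payment either both happen or neither does:

import psycopg2

conn = psycopg2.connect("dbname=takeaway user=postgres")
try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO customer_order (restaurant_id) VALUES (%s) RETURNING id",
                (42,),
            )
            order_id = cur.fetchone()[0]
            # card_transaction is a hypothetical table for this illustration
            cur.execute(
                "INSERT INTO card_transaction (order_id, amount_pence) VALUES (%s, %s)",
                (order_id, 1999),
            )
finally:
    conn.close()
# If the second INSERT fails, the first is rolled back too: no half-recorded orders.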

9 upvotes·252.9K views
Senior DevOps Engineer at Vital Beats

Jira's great strength is almost its great weakness: customisation of ticket types and workflows.

Hand Jira over to an engineer, and they will quite quickly try to build a workflow for tickets that models every possible state your work might be in, every possible holding point etc. The end result will be a workflow no one understands well enough to navigate. Hand Jira over to a manager, and they'll throw it back at you, as the mechanism for changing workflows is not drag and drop, and reporting relies on the Jira query language, across fields they can't tell are in use or not.

Unfortunately both approaches miss the main purpose of work planning and tickets as a process. You are not trying to model software development in full. Software development is messy, and honestly it is too difficult to reason over such detail. Likewise, your software development workflow does not change often. Maybe ... once, twice a year or so?

If you sit down and ask "What is the minimum useful information the team and its stakeholders need to see?", and model that at as high a level as possible, you will end up with a Jira system that is simple, easy to use, and that actually matches your working practices.

The reason I have laid that out is that it helps explain why Vital Beats went with Jira, and not with GitHub Issues or ClickUp.

GitHub Issues provides a fairly repository-centric idea of tickets and their workflows. At Vital Beats, multiple repositories are glued together to contribute to different solutions, so it is not the case that one repository = one product or team.

ClickUp prescribes a particular workflow. It is a broad and fairly common workflow, and it works if you don't have a prior practice in mind ... but it's not Vital Beats' workflow. When boiled down, our workflow had both fewer fields and different workflow states than ClickUp offered, and for stakeholders we wanted to offer a higher abstraction, answering broad questions like "How close are we to release?" without putting date deadlines on tickets or getting into hourly estimation.

Jira let us stay at a good level of agile thinking, while providing different views for different audiences. We stripped it down to its bare minimum, and it works without turning into "I need to spend time doing backlog monkey work". As our engineering team expands, our workflow can expand too if needed ... or encompass distinct workflows that match team styles, while still offering that higher-level view for stakeholders.

Configure once, configure simply, configure infrequently, and you will have the satisfaction with Jira that we at Vital Beats have had!

9 upvotes·17.1K views
Senior DevOps Engineer at Vital Beats

A question you might want to think about is "What kind of experience do I want to gain by using a DBMS?". If your aim is to have experience with SQL and any related libraries and frameworks for your language of choice (Python, I think?), then it doesn't matter too much which you pick. As others have said, SQLite would let you get started very easily, and would give you a reasonably standard (if a little basic) SQL dialect to work with.
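
To give a feel for how low that barrier is, this is roughly all it takes with nothing but the Python standard library (file name and table are just an example):

import sqlite3

# A single file (or ":memory:") is the whole database -- no server process to run.
conn = sqlite3.connect("practice.db")
conn.execute("CREATE TABLE IF NOT EXISTS note (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")
conn.execute("INSERT INTO note (body) VALUES (?)", ("hello, SQL",))
conn.commit()

for row in conn.execute("SELECT id, body FROM note"):
    print(row)

conn.close()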

If your aim is actually to have a bit of "operational" experience, in terms of things like what command line tools come as standard with the DBMS, how the DBMS handles multiple databases, when to use multiple schemas vs multiple databases, some basic privilege management etc., then I would recommend PostgreSQL. SQLite's simplicity actually avoids most of these experiences, which is not helpful to you if that is what you hope to learn. MySQL has a few "quirks" in how it manages things like multiple databases, which may lead you to poorer decisions if you try to carry your experience over to a different DBMS, especially in bigger enterprise roles. PostgreSQL is kind of a happy middle ground here: the ability to start PostgreSQL servers via docker or docker-compose makes the actual day-to-day management pretty easy, while still giving you experience of the kinds of considerations I have listed above.
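
As a flavour of the kind of thing SQLite simply never exposes you to, here is a small sketch (psycopg2 again, with made-up role and schema names) of splitting a database into schemas and handing a role limited access:

import psycopg2

with psycopg2.connect("dbname=practice user=postgres") as conn:
    with conn.cursor() as cur:
        # Multiple schemas inside one database: a way to group related tables
        # without spinning up a separate database for each concern.
        cur.execute("CREATE SCHEMA IF NOT EXISTS reporting")

        # Basic privilege management: a login role that can only read reporting tables.
        cur.execute("CREATE ROLE analyst WITH LOGIN PASSWORD 'change-me'")
        cur.execute("GRANT USAGE ON SCHEMA reporting TO analyst")
        cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO analyst")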

At Vital Beats we make use of PostgreSQL, largely because it offers us a happy balance between good management and backup of data and good standard command line tools, which is essential for us because we deploy our solutions within Kubernetes / docker, where more graphical tools are not always appropriate. PostgreSQL is also pretty universally supported in terms of language libraries and frameworks, without us having to make compromises on how we want to store and lay out our data.
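
As one small example of that command line tooling, a routine backup is just pg_dump, which works the same on a laptop, in a docker container, or from a Kubernetes job. Here is a sketch driven from Python to keep my examples in one language; the host, role, path and database name are made up:

import subprocess

# pg_dump ships with the standard PostgreSQL client tools.
subprocess.run(
    [
        "pg_dump",
        "--host", "db.internal",       # hypothetical hostname
        "--username", "backup_user",   # hypothetical role
        "--format", "custom",          # compressed archive, restorable with pg_restore
        "--file", "/backups/app.dump",
        "appdb",                       # hypothetical database name
    ],
    check=True,
)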

6 upvotes·1 comment·269.1K views
Dimelo Waterson · November 10th 2020 at 4:24AM

I appreciate the perspective on operational use; I couldn't quite phrase it, but that was the direction I was interested in. You hit the nail on the head when addressing SQLite's potentially too-convenient structure, which was the heart of my concern when I requested advice. To another point, I spend a lot of time on the command line in Docker instances when I'm trying to test or compare functionality, so while I didn't expect there to be deployment issues with any flavor, it's good to know that PostgreSQL is well-suited for that. I know little enough about the database field that it didn't occur to me that there were even graphical tools to use, so knowing powerful command line resources are available is a definite plus, since I would be working essentially entirely by CLI. Thanks for your perspective.

Senior DevOps Engineer at Vital Beats

Within our deployment pipeline, we need to deploy to multiple customer environments, and to manage secrets in a way that integrates well with AWS, Kubernetes Secrets, Terraform and the pipelines themselves.

Jenkins offered us the ability to choose one of a number of credentials/secrets management approaches, and it models secrets as a more dynamic concept than GitHub Actions provided.

Additionally, we are operating Jenkins within our development Kubernetes cluster as a kind of system-wide orchestrator, allowing us to use Kubernetes pods as build agents and avoiding the ongoing direct costs associated with GitHub Actions minutes / per-user pricing. Obviously, as a consequence, we take on the indirect costs of maintaining Jenkins itself, patching it, upgrading it etc. However, our experience with managing Jenkins via Kubernetes and declarative Jenkins configuration has led us to believe that this cost is small, particularly as the majority of actual building and testing is handled inside docker containers and Kubernetes, alleviating the need for less well supported plugins that might make Jenkins administration more difficult.

READ MORE
2 upvotes·217.5K views