Joshua Dean Küpper

CEO at Scrayos UG (haftungsbeschränkt)·

We use GitLab CI because of its great native integration as part of the GitLab framework and the linting capabilities it offers. The visualization of complex pipelines and the embedding within the project overview make GitLab CI even more convenient. We use it for all projects, all deployments and as part of GitLab Pages.

While we initially used the Shell executor, we quickly switched to the Docker executor and now use it exclusively.

We formerly used Jenkins but preferred to handle everything within GitLab. Aside from unifying our infrastructure, another motivation was the "configuration-in-file" approach that GitLab CI offers, while Jenkins' support for this concept was very limited and users had to resort to the web interface. Since the file is included in the repository, it is also version controlled, which was a huge plus for us.
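To give an idea of what that looks like in practice, here is a minimal sketch of a version-controlled .gitlab-ci.yml running on the Docker executor; the stages, job names, image and commands are placeholders and not our actual pipeline:

```yaml
# Minimal illustrative .gitlab-ci.yml; stage/job names, image and commands are placeholders.
stages:
  - lint
  - build

lint:
  stage: lint
  image: maven:3-eclipse-temurin-17   # any Maven-capable image works with the Docker executor
  script:
    - mvn checkstyle:check

build:
  stage: build
  image: maven:3-eclipse-temurin-17
  script:
    - mvn package
  artifacts:
    paths:
      - target/*.jar
```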

CEO at Scrayos UG (haftungsbeschränkt)·

We'd already been monitoring Agones for a few years, but we only adopted Kubernetes in mid-2021, so we couldn't use it until then. Transitioning to Kubernetes has overall been a blast. There's definitely a steep learning curve associated with it, but for us it was certainly worth it, and Agones definitely plays a part in that.

We previously scheduled our game servers with Docker Compose and Docker Swarm, but that always felt a little brittle and like a really "manual" process, even though everything was already dockerized. For matchmaking, we didn't have any solution yet.

After we did tons of local testing, we deployed our first production-ready Kubernetes cluster with #kubespray and installed Agones (with Helm) on it. The installation was very easy and the official chart had just the right amount of knobs for us!

The aspect that stunned us the most is how seamlessly Agones integrates into the Kubernetes infrastructure. It reuses existing mechanisms like health pings and extends them with more resource states and other properties that are unique to game servers. But you're still free to use it however you like: one GameServer per game session, one GameServer for multiple game sessions (in parallel or by reusing existing servers), custom allocation mechanisms, webhook-based scaling, ... we haven't run into any dead ends yet.
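For reference, a minimal GameServer manifest with such a health configuration looks roughly like the sketch below; the image, port and timing values are illustrative and not our production settings:

```yaml
# Illustrative Agones GameServer spec; image, port and health timings are placeholders.
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  generateName: minecraft-        # Agones appends a random suffix per instance
spec:
  ports:
    - name: default
      containerPort: 25565        # the port the game server listens on inside the container
  health:
    initialDelaySeconds: 30       # grace period before health pings are expected
    periodSeconds: 10             # expected interval between health pings
    failureThreshold: 3           # missed pings before the server is marked unhealthy
  template:
    spec:
      containers:
        - name: minecraft
          image: registry.example.com/minecraft-server:latest   # placeholder image
```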

One thing I was a little worried about in the beginning was the SDK integration, as there was no official one for Minecraft/Java, and the two available unofficial ones didn't satisfy our requirements. Therefore, we went and developed our own SDK and ... it was super easy! Agones publishes their Protobuf files, so we could generate the stubs with #Protoc. The existing Agones documentation on client SDKs was a great help in writing our own documentation for the interface methods.
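As a rough sketch (not our actual SDK), calling the stubs generated from Agones' published sdk.proto can look like this; the generated package and class names depend on the protoc options used and are assumptions here:

```java
// Hedged sketch: talking to the local Agones SDK sidecar through stubs generated
// from Agones' published sdk.proto. The generated classes (SDKGrpc, Sdk) and their
// package depend on the protoc configuration, so their imports are omitted here.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class AgonesSdkExample {

    public static void main(String[] args) {
        // The sidecar listens on localhost; Agones exposes the gRPC port via an
        // environment variable (9357 is the documented default).
        int port = Integer.parseInt(
                System.getenv().getOrDefault("AGONES_SDK_GRPC_PORT", "9357"));

        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", port)
                .usePlaintext()   // local sidecar connection, no TLS involved
                .build();

        // SDKGrpc/Sdk are the classes protoc + grpc-java would typically generate.
        SDKGrpc.SDKBlockingStub sdk = SDKGrpc.newBlockingStub(channel);

        // Signal that this game server finished booting and may receive players.
        sdk.ready(Sdk.Empty.newBuilder().build());

        channel.shutdown();
    }
}
```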

And they even have excellent tooling for testing your own SDK implementation. Using Testcontainers, we could just spin up the local SDK testing image for each integration test and confirm that our SDK works fine. We discovered a very small inconsistency in one of the interface methods, submitted an issue and a corresponding PR, and it was merged in less than 24 hours.
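A hedged sketch of what such an integration test can look like with Testcontainers; the image name and command line are placeholders, the actual local SDK server image and flags are documented by Agones:

```java
// Hedged sketch of an integration test against a local Agones SDK server.
// The image name and command below are placeholders; consult the Agones
// documentation for the current local SDK server image and flags.
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

class AgonesSdkIntegrationTest {

    @Test
    void readySignalIsAccepted() {
        try (GenericContainer<?> sdkServer = new GenericContainer<>(
                DockerImageName.parse("example.org/agones/agones-sdk:latest")) // placeholder
                .withCommand("--local")       // run the SDK server in stateless local mode
                .withExposedPorts(9357)) {    // default gRPC port of the SDK sidecar
            sdkServer.start();

            String host = sdkServer.getHost();
            int port = sdkServer.getMappedPort(9357);

            // Point our own SDK implementation at host:port here and assert that
            // ready(), health() and shutdown() behave as the Agones docs describe.
        }
    }
}
```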

We've now been using Agones for a few months and it has proven to be very reliable, easy to manage and just a great tool in general.

CEO at Scrayos UG (haftungsbeschränkt)·

We primarily use MariaDB but use PostgreSQL as part of GitLab, Sentry and Nextcloud, which (initially) forced us to use it anyway. While this wasn't much of a decision – because we didn't have one (ha ha) – we learned to love the perks and advantages of PostgreSQL anyway. PostgreSQL's extension system makes it even more flexible than a lot of other SQL-based databases (which only offer stored procedures), and the additional JOIN options, the enhanced role management and the different authentication options came in really handy when doing manual maintenance on the databases.
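To make those points a bit more concrete, here are a few illustrative PostgreSQL statements; the table, column, role and extension choices are made up for the example:

```sql
-- Extension system: add trigram indexing without leaving SQL.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- LATERAL join: correlate a subquery with every row of the outer table,
-- e.g. "latest login per player" (table/column names are hypothetical).
SELECT p.name, latest.created_at
FROM players p
LEFT JOIN LATERAL (
    SELECT l.created_at
    FROM logins l
    WHERE l.player_id = p.id
    ORDER BY l.created_at DESC
    LIMIT 1
) latest ON true;

-- Role management: a restricted role for manual maintenance sessions.
CREATE ROLE maintenance LOGIN PASSWORD 'change-me' NOSUPERUSER;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO maintenance;
```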

CEO at Scrayos UG (haftungsbeschränkt)·

We use Checkstyle for linting in all of our Java projects. The setup was very easy and there are amazing integrations for all IDEs. What also came in very handy was that we could start off with just a basic set of rules and tighten the linting rules step by step, continuously improving the readability and uniformity of our codebase.
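As an illustration of that "start small, then tighten" approach, a basic checkstyle.xml could look like this; the selected modules are examples rather than our actual rule set:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<!-- Illustrative minimal rule set; extend it module by module over time. -->
<module name="Checker">
  <module name="LineLength">
    <property name="max" value="120"/>
  </module>
  <module name="TreeWalker">
    <module name="UnusedImports"/>
    <module name="NeedBraces"/>
    <module name="FinalLocalVariable"/>
  </module>
</module>
```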

CEO at Scrayos UG (haftungsbeschränkt)·

We use Sonatype Nexus to store our closed-source Java libraries and to simplify our deployment and dependency management. While there are many alternatives, most of them are expensive (GitLab Enterprise), monolithic (JFrog Artifactory) or only offer SaaS licences. We preferred the on-premise approach of Nexus and therefore decided to use it.

We exclusively use the Maven capabilities and are glad that the modular design of Nexus allows us to run it in a very lightweight setup.
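For illustration, pointing a Maven build at a Nexus instance only takes a small distributionManagement section in the project's pom.xml; the host name is a placeholder and the repository paths shown are the Nexus 3 defaults:

```xml
<!-- Illustrative snippet for a project's pom.xml; host and repository IDs are placeholders. -->
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>https://nexus.example.com/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>https://nexus.example.com/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

The matching credentials would live in the developer's or CI runner's settings.xml rather than in the repository.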

CEO at Scrayos UG (haftungsbeschränkt)·

We use Vert.x for our internal and external OpenAPI v3 REST API, which handles our own queries from the App, Launcher and Website as well as external queries, authenticated through OAuth2.

Vert.x has proven to be a valuable asset and framework during the development of our application, and the countless "addon" packages (OAuth2, OpenAPI, Redis Cache, SQL, etc.) allow us to try out new things very quickly.
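As a minimal sketch of the programming model (not our actual API), a Vert.x verticle exposing a single JSON endpoint looks like this; in a real setup the routes would come from the OpenAPI v3 contract and sit behind an OAuth2 handler:

```java
// Minimal illustrative Vert.x Web verticle; route path, payload and port are placeholders.
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

public final class ApiVerticle extends AbstractVerticle {

    @Override
    public void start() {
        Router router = Router.router(vertx);

        // Trivial endpoint; real routes would be derived from the OpenAPI v3 contract.
        router.get("/status").handler(ctx -> ctx.response()
                .putHeader("content-type", "application/json")
                .end(new JsonObject().put("status", "ok").encode()));

        vertx.createHttpServer()
                .requestHandler(router)
                .listen(8080);   // placeholder port
    }
}
```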

CEO at Scrayos UG (haftungsbeschränkt)·

As access to our global REST API "Charon" is bound to OAuth2, we use Keycloak together with Quarkus to authenticate and authorize users of our API. It is not possible to perform any unauthenticated requests against this API, so we wanted to make really sure that the authentication/authorization component is absolutely reliable and well tested. We found those attributes in Keycloak, so we used it.
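A hedged sketch of how such a setup can be wired up in a Quarkus application.properties; the realm URL, client id and secret are placeholders:

```properties
# Illustrative Quarkus OIDC configuration for a Keycloak realm; all values are placeholders.
quarkus.oidc.auth-server-url=https://keycloak.example.com/realms/charon
quarkus.oidc.client-id=charon-api
quarkus.oidc.credentials.secret=${CHARON_OIDC_SECRET}
# Pure bearer-token protected resource: every request must carry a valid access token.
quarkus.oidc.application-type=service
```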

CEO at Scrayos UG (haftungsbeschränkt)·

For our internal team and collaboration panel we use Nuxt.js (with TypeScript that is transpiled into ES6), Webpack and npm. We enjoy the opinionated nature of Nuxt.js over vanilla Vue.js, as we would end up using all of the components Nuxt.js incorporates anyway, and we can adhere to the conventions set up by the Nuxt.js project, which allows us to get better support in case we run into any dead ends. Webpack allows us to create reproducible builds and also debug our application with hot reloads, which greatly increased the pace at which we are able to make and test changes. We also incorporated a lot of testing (ESLint, Chai, Jasmine, Nightwatch.js) into our pipelines and can trigger those jobs through GitLab CI. All packages are fetched through npm, so that we can keep our git repositories slim and are notified of new updates as well as reported security flaws.

CEO at Scrayos UG (haftungsbeschränkt)·

We use GraphQL for the communication between our Minecraft proxies/load balancers and our global Minecraft orchestration service, JCOverseer.

This connection proved to be especially challenging, as there were many available options and very specific requirements, and we tried our hardest to put as little complexity into this interface as possible.

Initially, we considered designing our very own Netty-based packet protocol. While the performance of this approach probably would've been noteworthy, we would have had to write a lot of packets, as the individual payloads differ a lot, and a separate project would've been needed for the protocol specification, so we scrapped that idea.

Our second idea was to use a combination of the Redis key/value store (in particular the ability to write whole, complex sets as the values of keys) for existing data, Redis Pub/Sub for the synchronization of new/changed/deleted data, and a Vert.x-based REST API for the clients' mutation requests. While this would certainly have been possible, we decided against it, as Redis offers no real data types other than strings and typing was important to us.

So we finally settled on GraphQL, as it allows us to define dynamic queries and mutations and additionally offers subscriptions, so we only need one component instead of three separate ones. The proxies register as subscribers to the server-changes channel and fetch the current data set in advance. If they need to request changes, this is done through a GraphQL mutation as well.
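A hypothetical schema sketch of how queries, mutations and subscriptions can cover all three concerns in one interface; the type and field names are invented for illustration and are not the actual JCOverseer schema:

```graphql
# Hypothetical schema; type and field names are illustrative only.
type Server {
  id: ID!
  address: String!
  healthy: Boolean!
}

type Query {
  servers: [Server!]!                          # initial fetch of the current data set by a proxy
}

type Mutation {
  requestServer(template: String!): Server!    # change requests issued by the proxies
}

type Subscription {
  serverChanged: Server!                       # pushed whenever a server is added, updated or removed
}
```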

The status of the individual servers is fetched through Docker healthchecks and a Docker client in the orchestration service, which subscribes to changed health states in Docker. If a service becomes unhealthy, it is unregistered and the change is synchronized through GraphQL. The healthcheck is comparable to a ping packet that expects a response within a given time frame.
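Conceptually, such a healthcheck is declared with a single Dockerfile instruction; the probe command and timings below are placeholders:

```dockerfile
# Illustrative healthcheck; the probe command and timings are placeholders.
HEALTHCHECK --interval=15s --timeout=5s --retries=3 \
  CMD /usr/local/bin/healthcheck || exit 1
```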

CEO at Scrayos UG (haftungsbeschränkt)·

We have used Hetzner Online AG since the inception of our business, because of the great prices, marvelous support and great interface (especially the new cloud interface). Other options that we tested are DigitalOcean (more expensive than the new Hetzner cloud and no "huge" dedicated servers on offer), @Vultr (about the same issue as with DigitalOcean, although the prices were better), OVH (prices, old interface, no "tiny" packages and [at least back in the day] only monthly payment) and Living Bots (only dedicated servers, too expensive for our needs).

Hetzner offered the best spectrum of servers, has great prices in general and REALLY great prices in the server auctions.
