Current work stack


  • This is used as our primary reverse proxy. It lets us get around the firewall restrictions most of our users are behind by routing all of our APIs through it at layer 7.
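
    Regardless of which proxy actually sits in front, the core idea is path-based (layer-7) routing over plain HTTP(S). The sketch below is purely illustrative and is not our proxy configuration; it uses the node-http-proxy package, and the route table and ports are made up.

    ```javascript
    // Illustrative layer-7 (path-based) routing in Node using node-http-proxy.
    // Not our real proxy config; the services and ports below are hypothetical.
    const http = require('http');
    const httpProxy = require('http-proxy');

    const proxy = httpProxy.createProxyServer({});

    // Hypothetical mapping of public URL prefixes to internal services.
    const routes = {
      '/api/login': 'http://localhost:3001',
      '/api/leaderboards': 'http://localhost:3002',
      '/api/education': 'http://localhost:3003',
    };

    http.createServer((req, res) => {
      const prefix = Object.keys(routes).find((p) => req.url.startsWith(p));
      if (!prefix) {
        res.writeHead(404);
        return res.end();
      }
      // Clients only ever see ordinary web traffic on 80/443, which is what
      // lets users behind restrictive firewalls reach every API.
      proxy.web(req, res, { target: routes[prefix] });
    }).listen(80);
    ```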


  • Code hosting and documentation


  • All backend code is written in Node.js.

    We have an SOA for our systems. It isn't quite microservices just yet, but it does provide domain encapsulation, allowing the leaderboards to fail without affecting login or the education content.

    We've written a few internal modules, including a very simple API framework (a rough sketch of the idea follows below).

    I ended up picking Node.js because the game client is entirely in JavaScript as well. This choice made it a lot easier for developers to cross over between being "client side" game developers and "server side" game developers. It also meant that the pool of knowledge/best practices is applicable across almost the entire company.
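
    As an illustration of what the "very simple API framework" style looks like, here is a sketch built only on Node's http module; it is not our actual internal module, and the route and handler are made up.

    ```javascript
    // Minimal sketch of a thin API framework on top of Node's http module.
    const http = require('http');

    function createApi() {
      const routes = new Map();
      return {
        // Register a handler for "METHOD /path".
        route(method, path, handler) {
          routes.set(`${method} ${path}`, handler);
        },
        listen(port) {
          return http
            .createServer(async (req, res) => {
              const handler = routes.get(`${req.method} ${req.url}`);
              if (!handler) {
                res.writeHead(404);
                return res.end();
              }
              const body = await handler(req);
              res.writeHead(200, { 'Content-Type': 'application/json' });
              res.end(JSON.stringify(body));
            })
            .listen(port);
        },
      };
    }

    // Each service in the SOA owns a small API like this one.
    const api = createApi();
    api.route('GET', '/health', async () => ({ ok: true }));
    api.listen(3000);
    ```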


  • Source control for all teams is in Git.


  • Communication


  • We use Redis as a cache. Nothing too fancy here. At one point we were using it to cache character information, but we've since moved that entirely to DynamoDB and are evaluating the performance before we bring Redis back in.
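
    For reference, the pattern was a straightforward cache-aside lookup, roughly like the sketch below; the ioredis client and the loadCharacterFromDb helper are illustrative stand-ins, not our actual modules.

    ```javascript
    // Rough cache-aside sketch for character data.
    const Redis = require('ioredis');
    const redis = new Redis(); // defaults to localhost:6379

    // Hypothetical stand-in for the real lookup (now a DynamoDB call).
    async function loadCharacterFromDb(characterId) {
      return { id: characterId, name: 'placeholder' };
    }

    async function getCharacter(characterId) {
      const cacheKey = `character:${characterId}`;

      // 1. Try the cache first.
      const cached = await redis.get(cacheKey);
      if (cached) return JSON.parse(cached);

      // 2. Fall back to the primary store.
      const character = await loadCharacterFromDb(characterId);

      // 3. Populate the cache with a short TTL so stale data ages out.
      await redis.set(cacheKey, JSON.stringify(character), 'EX', 300);
      return character;
    }
    ```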


  • We are testing out Docker at the moment, building images from successful staging builds for all our APIs. Since we operate in an SOA (not quite microservices), developers have a Dockerfile that they can run to build the entirety of our API infrastructure on their machines. We use the successful builds from staging to power these instances, allowing them to do some more manual integration testing across systems.


  • We use PostgreSQL as our middle ground between SQL and NoSQL. A lot of our data is unstructured JSON, or JSON that is currently in flux due to some MVP/iteration processes that are going on. PostgreSQL gives us the ability to store and query it anyway (a short sketch follows below).

    At the moment PostgreSQL on Amazon is only at 9.5, which is one minor version short of support for document fragment updates, something we are waiting for. However, that may be some ways away.

    Other than that, we are using PostgreSQL as our main SQL store, replacing all of the MSSQL databases that we have. Not only does it have great support through RDS (important for a small ops team), but it also gives us some good paths to migrate off RDS to EC2 instances we manage ourselves down the line if we need to.
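
    The sketch below shows the general shape of keeping in-flux JSON in a jsonb column; the table, columns, and the node-postgres (pg) client are assumptions for illustration only.

    ```javascript
    // Sketch: unstructured/in-flux JSON alongside relational data in PostgreSQL.
    const { Pool } = require('pg');
    const pool = new Pool(); // reads connection info from PG* env vars

    async function saveProfile(userId, profile) {
      // "profile" is a JSON document whose shape is still changing.
      await pool.query(
        `INSERT INTO player_profiles (user_id, profile)
         VALUES ($1, $2::jsonb)
         ON CONFLICT (user_id) DO UPDATE SET profile = EXCLUDED.profile`,
        [userId, JSON.stringify(profile)]
      );
    }

    async function findByFavoriteSubject(subject) {
      // jsonb operators let us query inside the document without a fixed schema.
      const { rows } = await pool.query(
        `SELECT user_id, profile
           FROM player_profiles
          WHERE profile->>'favoriteSubject' = $1`,
        [subject]
      );
      return rows;
    }
    ```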


  • We are testing out MongoDB at the moment. Currently we are only using a small EC2 setup for a delayed job queue backed by agenda. If it works out well, we might look at where it could become a primary document storage engine for us.
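
    The delayed job pattern looks roughly like the sketch below; the job name, payload, and connection string are made up, and this uses the classic callback-style agenda API.

    ```javascript
    // Sketch of a delayed job queue using agenda backed by MongoDB.
    const Agenda = require('agenda');

    const agenda = new Agenda({
      db: { address: 'mongodb://localhost:27017/jobs' },
    });

    // Define what the job does when it comes due.
    agenda.define('send class reminder', (job, done) => {
      const { userId } = job.attrs.data;
      console.log(`reminding user ${userId}`); // real handler would notify the user
      done();
    });

    agenda.on('ready', () => {
      agenda.start();
      // Schedule a one-off delayed job.
      agenda.schedule('in 30 minutes', 'send class reminder', { userId: 42 });
    });
    ```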


  • S3 is where we upload various assets for the game/website/education content.
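
    A minimal sketch of what an asset upload looks like with the v2 aws-sdk client; the bucket and key names are hypothetical.

    ```javascript
    // Sketch: pushing an asset to S3 with the AWS SDK for JavaScript (v2).
    const fs = require('fs');
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3({ region: 'us-east-1' });

    function uploadAsset(localPath, key) {
      return s3
        .upload({
          Bucket: 'example-game-assets', // hypothetical bucket
          Key: key,                      // e.g. "sprites/hero.png"
          Body: fs.createReadStream(localPath),
        })
        .promise();
    }

    uploadAsset('./hero.png', 'sprites/hero.png').catch(console.error);
    ```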


  • Sprint/Backlog planning


  • This currently serves as the place where our QA team builds various automated testing jobs.

    At one point we were using it for builds, but we ended up migrating those to CodePipeline.


  • This is our CDN for game assets.


  • We use ElasticSearch for:

    • Session Logs
    • Analytics
    • Leaderboards

    We originally self-managed the ElasticSearch clusters, but due to our small ops team size we opted to move things to managed AWS services where possible.

    The managed service, however, does not allow us to manage our own backups, and a restore actually requires us to open a support ticket with them. We ended up setting up our own nightly backups, since we use per-day indexes for the logs/analytics.
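
    The per-day index naming is what makes the nightly backups (and retention) manageable. Here is a sketch of how events get written, with a made-up index prefix and field layout, using the legacy elasticsearch client.

    ```javascript
    // Sketch: writing session log events into per-day indexes.
    const elasticsearch = require('elasticsearch');

    const client = new elasticsearch.Client({
      host: 'https://example-domain.us-east-1.es.amazonaws.com', // hypothetical endpoint
    });

    function dailyIndex(prefix, date = new Date()) {
      // e.g. "session-logs-2017.03.14": one index per day, so backups and
      // retention can operate on whole indexes at a time.
      const day = date.toISOString().slice(0, 10).replace(/-/g, '.');
      return `${prefix}-${day}`;
    }

    function logSessionEvent(event) {
      return client.index({
        index: dailyIndex('session-logs'),
        type: 'event', // mapping types still exist in this ES generation
        body: Object.assign({ '@timestamp': new Date().toISOString() }, event),
      });
    }
    ```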


  • This is a legacy system requirement. We have some portions of our website written in PHP. Normally this wouldn't be an issue, but at the time they decided to use PHP+Windows they were also trying to use MSSQL databases (all the Microsoft influence was due to some Azure credits the company received early on). The particular driver they ended up picking forced them into using the mssql_* functions instead of PDO. This meant that the majority of the site used these rather outdated calls, and replacing them was a rather large endeavour. So while we migrate some of the PHP backend away to various Node.js API systems, we are simply sustaining the existing PHP portions.


  • Credit card handler for our membership system. It was an easy choice given how quickly we were able to implement it, as well as how it handles the disputed payment process.


  • We are using RDS for managing PostgreSQL and legacy MSSQL databases.

    Unfortunately, while RDS works great for managing the PostgreSQL systems, MSSQL is very much a second-class citizen and they don't offer very much capability. In fact, in order to upgrade instance storage for MSSQL we actually have to spin up a new cluster and migrate the data over.


  • Primary DNS


  • Just a simple way to monitor some public endpoints.


  • Socket.io is used as our current multiplayer engine. The existing engine is very simplistic: it only uses the WebSocket transport with HTTP long-polling as a fallback, and serves as a generic world/zone/screen grouping mechanism for displaying users to each other.
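
    That grouping mechanism maps naturally onto Socket.io rooms. The sketch below shows the idea, though the event names and room scheme are illustrative rather than our actual protocol.

    ```javascript
    // Sketch: world/zone/screen grouping on top of Socket.io rooms.
    const http = require('http');
    const socketIo = require('socket.io');

    const server = http.createServer();
    const io = socketIo(server, {
      // WebSocket with HTTP long-polling as the fallback transport.
      transports: ['websocket', 'polling'],
    });

    io.on('connection', (socket) => {
      socket.on('enterZone', ({ world, zone, screen }) => {
        const room = `${world}:${zone}:${screen}`;
        socket.join(room);
        // Tell everyone already on this screen that a new player is visible.
        socket.to(room).emit('playerEntered', { id: socket.id });
      });

      socket.on('move', ({ room, x, y }) => {
        // Relay position updates only to players on the same screen.
        socket.to(room).emit('playerMoved', { id: socket.id, x, y });
      });
    });

    server.listen(3000);
    ```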


  • We originally used CircleCI as our self-contained build system for our internal Node modules. It was very easy to set up and configure. Unfortunately we ended up stepping away from it, first to Jenkins and then to CodePipeline, due to better integration with our various applications.


  • We use Elastic Beanstalk to manage the application/deployment/scaling functionality on top of EC2.

    There are a few smaller EC2 instances that are customized for the multiplayer system.


  • Datadog is used as our monitoring agent, along with the StatsD daemon it includes. This way we are able to have automated system stats and include whatever other metrics we want to track.
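
    Custom metrics go to the local StatsD/DogStatsD daemon; a small sketch is below (hot-shots is just one client option, and the metric names/tags are made up).

    ```javascript
    // Sketch: sending custom metrics to the agent's StatsD daemon.
    const StatsD = require('hot-shots');

    // Defaults to the local agent on udp://localhost:8125.
    const metrics = new StatsD({ prefix: 'game.' });

    function recordLogin(durationMs, world) {
      // Arguments: name, value, sample rate, tags.
      metrics.increment('logins', 1, 1, [`world:${world}`]);
      metrics.timing('login.duration', durationMs, 1, [`world:${world}`]);
    }

    recordLogin(183, 'tutorial-island');
    ```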

