Trending Feed

Avatar of mickaelallielrookout
DevOps Engineer at Rookout·

In addition to being a lot cheaper, Google Cloud Pub/Sub freed us from maintaining any more infrastructure than we needed.

We moved from a self-hosted RabbitMQ over to CloudAMQP and decided that since we use GCP anyway, why not try their managed PubSub?

It is one of the better decisions we made, and we can now focus on building more important stuff!

READ MORE
2 upvotes·3.1K views
Avatar of hcatlin
VP of Engineering at Rent The Runway·

We use Sass because I invented it! No, that's not a joke at all! Well, let me explain. So, we used Sass before I started at Rent the Runway because it's the de-facto industry standard for pre-compiled and pre-processed CSS. We do also use PostCSS for stuff like vendor prefixing and various transformations, but Sass (specifically SCSS) is the main developer-focused language for describing our styling. Some internal apps use styled-components and @Aphrodite, but our main website is allllll Sassy. Oh, but the non-joking part is the inventing part. /shrug

READ MORE
4 upvotes·205.3K views

As Mixmax began to scale super quickly, with more and more customers joining the platform, we started to see that the Meteor app was still having a lot of trouble scaling due to how it tried to provide its reactivity layer. To be honest, this led to a brutal summer of playing Galaxy container whack-a-mole as containers would saturate their CPU and become unresponsive. I’ll never forget hacking away at building a new microservice to relieve the load on the system so that we’d stop getting paged every 30-40 minutes. Luckily, we’ve never had to do that again! After stabilizing the system, we had to build out two more microservices to provide the necessary reactivity and authentication layers as we rebuilt our Meteor app from the ground up in Node.js. This also had the added benefit of being able to deploy the entire application in the same AWS VPCs. Thankfully, AWS had also released their ALB product so that we didn’t have to build and maintain our own websocket layer in Amazon EC2. All of our microservices, except for one special Go one, are now in Node with an nginx frontend on each instance, all behind AWS Elastic Load Balancing (ELB) or ALBs running in AWS Elastic Beanstalk.

READ MORE
How Mixmax Uses Node and Go to Process 250M Events a day - Mixmax Tech Stack (stackshare.io)
5 upvotes·171.4K views
Avatar of ptrthomas
Distinguished Engineer at Intuit·
Shared insights on Karate DSL and Git

Karate DSL is extremely effective in those situations where you have a microservice still in development, but the "consumer" web-UI dev team needs to make progress. Just create a mock definition (feature) file; since it is plain text, it can easily be shared across teams via Git. And since Karate has a stand-alone binary executable, even teams that are not familiar with Java can use it to stand up mock services. The best part is that the mock serves as a "contract" which the server-side team can use to practice test-driven development.

READ MORE
The World's Smallest Micro Service - ptrthomas Tech Stack (stackshare.io)
18 upvotes·2 comments·106.2K views
Avatar of ojburn
Architect at Atlassian·

We recently added new APIs to Jira to associate information about Builds and Deployments to Jira issues.

The new APIs were developed using a spec-first API approach for speed and sanity. The details of this approach are described in this blog post; we relied on Swagger and associated tools like Swagger UI.

A new service was created for managing the data. It provides a REST API for external use and an internal API based on GraphQL. The service is built with Kotlin, for increased developer productivity and happiness, and the Spring Boot framework. PostgreSQL was chosen for the persistence layer, as we have non-trivial requirements that cannot be easily implemented on top of a key-value store.

The front end is built with React and queries the back-end service through an internal GraphQL API. We plan to provide a public GraphQL API in the future.

New Jira integrations: Bitbucket, CircleCI, AWS CodePipeline, Octopus Deploy, JFrog, Azure Pipelines

READ MORE
6 integrations every Jira Software Cloud team NEED... - Atlassian Community (community.atlassian.com)
12 upvotes·291.1K views
Avatar of ecolson
Chief Algorithms Officer at Stitch Fix·

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we can scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan gives our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

READ MORE
Stitch Fix Algorithms Tour (algorithms-tour.stitchfix.com)
19 upvotes·1.3M views
Avatar of jordanschuetz
Developer Advocate at MuleSoft·
Shared insights on PubNub and Unity

PubNub is a great tool for developers looking for an easy-to-use, real-time messaging service. PubNub's Publish/Subscribe APIs are some of the easiest to use in the industry, and their speed and reliability of service are unparalleled. While many companies out there offer a wide range of pub/sub and message queuing services, I've personally found that PubNub is the easiest to set up and get started with. When I was an indie game developer, I used PubNub as the realtime chat component in my application, and it also powered realtime drawing between players. The cost compared to spinning up my own servers globally was much cheaper, and I was happy that I decided to go with PubNub. While you could build it yourself, why bother when PubNub makes it so easy to get something up and running? Spend less time coding and more time marketing; that's always been my philosophy.

READ MORE
6 upvotes·63.3K views
Avatar of deepakk
Sr. DevOps Engineer ·
Shared insights on Visual Studio Code

I use Visual Studio Code because of the community support and the popularity it has gained in a very short period, and because of the many extensions contributed by the community every day. The Python engine in VS Code makes my work life productive. My favorite extensions are:

  • GitLens
  • Kubernetes
  • Docker
  • Chef

Themes are always fun and make your IDE productive, especially with colors and error indicators.

READ MORE
7 upvotes·66.9K views

I love Python and JavaScript. You can do the same JavaScript-style async operations in Python by using asyncio. This is particularly useful when you need to do socket programming in Python. With streaming sockets, data can be sent or received at any time. Even if your Python program is in the middle of executing other work, the event loop can schedule a coroutine to handle new socket data. Libraries like asyncio multiplex I/O on a single-threaded event loop, so your Python program can work in an asynchronous fashion. PubNub makes bi-directional data streaming between devices even easier.
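A minimal self-contained sketch of that pattern using asyncio's stream API (an echo-style server and client in one script; the handler logic and port are illustrative, not from any particular PubNub example):

```python
import asyncio

async def handle_client(reader, writer):
    # Runs concurrently with other coroutines: the event loop resumes this
    # handler whenever new socket data arrives, no extra threads involved.
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8913)
    async with server:
        # Client side: open a connection, send a line, await the reply.
        reader, writer = await asyncio.open_connection("127.0.0.1", 8913)
        writer.write(b"streaming sockets\n")
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
    return reply

print(asyncio.run(main()))  # → b'STREAMING SOCKETS\n'
```

`await writer.drain()` applies backpressure, so a fast producer does not overrun the socket's send buffer.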

READ MORE
Socket Programming with Python and PubNub - PubNub Tech Stack (stackshare.io)
21 upvotes·2 comments·101K views
Avatar of nzoschke
Engineering Manager at Segment·

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.

Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like username, password and workspace_name so a user can fill their own values in before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.
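Conceptually, collection variables work like template substitution over the request before it is sent. A rough illustration in plain Python (this is not Postman's actual variable engine, and the URL is made up):

```python
from string import Template

# A Postman-style request URL with a {{workspace_name}}-like variable.
request_url = Template("https://api.example.com/v1/workspaces/$workspace_name/sources")

# Each user fills in their own value before the call is made.
print(request_url.substitute(workspace_name="my-workspace"))
# → https://api.example.com/v1/workspaces/my-workspace/sources
```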

Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.

This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.

Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run a collection’s scripts as “monitors”. We now have #QA around all the APIs in our public docs to make sure they are always correct.

Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.

Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.

READ MORE
Announcing Config API: convenient and extensible workspace configuration · Segment Blog (segment.com)
30 upvotes·1 comment·1.2M views
Avatar of idosh
The Elegant Monkeys·

Kubernetes powers our #backend services, as it is very easy in terms of #devops (the managed version). We deploy everything using Helm charts, as Helm lets us manage deployments the same way we manage our code on GitHub. On every commit, a CircleCI job is triggered to run the tests, build Docker images, and push them to the registry. Finally, on every master commit, CircleCI also deploys the relevant service to our Kubernetes cluster using its Helm chart.

READ MORE
6 upvotes·216.6K views

I built a project using Quasar Framework with Vue.js, Vuex, and Axios on the frontend, and Go, Gin Gonic, and PostgreSQL on the backend. Deployment is handled with Docker and Docker Compose. Now I can build the desktop and the mobile app from a single code base on the frontend. The UI responsiveness and performance of this stack are amazing.

READ MORE
Migrating from Vuetify to Quasar - Quasar Framework - Medium (medium.com)
8 upvotes·98K views
Avatar of shosti
Senior Architect at Rainforest QA·

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases, and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

READ MORE
Why Rainforest QA Moved from Heroku to Google Kubernetes Engine (rainforestqa.com)
12 upvotes·1 comment·413K views
Avatar of Joseph-Irving
DevOps Engineer at uSwitch·
Shared insights on Kubernetes, Envoy, and Go

At uSwitch we wanted a way to load balance between our multiple Kubernetes clusters in AWS to give us added redundancy. We already had ingresses defined for all our applications so we wanted to build on top of that, instead of creating a new system that would require our various teams to change code/config etc.

Envoy seemed to tick a lot of boxes:

  • Load-balancing capabilities right out of the box: health checks, circuit breaking, retries, etc.
  • Tracing and Prometheus metrics support
  • Lightweight
  • Good community support

This was all good, but what really sold us was the API that supports dynamic configuration. This would allow us to dynamically configure Envoy to route to ingresses and clusters as they are created or destroyed.

To do this we built a tool called Yggdrasil using their Go SDK. Yggdrasil effectively just creates Envoy configuration from Kubernetes ingress objects: you point Yggdrasil at your kube clusters, it generates config from the ingresses, and then Envoy can load balance between your clusters for you. This is all done dynamically, so as soon as a new ingress is created the Envoy nodes get updated with the new config. Importantly, this all worked with what we already had; there was no need to create new config for every application, we just put this on top of it.
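Yggdrasil itself is written in Go against Envoy's APIs, but the core mapping is easy to picture. Here is a toy sketch in Python (invented field names, not Yggdrasil's real data model): collect the ingress hosts seen in each kube cluster, then emit one Envoy-style cluster per hostname with an endpoint per kube cluster, plus a route matching that host.

```python
def envoy_config(ingresses):
    # Group load-balancer addresses by ingress hostname: the same host
    # appearing in two kube clusters becomes one cluster with two endpoints.
    endpoints = {}
    for ing in ingresses:
        endpoints.setdefault(ing["host"], []).append(ing["lb_address"])
    clusters = {host.replace(".", "_"): addrs for host, addrs in endpoints.items()}
    routes = [{"match_host": host, "cluster": host.replace(".", "_")}
              for host in endpoints]
    return {"clusters": clusters, "routes": routes}

ingresses = [
    {"host": "app.example.com", "lb_address": "10.0.1.5"},  # ingress in kube cluster A
    {"host": "app.example.com", "lb_address": "10.1.1.9"},  # same ingress in kube cluster B
]
print(envoy_config(ingresses))
```

Rerunning this whenever ingresses change, and pushing the result over Envoy's dynamic-configuration API, is essentially the loop the post describes.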

READ MORE
(medium.com)
7 upvotes·55.2K views
Avatar of NickCraver
Architecture Lead at Stack Overflow·
Shared insights on .NET

We use .NET Core for our web socket servers, mail relays, and scheduling applications. Soon, it will power all of Stack Overflow. The ability to run on any platform, to extend and plug in especially the ASP.NET bits, and to treat almost everything as a building block you can move around has been a huge win. We're headed towards an appliance model, and with .NET Core we can finally put everything in a box...on Linux. We can re-use more code, fit all our deployment scenarios both during the move and after, and also ditch a lot of performance workarounds we had to scale...they're in-box now.

And testing. The ability to fire up a web server and a request and access both in a single method is an orders-of-magnitude improvement over ASP.NET 5. We're looking forward to tremendously improving our automated test coverage in places where it's finally reasonable, in both time and effort, for devs to do so. In short: we're getting a lot more for the same dev time spent in .NET Core.

READ MORE
4 upvotes·1 comment·63.5K views
Avatar of vishalnarkhede
Javascript Developer at getStream.io·

Recently, the team at Stream published a React Native SDK for our new Chat by Stream product. React Native brings the power of JavaScript to the world of mobile development, making it easy to develop apps for multiple platforms. We decided to publish two different endpoints for the SDK, Expo and React Native (non-Expo), to avoid the hurdle and setup of using the Expo library in React Native-only projects on the consumer side.

The capability for style customization is a large deal breaker for frontend SDKs. To solve this, we decided to use styled-components in our SDK, which makes it easy to add support for themes on top of our existing components. This practice reduces the maintenance effort for styling custom components and keeps the overall codebase clean.

For module bundling, we decided to go with Rollup.js instead of Webpack due to its simplicity and performance for library/module authors. We are using Babel for transpiling code, enabling our team to use JavaScript's next-generation features. Additionally, we are using React Styleguidist for component documentation, which makes documenting the React Native code a breeze.

READ MORE
React Native Chat Tutorial (getstream.io)
19 upvotes·1 comment·303.1K views
Avatar of mbnshtck
Principal Software Architect at Microsoft·

I use Visual Studio Code because it's the best IDE for my open source projects using Python, Node.js, TypeScript, Ruby, and Scala. Extensions exist for everything, and the integration with GitHub is great. It makes development easy and fun.

READ MORE
2 upvotes·36.8K views
Avatar of tim_nolet
Founder, Engineer & Dishwasher at Checkly·

PostgreSQL Heroku Node.js MongoDB Amazon DynamoDB

When I started building Checkly, one of the first things on the agenda was how to actually structure our SaaS database model: think accounts, users, subscriptions, etc. Weirdly, there is not a lot of information on this on the "blogosphere" (cringe...). After research and some false starts with MongoDB and Amazon DynamoDB, we ended up with PostgreSQL and a schema consisting of just four tables that form the backbone of all the generic "SaaSy" stuff almost any B2B SaaS bumps into.

In a nutshell:

  • We use Postgres on Heroku.
  • We use a "one database, one schema" approach for partitioning customer data.
  • We use an accounts, memberships and users table to create a many-to-many relation between users and accounts.
  • We completely decouple prices, payments and the exact ingredients for a customer's plan.

All the details including a database schema diagram are in the linked blog post.
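That four-table backbone can be sketched in a few lines (SQLite via Python for brevity; the column names and sample data here are illustrative, the real schema is in the linked post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE users       (id INTEGER PRIMARY KEY, email TEXT);
-- memberships creates the many-to-many relation between users and accounts
CREATE TABLE memberships (account_id INTEGER REFERENCES accounts(id),
                          user_id    INTEGER REFERENCES users(id),
                          role       TEXT);
-- plans are kept separate so pricing/payments stay decoupled from accounts
CREATE TABLE plans       (id INTEGER PRIMARY KEY, account_id INTEGER,
                          features TEXT);
""")
conn.execute("INSERT INTO accounts VALUES (1, 'acme')")
conn.execute("INSERT INTO users VALUES (1, 'jan@acme.dev')")
conn.execute("INSERT INTO memberships VALUES (1, 1, 'owner')")

# "Who belongs to account 1?" goes through the memberships join table.
row = conn.execute("""
    SELECT u.email FROM users u
    JOIN memberships m ON m.user_id = u.id
    WHERE m.account_id = 1
""").fetchone()
print(row)  # → ('jan@acme.dev',)
```

Because membership is a row rather than a foreign key on users, one user can belong to many accounts and vice versa, which is the crux of the model.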

READ MORE
Building a multi-tenant SaaS data model (blog.checklyhq.com)
8 upvotes·87.3K views
Avatar of ronanlevesque
Software engineer at Algolia·

A few months ago we decided to move our whole static website (www.algolia.com) to a new stack. At the time we were using a website generator called Middleman, written in Ruby. As a team of only front-end developers, we didn't feel very comfortable with the language itself, and the time it took to build was not satisfying. We decided to move to Gatsby to take advantage of its use of React, as well as its incredibly high performance in build time and page rendering.

READ MORE
18 upvotes·2 comments·189.1K views
Avatar of conor
Tech Brand Mgr, Office of CTO at Uber·

Why we built an open source, distributed training framework for TensorFlow, Keras, and PyTorch:

At Uber, we apply deep learning across our business; from self-driving research to trip forecasting and fraud prevention, deep learning enables our engineers and data scientists to create better experiences for our users.

TensorFlow has become a preferred deep learning library at Uber for a variety of reasons. To start, the framework is one of the most widely used open source frameworks for deep learning, which makes it easy to onboard new users. It also combines high performance with an ability to tinker with low-level model details—for instance, we can use both high-level APIs, such as Keras, and implement our own custom operators using NVIDIA’s CUDA toolkit.

Uber has introduced Michelangelo (https://eng.uber.com/michelangelo/), an internal ML-as-a-service platform that democratizes machine learning and makes it easy to build and deploy these systems at scale. In this article, we pull back the curtain on Horovod, an open source component of Michelangelo’s deep learning toolkit which makes it easier to start—and speed up—distributed deep learning projects with TensorFlow:

https://eng.uber.com/horovod/

(Direct GitHub repo: https://github.com/uber/horovod)

READ MORE
Meet Horovod: Uber's Open Source Distributed Deep Learning Framework (eng.uber.com)
7 upvotes·957.2K views
Avatar of cristoirmac
VP, Engineering at SparkPost·

The recent move of our CI/CD tooling to AWS CodeBuild / AWS CodeDeploy (with GitHub ) as well as moving to Amazon EC2 Container Service / AWS Lambda for our deployment architecture for most of our services has helped us significantly reduce our deployment times while improving both feature velocity and overall reliability. In one extreme case, we got one service down from 90 minutes to a very reasonable 15 minutes. Container-based build and deployments have made so many things simpler and easier and the integration between the tools has been helpful. There is still some work to do on our service mesh & API proxy approach to further simplify our environment.

READ MORE
9 upvotes·2 comments·101.2K views
Shared insights on Auth0

As our most active customers needed to remember five different username-password combinations to use our services, it became painfully clear we needed a single sign on system. We looked at a few different systems, but Auth0 allowed us to use a single system for all our B2C, B2B and B2E requirements, had very reasonable pricing and provided a great deal of flexibility thanks to its use of Rules, Hooks, Extensions and Hosted Pages.

You can use any combination of identity providers, without having to make any changes to your app. You can even enable a different set of providers for different applications. We use passwordless, social and database login and plan to add Active Directory soon too.

Integrating Auth0 is incredibly easy, fast and flexible. With just a few lines of code, you're up and running, no matter if you need OAuth2, OpenID Connect or SAML. It provides great quick starts, clear documentation and quick support, both through the community forum and support desk. We're currently running it with various Node.js, PHP and Ruby applications.

All in all, Auth0 provides us with a common user identity across our applications and allows us to focus on the features of our applications, instead of having to spend hours and hours on creating safe login systems.

READ MORE
12 upvotes·73.4K views
Avatar of tabbott
Founder at Zulip·
Shared insights on Webpack

We use Webpack because it's the standard toolchain for managing frontend dependencies in 2019, and it's hard to make a nice frontend development user experience without it.

I don't like it: its configuration system is a mess, requiring a ton of reading or expertise to do things that essentially every project wants to do by default. It has a lot of great features, which is why we use it. But as an example, its development server's hot reloading is really cool, yet it doesn't handle changes in the webpack configuration file itself (so adding a new file requires a restart).

My hope is that the sheer fact that everyone is using it will eventually lead to these problems being fixed or it being replaced by a similar system with a better design.

READ MORE
6 upvotes·18K views
Avatar of EyasSH
Software Engineer at Google·

One TypeScript / Angular 2 code health recommendation at Google is how to simplify dealing with RxJS Observables. Two common options in Angular are subscribing to an Observable inside of a Component's TypeScript code, versus using something like the AsyncPipe (foo | async) from the template html. We typically recommend the latter for most straightforward use cases (code without side effects, etc.)

I typically review a fair amount of Angular code at work. One thing I typically encourage is using plain Observables in an Angular Component, and using AsyncPipe (foo | async) from the template html to handle subscription, rather than directly subscribing to an observable in a component TS file.

Subscribing in components

Unless you know a subscription you're starting in a component is very finite (e.g. an HTTP request with no retry logic, etc), subscriptions you make in a Component must:

  1. Be closed, stopped, or cancelled when exiting a component (e.g. when navigating away from a page),
  2. Only be opened (subscribed) when a component is actually loaded/visible (i.e. in ngOnInit rather than in a constructor).

AsyncPipe can take care of that for you

Instead of manually implementing component lifecycle hooks and remembering to subscribe and unsubscribe to an Observable, AsyncPipe can do that for you.

I'm sharing a version of this recommendation with some best practices and code samples.

#Typescript #Angular #RXJS #Async #Frontend

READ MORE
Use AsyncPipe When Possible – Eyas's Blog (blog.eyas.sh)
23 upvotes·2 comments·187.5K views