
Trending Feed

Decision at Segment about Datadog, TypeScript, Envoy, gRPC, Go, Security, JSON, REST, Framework, Reliability, Observability

Avatar of nzoschke
Engineering Manager at Segment
Datadog
TypeScript
Envoy
gRPC
Go
#Security
#JSON
#REST
#Framework
#Reliability
#Observability

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, gRPC and Envoy.

At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.

For the newest round of APIs we adopted the gRPC service #framework.

The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mappings.

With a well-designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
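
To give a flavor of the Go side of that workflow, here is a minimal, hypothetical sketch: a server implementing an interface generated by the protobuf compiler from an imagined workspaces.proto. The package path, service, and message names are illustrative assumptions, not Segment's actual API.

```go
// Hypothetical sketch of serving a generated gRPC interface in Go.
// Assumes protoc produced a package with a WorkspacesServer interface,
// request/response types, and a RegisterWorkspacesServer function.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/config-api/gen/workspaces" // illustrative generated package
)

type server struct {
	pb.UnimplementedWorkspacesServer // forward-compatibility embed
}

// GetWorkspace satisfies the generated WorkspacesServer interface;
// the request and response types are pinned to the .proto definitions.
func (s *server) GetWorkspace(ctx context.Context, req *pb.GetWorkspaceRequest) (*pb.Workspace, error) {
	return &pb.Workspace{Name: req.Name}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterWorkspacesServer(s, &server{})
	log.Fatal(s.Serve(lis))
}
```

Because the TypeScript client is generated from the same .proto, a field rename breaks both sides at compile time rather than at runtime.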

For the API gateway and RPC we adopted the Envoy service proxy.

The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.

The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.

We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC, .proto, and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.

25 upvotes·25.8K views

Decision at Airbnb about Apollo, GraphQL Playground, GraphQL, Prisma, BackendDrivenUI

Avatar of adamrneary
Engineer at Airbnb
Apollo
GraphQL Playground
GraphQL
#Prisma
#BackendDrivenUI

At Airbnb we use GraphQL Unions for a "Backend-Driven UI." We have built a system where a very dynamic page is constructed based on a query that will return an array of some set of possible “sections.” These sections are responsive and define the UI completely.

The central file that manages this would be a generated file. Since the list of possible sections is quite large (~50 sections today for Search), it also presumes we have a sane mechanism for lazy-loading components with server rendering, which is a topic for another post. Suffice it to say, we do not need to package all possible sections in a massive bundle to account for everything up front.

Each section component defines its own query fragment, colocated with the section's component code. This is the general idea of Backend-Driven UI at Airbnb. It's used in a number of places, including Search, Trip Planner, Host tools, and various landing pages. We use this as our starting point, and then in the demo show how to (1) make an update to an existing section, and (2) add a new section.

While building your product, you want to be able to explore your schema, discovering field names and testing out potential queries on live development data. We achieve that today with GraphQL Playground, the work of our friends at #Prisma. The tool comes standard with Apollo Server.

#BackendDrivenUI

17 upvotes·13.5K views

Decision at Uber Technologies about Nagios, Grafana, Graphite, Prometheus

Avatar of conor
Tech Brand Mgr, Office of CTO at Uber
Nagios
Grafana
Graphite
Prometheus

Why we spent several years building M3, an open-source, large-scale metrics and alerting system built for Prometheus:

By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.

To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...

https://eng.uber.com/m3/

(GitHub: https://github.com/m3db/m3)
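
For context on what "built for Prometheus" means from a service's point of view: services expose metrics over HTTP and the backend scrapes them. Here is a minimal Go sketch using the official Prometheus client library; the metric name and port are illustrative assumptions.

```go
// Minimal sketch: expose a counter for a Prometheus-compatible backend
// (such as M3) to scrape. Metric name and port are illustrative.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total number of requests handled.",
})

func main() {
	prometheus.MustRegister(requests)
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.Inc() // count every request handled
		w.Write([]byte("ok"))
	})
	// The /metrics endpoint is what the monitoring system scrapes.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```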

6 upvotes·31.2K views

Decision at Postman about dbt, Amazon Redshift, Stitch, Looker

Avatar of sobtiankit
CTO at Postman Inc
dbt
Amazon Redshift
Stitch
Looker


We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics with a lean team.

For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating dbt, a command-line tool, to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data, rather than managing it.

8 upvotes·26K views

Decision about Auth0

Avatar of ThijmenDeValk
Auth0

As our most active customers needed to remember five different username-password combinations to use our services, it became painfully clear we needed a single sign-on system. We looked at a few different systems, but Auth0 allowed us to use a single system for all our B2C, B2B and B2E requirements, had very reasonable pricing, and provided a great deal of flexibility thanks to its use of Rules, Hooks, Extensions and Hosted Pages.

You can use any combination of identity providers, without having to make any changes to your app. You can even enable a different set of providers for different applications. We use passwordless, social and database login and plan to add Active Directory soon too.

Integrating Auth0 is incredibly easy, fast and flexible. With just a few lines of code, you're up and running, no matter if you need OAuth2, OpenID Connect or SAML. It provides great quick starts, clear documentation and quick support, both through the community forum and support desk. We're currently running it with various Node.js, PHP and Ruby applications.

All in all, Auth0 provides us with a common user identity across our applications and allows us to focus on the features of our applications, instead of having to spend hours and hours on creating safe login systems.

11 upvotes·27.3K views

Decision at Codecov about Visual Studio Code, Vue.js, CoffeeScript, JavaScript, TypeScript

Avatar of hootener
CTO at Codecov
Visual Studio Code
Vue.js
CoffeeScript
JavaScript
TypeScript

We chose TypeScript at Codecov when undergoing a recent rewrite of a legacy front end. Our previous front end was a mishmash of vanilla JavaScript and CoffeeScript, and was expanded upon haphazardly as the need arose. Without a unifying set of paradigms and patterns, the CoffeeScript and JavaScript setup was proving hard for the engineering team to maintain and expand upon. During a move to Vue.js, we decided to also make the move to TypeScript. Integrating TypeScript and Vue.js is fairly well understood at this point, so the setup wasn't all that difficult, and we felt that the benefits of incorporating TypeScript would outweigh the required time to set it up and get our engineering team up to speed.

Choosing to add TypeScript has given us one more layer to rely on to help enforce code quality, good standards, and best practices within our engineering organization. One of the biggest benefits for us as an engineering team has been how well our IDEs and editors (e.g., Visual Studio Code) integrate with and understand TypeScript. This allows developers to catch many more errors at development time rather than at run time. The end result is safer (from a type perspective) code and a more efficient coding experience that helps to catch and remove errors with less developer effort.

9 upvotes·29.1K views

Decision at Soluto about Docker Swarm, Kubernetes, Visual Studio Code, Go, TypeScript, JavaScript, C#, F#, .NET

Avatar of Yshayy
Software Engineer
Docker Swarm
Kubernetes
Visual Studio Code
Go
TypeScript
JavaScript
C#
F#
.NET

Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service in C#, and other components in JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us, too: it handled all our polyglot services, and the .NET Core integration provided a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm, until we decided to adopt Kubernetes with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

17 upvotes·1 comment·34.3K views

How to Create An SSH Tunnel in Go

elliot.land
Sometimes resources (such as database servers) are not publicly accessible. This is critical for security, but it can be a pain for writing scripts that need to access these resources for debugging and other ad-hoc tasks. One solution is to create an...
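
The article's full approach is behind the link, but a minimal sketch of a local SSH tunnel in Go using the golang.org/x/crypto/ssh package looks roughly like this; the hostnames, ports, and credentials are placeholders.

```go
// Minimal sketch of a local SSH tunnel in Go using golang.org/x/crypto/ssh.
// Hostnames, ports, and credentials are placeholders.
package main

import (
	"io"
	"log"
	"net"

	"golang.org/x/crypto/ssh"
)

func main() {
	config := &ssh.ClientConfig{
		User:            "deploy",
		Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // use key auth in practice
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // verify host keys in practice
	}

	// Connect to the SSH server (e.g., a bastion host).
	sshClient, err := ssh.Dial("tcp", "bastion.example.com:22", config)
	if err != nil {
		log.Fatal(err)
	}

	// Listen locally; each accepted connection is forwarded to the
	// private database server through the SSH connection.
	local, err := net.Listen("tcp", "127.0.0.1:5432")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := local.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(conn net.Conn) {
			remote, err := sshClient.Dial("tcp", "db.internal:5432")
			if err != nil {
				log.Print(err)
				conn.Close()
				return
			}
			// Pipe bytes in both directions until either side closes.
			go io.Copy(remote, conn)
			go io.Copy(conn, remote)
		}(conn)
	}
}
```

With this running, a script can connect to 127.0.0.1:5432 as if the private database were local.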

Decision at The New York Times about Kafka, Node.js, GraphQL, Apollo, React, PHP, MySQL, AngularJS

Avatar of nsrockwell
CTO at NY Times
Kafka
Node.js
GraphQL
Apollo
React
PHP
MySQL
AngularJS

When I joined NYT there was already broad dissatisfaction with the LAMP (AngularJS MySQL PHP) stack, and the front end framework in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember, from being outside the company, that it was called MIT FIVE when it launched. I'd been observing it from the outside, and I was like: you guys took so long to do that, and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. Right now, the new front end is React-based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's also a Node.js back end to the front end, which is mainly for server-side rendering.

Behind that, the main repository for the GraphQL server is a big-table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

13 upvotes·1 comment·32.1K views

Decision at Sentry about Redis, PostgreSQL, Celery, Django, InMemoryDatabases, MessageQueue

Avatar of jtcunning
Operations Engineer at Sentry
Redis
PostgreSQL
Celery
Django
#InMemoryDatabases
#MessageQueue

Sentry started as (and remains) an open-source project, growing out of an error logging tool built in 2008. That original build nine years ago was Django and Celery (Python's asynchronous task queue), with PostgreSQL as the database and Redis as the power behind Celery.

We displayed a truly shrewd notion of branding even then, giving the project a catchy name that companies the world over remain jealous of to this day: django-db-log. For the longest time, Sentry’s subtitle on GitHub was “A simple Django app, built with love.” A slightly more accurate description probably would have included Starcraft and Soylent alongside love; regardless, this captured what Sentry was all about.

#MessageQueue #InMemoryDatabases

19 upvotes·21.2K views

Slack: Say hello, new logo

slackhq.com
Read this post in French, German, Japanese, and Spanish. Today we’re launching a new logo, as we start to refresh our look in general. We loved our old logo, and look, and know many felt the same. And yet, here we are to explain why we decided to evolve it. Firstly, it’s not change for …

Decision at Algolia about React, Gatsby, Ruby, Middleman

Avatar of ronanlevesque
Software engineer at Algolia
React
Gatsby
Ruby
Middleman

A few months ago we decided to move our whole static website (www.algolia.com) to a new stack. At the time we were using a website generator called Middleman, written in Ruby. As a team of only front-end developers we didn't feel very comfortable with the language itself, and the build times were not satisfying. We decided to move to Gatsby to take advantage of its use of React, as well as its incredibly high performance in terms of build and page rendering.

11 upvotes·2 comments·29.9K views

Decision at Kong about Zapier, GitHub, New Relic, RepoAnalytics, CommunityAnalytics, OpenSourceCommunityAnalytics, GitHubAnalytics

Avatar of coopr
Director of Ecosystem at Kong Inc.
Zapier
GitHub
New Relic
#RepoAnalytics
#CommunityAnalytics
#OpenSourceCommunityAnalytics
#GitHubAnalytics

I've been using New Relic Insights more and more in my work at Kong. New Relic Insights is a "time series event database as a service" with a super-easy API for inserting custom events, and a flexible query language for building visualization widgets and dashboards.

I'm a big fan of New Relic Insights when I have data I know I need to analyze, but perhaps I'm not exactly sure how I want to analyze it in the future. For example, at Kong we recently wanted to get some understanding of our open source community's activity on our GitHub repos. I was able to quickly configure GitHub to send webhooks to Zapier , which in turn posted the JSON to New Relic Insights.

Insights is schema-less and configuration-less: just start posting JSON key-value pairs, then start querying your data.
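
As a rough sketch of how little that takes, here is what posting a custom event to the Insights insert API can look like from Go; the account ID, insert key, and event fields below are placeholders, not Kong's actual setup.

```go
// Rough sketch: post a custom event to New Relic Insights.
// Account ID, insert key, and event fields are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Arbitrary key-value pairs; "eventType" names the table you query later.
	event := map[string]interface{}{
		"eventType": "GitHubActivity", // hypothetical custom event type
		"repo":      "Kong/kong",
		"action":    "opened",
	}
	body, _ := json.Marshal([]map[string]interface{}{event})

	url := "https://insights-collector.newrelic.com/v1/accounts/YOUR_ACCOUNT_ID/events"
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Insert-Key", "YOUR_INSERT_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```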

Within minutes, data was flowing from GitHub to Insights, and I was building widgets on my Insights dashboard to help my colleagues visualize the activity of our open source community.

#GitHubAnalytics #OpenSourceCommunityAnalytics #CommunityAnalytics #RepoAnalytics

10 upvotes·29.6K views

Decision at AppAttack about Kubernetes, DigitalOcean, CloudHosting

Avatar of ctbucha
Founder/CEO at AppAttack
Kubernetes
DigitalOcean
#CloudHosting

I use DigitalOcean because of the simplicity of using their basic offerings, such as droplets. At AppAttack, we need low-level control of our infrastructure so we can rapidly deploy a custom training web application on-demand for each training session, and building a Kubernetes cluster on top of DigitalOcean droplets allowed us to do exactly that.

#CloudHosting

9 upvotes·24.5K views

Decision about JavaScript, Rails, Apollo, React

Avatar of holman
Zach Holman
JavaScript
Rails
Apollo
React

Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.

But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.

But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.

Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot. I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.

15 upvotes·4 comments·35.1K views

Decision at updown.io about BitPay, PayPal, Stripe, BitcoinCash, Payments, Bitcoin

Avatar of adrienjarthon
Founder at updown.io
BitPay
PayPal
Stripe
#BitcoinCash
#Payments
#Bitcoin

To accept payments on updown.io, we first added support for Stripe, which is by far the most popular payment gateway for startups, and for a good reason. Their service is of awesome quality: the UI is gorgeous, the integration is easy, the documentation is great, and the API is super stable and well thought out. I can't recommend it enough.
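
As a rough illustration of that integration ease (shown in Go rather than updown.io's own stack), a one-off charge against Stripe's HTTP API takes little more than an authenticated form POST; the key, amount, and test token below are placeholders.

```go
// Rough illustration: create a charge via Stripe's HTTP API.
// The secret key, amount, and card token are placeholders.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	form := url.Values{}
	form.Set("amount", "2000") // amount in cents
	form.Set("currency", "usd")
	form.Set("source", "tok_visa") // Stripe's test card token

	req, err := http.NewRequest("POST", "https://api.stripe.com/v1/charges",
		strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("sk_test_YOUR_KEY", "") // secret key as basic-auth user
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```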

We then added support for PayPal, which is pretty popular with people who have money sitting in it and don't know where to spend it (it can feel like you're spending less when it comes from a PayPal wallet), or with people who prefer not to enter a credit card on a new website. This was pretty well received, and we're currently receiving about 25% of our purchases through PayPal. The documentation and integration are much more painful than with Stripe IMO, so I can't recommend them on that front, but not offering it basically means dodging potential sales.

Finally, we more recently added support for BitPay for #Bitcoin and #BitcoinCash payments. The integration was a pretty easy process, but it wasn't worth the time in the end due to low usage and the ever-changing conditions of the network: transaction fees got huge after the price rise and Bitcoin became unusable for small payments; BitPay then introduced support for BCH and a newer Bitcoin protocol for lower fees, but that requires a special wallet to pay with, and in the end it's too cumbersome, even for Bitcoin users. I think unless you expect a large number of payments in cryptocurrencies, it's not worth implementing this solution; it's better to accept them manually.

10 upvotes·18.4K views

Decision at Portainer about Docker, Go

Avatar of deviantony
Co-founder and Software Engineer at Portainer.io
Docker
Go

Go was a natural choice for the backend of the Portainer web application. It makes the creation of HTTP APIs/services a breeze, with a lot of standard features available in the ecosystem.
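
As a small illustration of that (not Portainer's actual code), a JSON endpoint needs nothing beyond Go's standard library:

```go
// Minimal sketch of a JSON HTTP endpoint using only Go's standard library.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type status struct {
	Version string `json:"version"`
}

func main() {
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status{Version: "1.0.0"}) // illustrative payload
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```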

One of the main things we like about Go is its synergy with Docker, and how easy it is to leverage that synergy to distribute efficient software:

  • Go makes it easy to compile a program for multiple platforms and OSes (it's just a matter of setting a few options when starting the compilation, regardless of the execution context)
  • Go binaries are lightweight and fast, and can have a low memory footprint

Combining these points with the empty scratch Docker image and multi-platform images, we can distribute Portainer for any environment that is running Docker. This allows our users to get started with the software in a matter of seconds.

Go is also heavily geared toward the creation of HTTP/API services, and it's a language that is easy to read and quite easy to learn, making it a first choice in the context of Portainer.

8 upvotes·13.1K views