Go

Decision at Stream about Go, Stream, Python, Yarn, Babel, Node.js, ES6, JavaScript, Languages, Frameworks (Full Stack)

nparsons08, Node.js Engineer & Evangelist at Stream

Winds 2.0 is an open source Podcast/RSS reader developed by Stream with a core goal of enabling a wide range of developers to contribute.

We chose JavaScript because nearly every developer knows or can, at the very least, read JavaScript. With ES6 and Node.js v10.x.x, it's become a very capable language. Async/Await is powerful and easy to use (Async/Await vs Promises). Babel allows us to experiment with next-generation JavaScript (features that are not in the official JavaScript spec yet). Yarn allows us to consistently install packages quickly (and is filled with tons of new tricks).

We’re using JavaScript for everything – both front and backend. Most of our team is experienced with Go and Python, so Node was not an obvious choice for this app.

Sure... there will be haters who refuse to acknowledge that there is anything remotely positive about JavaScript (there are even rants on Hacker News about Node.js); however, had we not written it entirely in JavaScript, we would not have seen the results we did.

26 upvotes·1.4K views

Decision at Segment about Datadog, TypeScript, Envoy, GRPC, Go, Security, JSON, REST, Framework, Reliability, Observability

nzoschke, Engineering Manager at Segment

We just launched the Segment Config API (try it out for yourself here): a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, GRPC and Envoy.

At Segment, we build new services in Go by default. The language is simple, so new team members quickly ramp up on a codebase. The toolchain is fast, so developers get immediate feedback when they break code, tests or integrations with other systems. And the runtime is fast, so it performs well at scale.

For the newest round of APIs we adopted the GRPC service #framework.

The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mappings.

With a well designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
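As a rough sketch of what that buys you on the Go side, here is the shape of a generated server interface and one implementation of it. The service, method and message names are hypothetical stand-ins; the post doesn't include the actual Config API .proto:

```go
package main

import (
	"context"
	"fmt"
)

// GetSourceRequest and Source stand in for the message structs that
// protoc-gen-go would generate from a .proto file (hypothetical names).
type GetSourceRequest struct{ Name string }

type Source struct{ Name, Type string }

// ConfigServer mirrors the kind of type-safe server interface the GRPC
// tooling generates from a service definition (again, hypothetical).
type ConfigServer interface {
	GetSource(ctx context.Context, req *GetSourceRequest) (*Source, error)
}

// configServer is one possible concrete implementation of that interface.
type configServer struct{}

func (s *configServer) GetSource(ctx context.Context, req *GetSourceRequest) (*Source, error) {
	return &Source{Name: req.Name, Type: "javascript"}, nil
}

func main() {
	var srv ConfigServer = &configServer{}
	src, err := srv.GetSource(context.Background(), &GetSourceRequest{Name: "my-source"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *src)
}
```

The compiler, rather than runtime validation, then enforces that clients and servers agree on the request and response types.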

For the API gateway and RPC we adopted the Envoy service proxy.

The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream GRPC request. The upstream GRPC servers are running an Envoy sidecar configured for Datadog stats.

The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.

We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. GRPC, .proto and the Go implementation feel better designed and implemented. Thanks to the GRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.

24 upvotes·15.6K views

Decision at Stream about Go, Jenkins, GitHub, Travis CI, Code Collaboration & Version Control, Continuous Integration

tschellenbach

Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.

Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.

Travis has come a long way over the past few years. I used to prefer Jenkins in some cases, since it was easier to debug broken builds. With the addition of the aptly named "debug build" button, Travis is now the clear winner. It's easy to use and free for open source, with no need to maintain anything.

22 upvotes·1 comment·977 views

Decision at Stream about Go, Faye, Redis, Languages, In-Memory Databases, Application Hosting, Data Stores, Realtime Backend API

tschellenbach

Our real-time infrastructure is based on Go, Redis and the excellent gorilla/websocket library. It implements the Bayeux protocol.
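As a minimal sketch of the websocket layer of such a setup (leaving out the Bayeux channel handling and the Redis fan-out entirely), an echo server built on gorilla/websocket looks roughly like this; the route and port are made up:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	ReadBufferSize:  1024,
	WriteBufferSize: 1024,
}

// handler upgrades the HTTP request to a websocket and echoes every
// message back. A real Bayeux implementation would parse handshake,
// subscribe and publish messages here instead.
func handler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()
	for {
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		if err := conn.WriteMessage(msgType, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/faye", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```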

In terms of architecture it's very similar to the Node-based Faye library. It was interesting to read the "Ditching Go for Node.js" post on Hacker News. That author moved from Go to Node to improve performance; we actually did the exact opposite and moved from Node to Go for our real-time system. The new Go-based infrastructure handles 8x the traffic per node.

22 upvotes·366 views

Decision at Stream about Go, Cassandra, Python, Databases, Data Stores

tschellenbach

After years of optimizing our existing feed technology, we decided to make a larger leap with Stream 2.0. While the first iteration of Stream was powered by Python and Cassandra, for Stream 2.0 we switched our infrastructure to Go.

The main reason why we switched from Python to Go is performance. Certain features of Stream such as aggregation, ranking and serialization were very difficult to speed up using Python.
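To make that concrete, here is a hedged sketch of the kind of CPU-bound work involved: ranking many feeds in parallel with goroutines, which CPython's global interpreter lock makes hard to parallelize. The Activity type and the data are invented for illustration; this is not Stream's actual ranking code:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Activity is a made-up feed item with a precomputed score.
type Activity struct {
	ID    int
	Score float64
}

// rank sorts a single feed by score, highest first.
func rank(feed []Activity) {
	sort.Slice(feed, func(i, j int) bool { return feed[i].Score > feed[j].Score })
}

func main() {
	feeds := [][]Activity{
		{{ID: 1, Score: 0.4}, {ID: 2, Score: 0.9}},
		{{ID: 3, Score: 0.7}, {ID: 4, Score: 0.1}},
	}
	var wg sync.WaitGroup
	for _, f := range feeds {
		wg.Add(1)
		go func(f []Activity) { // one goroutine per feed, spread across cores
			defer wg.Done()
			rank(f)
		}(f)
	}
	wg.Wait()
	fmt.Println(feeds)
}
```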

We've been using Go since March 2017 and it's been a great experience so far. Go has greatly increased the productivity of our development team. Not only has it improved the speed at which we develop, it's also 30x faster for many components of Stream. Initially we struggled a bit with package management for Go; however, using Dep together with the VG package gave us a great workflow.

Go as a language is heavily focused on performance. The built-in pprof tool is amazing for finding performance issues. Uber's go-torch library is great for visualizing pprof data, and will be bundled into pprof in Go 1.10.
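Wiring pprof into a service takes almost no code; a minimal example using only the standard library (the port is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Profile a running instance with, for example:
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```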

The performance of Go greatly influenced our architecture in a positive way. With Python we often found ourselves delegating logic to the database layer purely for performance reasons. The high performance of Go gave us more flexibility in terms of architecture. This led to a huge simplification of our infrastructure and a dramatic improvement of latency. For instance, we saw a 10 to 1 reduction in web-server count thanks to the lower memory and CPU usage for the same number of requests.

19 upvotes·583 views

Decision at Soluto about Docker Swarm, Kubernetes, Visual Studio Code, Go, TypeScript, JavaScript, C#, F#, .NET

Yshayy, Software Engineer

Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit into a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm, until we decided to adopt Kubernetes with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of the popularity it had originally lost to Node.js, and other teams have been using it to develop microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects.

17 upvotes·1 comment·26.2K views

Decision at Thumbtack about C, Go, Rust, Python

marcoalmeida

One important decision for delivering a platform-independent solution with a low memory footprint and minimal dependencies was the choice of programming language. We considered a few options, from Python (there was already a reasonably large Python code base at Thumbtack) to Go (we were taking our first steps with it), and even Rust (too immature at the time).

We ended up writing it in C. It was easy to meet all the requirements with only one external dependency (for implementing the web server), there were clearly no challenges running it on any of the Linux distributions we were maintaining, and it was arguably the implementation with the smallest memory footprint of the choices above.

14 upvotes·390 views

Decision at Stitch about Go, Clojure, JavaScript, Python, Kubernetes, AWS OpsWorks, Amazon EC2, Amazon Redshift, Amazon S3, Amazon RDS

jakestein, CEO at Stitch

Stitch is run entirely on AWS. All of our transactional databases are run with Amazon RDS, and we rely on Amazon S3 for data persistence in various stages of our pipeline. Our product integrates with Amazon Redshift as a data destination, and we also use Redshift as an internal data warehouse (powered by Stitch, of course).

The majority of our services run on stateless Amazon EC2 instances that are managed by AWS OpsWorks. We recently introduced Kubernetes into our infrastructure to run the scheduled jobs that execute Singer code to extract data from various sources. Although we tend to be wary of shiny new toys, Kubernetes has proven to be a good fit for this problem, and its stability, strong community and helpful tooling have made it easy for us to incorporate into our operations.

While we continue to be happy with Clojure for our internal services, we felt that its relatively narrow adoption could impede Singer's growth. We chose Python both because it is well suited to the task, and it seems to have reached critical mass among data engineers. All that being said, the Singer spec is language agnostic, and integrations and libraries have been developed in JavaScript, Go, and Clojure.

13 upvotes·931 views

Decision at Portainer about Docker, Go

deviantony, Co-founder and Software Engineer at Portainer.io

Go was a natural choice for the backend of the Portainer web application. It makes creating HTTP APIs and services a breeze, with a lot of standard features available in the ecosystem.

One of the main things we like about Go is its synergy with Docker, and how easy that synergy makes it to distribute efficient software:

  • Go can easily compile a program for multiple platforms and OSes (it's just a matter of setting options when starting the compilation, regardless of the execution context)
  • Go binaries are lightweight, fast and can have a low memory footprint

Combining these points with the empty scratch Docker image and multi-platform images, we can distribute Portainer for any environment that runs Docker, and our users can get started with the software in a matter of seconds.

Go is also heavily geared toward the creation of HTTP APIs and services, and it is a language that is easy to read and quite easy to learn, making it a first choice in the context of Portainer.
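For example, a JSON endpoint needs nothing beyond the standard library. This is a generic sketch, not Portainer's actual API:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// statusHandler returns a small JSON payload; no framework required.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "up"})
}

func main() {
	http.HandleFunc("/api/status", statusHandler)
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```

And because the result is a single static binary, building it for another platform is just a matter of setting GOOS and GOARCH before running go build.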

8 upvotes·5.8K views

Decision at Kokoen GmbH about ExpressJS, Node.js, JavaScript, MongoDB, Go, MySQL, Laravel, PHP

ASkenny, CEO at Kokoen GmbH

Back at the start of 2017, we decided to create a web-based tool for on-page SEO analysis of our clients' websites. We had over 2,000 websites to analyze, so we had to perform thousands of requests to fetch every single page from those websites, process the information and store large amounts of data somewhere.

Very soon we realized that our initial choice of scripting language, framework and database (PHP, Laravel and MySQL) was not going to cope efficiently with such a task.

By that time, we were doing some experiments for other projects with a language we had recently gotten to know, Go, so we decided to give it a try and write the crawler in it. It was fantastic: we could process much more data with far less CPU power, and in less time. By using the concurrency abilities the language has to offer, we could also make more HTTP requests in less time.
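A sketch of that concurrency pattern: a bounded pool of goroutines pulling URLs from a channel and issuing HTTP requests. The URLs and worker count are invented for illustration, and the real crawler obviously does more than print status codes:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// fetchAll crawls the given URLs with a fixed number of concurrent workers.
func fetchAll(urls []string, workers int) {
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range jobs {
				resp, err := http.Get(url)
				if err != nil {
					fmt.Println(url, err)
					continue
				}
				resp.Body.Close() // a real crawler would parse the body here
				fmt.Println(url, resp.StatusCode)
			}
		}()
	}
	for _, u := range urls {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
}

func main() {
	fetchAll([]string{"https://example.com", "https://example.org"}, 10)
}
```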

Unfortunately, I have no comparison numbers to show for the performance difference between Go and PHP: the difference was so clear from the beginning that we didn't feel the need to run further comparison tests or document them. We simply switched fully to Go.

There was still a problem: despite the large amount of data we were generating, MySQL was performing very well, but as we added more and more features, and with them more and more different types of data to save, structuring everything correctly in the database became a nightmare for the database architects. It was clear what we had to do next: switch to a NoSQL database. We moved to MongoDB, and it was also fantastic: we spent almost zero time thinking about how to structure the database, and performance also seemed better (but again, I have no comparison numbers to show, due to lack of time).
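For illustration, this is roughly what that schema freedom looks like from Go. The driver choice (the once-popular mgo) and all database, collection and field names are assumptions for the sketch, not details from our actual codebase:

```go
package main

import (
	"log"

	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	pages := session.DB("seo").C("pages")
	// Documents of different shapes can live in the same collection,
	// so adding a feature with new fields needs no schema migration.
	if err := pages.Insert(
		bson.M{"url": "https://example.com", "title": "Example"},
		bson.M{"url": "https://example.org", "h1": []string{"Home"}, "loadMs": 134},
	); err != nil {
		log.Fatal(err)
	}
}
```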

We also decided to switch the website from PHP and Laravel to JavaScript with Node.js and ExpressJS, since working with the JSON data we were now saving in the database would be easier.

As of now, we not only use the tool internally, we have also opened it up for everyone to use for free: https://tool-seo.com

8 upvotes·3K views