What is Apache Thrift and what are its top alternatives?
Top Alternatives to Apache Thrift
- gRPC
gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication.
- Protobuf
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. (A short serialization sketch follows this list.)
- REST
An architectural style for developing web services. A distributed system framework that uses Web protocols and technologies.
- Avro
It is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format. (A short schema sketch follows this list.)
- GraphQL
GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012.
- JSON
JavaScript Object Notation is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language. (A short parse/generate sketch follows this list.)
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. (A short producer/consumer sketch follows this list.)
- JavaScript
JavaScript is best known as the scripting language for Web pages, but it is used in many non-browser environments as well, such as Node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic and supports object-oriented, imperative, and functional programming styles. (A short multi-paradigm sketch follows this list.)
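To make Protobuf's "smaller, faster, simpler" claim concrete, here is a minimal round-trip sketch in TypeScript using the protobufjs npm package; the Person message is an invented example, not taken from any of the posts below.

```typescript
import * as protobuf from "protobufjs";

// Define a message type inline; in a real project this lives in a .proto file.
const root = protobuf.parse(`
  syntax = "proto3";
  message Person {
    string name = 1;
    int32 id = 2;
  }
`).root;

const Person = root.lookupType("Person");

// Encode to a compact binary buffer (numeric field tags instead of text labels).
const buffer = Person.encode(Person.create({ name: "Ada", id: 42 })).finish();

// Decode the buffer back into a structured object.
console.log(Person.decode(buffer).toJSON()); // { name: 'Ada', id: 42 }
```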
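Likewise, a minimal sketch of Avro's "JSON for the types, compact binary for the data" model, using the avsc npm package; the Click record is an invented example.

```typescript
import * as avro from "avsc";

// Avro schemas are plain JSON, as the description above notes.
const type = avro.Type.forSchema({
  type: "record",
  name: "Click",
  fields: [
    { name: "url", type: "string" },
    { name: "timestamp", type: "long" },
  ],
});

// Serialize to Avro's compact binary format...
const buf = type.toBuffer({ url: "https://example.com", timestamp: 1700000000 });

// ...and read it back.
console.log(type.fromBuffer(buf));
```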
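The JSON parse/generate round trip mentioned above is just two built-in functions (shown here in TypeScript):

```typescript
const payload = { movie: "Arrival", rating: 5, tags: ["sci-fi"] };

const text = JSON.stringify(payload); // generate: object -> string
const parsed = JSON.parse(text);      // parse: string -> object

console.log(text);          // {"movie":"Arrival","rating":5,"tags":["sci-fi"]}
console.log(parsed.rating); // 5
```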
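And a minimal producer/consumer sketch against Kafka's partitioned-log model, using the kafkajs npm package; the broker address and topic name are placeholders.

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "demo", brokers: ["localhost:9092"] });

async function main() {
  // Publish a message to a topic (a partitioned, replicated log).
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "events",
    messages: [{ key: "user-1", value: "signed_up" }],
  });
  await producer.disconnect();

  // Consume from the same log; consumer groups track their own offsets,
  // which is what makes the log replayable per subscriber.
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topic: "events", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      console.log(partition, message.key?.toString(), message.value?.toString());
    },
  });
}

main().catch(console.error);
```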
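Finally, a quick sketch of the multi-paradigm point (written in TypeScript, which compiles down to plain JavaScript):

```typescript
// Functional style: first-class functions and closures.
const double = (n: number) => n * 2;
console.log([1, 2, 3].map(double)); // [2, 4, 6]

// Object-oriented style: class syntax layered over the prototype chain.
class Counter {
  private count = 0;
  increment() { return ++this.count; }
}
console.log(new Counter().increment()); // 1

// Imperative style: plain loops and mutation.
let total = 0;
for (const n of [1, 2, 3]) total += n;
console.log(total); // 6
```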
Apache Thrift alternatives & related posts
Pros of gRPC
- High performance (24)
- The future of API (15)
- Easy setup (13)
- Contract-based (5)
- Polyglot (4)
- Garbage (2)
related gRPC posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, gRPC and Envoy.
At Segment, we build new services in Go by default. The language is simple, so new team members quickly ramp up on a codebase. The toolchain is fast, so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast, so it performs great at scale.
For the newest round of APIs we adopted the gRPC service #framework.
The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mappings.
With a well-designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
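As a rough illustration of the TypeScript side, a generated client boils down to something like the sketch below, here using the @grpc/grpc-js and @grpc/proto-loader packages to load the definitions at runtime. The workspaces.proto file, the segment.Workspaces service and its GetWorkspace method are hypothetical stand-ins, since the post doesn't publish Segment's actual definitions.

```typescript
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Load a (hypothetical) service definition; codegen tools produce the same
// kind of client with static types instead of the `any` cast used here.
const definition = protoLoader.loadSync("workspaces.proto");
const proto = grpc.loadPackageDefinition(definition) as any;

const client = new proto.segment.Workspaces(
  "segmentapis.com:443",
  grpc.credentials.createSsl()
);

// Unary RPC: request message in, response message (or error) back.
client.GetWorkspace({ name: "my-workspace" }, (err: Error | null, res: any) => {
  if (err) throw err;
  console.log(res);
});
```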
For the API gateway and RPC we adopted the Envoy service proxy.
The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.
The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.
We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC, .proto, and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.
I used GraphQL extensively at a previous employer a few years ago and really appreciated the data-driven schema, alongside the many other benefits it provided. At that time, it seemed set to replace RESTful APIs, and many companies were adopting it.
However, as of late, it seems like interest has been waning for GraphQL as opposed to increasing as I had assumed it would. Am I missing something here? What is the current perspective regarding this technology?
Currently, I'm working with gRPC and was curious as to the state of everything now.
related Protobuf posts
We've already been monitoring Agones for a few years now, but we only adopted Kubernetes in mid-2021, so we could never use it until then. Transitioning to Kubernetes has overall been a blast. There's definitely a steep learning curve associated with it, but for us it was certainly worth it. And Agones definitely plays a part in it.
We previously scheduled our game servers with Docker Compose and Docker Swarm, but that always felt a little brittle and like a really "manual" process, even though everything was already dockerized. For matchmaking, we didn't have any solution yet.
After we did tons of local testing, we deployed our first production-ready Kubernetes cluster with #kubespray and deployed Agones (with Helm) on it. The installation was very easy and the official chart had just the right amount of knobs for us!
The aspect we were most stunned by is how seamlessly Agones integrates into the Kubernetes infrastructure. It reuses existing mechanisms like the health pings and extends them with more resource states and other properties that are unique to game servers. But you're still free to use it however you like: one GameServer per game session, one GameServer for multiple game sessions (in parallel or reusing existing servers), custom allocation mechanisms, webhook-based scaling, ... we haven't run into any dead ends yet.
One thing I was a little worried about in the beginning was the SDK integration, as there was no official one for Minecraft/Java. And the two available unofficial ones didn't satisfy our requirements for the SDK. Therefore, we went and developed our own SDK and ... it was super easy! Agones does publish their Protobuf files, so we could generate the stubs with #Protoc. The existing documentation regarding client SDKs from Agones was a great help in writing our own documentation for the interface methods.
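The post's own SDK was written in Java for Minecraft, but as a rough sketch of what talking to the Agones sidecar via its published Protobuf files can look like, here is a TypeScript version using @grpc/grpc-js and @grpc/proto-loader instead of ahead-of-time #Protoc codegen. The sdk.proto path, the agones.dev.sdk package name and the default local gRPC port 9357 are assumptions based on Agones' published SDK definitions.

```typescript
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Load Agones' published SDK definition (local file path assumed).
const definition = protoLoader.loadSync("sdk.proto");
const proto = grpc.loadPackageDefinition(definition) as any;

// The SDK server runs as a sidecar next to the game server,
// so the client talks to localhost (default gRPC port assumed).
const sdk = new proto.agones.dev.sdk.SDK(
  "localhost:9357",
  grpc.credentials.createInsecure()
);

// Tell Agones this game server is ready to accept players.
sdk.Ready({}, (err: Error | null) => {
  if (err) throw err;
  console.log("GameServer marked Ready");
});
```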
And they even have excellent tooling for testing your own SDK implementations. With the use of Testcontainers we could just spin up the local SDK testing image for each of the integration tests and could confirm that our SDK is working fine. We discovered a very small inconsistency for one of the interface methods, submitted an issue and a corresponding PR and it was merged within less than 24 hours.
We've now been using Agones for a few months and it has proven to be very reliable, easy to manage and just a great tool in general.
Pros of REST
- Popularity (4)
related REST posts
related Avro posts
Pros of GraphQL
- Schemas defined by the requests made by the user (75)
- Will replace RESTful interfaces (63)
- The future of APIs (62)
- The future of databases (49)
- Self-documenting (13)
- Get many resources in a single request (12)
- Query language (6)
- Ask for what you need, get exactly that (6)
- Fetch different resources in one request (3)
- Type system (3)
- Evolve your API without versions (3)
- Ease of client creation (2)
- GraphiQL (2)
- Easy setup (2)
- "Open" document (1)
- Fast prototyping (1)
- Supports subscriptions (1)
- Standard (1)
- Good for apps that query at build time (SSR/Gatsby) (1)
- Describe your data (1)
- Better versioning (1)
- Backed by Facebook (1)
- Easy to learn (1)
Cons of GraphQL
- Hard to migrate from GraphQL to another technology (4)
- More code to type (4)
- Takes longer to build compared to schemaless (2)
- No support for caching (1)
- All the pros sound like NFT pitches (1)
- No support for streaming (1)
- Works just like any other API at runtime (1)
- N+1 fetch problem (1)
- No built-in security (1)
related GraphQL posts
I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and to rate the movies you already saw. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.
For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:
Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have
GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option, as it feels more natural to write APIs, it improves development velocity, and by definition it fixes the over-fetching and under-fetching problems that are so common in REST APIs (see the sketch after this list). On top of that, the community is getting bigger and bigger.
MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for at least a very long list of use cases, such as the one I needed for my website.
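As a minimal sketch of that over-fetching point: the client names exactly the fields it needs and the server returns nothing more. The /graphql endpoint and Movie fields below are hypothetical, loosely modeled on the MovieGeeks idea.

```typescript
const query = `
  query {
    movies(watched: false) {
      title
      rating
    }
  }
`;

async function main() {
  // GraphQL is transport-agnostic; over HTTP it is usually a single POST.
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  const { data } = await res.json();
  // data.movies contains only title and rating; no other Movie fields
  // cross the wire, which is the fix for over-fetching.
  console.log(data.movies);
}

main().catch(console.error);
```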
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as a primary API. There's a Node.js back end to the front end, which is mainly for server-side rendering as well.
Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
Pros of JSON
- Simple (5)
- Widely supported (4)
related JSON posts
Application and Data: Since my personal website ( https://alisoueidan.com ) is a SPA, I chose Vue.js as the framework to create it. After a short skeptical phase I immediately fell in love with the single-file component concept! I also used vuex for state management, which makes working with several components that communicate with each other even more fun and convenient. Of course, using Vue requires using JavaScript as well, since it is the basis of it.
For markup and style, I used Pug and Sass, since they're the perfect match for me. I love the clean and strict syntax of both of them, and even more that their structure is almost similar. Also, both of them come with expanded functionality such as mixins, loops and so on, compared to their "siblings" (HTML and CSS). Both of them enforce nesting and prevent untidy code, which can be a huge advantage when working in teams. I used JSON to store data (since the data quantity on my website is moderate); JSON also works well in combination with Pug, for example using for loops over the JSON objects.
To send my contact form I used PHP, since sending emails using PHP is still relatively convenient, simple and easily done.
DevOps: Of course, I used Git for version management (which I do even in smaller projects like my website, just to have an additional backup of my code). On top of that I used GitHub, since it now supports private repositories for free accounts (which I am using for my own). I use Babel for ES6 functionality such as arrow functions and so on, without losing cross-browser compatibility.
Side note: I used npm for package management. 🎉
Business Tools: I use Asana to organize my projects. This is a big advantage to me, even if I work alone, since "private" projects can get interrupted for some time. By using Asana I still know (even after months of not touching a project) what I've done, which task I was last working on and what is still to do. When working in teams (for enterprise I'd pick Jira instead), Asana is of course a tool I really love to use as well. All the graphics on my website are SVGs, which I created with Adobe Illustrator and adjusted within the SVG code or by using JavaScript or CSS (Sass).
I use Visual Studio Code because at this point it is mature software and I can do practically everything using it.
It's free and open source: the project is hosted on GitHub and it's free to download, fork, modify and contribute to.
Multi-platform: you can download binaries for different platforms, including Windows (x64), macOS and Linux (.rpm and .deb packages).
Lightweight: it runs smoothly on different devices, with average memory and CPU usage. It starts almost immediately and it's very stable.
Extended language support: supports by default the majority of the most-used languages and syntaxes, like JavaScript, HTML, C#, Swift, Java, PHP, Python and others. Also, VS Code supports different file types associated with projects, like .ini, .properties, XML and JSON files.
Integrated tools: includes an integrated terminal, debugger, problem list and console output inspector. The project navigator sidebar is simple and powerful: you can manage your files and folders with ease. The command palette helps you find commands by text. The search widget has a powerful auto-complete feature to search and find your files.
Extensible and configurable: there are many extensions available for every supported language, including syntax highlighters, IntelliSense and code completion, and debuggers. There are also extensions to manage application configuration and architecture, like Docker and Jenkins.
Integrated with Git: you can visually manage your project repositories, pull, commit and push your changes, and easily resolve conflicts. (There is also support for SVN (Subversion) users via plugin.)
Pros of Kafka
- High-throughput (126)
- Distributed (119)
- Scalable (92)
- High-performance (86)
- Durable (66)
- Publish-subscribe (38)
- Simple to use (19)
- Open source (18)
- Written in Scala and Java; runs on the JVM (12)
- Message broker + streaming system (9)
- KSQL (4)
- Avro schema integration (4)
- Robust (4)
- Supports multiple clients (3)
- Extremely good parallelism constructs (2)
- Partitioned, replayable log (2)
- Simple publisher / multi-subscriber model (1)
- Fun (1)
- Flexible (1)
Cons of Kafka
- Non-Java clients are second-class citizens (32)
- Needs ZooKeeper (29)
- Operational difficulties (9)
- Terrible packaging (5)
related Kafka posts
To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are composed of a fleet of 450 r4.8xl EC2 instances. Presto clusters together have over 100 TBs of memory and 14K vcpu cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
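A hypothetical sketch of that pairing logic: a cluster crash shows up in the Kafka topic as query-submitted events that never get a matching query-finished event. The event shape below is an assumption for illustration.

```typescript
interface QueryEvent {
  queryId: string;
  kind: "submitted" | "finished";
}

// Replay the event log and keep whatever was submitted but never finished.
function unfinishedQueries(events: QueryEvent[]): string[] {
  const pending = new Set<string>();
  for (const e of events) {
    if (e.kind === "submitted") pending.add(e.queryId);
    else pending.delete(e.queryId);
  }
  return [...pending]; // likely lost to a cluster crash
}

// Example: query B was submitted but never finished.
console.log(unfinishedQueries([
  { queryId: "A", kind: "submitted" },
  { queryId: "B", kind: "submitted" },
  { queryId: "A", kind: "finished" },
])); // ["B"]
```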
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
Pros of JavaScript
- Can be used on frontend/backend (1.7K)
- It's everywhere (1.5K)
- Lots of great frameworks (1.2K)
- Fast (898)
- Lightweight (745)
- Flexible (425)
- You can't get a device today that doesn't run JS (392)
- Non-blocking I/O (286)
- Ubiquitousness (237)
- Expressive (191)
- Extended functionality to web pages (55)
- Relatively easy language (49)
- Executed on the client side (46)
- Relatively fast to the end user (30)
- Pure JavaScript (25)
- Functional programming (21)
- Async (15)
- Full-stack (13)
- Setup is easy (12)
- Future language of the web (12)
- It's everywhere (12)
- Because I love functions (11)
- JavaScript is the new PHP (11)
- Like it or not, JS is part of the web standard (10)
- Expansive community (9)
- Everyone uses it (9)
- Can be used in backend, frontend and DB (9)
- Easy (9)
- Most popular language in the world (8)
- Powerful (8)
- Can be used both as frontend and backend (8)
- For the good parts (8)
- No need to use PHP (8)
- Easy to hire developers (8)
- Agile, packages simple to use (7)
- Love-hate relationship (7)
- Photoshop has 3 JS runtimes built in (7)
- Evolution of C (7)
- It's fun (7)
- Hard not to use (7)
- Versatile (7)
- It's fun and fast (7)
- Nice (7)
- Popularized class-less architecture & lambdas (7)
- Supports lambdas and closures (7)
- It lets me use Babel & TypeScript (6)
- Can be used on frontend/backend/mobile/create pro UI (6)
- Client-side JS uses the visitor's CPU to save server resources (6)
- Easy to make something (6)
- ClojureScript (5)
- Promise relationship (5)
- Stockholm syndrome (5)
- Function expressions are useful for callbacks (5)
- Scope manipulation (5)
- Everywhere (5)
- Client processing (5)
- What to add (5)
- Because it is so simple and lightweight (4)
- Only programming language on browser (4)
- Hard to learn (1)
- Not the best (1)
- Easy to understand (1)
- Easy to learn (1)
Cons of JavaScript
- A constant moving target, too much churn (22)
- Horribly inconsistent (20)
- JavaScript is the new PHP (15)
- No ability to monitor memory utilization (9)
- Shows zero output in case of ANY error (8)
- Thinks strange results are better than errors (7)
- Can be ugly (6)
- No GitHub (3)
- Slow (2)
- HORRIBLE DOCUMENTS, faulty code, repo has bugs (0)
related JavaScript posts
Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.
But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.
But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.
Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot. I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.
How Uber developed Jaeger, the open source, end-to-end distributed tracing system that is now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark