
Alternatives to Apache Thrift

gRPC, Protobuf, REST, Avro, and GraphQL are the most popular alternatives and competitors to Apache Thrift.

What is Apache Thrift and what are its top alternatives?

The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and other languages.
Apache Thrift is a tool in the Serialization Frameworks category of a tech stack.
Apache Thrift is an open source tool with 8.7K GitHub stars and 3.7K GitHub forks. Here’s a link to Apache Thrift's open source repository on GitHub.
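
The description above is the whole workflow in a nutshell: you describe a service once in a Thrift IDL file, run the code generator for each target language, and the runtime library handles transport and serialization. Below is a minimal Python client sketch; the `thrift.*` imports are the real runtime library, while `UserService` and its `getUser` method are hypothetical stand-ins for whatever `thrift --gen py` would produce from your own IDL.

```python
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

# Hypothetical module produced by `thrift --gen py user_service.thrift`.
from userservice import UserService

# Stack a buffered transport on a plain TCP socket and use the binary protocol.
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = UserService.Client(protocol)

transport.open()
print(client.getUser(42))  # the remote call reads like a local method call
transport.close()
```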

Top Alternatives to Apache Thrift

  • gRPC

    gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking...

  • Protobuf

    Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler.

  • REST

    An architectural style for developing web services. A distributed system framework that uses Web protocols and technologies.

  • Avro

    It is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.

  • GraphQL

    GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012.

  • JSON

    JavaScript Object Notation is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language.

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

  • MessagePack

    It is an efficient binary serialization format. It lets you exchange data among multiple languages, like JSON, but it's faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves (see the sketch just below this list).
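
To see the size difference MessagePack claims, here is a minimal Python sketch using the `msgpack` package; the record being serialized is just an illustration.

```python
import json

import msgpack  # pip install msgpack

record = {"id": 7, "name": "thrift", "tags": ["rpc", "serialization"]}

packed = msgpack.packb(record)          # compact binary encoding
as_json = json.dumps(record).encode()   # the equivalent JSON payload

print(len(packed), "bytes as MessagePack vs", len(as_json), "bytes as JSON")
print(msgpack.unpackb(packed))          # round-trips back to the same dict
```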

Apache Thrift alternatives & related posts

gRPC

A high performance, open-source universal RPC framework

PROS OF GRPC
  • High performance (19)
  • The future of APIs (10)
  • Easy setup (10)
  • Contract-based (4)
  • Polyglot (3)

related gRPC posts

Shared insights on gRPC, SignalR, and .NET

We need to interact from several different (remote) web applications with a client-side application (a .NET Framework .exe, a Windows console app in our controlled environment). From the web applications we need to send and receive data and invoke methods on the client-side .exe in response to JavaScript events such as a user's onclick. SignalR is one of the .NET alternatives for doing that, but it adds overhead for what we need. Is it better to add SignalR to both the client-side application and the remote web application, or to use gRPC, since it sounds lighter and is multilingual?

Either way, SignalR or gRPC would always be sending and receiving data on the client side (from browser to .exe and back to the browser), and the web application is used for graphical visualization of the data for the user. There is no need for the local .exe to send data to or interact with the remote web API. Which architecture or framework do you suggest for this case?
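
For a concrete sense of what the gRPC option involves, here is a minimal Python server sketch (the question is .NET-specific, but the shape is the same in any gRPC language). The `control_pb2` / `control_pb2_grpc` modules and the `ControlService` API are hypothetical stand-ins for code generated from your own .proto; only the `grpc` calls are the real library.

```python
from concurrent import futures

import grpc

# Hypothetical modules generated by `python -m grpc_tools.protoc` from control.proto.
import control_pb2
import control_pb2_grpc


class ControlService(control_pb2_grpc.ControlServiceServicer):
    def Invoke(self, request, context):
        # Handle a command relayed by the web app (e.g. triggered by a user's onclick).
        return control_pb2.InvokeReply(status="ok")


server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
control_pb2_grpc.add_ControlServiceServicer_to_server(ControlService(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```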

Shared insights on Kafka and gRPC at Uber

By mid-2015, Uber's rider growth, coupled with its cadence of releasing new services like Eats and Freight, was putting pressure on the infrastructure. To allow the decoupling of consumption from production, and to add an abstraction layer between users, developers, and infrastructure, Uber built Catalyst, a serverless internal service mesh.

Uber decided to build its own serverless solution, rather than using something like AWS Lambda, to get the speed it needed for its global production environments as well as introspectability.

Protobuf

Google's data interchange format
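
Protobuf's workflow mirrors Thrift's: describe messages in a .proto file, generate code with protoc, then serialize with the runtime. Here is a minimal Python sketch, where `person_pb2` stands in for a hypothetical module generated by `protoc --python_out=. person.proto`; the serialization calls are the real protobuf runtime API.

```python
# Hypothetical generated module; SerializeToString/FromString are the runtime API.
import person_pb2

person = person_pb2.Person(name="Ada", id=7)
data = person.SerializeToString()             # compact binary wire format
restored = person_pb2.Person.FromString(data)
assert restored.name == "Ada" and restored.id == 7
```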

REST

A software architectural style

PROS OF REST
  • Popularity (2)
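
In practice, a REST client just issues standard HTTP verbs against resource URLs. A minimal Python sketch with the `requests` package; the endpoint below is a placeholder, not a real API.

```python
import requests

# GET a representation of a resource; the URL is a placeholder.
resp = requests.get("https://api.example.com/users/42", timeout=5)
resp.raise_for_status()
user = resp.json()  # representations are typically JSON
print(user)

# Updating the same resource uses another standard verb on the same URL.
requests.put("https://api.example.com/users/42", json={"name": "Ada"}, timeout=5)
```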

Avro

A data serialization framework
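
As the earlier description notes, Avro schemas are plain JSON and the records themselves are written in a compact binary form. A minimal Python sketch using the third-party `fastavro` package; the schema and record are just an illustration.

```python
from io import BytesIO

import fastavro  # pip install fastavro

# The schema is ordinary JSON-shaped data.
schema = fastavro.parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "id", "type": "int"},
    ],
})

buf = BytesIO()
fastavro.writer(buf, schema, [{"name": "Ada", "id": 7}])  # compact binary container

buf.seek(0)
for record in fastavro.reader(buf):
    print(record)
```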

GraphQL

A data query language and runtime

PROS OF GRAPHQL
  • Schemas defined by the requests made by the user (69)
  • Will replace RESTful interfaces (62)
  • The future of APIs (59)
  • The future of databases (47)
  • Self-documenting (12)
  • Get many resources in a single request (11)
  • Ask for what you need, get exactly that (5)
  • Query language (4)
  • Evolve your API without versions (3)
  • Fetch different resources in one request (3)
  • Type system (3)
  • GraphiQL (2)
  • Ease of client creation (2)
  • Easy setup (2)
  • Good for apps that query at build time (SSR/Gatsby) (1)
  • Backed by Facebook (1)
  • Easy to learn (1)
  • "Open" document (1)
  • Better versioning (1)
  • Standard (1)
  • Describe your data (1)
  • Fast prototyping (1)
CONS OF GRAPHQL
  • Hard to migrate from GraphQL to another technology (3)
  • More code to type (3)
  • Works just like any other API at runtime (1)
  • Takes longer to build compared to schemaless (1)

related GraphQL posts

Shared insights on Node.js, GraphQL, and MongoDB

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for saving the movies you want to see and rating the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:

1. Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.

2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, and by definition it fixes the over-fetching and under-fetching problems that are so common in REST APIs. On top of that, the community is getting bigger and bigger (a small query sketch follows this list).

3. MongoDB was my choice for the database, as I already have a lot of experience working with it and because, despite the bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, such as the one I needed for my website.
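
The over-fetching point is easiest to see in an actual query: the client names exactly the fields it wants and gets nothing else back. Below is a minimal Python sketch posting a query over HTTP with `requests`; the endpoint and field names are hypothetical, not part of the project described above.

```python
import requests

# Ask only for the fields the UI needs; nothing more comes back.
query = """
query {
  movie(id: "1") {
    title
    rating
  }
}
"""

resp = requests.post("https://example.com/graphql", json={"query": query}, timeout=5)
resp.raise_for_status()
print(resp.json()["data"]["movie"])
```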

Nick Rockwell
SVP, Engineering at Fastly · 44 upvotes · 1.7M views

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) stack, and the front end framework in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company that it was called MIT FIVE when it launched. I had been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end to the front end, which is mainly for server-side rendering as well.

Behind that, the main repository for the GraphQL server is a big table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

JSON

A lightweight data-interchange format

PROS OF JSON
  • Simple (4)
  • Widely supported (4)
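
The "simple" and "widely supported" points above come cheap: a full round trip needs nothing beyond the standard library. A minimal Python sketch with the built-in json module; the payload is just an illustration.

```python
import json

event = {"type": "click", "target": "save-button", "count": 3}

text = json.dumps(event)     # human-readable text for the wire or a file
restored = json.loads(text)  # trivially parsed back by any language
assert restored == event
```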

related JSON posts

Ali Soueidan
Creative Web Developer at Ali Soueidan · 18 upvotes · 788.8K views

Application and Data: Since my personal website ( https://alisoueidan.com ) is a SPA, I chose Vue.js as the framework to create it. After a short skeptical phase I immediately fell in love with the single-file component concept! I also used vuex for state management, which makes working with several components that communicate with each other even more fun and convenient. Of course, using Vue requires using JavaScript as well, since Vue is based on it.

For markup and style I used Pug and Sass, since they’re the perfect match for me. I love the clean and strict syntax of both of them, and even more that their structure is almost similar. Also, both of them come with expanded functionality such as mixins, loops and so on compared to their “siblings” (HTML and CSS). Both of them rely on nesting and prevent untidy code, which can be a huge advantage when working in teams. I used JSON to store data (since the data quantity on my website is moderate) – JSON also works well in combination with Pug, for example using for loops based on the JSON objects.

To send my contact form I used PHP, since sending emails using PHP is still relatively convenient, simple and easily done.

DevOps: Of course, I used Git for version management (which I do even in smaller projects like my website, just to have an additional backup of my code). On top of that I used GitHub, since it now supports private repositories for free accounts (which I am using for my own). I use Babel to get ES6 functionality such as arrow functions and so on without losing cross-browser compatibility.

Side note: I used npm for package management. 🎉

Business Tools: I use Asana to organize my projects. This is a big advantage to me, even though I work alone, since “private” projects can get interrupted for some time. By using Asana I still know (even after months of not touching a project) what I’ve done, which task I was last working on, and what still needs to be done. When working in teams (for enterprise I’d go with Jira instead), Asana is of course a tool which I really love to use as well. All the graphics on my website are SVGs which I created with Adobe Illustrator and adjusted within the SVG code or by using JavaScript or CSS (Sass).


I use Visual Studio Code because at this time it is mature software and I can do practically everything using it.

• It's free and open source: the project is hosted on GitHub and it’s free to download, fork, modify and contribute to.

• Multi-platform: you can download binaries for different platforms, including Windows (x64), macOS and Linux (.rpm and .deb packages).

• Lightweight: it runs smoothly on different devices, with average memory and CPU usage. It starts almost immediately and is very stable.

• Extended language support: it supports, by default, the majority of the most-used languages and syntaxes, like JavaScript, HTML, C#, Swift, Java, PHP, Python and others. VS Code also supports different file types associated with projects, like .ini, .properties, XML and JSON files.

• Integrated tools: it includes an integrated terminal, debugger, problem list and console output inspector. The project navigator sidebar is simple and powerful: you can manage your files and folders with ease. The command palette helps you find commands by text. The search widget has a powerful auto-complete feature to search and find your files.

• Extensible and configurable: there are many extensions available for every supported language, including syntax highlighters, IntelliSense and code completion, and debuggers. There are also extensions to manage application configuration and architecture, like Docker and Jenkins.

• Integrated with Git: you can visually manage your project repositories, pull, commit and push your changes, and easily resolve conflicts. (There is also support for SVN (Subversion) users via plugin.)

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system

PROS OF KAFKA
  • High-throughput (120)
  • Distributed (114)
  • Scalable (86)
  • High-performance (79)
  • Durable (64)
  • Publish-subscribe (35)
  • Simple to use (18)
  • Open source (14)
  • Written in Scala and Java; runs on the JVM (10)
  • Message broker + streaming system (6)
  • Avro schema integration (4)
  • Supports multiple clients (2)
  • Robust (2)
  • KSQL (2)
  • Partitioned, replayable log (2)
  • Fun (1)
  • Extremely good parallelism constructs (1)
  • Simple publisher / multi-subscriber model (1)
  • Flexible (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (27)
  • Needs ZooKeeper (26)
  • Operational difficulties (7)
  • Terrible packaging (2)

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data
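
As a side note on the Kafka leg of that pipeline: publishing an event from Python takes only a few lines with the third-party `kafka-python` client. The broker address, topic name, and payload below are placeholders, not details from the Stitch Fix setup.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# value_serializer turns each dict into JSON bytes before it hits the wire.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("client-events", {"user_id": 42, "action": "checkout"})
producer.flush()  # block until the event has actually been delivered
```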

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.

MessagePack

A binary serialization format

PROS OF MESSAGEPACK
  • Lightweight (1)