
Alternatives to gRPC

GraphQL, RabbitMQ, Kafka, REST, and MQTT are the most popular alternatives and competitors to gRPC.

What is gRPC and what are its top alternatives?

gRPC is a modern, open source, high-performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers, with pluggable support for load balancing, tracing, health checking, and more.
gRPC is a tool in the Remote Procedure Call (RPC) category of a tech stack.
gRPC is an open source tool with 26.3K GitHub stars and 6.6K GitHub forks. Here's a link to gRPC's open source repository on GitHub: https://github.com/grpc/grpc
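To give a feel for the RPC model before looking at the alternatives, here is a minimal sketch of a gRPC service in Python using the grpcio package. It assumes the `helloworld_pb2` and `helloworld_pb2_grpc` modules have been generated from gRPC's example helloworld.proto; it is an illustrative sketch, not a production setup.

```python
# Minimal gRPC server sketch. Assumes stubs generated from the helloworld.proto
# example, e.g.:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. helloworld.proto
from concurrent import futures

import grpc
import helloworld_pb2
import helloworld_pb2_grpc


class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        # Return a strongly typed protobuf response for the incoming request.
        return helloworld_pb2.HelloReply(message=f"Hello, {request.name}!")


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```

The contract lives in the .proto file, and clients in any supported language call `SayHello` as if it were a local function; that strongly typed, binary contract is the main thing the alternatives below trade away or replace.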

Top Alternatives to gRPC

gRPC alternatives & related posts

related GraphQL posts

Shared insights on Node.js, GraphQL, and MongoDB

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for saving the movies you want to see and rating the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL, and MongoDB:

  1. Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.

  2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, and by definition it fixes the over-fetching and under-fetching problems that are so common with REST APIs (a sketch of such a query follows this list). On top of that, the community is getting bigger and bigger.

  3. MongoDB was my choice for the database, as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, including the one I needed for my website.
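As a rough sketch of the over-fetching point above, this is what a client-specified query against a hypothetical MovieGeeks GraphQL endpoint might look like from Python. The URL, query name, and field names are illustrative assumptions, not the project's actual schema.

```python
import requests

# Hypothetical endpoint and schema, purely to illustrate asking for exactly
# the fields the view needs (no over- or under-fetching).
GRAPHQL_URL = "https://example.com/graphql"

query = """
query WatchlistTitles {
  watchlist {
    title
    rating
  }
}
"""

response = requests.post(GRAPHQL_URL, json={"query": query}, timeout=10)
response.raise_for_status()
for movie in response.json()["data"]["watchlist"]:
    print(movie["title"], movie["rating"])
```

The client decides which fields come back, so adding a new view usually means writing a new query rather than a new endpoint.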

Nick Rockwell, CTO at NY Times

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) stack and the front end framework in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember, from being outside the company, when that was called NYT5, when it had launched. I'd been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. Right now, the new front end is React-based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's also a Node.js back end for the front end, which is mainly for server-side rendering.

Behind that, the main repository for the GraphQL server is a big-table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.
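For readers unfamiliar with that last piece, reading off a Kafka topic looks roughly like this in Python with the kafka-python client. The broker address, topic, and group id are made-up illustrations of the pattern, not the Times' actual configuration.

```python
import json

from kafka import KafkaConsumer

# Hypothetical broker and topic; a service like the one described above would
# consume content events and use them to populate its own serving store.
consumer = KafkaConsumer(
    "content-events",
    bootstrap_servers="localhost:9092",
    group_id="bodega-like-store",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    print(f"offset={message.offset} key={message.key} event={event}")
```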


related RabbitMQ posts

James Cunningham, Operations Engineer at Sentry

Shared insights on Celery and RabbitMQ

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
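As a minimal sketch of that setup (not Sentry's actual code), defining Celery tasks on top of a RabbitMQ broker looks roughly like this; the app name, task names, and broker URL are illustrative assumptions.

```python
from celery import Celery

# Hypothetical app: RabbitMQ as the broker, tasks declared as plain functions.
app = Celery("offline_tasks", broker="amqp://guest:guest@localhost:5672//")


@app.task
def process_event(event_id):
    # Placeholder for "process this event, pretty please".
    print(f"processing event {event_id}")


@app.task
def send_digest_email(user_id):
    # Placeholder for "send all of these cool people some emails".
    print(f"emailing user {user_id}")


# A producer enqueues work without waiting for the result:
# process_event.delay(42)
```

Because the broker does the routing, adding more RabbitMQ servers (or, as described above, federating them) changes where messages sit, not how tasks are written.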

#MessageQueue

Tim Abbott, Founder at Zulip

Shared insights on RabbitMQ, Python, and Redis

We've been using RabbitMQ as Zulip's queuing system since we needed a queuing system. What I like about it is that it scales really well and has good libraries for a wide range of platforms, including our own Python. So aside from getting it running, we've had to put basically 0 effort into making it scale for our needs.

However, there are several things that could be better about it:

* Its error messages are absolutely terrible; if one of our users ever ends up getting an error with RabbitMQ (even for simple things like a misconfigured hostname), they always end up needing help from the Zulip team, because the error logs are just inscrutable. As an open source project, we've handled this issue by carefully scripting the installation to be a failure-proof configuration (in this case, setting the RabbitMQ hostname to 127.0.0.1, so that no user-controlled configuration can break it; see the sketch after this list). But it was a real pain to get there, and the process of determining we needed to do that caused a significant amount of pain to folks installing Zulip.
* The pika library for Python takes a lot of time to start up a RabbitMQ connection; this means that Zulip server restarts are more disruptive than would be ideal.
* It's annoying that you need to run the rabbitmqctl management commands as root.
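A sketch of what that pinned-hostname connection looks like with pika; the queue name and message are illustrative, not Zulip's actual code.

```python
import pika

# Connect to RabbitMQ on 127.0.0.1, as in the locked-down configuration
# described above, so user-controlled hostname settings cannot break it.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="127.0.0.1"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)
channel.basic_publish(exchange="", routing_key="events", body=b"hello")
connection.close()
```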

But overall, I like that it has clean, clear semantics and high scalability, and I haven't been tempted to do the work to migrate to something like Redis (which has its own downsides).

Kafka

Distributed, fault-tolerant, high-throughput pub-sub messaging system
related Kafka posts

Eric Colson, Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
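As a generic sketch of that decoupled storage/compute pattern (not Stitch Fix's pipeline), a PySpark job reading raw events from S3 and writing a transformed table back might look like the following; the bucket, paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical bucket and paths; the point is that storage (S3) lives outside
# the compute cluster, so the cluster can be resized or replaced freely.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/")
daily_counts = (
    events.withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .count()
)
daily_counts.write.mode("overwrite").parquet(
    "s3a://example-bucket/warehouse/daily_counts/"
)
```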

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize the models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


#DataScience #DataStack #Data

John Kodumal, CTO at LaunchDarkly

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.

REST

A software architectural style
SignalR

A library for ASP.NET developers that makes developing real-time web functionality easy
Protobuf

Google's data interchange format
JSON-RPC

A remote procedure call protocol encoded in JSON
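To make the contrast with gRPC concrete: a JSON-RPC 2.0 call is just a JSON document sent over some transport, with no generated stubs or binary encoding. Here is a minimal sketch in Python using the spec's own subtract example; the endpoint URL is a hypothetical placeholder.

```python
import requests

# JSON-RPC 2.0 request: a method name, positional params, and an id so the
# response can be matched back to the request.
payload = {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}

response = requests.post("https://example.com/rpc", json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # e.g. {"jsonrpc": "2.0", "result": 19, "id": 1}
```

This simplicity is the trade-off: there is no schema or code generation, so clients and servers must agree on method names and parameter shapes by convention rather than by a compiled contract.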