Alternatives to GraphQL

gRPC, Falcor, React, graphql.js, and MongoDB are the most popular alternatives and competitors to GraphQL.

What is GraphQL and what are its top alternatives?

GraphQL is a data query language and runtime designed at Facebook, where it has been used to request and deliver data to mobile and web apps since 2012.
GraphQL is a tool in the Query Languages category of a tech stack.
GraphQL is an open source tool; its source repository is hosted on GitHub.
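For example, a client asks for exactly the fields it needs in a single request. The sketch below sends a GraphQL query over plain HTTP with fetch; the endpoint and the user/posts fields are hypothetical, not a real schema.

```typescript
// Hedged sketch: posting a GraphQL query over HTTP.
// The endpoint URL and the user/posts fields are illustrative assumptions.
const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
      posts { title }
    }
  }
`;

const response = await fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query, variables: { id: "1" } }),
});

// A GraphQL response carries the requested data and any errors side by side.
const { data, errors } = await response.json();
console.log(data?.user?.name, errors);
```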

Top Alternatives to GraphQL

  • gRPC

    gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking...

  • Falcor

    Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.

  • React

    Lots of people use React as the V in MVC. Since React makes no assumptions about the rest of your technology stack, it's easy to try it out on a small feature in an existing project.

  • graphql.js

    Lightest GraphQL client with intelligent features. You can download graphql.js directly, or you can use Bower or NPM.

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

  • REST

    An architectural style for developing web services. A distributed system framework that uses Web protocols and technologies.

  • Elasticsearch

    Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).

  • OData

    It is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. It helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc.

GraphQL alternatives & related posts

gRPC

2.2K
64
A high performance, open-source universal RPC framework
PROS OF GRPC
  • 25
    High performance
  • 15
    The future of API
  • 13
    Easy setup
  • 5
    Contract-based
  • 4
    Polyglot
  • 2
    Garbage
CONS OF GRPC
    No cons listed yet

    related gRPC posts

    Noah Zoschke
    Engineering Manager at Segment

    We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, GRPC and Envoy.

    At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.

    For the newest round of APIs we adopted the GRPC service #framework.

    The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos and lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mapping.

    With a well-designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.

    For the API gateway and RPC we adopted the Envoy service proxy.

    The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream GRPC request. The upstream GRPC servers are running an Envoy sidecar configured for Datadog stats.

    The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.

    We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. GRPC and .proto and the Go implementation feels better designed and implemented. Thanks to the GRPC tooling and ecosystem you can generate Swagger from .protos, but it’s effectively impossible to go the other way.
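    As a rough illustration of the workflow the post describes (a .proto file driving type-safe clients), here is a minimal TypeScript client sketch. The service name, method, and port are hypothetical; Segment's actual APIs are not shown.

```typescript
// Hedged sketch: calling a gRPC service from TypeScript via @grpc/grpc-js.
// The workspaces.proto file, WorkspaceService, GetWorkspace RPC and port are
// illustrative assumptions, not Segment's real API.
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

const packageDefinition = protoLoader.loadSync("workspaces.proto", {
  keepCase: true,
  defaults: true,
});
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

// Insecure credentials keep the example local; real traffic would go through
// an authenticating front proxy such as Envoy, as described above.
const client = new proto.workspaces.WorkspaceService(
  "localhost:50051",
  grpc.credentials.createInsecure()
);

client.GetWorkspace({ id: "ws_123" }, (err: grpc.ServiceError | null, res: unknown) => {
  if (err) {
    console.error("RPC failed:", err.message);
    return;
  }
  console.log("workspace:", res);
});
```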

    Dylan Krupp
    Shared insights on gRPC and GraphQL

    I used GraphQL extensively at a previous employer a few years ago and really appreciated the data-driven schema etc alongside the many other benefits it provided. At that time, it seemed like it was set to replace RESTful APIs and many companies were adopting it.

    However, as of late, it seems like interest has been waning for GraphQL as opposed to increasing as I had assumed it would. Am I missing something here? What is the current perspective regarding this technology?

    Currently, I'm working with gRPC and was curious as to the state of everything now.

    Falcor

    28
    14
    A JavaScript library for efficient data fetching, created by Netflix
    PROS OF FALCOR
    • 2
      Promotes microservices
    • 2
      Small API
    • 2
      Data is the API
    • 2
      One Model Everywhere
    • 1
      Efficient data fetching
    • 1
      Bind to the Cloud
    • 1
      Virtual JSON Resource
    • 1
      Simple
    • 1
      Backed by Netflix
    • 1
      JSON Graph
    CONS OF FALCOR
      No cons listed yet

      related Falcor posts
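      There are no community posts for Falcor yet, so here is a minimal sketch of the virtual JSON graph idea from the description above: the client reads paths from one model, whether the data is already cached in memory or served remotely. The cache contents and path names are illustrative assumptions.

```typescript
// Hedged sketch: a Falcor Model backed by an in-memory cache.
// The todosById data and paths are illustrative assumptions; a real app would
// back the Model with a remote datasource instead of a local cache.
import falcor from "falcor";

const model = new falcor.Model({
  cache: {
    todosById: {
      "1": { title: "Write docs", done: false },
      "2": { title: "Ship release", done: true },
    },
  },
});

// You ask for paths into the graph; the response JSON mirrors what you asked for.
model.get(["todosById", "1", ["title", "done"]]).then((response) => {
  console.log(response.json.todosById["1"].title); // "Write docs"
});
```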

      React

      175.4K
      4.1K
      A JavaScript library for building user interfaces
      PROS OF REACT
      • 837
        Components
      • 673
        Virtual dom
      • 579
        Performance
      • 509
        Simplicity
      • 442
        Composable
      • 186
        Data flow
      • 166
        Declarative
      • 128
        Isn't an MVC framework
      • 120
        Reactive updates
      • 115
        Explicit app state
      • 50
        JSX
      • 29
        Learn once, write everywhere
      • 22
        Easy to Use
      • 22
        Uni-directional data flow
      • 17
        Works great with Flux Architecture
      • 11
        Great performance
      • 10
        Javascript
      • 9
        Built by Facebook
      • 8
        TypeScript support
      • 6
        Speed
      • 6
        Server Side Rendering
      • 6
        Scalable
      • 5
        Easy to start
      • 5
        Feels like the 90s
      • 5
        Awesome
      • 5
        Props
      • 5
        Cross-platform
      • 5
        Closer to standard JavaScript and HTML than others
      • 5
        Easy as Lego
      • 5
        Functional
      • 5
        Excellent Documentation
      • 5
        Hooks
      • 4
        Scales super well
      • 4
        Allows creating single page applications
      • 4
        Sdfsdfsdf
      • 4
        Start simple
      • 4
        Strong Community
      • 4
        Super easy
      • 4
        Server side views
      • 4
        Fancy third party tools
      • 3
        Rich ecosystem
      • 3
        Has arrow functions
      • 3
        Very gentle learning curve
      • 3
        Beautiful and Neat Component Management
      • 3
        Just the View of MVC
      • 3
        Simple, easy to reason about and makes you productive
      • 3
        Fast evolving
      • 3
        SSR
      • 3
        Great migration pathway for older systems
      • 3
        Simple
      • 3
        Has functional components
      • 3
        Every decision architecture wise makes sense
      • 2
        Sharable
      • 2
        Permissively-licensed
      • 2
        HTML-like
      • 2
        Image upload
      • 2
        Recharts
      • 2
        Fragments
      • 2
        Split your UI into components with one true state
      • 1
        React hooks
      • 1
        Datatables
      CONS OF REACT
      • 41
        Requires discipline to keep architecture organized
      • 30
        No predefined way to structure your app
      • 29
        Need to be familiar with lots of third party packages
      • 13
        JSX
      • 10
        Not enterprise friendly
      • 6
        One-way binding only
      • 3
        State consistency with backend neglected
      • 3
        Bad Documentation
      • 2
        Error boundary is needed
      • 2
        Paradigms change too fast
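      Before the community posts, here is a minimal sketch of the component model many of the pros above refer to (declarative UI, reactive updates, hooks). The component and prop names are illustrative.

```tsx
// Hedged sketch: a small React function component with local state.
// Names are illustrative; this is not tied to any specific app.
import { useState } from "react";

type CounterProps = { label: string };

export function Counter({ label }: CounterProps) {
  const [count, setCount] = useState(0);

  // The UI is declared as a function of state; React re-renders when state changes.
  return (
    <button onClick={() => setCount(count + 1)}>
      {label}: {count}
    </button>
  );
}
```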

      related React posts

      Johnny Bell

      I was building a personal project for which I needed to store items in a real-time database. I am more comfortable with my frontend skills than my backend, so I didn't want to spend time building out anything in Ruby or Go.

      I stumbled on Firebase by #Google, and it was really all I needed. It had realtime data, an area for storing file uploads and best of all for the amount of data I needed it was free!

      I built out my application using tools I was familiar with, React for the framework, Redux.js to manage my state across components, and styled-components for the styling.

      Now, as this was a project I was just working on in my free time for fun, I didn't really want to pay for hosting. I did some research and I found Netlify. I had actually seen them at #ReactRally the year before and had already deployed a Gatsby site to Netlify.

      Netlify was very easy to set up and link to my GitHub account: you select a repo, and with very little configuration you have a live site that will deploy every time you push to master.

      With the selection of these tools I was able to build out my application, connect it to a realtime database, and deploy to a live environment all with $0 spent.

      If you're looking to build out a small app I suggest giving these tools a go as you can get your idea out into the real world for absolutely no cost.

      Collins Ogbuzuru
      Front-end dev at Evolve credit

      Your tech stack is solid for building a real-time messaging project.

      React and React Native are excellent choices for the frontend, especially if you want to have both web and mobile versions of your application share code.

      ExpressJS is an unopinionated framework that affords you the flexibility to use its features on your own terms, which is a good start. However, I would recommend you explore Sails.js as well. Sails.js is built on top of Express.js and provides additional features out of the box, especially the WebSocket integration that your project requires.

      Don't forget to set up GraphQL codegen; this will improve your dev experience (add TypeScript too, if you can).

      I don't know much about databases, but you might want to consider using NoSQL. I used Firebase Realtime Database and AWS DynamoDB on a few of my personal projects, and I love how easy they are to work with and the flexibility they offer for a chat application.

          graphql.js

          84
          0
          A Simple and Isomorphic GraphQL Client for JavaScript
          PROS OF GRAPHQL.JS
          No pros listed yet
          CONS OF GRAPHQL.JS
          No cons listed yet

          related graphql.js posts

          MongoDB

          94.3K
          4.1K
          The database for giant ideas
          PROS OF MONGODB
          • 829
            Document-oriented storage
          • 594
            No sql
          • 554
            Ease of use
          • 465
            Fast
          • 410
            High performance
          • 255
            Free
          • 219
            Open source
          • 180
            Flexible
          • 145
            Replication & high availability
          • 112
            Easy to maintain
          • 42
            Querying
          • 39
            Easy scalability
          • 38
            Auto-sharding
          • 37
            High availability
          • 31
            Map/reduce
          • 27
            Document database
          • 25
            Easy setup
          • 25
            Full index support
          • 16
            Reliable
          • 15
            Fast in-place updates
          • 14
            Agile programming, flexible, fast
          • 12
            No database migrations
          • 8
            Easy integration with Node.Js
          • 8
            Enterprise
          • 6
            Enterprise Support
          • 5
            Great NoSQL DB
          • 4
            Support for many languages through different drivers
          • 3
            Schemaless
          • 3
            Aggregation Framework
          • 3
            Drivers support is good
          • 2
            Fast
          • 2
            Managed service
          • 2
            Easy to Scale
          • 2
            Awesome
          • 2
            Consistent
          • 1
            Good GUI
          • 1
            Acid Compliant
          CONS OF MONGODB
          • 6
            Very slow for connected models that require joins
          • 3
            Not acid compliant
          • 2
            Proprietary query language
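          As a minimal sketch of the flexible, JSON-like document model described above, here is the official Node.js driver used from TypeScript. The connection string, database, and collection names are illustrative assumptions.

```typescript
// Hedged sketch: storing schema-flexible documents with the MongoDB Node.js driver.
// Connection string, database and collection names are illustrative assumptions.
import { MongoClient } from "mongodb";

async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  const users = client.db("app").collection("users");

  // Documents in the same collection can vary in structure (dynamic schema).
  await users.insertOne({ name: "Ada", roles: ["admin"], createdAt: new Date() });
  await users.insertOne({ name: "Grace", favoriteLanguage: "COBOL" });

  // Querying a value inside an array field works directly in the query language.
  const admins = await users.find({ roles: "admin" }).toArray();
  console.log(admins);

  await client.close();
}

main().catch(console.error);
```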

          related MongoDB posts

          Jeyabalaji Subramanian

          Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

          We set ourselves the following criteria for the optimal tool that would do this job:

          • The data replication must be near real-time, yet it should NOT impact the production database
          • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

          Based on the above criteria, we selected the following tools to perform the end to end data replication:

          We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

          We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB stitch offers integration with AWS services.

          In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

          Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes on to the target data warehouse. We implemented source-data-to-target-data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

          In the end, we got to implement a highly scalable, near-real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!
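          For context, the "minimal functionality to communicate the database changes to Amazon SQS" could look roughly like the TypeScript sketch below. The queue URL, region, and event shape are assumptions; the original ran as a Node.js function inside MongoDB Stitch.

```typescript
// Hedged sketch: forward a database change event to Amazon SQS (AWS SDK v3).
// Queue URL, region and the ChangeEvent shape are illustrative assumptions.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

interface ChangeEvent {
  operationType: "insert" | "update" | "delete" | "replace";
  documentKey: { _id: string };
  fullDocument?: Record<string, unknown>;
}

export async function forwardChange(changeEvent: ChangeEvent): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes",
      MessageBody: JSON.stringify(changeEvent),
    })
  );
}
```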

          Robert Zuber

          We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

          As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

          When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

          REST

          20
          0
          A software architectural style
          PROS OF REST
          • 4
            Popularity
          CONS OF REST
            No cons listed yet

            related REST posts

            Elasticsearch

            34.8K
            1.6K
            Open Source, Distributed, RESTful Search Engine
            PROS OF ELASTICSEARCH
            • 329
              Powerful api
            • 315
              Great search engine
            • 231
              Open source
            • 214
              Restful
            • 200
              Near real-time search
            • 98
              Free
            • 85
              Search everything
            • 54
              Easy to get started
            • 45
              Analytics
            • 26
              Distributed
            • 6
              Fast search
            • 5
              More than a search engine
            • 4
              Awesome, great tool
            • 4
              Great docs
            • 3
              Highly Available
            • 3
              Easy to scale
            • 2
              Nosql DB
            • 2
              Document Store
            • 2
              Great customer support
            • 2
              Intuitive API
            • 2
              Reliable
            • 2
              Potato
            • 2
              Fast
            • 2
              Easy setup
            • 2
              Great piece of software
            • 1
              Open
            • 1
              Scalability
            • 1
              Not stable
            • 1
              Easy to get hot data
            • 1
              Github
            • 1
              Elasticsearch
            • 1
              Actively developing
            • 1
              Responsive maintainers on GitHub
            • 1
              Ecosystem
            • 0
              Community
            CONS OF ELASTICSEARCH
            • 7
              Resource hungry
            • 6
              Difficult to get started
            • 5
              Expensive
            • 4
              Hard to keep stable at large scale
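            Because the engine is RESTful, a near-real-time search is just an HTTP request. The sketch below assumes a local node and a hypothetical articles index.

```typescript
// Hedged sketch: a full-text search against Elasticsearch's REST search API.
// The node address and the "articles" index are illustrative assumptions.
const res = await fetch("http://localhost:9200/articles/_search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: { match: { title: "graphql" } },
    size: 10,
  }),
});

const body = await res.json();
for (const hit of body.hits.hits) {
  console.log(hit._score, hit._source.title);
}
```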

            related Elasticsearch posts

            Tim Abbott

            We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

            We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

            And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

            I can't recommend it highly enough.

            Tymoteusz Paul
            Devops guy at X20X Development LTD

            Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

            It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment to produce a proper, production-grade product.

            I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.

            We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

            If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

            The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

            Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

            OData

            57
            35
            A REST-based protocol for querying and updating data
            PROS OF ODATA
            • 7
              Patterns for paging, sorting, filtering
            • 5
              ISO Standard
            • 4
              Query Language
            • 3
              RESTful
            • 3
              No overfetching, no underfetching
            • 2
              Get many resources in a single request
            • 2
              Self-documenting
            • 2
              Batch requests
            • 2
              Bulk requests ("array upsert")
            • 2
              Ask for what you need, get exactly that
            • 1
              Evolve your API by following the compatibility rules
            • 1
              Resource model defines conventional operations
            • 1
              Resource Modification Language
            CONS OF ODATA
            • 1
              Overwhelming, no "baby steps" documentation
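            In practice, many of the pros above come down to standard system query options expressed in the URL. The sketch below queries a hypothetical /Products endpoint; the service URL and entity fields are assumptions.

```typescript
// Hedged sketch: an OData query using standard system query options.
// The service URL and the Products entity set are illustrative assumptions.
const params = new URLSearchParams({
  $filter: "Price lt 20 and Category eq 'Books'", // filter server-side
  $select: "Name,Price",                          // ask only for the fields you need
  $orderby: "Price desc",
  $top: "5",
});

const res = await fetch(`https://example.com/odata/Products?${params}`);

// OData wraps result collections in a "value" array.
const { value } = await res.json();
console.log(value);
```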

            related OData posts