What is GraphQL and what are its top alternatives?
Top Alternatives to GraphQL
- gRPC
gRPC is a modern, open source, high-performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers, with pluggable support for load balancing, tracing, health checking...
- Falcor
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server. ...
- React
Lots of people use React as the V in MVC. Since React makes no assumptions about the rest of your technology stack, it's easy to try it out on a small feature in an existing project. ...
- graphql.js
The lightest GraphQL client, with intelligent features. You can download graphql.js directly, or install it via Bower or npm. ...
- MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...
- REST
An architectural style for developing web services: a distributed-systems approach built on standard web protocols and technologies. (A quick GraphQL-versus-REST request sketch follows this list.) ...
- Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack). ...
- OData
OData is an ISO/IEC-approved OASIS standard that defines a set of best practices for building and consuming RESTful APIs. It helps you focus on your business logic while building RESTful APIs, without having to worry about the various approaches to defining request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. ...
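To make the comparison concrete, here is a minimal TypeScript sketch of the two request styles using plain fetch; the /graphql endpoint, the /users/42 route, and the field names are hypothetical placeholders, not any particular API.

```typescript
// Hypothetical endpoints, for illustration only.
const GRAPHQL_URL = "https://api.example.com/graphql";
const REST_URL = "https://api.example.com/users/42";

// GraphQL: one POST, asking for exactly the fields we need.
async function fetchUserViaGraphQL(): Promise<void> {
  const res = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query { user(id: "42") { name email } }`,
    }),
  });
  const { data } = await res.json();
  console.log(data.user); // { name, email } and nothing else
}

// REST: the server decides the shape; we take the whole representation.
async function fetchUserViaREST(): Promise<void> {
  const res = await fetch(REST_URL);
  console.log(await res.json()); // the full user resource
}
```

The GraphQL request names the fields it wants in one round trip, while the REST route returns whatever representation the server defines; that trade-off runs through most of the alternatives above.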
GraphQL alternatives & related posts
Pros of gRPC
- High performance (25)
- The future of APIs (15)
- Easy setup (13)
- Contract-based (5)
- Polyglot (4)
- Garbage (2)
related gRPC posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, gRPC and Envoy.
At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.
For the newest round of APIs we adopted the gRPC service #framework.
The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mapping.
With a well-designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
For the API gateway and RPC we adopted the Envoy service proxy.
The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.
The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.
We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC, .proto, and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.
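To illustrate the shape of this setup, here is a hedged sketch of a gRPC call from TypeScript using @grpc/grpc-js and @grpc/proto-loader; the workspaces.proto file, the example.v1.Workspaces service, and the endpoint are hypothetical stand-ins, not Segment's actual definitions.

```typescript
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Load a hypothetical service definition at runtime; in practice you would
// generate typed stubs from the .proto instead.
const packageDef = protoLoader.loadSync("workspaces.proto");
const proto = grpc.loadPackageDefinition(packageDef) as any;

// TLS credentials, since the front proxy authenticates every request.
const client = new proto.example.v1.Workspaces(
  "api.example.com:443",
  grpc.credentials.createSsl()
);

// Unary RPC: the request/response types come from the .proto contract.
client.getWorkspace({ name: "workspaces/demo" }, (err: Error | null, res: unknown) => {
  if (err) throw err;
  console.log(res);
});
```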
I used GraphQL extensively at a previous employer a few years ago and really appreciated the data-driven schema, alongside the many other benefits it provided. At that time, it seemed set to replace RESTful APIs, and many companies were adopting it.
However, as of late, it seems like interest in GraphQL has been waning rather than increasing, as I had assumed it would. Am I missing something here? What is the current perspective regarding this technology?
Currently, I'm working with gRPC and was curious as to the state of everything now.
Pros of Falcor
- Promotes microservices (2)
- Small API (2)
- Data is the API (2)
- One Model Everywhere (2)
- Efficient data fetching (1)
- Bind to the Cloud (1)
- Virtual JSON Resource (1)
- Simple (1)
- Backed by Netflix (1)
- JSON Graph (1)
related Falcor posts
Pros of React
- Components (837)
- Virtual DOM (673)
- Performance (579)
- Simplicity (509)
- Composable (442)
- Data flow (186)
- Declarative (166)
- Isn't an MVC framework (128)
- Reactive updates (120)
- Explicit app state (115)
- JSX (50)
- Learn once, write everywhere (29)
- Easy to use (22)
- Uni-directional data flow (22)
- Works great with Flux architecture (17)
- Great performance (11)
- JavaScript (10)
- Built by Facebook (9)
- TypeScript support (8)
- Speed (6)
- Server-side rendering (6)
- Scalable (6)
- Easy to start (5)
- Feels like the 90s (5)
- Awesome (5)
- Props (5)
- Cross-platform (5)
- Closer to standard JavaScript and HTML than others (5)
- Easy as Lego (5)
- Functional (5)
- Excellent documentation (5)
- Hooks (5)
- Scales super well (4)
- Allows creating single-page applications (4)
- Start simple (4)
- Strong community (4)
- Super easy (4)
- Server-side views (4)
- Fancy third-party tools (4)
- Rich ecosystem (3)
- Has arrow functions (3)
- Very gentle learning curve (3)
- Beautiful and neat component management (3)
- Just the View of MVC (3)
- Simple, easy to reason about and makes you productive (3)
- Fast evolving (3)
- SSR (3)
- Great migration pathway for older systems (3)
- Simple (3)
- Has functional components (3)
- Every decision architecture-wise makes sense (3)
- Sharable (2)
- Permissively licensed (2)
- HTML-like (2)
- Image upload (2)
- Recharts (2)
- Fragments (2)
- Split your UI into components with one true state (2)
- React hooks (1)
- Datatables (1)

Cons of React
- Requires discipline to keep architecture organized (41)
- No predefined way to structure your app (30)
- Need to be familiar with lots of third-party packages (29)
- JSX (13)
- Not enterprise-friendly (10)
- One-way binding only (6)
- State consistency with backend neglected (3)
- Bad documentation (3)
- Error boundary is needed (2)
- Paradigms change too fast (2)
related React posts
I was building a personal project for which I needed to store items in a real-time database. I am more comfortable with my frontend skills than my backend ones, so I didn't want to spend time building out anything in Ruby or Go.
I stumbled on Firebase by #Google, and it was really all I needed. It had realtime data, an area for storing file uploads, and, best of all, for the amount of data I needed it was free!
I built out my application using tools I was familiar with, React for the framework, Redux.js to manage my state across components, and styled-components for the styling.
Now, as this was a project I was just working on in my free time for fun, I didn't really want to pay for hosting. I did some research and found Netlify. I had actually seen them at #ReactRally the year before and had already deployed a Gatsby site to Netlify.
Netlify was very easy to set up and link to my GitHub account: you select a repo, and with very little configuration you have a live site that deploys every time you push to master.
With the selection of these tools I was able to build out my application, connect it to a realtime database, and deploy to a live environment all with $0 spent.
If you're looking to build out a small app I suggest giving these tools a go as you can get your idea out into the real world for absolutely no cost.
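For flavor, here is a minimal sketch of the realtime pattern this post relies on, using the Firebase modular JavaScript SDK; the config value and the items path are placeholders, not the author's actual project.

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, onValue, push } from "firebase/database";

// Placeholder config; real values come from the Firebase console.
const app = initializeApp({ databaseURL: "https://your-app.firebaseio.com" });
const db = getDatabase(app);

// Subscribe: the callback fires on every change, which is the "realtime" part.
onValue(ref(db, "items"), (snapshot) => {
  console.log("items:", snapshot.val());
});

// Write: every connected client sees this immediately via onValue.
push(ref(db, "items"), { title: "hello", createdAt: Date.now() });
```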
Your tech stack is solid for building a real-time messaging project.
React and React Native are excellent choices for the frontend, especially if you want to have both web and mobile versions of your application share code.
ExpressJS is an unopinionated framework that affords you the flexibility to use its features on your own terms, which is a good start. However, I would recommend you explore Sails.js as well. Sails.js is built on top of Express.js and provides additional features out of the box, especially the WebSocket integration that your project requires (see the sketch below).
Don't forget to set up GraphQL codegen; this will improve your dev experience (add TypeScript too, if you can).
I don't know much about databases, but you might want to consider NoSQL. I used Firebase Realtime Database and AWS DynamoDB on a few of my personal projects, and I love that they're easy to work with and offer more flexibility for a chat application.
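As a starting point for the WebSocket piece, here is a minimal sketch with Express and Socket.IO (Sails.js bundles comparable functionality out of the box); the event names and port are illustrative.

```typescript
import express from "express";
import { createServer } from "http";
import { Server } from "socket.io";

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer);

// Broadcast each incoming chat message to every connected client.
io.on("connection", (socket) => {
  socket.on("chat:message", (msg: { from: string; text: string }) => {
    io.emit("chat:message", msg);
  });
});

httpServer.listen(3000, () => console.log("listening on :3000"));
```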
related graphql.js posts
Pros of MongoDB
- Document-oriented storage (829)
- NoSQL (594)
- Ease of use (554)
- Fast (465)
- High performance (410)
- Free (255)
- Open source (219)
- Flexible (180)
- Replication & high availability (145)
- Easy to maintain (112)
- Querying (42)
- Easy scalability (39)
- Auto-sharding (38)
- High availability (37)
- Map/reduce (31)
- Document database (27)
- Easy setup (25)
- Full index support (25)
- Reliable (16)
- Fast in-place updates (15)
- Agile programming, flexible, fast (14)
- No database migrations (12)
- Easy integration with Node.js (8)
- Enterprise (8)
- Enterprise support (6)
- Great NoSQL DB (5)
- Support for many languages through different drivers (4)
- Schemaless (3)
- Aggregation framework (3)
- Drivers support is good (3)
- Fast (2)
- Managed service (2)
- Easy to scale (2)
- Awesome (2)
- Consistent (2)
- Good GUI (1)
- ACID compliant (1)

Cons of MongoDB
- Very slow for connected models that require joins (6)
- Not ACID compliant (3)
- Proprietary query language (2)
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient
Based on the above criteria, we selected the following tools to perform the end to end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
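As a rough sketch of that forwarding step (written against the AWS SDK for JavaScript v3 rather than the exact Stitch runtime API), the function boils down to serializing the change event onto the queue; the queue URL and the event shape are placeholders.

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
// Placeholder queue URL, for illustration only.
const QUEUE_URL =
  "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes";

// Forward a MongoDB change event (insert / update / delete / replace) to SQS.
export async function forwardChange(changeEvent: {
  operationType: string;
  documentKey: { _id: unknown };
  fullDocument?: unknown;
}): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify(changeEvent),
    })
  );
}
```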
Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-data-to-target-data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dumb-easy.
In the end, we got to implement a highly scalable, near-real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
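A hedged sketch of the rate-limiting pattern this implies, as a fixed-window counter with ioredis; the key scheme and limit are illustrative, not the author's actual implementation.

```typescript
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

// Allow at most `limit` calls per partner API per minute (fixed window).
async function allowRequest(partner: string, limit = 60): Promise<boolean> {
  const windowKey = `ratelimit:${partner}:${Math.floor(Date.now() / 60_000)}`;
  const count = await redis.incr(windowKey);
  if (count === 1) {
    await redis.expire(windowKey, 60); // the window cleans itself up
  }
  return count <= limit;
}
```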
related REST posts
Pros of Elasticsearch
- Powerful API (329)
- Great search engine (315)
- Open source (231)
- RESTful (214)
- Near real-time search (200)
- Free (98)
- Search everything (85)
- Easy to get started (54)
- Analytics (45)
- Distributed (26)
- Fast search (6)
- More than a search engine (5)
- Awesome, great tool (4)
- Great docs (4)
- Highly available (3)
- Easy to scale (3)
- NoSQL DB (2)
- Document store (2)
- Great customer support (2)
- Intuitive API (2)
- Reliable (2)
- Potato (2)
- Fast (2)
- Easy setup (2)
- Great piece of software (2)
- Open (1)
- Scalability (1)
- Not stable (1)
- Easy to get hot data (1)
- GitHub (1)
- Elasticsearch (1)
- Actively developing (1)
- Responsive maintainers on GitHub (1)
- Ecosystem (1)
- Community (0)

Cons of Elasticsearch
- Resource hungry (7)
- Difficult to get started (6)
- Expensive (5)
- Hard to keep stable at large scale (4)
related Elasticsearch posts
We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.
We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).
And since then, we've gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
I can't recommend it highly enough.
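To show what such a partial index looks like, here is a hedged sketch issued through the pg client; the user_messages table and read column are hypothetical stand-ins, not Zulip's actual schema.

```typescript
import { Client } from "pg";

async function createPartialIndex(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // A partial index covers only rows matching the WHERE clause, so it stays
  // small and fast for hot queries like "unread messages for a user".
  await client.query(`
    CREATE INDEX IF NOT EXISTS user_messages_unread_idx
    ON user_messages (user_id, message_id)
    WHERE NOT read
  `);
  await client.end();
}

createPartialIndex().catch(console.error);
```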
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of repeating the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it with a "live example" of how the Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - converting all the instructions/scripts into Ansible playbook(s), and only stopping when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I also constantly keep an eye on the load and whether we are getting the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker-and-CircleCI setup, where you are very much tied into cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
Pros of OData
- Patterns for paging, sorting, filtering (7)
- ISO standard (5)
- Query language (4)
- RESTful (3)
- No overfetching, no underfetching (3)
- Get many resources in a single request (2)
- Self-documenting (2)
- Batch requests (2)
- Bulk requests ("array upsert") (2)
- Ask for what you need, get exactly that (2)
- Evolve your API by following the compatibility rules (1)
- Resource model defines conventional operations (1)
- Resource Modification Language (1)

Cons of OData
- Overwhelming, no "baby steps" documentation (1)