
Alternatives to GraphiQL

GraphQL, Apollo, Postman, REST, and MySQL are the most popular alternatives and competitors to GraphiQL.

What is GraphiQL and what are its top alternatives?

GraphiQL is a powerful in-browser IDE for exploring GraphQL APIs. It provides an interactive interface for executing queries, viewing documentation, and exploring the schema. However, GraphiQL has limitations: it lacks the workflow features of full API clients, such as saved request collections and environment management, and it historically hasn't supported working on several queries in separate tabs at once. Whichever client you choose, the plain HTTP request underneath is the same; it is sketched after the list of alternatives below.

  1. Insomnia: Insomnia is a popular alternative to GraphiQL that supports not only GraphQL but also other API types. It offers features like code autocompletion, multiple tabs, and the ability to save and share requests. Pros include versatility and a user-friendly interface, while cons may include a learning curve for beginners to API testing.
  2. Altair: Altair is an open-source GraphQL client that provides a clean and intuitive interface for executing queries and mutations. Features include query history, response visualization, and customizable themes. Pros of Altair include its simplicity and flexibility, but some users may miss advanced features offered by other tools.
  3. GraphQL Playground: GraphQL Playground is an interactive GraphQL IDE that offers features like automatic schema introspection, query history, and real-time error highlighting. Pros include robust debugging capabilities and customizable settings, while cons may include a cluttered interface for some users.
  4. Postman: Postman is a versatile API client that supports various protocols including GraphQL. It offers features like test automation, collections for organizing requests, and real-time collaboration. Pros include a wide range of features for API testing, while cons may include a steeper learning curve compared to specialized GraphQL clients.
  5. Paw: Paw is a powerful API client for macOS that supports GraphQL along with other API types. It offers features like dynamic values, code generation, and request chaining. Pros include advanced functionalities for API testing and customization options, but some users may find it overwhelming due to its complexity.
  6. GraphCMS: GraphCMS is a headless CMS and GraphQL API that provides a built-in GraphiQL interface for querying and exploring the schema. It offers features like content modeling, role-based permissions, and webhooks. Pros include seamless integration with a CMS platform, while cons may include limited customization options compared to standalone clients.
  7. Postwoman: Postwoman (since renamed Hoppscotch) is an open-source API request builder that supports GraphQL along with other protocols. It offers features like environment variables, scriptable requests, and response visualization. Pros include a lightweight and customizable interface, while cons may include fewer advanced features compared to specialized GraphQL clients.
  8. Hoppscotch: Hoppscotch is an open-source API client that supports GraphQL and other protocols. It offers features like batch execution, request templating, and response validation. Pros include a user-friendly interface and robust documentation, while cons may include limited integrations and functionalities compared to specialized GraphQL clients.
  9. GraphQL Editor: GraphQL Editor is a visual tool for designing GraphQL schemas and generating code. It offers features like drag-and-drop schema building, syntax highlighting, and collaborative editing. Pros include a visual approach to GraphQL development, while cons may include limited support for executing queries directly within the tool.
  10. GraphiQL.app: GraphiQL.app is a standalone desktop application for using GraphiQL without the need for a browser. It offers features similar to the in-browser version, such as syntax highlighting, autocomplete, and schema exploration. Pros include a dedicated application for interacting with GraphQL APIs, while cons may include potential limitations compared to more feature-rich alternatives.
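
All of these tools, GraphiQL included, ultimately do the same thing: they POST a JSON body containing your query to a GraphQL endpoint. A minimal sketch of that request, with a made-up endpoint and field names, looks like this:

```typescript
// Minimal sketch of the HTTP request a GraphQL client sends.
// The endpoint URL and the `film` field are hypothetical.
const query = `
  query FilmTitle($id: ID!) {
    film(id: $id) {
      title
      releaseYear
    }
  }
`;

async function fetchFilm(id: string): Promise<unknown> {
  const response = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The query text and its variables travel together as JSON.
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.film;
}

fetchFilm("42").then(console.log).catch(console.error);
```

What the clients above add on top of this request is workflow: saved collections, environment variables, history, and schema-aware autocompletion.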

Top Alternatives to GraphiQL

  • GraphQL: GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012.

  • Apollo: Build a universal GraphQL API on top of your existing REST APIs, so you can ship new application features fast without waiting on backend changes. (A minimal sketch of this pattern follows this list.)

  • Postman: It is the only complete API development environment, used by nearly five million developers and more than 100,000 companies worldwide.

  • REST: An architectural style for developing web services. A distributed system framework that uses Web protocols and technologies.

  • MySQL: The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

  • PostgreSQL: PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions.

  • MongoDB: MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

  • Redis: Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams.
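
To make the Apollo entry above concrete, here is a minimal sketch of a GraphQL layer over an existing REST API using Apollo Server and its REST data source helper. The package names are Apollo's current ones, but the REST URL, the Film type, and the fields are invented for illustration:

```typescript
// Sketch: exposing an existing REST endpoint through a GraphQL schema.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { RESTDataSource } from "@apollo/datasource-rest";

// Hypothetical legacy REST API wrapped as a data source.
class FilmsAPI extends RESTDataSource {
  override baseURL = "https://example.com/api/";

  getFilm(id: string) {
    return this.get(`films/${encodeURIComponent(id)}`);
  }
}

interface MyContext {
  filmsAPI: FilmsAPI;
}

const typeDefs = `#graphql
  type Film {
    id: ID!
    title: String
    releaseYear: Int
  }
  type Query {
    film(id: ID!): Film
  }
`;

const resolvers = {
  Query: {
    // Each GraphQL field delegates to the underlying REST call.
    film: (_: unknown, { id }: { id: string }, ctx: MyContext) =>
      ctx.filmsAPI.getFilm(id),
  },
};

const server = new ApolloServer<MyContext>({ typeDefs, resolvers });
const { url } = await startStandaloneServer(server, {
  // Give every request its own data source instance via context.
  context: async () => ({ filmsAPI: new FilmsAPI() }),
});
console.log(`GraphQL ready at ${url}`);
```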

GraphiQL alternatives & related posts

GraphQL

A data query language and runtime
PROS OF GRAPHQL
  • Schemas defined by the requests made by the user (75)
  • Will replace RESTful interfaces (63)
  • The future of APIs (62)
  • The future of databases (49)
  • Self-documenting (13)
  • Get many resources in a single request (12)
  • Query Language (6)
  • Ask for what you need, get exactly that (6)
  • Fetch different resources in one request (3)
  • Type system (3)
  • Evolve your API without versions (3)
  • Ease of client creation (2)
  • GraphiQL (2)
  • Easy setup (2)
  • "Open" document (1)
  • Fast prototyping (1)
  • Supports subscriptions (1)
  • Standard (1)
  • Good for apps that query at build time (SSR/Gatsby) (1)
  • 1. Describe your data (1)
  • Better versioning (1)
  • Backed by Facebook (1)
  • Easy to learn (1)
CONS OF GRAPHQL
  • Hard to migrate from GraphQL to another technology (4)
  • More code to type (4)
  • Takes longer to build compared to schemaless (2)
  • No support for caching (1)
  • All the pros sound like NFT pitches (1)
  • No support for streaming (1)
  • Works just like any other API at runtime (1)
  • N+1 fetch problem (1) (see the DataLoader sketch after this list)
  • No built-in security (1)
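
The "N+1 fetch problem" listed above refers to naive resolvers issuing one backend query per item in a list. A common mitigation is request-scoped batching with the dataloader package; here is a rough sketch in which the batch lookup function and the User/Post shapes are invented:

```typescript
import DataLoader from "dataloader";

interface User {
  id: string;
  name: string;
}

// Hypothetical batch lookup: in a real app this would be a single
// SQL/Mongo query for all ids instead of N separate queries.
async function fetchUsersByIds(ids: readonly string[]): Promise<User[]> {
  return ids.map((id) => ({ id, name: `user-${id}` }));
}

const userLoader = new DataLoader<string, User>(async (ids) => {
  const users = await fetchUsersByIds(ids);
  // DataLoader expects results in the same order as the requested keys.
  const byId = new Map(users.map((u) => [u.id, u]));
  return ids.map((id) => byId.get(id) ?? new Error(`User ${id} not found`));
});

// In a resolver, every `author` field in a list of posts goes through the
// loader, so the batch function runs once per tick rather than once per post.
const resolvers = {
  Post: {
    author: (post: { authorId: string }) => userLoader.load(post.authorId),
  },
};
```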

related GraphQL posts

Shared insights on Node.js, GraphQL, and MongoDB

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and to rate the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js , GraphQL and MongoDB:

  1. Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have

  2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, by design it fixes the over-fetching and under-fetching problems that are so common in REST APIs, and on top of that the community is getting bigger and bigger.

  3. MongoDB was my choice for the database, as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, including the one I needed for my website.

Nick Rockwell
SVP, Engineering at Fastly · 46 upvotes · 4.1M views

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.

Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
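
The post above describes the architecture rather than the code, but the shape of that pipeline (a Kafka consumer keeping a read store current, with GraphQL resolvers serving reads from it) can be sketched roughly as follows. The topic name, message shape, and in-memory store are all invented stand-ins for the Bodega repository mentioned above:

```typescript
import { Kafka } from "kafkajs";

// Invented event shape and a toy in-memory read model.
interface ArticleEvent {
  id: string;
  headline: string;
}
const articles = new Map<string, ArticleEvent>();

const kafka = new Kafka({ clientId: "graphql-api", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "article-readmodel" });

async function startConsumer(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "articles", fromBeginning: true });
  await consumer.run({
    // Every message updates the materialized view the API reads from.
    eachMessage: async ({ message }) => {
      const event: ArticleEvent = JSON.parse(message.value!.toString());
      articles.set(event.id, event);
    },
  });
}

// GraphQL resolvers then serve reads straight from the read model.
const resolvers = {
  Query: {
    article: (_: unknown, { id }: { id: string }) => articles.get(id) ?? null,
  },
};

startConsumer().catch(console.error);
```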

Apollo

GraphQL server for Express, Connect, Hapi, Koa and more
PROS OF APOLLO
  • From the creators of Meteor (12)
  • Great documentation (8)
  • Open source (3)
  • Real time if you use subscriptions (2)
CONS OF APOLLO
  • File upload is not supported (1)
  • Added complexity when implementing subscriptions (1)

related Apollo posts

Nick Rockwell
SVP, Engineering at Fastly · 46 upvotes · 4.1M views

(This is the same post about the NYT's move from LAMP to React, Apollo, and GraphQL, quoted in full under GraphQL above.)
Adam Neary

At Airbnb we use GraphQL Unions for a "Backend-Driven UI." We have built a system where a very dynamic page is constructed based on a query that will return an array of some set of possible “sections.” These sections are responsive and define the UI completely.

The central file that manages this would be a generated file. Since the list of possible sections is quite large (~50 sections today for Search), it also presumes we have a sane mechanism for lazy-loading components with server rendering, which is a topic for another post. Suffice it to say, we do not need to package all possible sections in a massive bundle to account for everything up front.

Each section component defines its own query fragment, colocated with the section’s component code. This is the general idea of Backend-Driven UI at Airbnb. It’s used in a number of places, including Search, Trip Planner, Host tools, and various landing pages. We use this as our starting point, and then in the demo show how to (1) make an update to an existing section, and (2) add a new section.

While building your product, you want to be able to explore your schema, discovering field names and testing out potential queries on live development data. We achieve that today with GraphQL Playground, the work of our friends at #Prisma. The tools come standard with Apollo Server.

#BackendDrivenUI
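
The post describes the pattern without showing the schema; a stripped-down sketch of what a union of sections and the page query might look like follows. The section names and fields here are invented for illustration and are not Airbnb's actual schema:

```typescript
// "Backend-Driven UI" in miniature: the server returns a list of typed
// sections, and each section type maps to one client component.
const typeDefs = `#graphql
  union SearchSection = ListingCarousel | MapSection | FilterBar

  type ListingCarousel {
    title: String!
    listingIds: [ID!]!
  }
  type MapSection {
    centerLat: Float!
    centerLng: Float!
  }
  type FilterBar {
    enabledFilters: [String!]!
  }

  type Query {
    searchPage(query: String!): [SearchSection!]!
  }
`;

// Each section component colocates its own fragment; the page query just
// spreads them inside the union and switches on __typename when rendering.
const SEARCH_PAGE_QUERY = `#graphql
  query SearchPage($query: String!) {
    searchPage(query: $query) {
      __typename
      ... on ListingCarousel { title listingIds }
      ... on MapSection { centerLat centerLng }
      ... on FilterBar { enabledFilters }
    }
  }
`;

export { typeDefs, SEARCH_PAGE_QUERY };
```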

Postman

Only complete API development environment
PROS OF POSTMAN
  • Easy to use (490)
  • Great tool (369)
  • Makes developing REST APIs easy peasy (276)
  • Easy setup, looks good (156)
  • The best API workflow out there (144)
  • It's the best (53)
  • History feature (53)
  • Adds real value to my workflow (44)
  • Great interface that magically predicts your needs (43)
  • The best in class app (35)
  • Can save and share scripts (12)
  • Fully featured without looking cluttered (10)
  • Collections (8)
  • Option to run scripts (8)
  • Global/Environment Variables (8)
  • Shareable Collections (7)
  • Dead simple and useful. Excellent (7)
  • Dark theme easy on the eyes (7)
  • Awesome customer support (6)
  • Great integration with Newman (6)
  • Documentation (5)
  • Simple (5)
  • The test script is useful (5)
  • Saves responses (4)
  • This has simplified my testing significantly (4)
  • Makes testing APIs as easy as 1, 2, 3 (4)
  • Easy as pie (4)
  • API network (3)
  • I'd recommend it to everyone who works with APIs (3)
  • Mocking API calls with predefined responses (3)
  • Now supports GraphQL (2)
  • Postman Runner CI Integration (2)
  • Easy to set up, test, and provides test storage (2)
  • Continuous integration using Newman (2)
  • Pre-request Script and Test attributes are invaluable (2)
  • Runner (2)
  • Graph (2)
  • Useful tool (http://fixbit.com/) (1)
CONS OF POSTMAN
  • Stores credentials in HTTP (10)
  • Bloated features and UI (9)
  • Cumbersome to switch authentication tokens (8)
  • Poor GraphQL support (7)
  • Expensive (5)
  • Not free after 5 users (3)
  • Can't prompt for per-request variables (3)
  • Import Swagger (1)
  • Support WebSocket (1)
  • Import cURL (1)

related Postman posts

Noah Zoschke
Engineering Manager at Segment · 30 upvotes · 3M views

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.

Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like username, password and workspace_name so a user can fill their own values in before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.

Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.

This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.

Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run a collection’s scripts as “monitors”. We now have #QA around all the APIs in the public docs to make sure they are always correct.
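
As a small illustration of those "test scripts" (this snippet is not from Segment's collection, and the variable name is hypothetical): a Postman test is JavaScript that runs in the pm sandbox after each request, and the same assertions run when the collection executes as a monitor or through Newman in CI:

```typescript
// `pm` is injected by Postman's script sandbox; declared here only so the
// snippet stands alone as TypeScript.
declare const pm: any;

pm.test("status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("response matches the requested workspace", function () {
  const body = pm.response.json();
  // `workspace_name` is a collection variable the caller fills in beforehand.
  pm.expect(body.name).to.eql(pm.collectionVariables.get("workspace_name"));
});
```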

Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.

Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · 27 upvotes · 5.1M views

Our whole Node.js backend stack consists of the following tools:

  • Lerna as a tool for multi package and multi repository management
  • npm as package manager
  • NestJS as Node.js framework (a minimal sketch appears after this post)
  • TypeScript as programming language
  • ExpressJS as web server
  • Swagger UI for visualizing and interacting with the API’s resources
  • Postman as a tool for API development
  • TypeORM as object relational mapping layer
  • JSON Web Token for access token management

The main reason we have chosen Node.js over PHP is related to the following artifacts:

  • Made for the web and widely in use: Node.js is a software platform for developing server-side network services. Well-known projects that rely on Node.js include the blogging software Ghost, the project management tool Trello and the operating system WebOS. Node.js requires the JavaScript runtime environment V8, which was specially developed by Google for the popular Chrome browser. This guarantees a very resource-saving architecture, which qualifies Node.js especially for the operation of a web server. Ryan Dahl, the developer of Node.js, released the first stable version on May 27, 2009. He developed Node.js out of dissatisfaction with the possibilities that JavaScript offered at the time. The basic functionality of Node.js has been mapped with JavaScript since the first version, which can be expanded with a large number of different modules. The current package managers (npm or Yarn) for Node.js know more than 1,000,000 of these modules.
  • Fast server-side solutions: Node.js adopts the JavaScript "event-loop" to create non-blocking I/O applications that conveniently serve simultaneous events. With the standard available asynchronous processing within JavaScript/TypeScript, highly scalable, server-side solutions can be realized. The efficient use of the CPU and the RAM is maximized and more simultaneous requests can be processed than with conventional multi-thread servers.
  • A language along the entire stack: Widely used frameworks such as React or AngularJS or Vue.js, which we prefer, are written in JavaScript/TypeScript. If Node.js is now used on the server side, you can use all the advantages of a uniform script language throughout the entire application development. The same language in the back- and frontend simplifies the maintenance of the application and also the coordination within the development team.
  • Flexibility: Node.js sets very few strict dependencies, rules and guidelines and thus grants a high degree of flexibility in application development. There are no strict conventions so that the appropriate architecture, design structures, modules and features can be freely selected for the development.
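
As a minimal, illustrative sketch of how a couple of those pieces fit together (this is not QUANTUSflow's code; the route and module are invented), a tiny NestJS application looks like this:

```typescript
import { Controller, Get, Module } from "@nestjs/common";
import { NestFactory } from "@nestjs/core";

@Controller("health")
class HealthController {
  // GET /health: the kind of endpoint Swagger UI or Postman would exercise.
  @Get()
  check() {
    return { status: "ok", time: new Date().toISOString() };
  }
}

@Module({ controllers: [HealthController] })
class AppModule {}

async function bootstrap(): Promise<void> {
  const app = await NestFactory.create(AppModule); // ExpressJS under the hood
  await app.listen(3000);
}
bootstrap();
```

Pieces such as TypeORM entities and JWT guards plug into the same module system, which is a large part of the framework's appeal.
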
REST

A software architectural style
PROS OF REST
  • Popularity (4)
CONS OF REST
  (No cons have been listed yet.)

related REST posts

MySQL

The world's most popular open source database
PROS OF MYSQL
  • SQL (800)
  • Free (679)
  • Easy (562)
  • Widely used (528)
  • Open source (490)
  • High availability (180)
  • Cross-platform support (160)
  • Great community (104)
  • Secure (79)
  • Full-text indexing and searching (75)
  • Fast, open, available (26)
  • Reliable (16)
  • SSL support (16)
  • Robust (15)
  • Enterprise Version (9)
  • Easy to set up on all platforms (7)
  • NoSQL access to JSON data type (3)
  • Relational database (1)
  • Easy, light, scalable (1)
  • Sequel Pro (best SQL GUI) (1)
  • Replica Support (1)
CONS OF MYSQL
  • Owned by a company with their own agenda (16)
  • Can't roll back schema changes (3)

related MySQL posts

Nick Rockwell
SVP, Engineering at Fastly · 46 upvotes · 4.1M views

(This is the same NYT LAMP-to-GraphQL post quoted in full under GraphQL above.)
    Tim Abbott

    We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

    We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

    And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
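
Partial indexes like the ones mentioned above are plain DDL; a rough sketch (the table and column names are invented, not Zulip's schema) using the node-postgres client:

```typescript
import { Client } from "pg";

async function createUnreadIndex(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // A partial index only covers rows matching the WHERE clause, so the
  // "unread messages" lookup stays small even when the table is huge.
  await client.query(`
    CREATE INDEX IF NOT EXISTS messages_unread_idx
      ON messages (recipient_id)
      WHERE read = false
  `);
  await client.end();
}

createUnreadIndex().catch(console.error);
```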

    I can't recommend it highly enough.

PostgreSQL

A powerful, open source object-relational database system
PROS OF POSTGRESQL
  • Relational database (763)
  • High availability (510)
  • Enterprise class database (439)
  • SQL (383)
  • SQL + NoSQL (304)
  • Great community (173)
  • Easy to set up (147)
  • Heroku (131)
  • Secure by default (130)
  • PostGIS (113)
  • Supports Key-Value (50)
  • Great JSON support (48)
  • Cross platform (34)
  • Extensible (33)
  • Replication (28)
  • Triggers (26)
  • Multiversion concurrency control (23)
  • Rollback (23)
  • Open source (21)
  • Heroku Add-on (18)
  • Stable, simple and good performance (17)
  • Powerful (15)
  • Lets be serious, what other SQL DB would you go for? (13)
  • Good documentation (11)
  • Scalable (9)
  • Free (8)
  • Reliable (8)
  • Intelligent optimizer (8)
  • Transactional DDL (7)
  • Modern (7)
  • One stop solution for all things SQL no matter the OS (6)
  • Relational database with MVCC (5)
  • Faster development (5)
  • Full-text search (4)
  • Developer friendly (4)
  • Excellent source code (3)
  • Free version (3)
  • Great DB for transactional systems or applications (3)
  • Relational database (3)
  • Search (3)
  • Open-source (3)
  • Text (2)
  • Full-text (2)
  • Can handle up to petabytes worth of size (1)
  • Composability (1)
  • Multiple procedural languages supported (1)
  • Native (0)
CONS OF POSTGRESQL
  • Table/index bloating (10)

related PostgreSQL posts

    Simon Reymann
    Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 11.2M views

    Our whole DevOps stack consists of the following tools:

    • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
    • Respectively Git as revision control system
    • SourceTree as Git GUI
    • Visual Studio Code as IDE
    • CircleCI for continuous integration (automatize development process)
    • Prettier / TSLint / ESLint as code linter
    • SonarQube as quality gate
    • Docker as container management (incl. Docker Compose for multi-container application management)
    • VirtualBox for operating system simulation tests
    • Kubernetes as cluster management for docker containers
    • Heroku for deploying in test environments
    • nginx as web server (preferably used as facade server in production environment)
    • SSLMate (using OpenSSL) for certificate management
    • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
    • PostgreSQL as preferred database system
    • Redis as preferred in-memory database/store (great for caching)

    The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

    • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
    • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
    • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
    • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
    • Scalability: All-in-one framework for distributed systems.
    • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.

    Jeyabalaji Subramanian

    Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

    We set ourselves the following criteria for the optimal tool that would do this job:

    • The data replication must be near real-time, yet it should NOT impact the production database
    • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

    Based on the above criteria, we selected the following tools to perform the end to end data replication:

    We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

    We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB stitch offers integration with AWS services.

    In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
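
The post summarizes that function rather than listing it. As a rough stand-in (using the plain AWS SDK instead of Stitch's built-in AWS integration, with a placeholder queue URL and a trimmed event shape), a change-event handler that forwards inserts, updates, and deletes to SQS could look like this:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

// Trimmed-down shape of a MongoDB change event.
interface ChangeEvent {
  operationType: "insert" | "update" | "delete" | "replace";
  documentKey: { _id: unknown };
  fullDocument?: Record<string, unknown>;
}

const sqs = new SQSClient({ region: "us-east-1" });
// Placeholder queue URL.
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/replication";

// Called for every change picked up by the trigger; it only serializes the
// event and hands it to SQS, leaving the production database untouched.
export async function onDatabaseChange(event: ChangeEvent): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify(event),
    })
  );
}
```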

    Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-data-to-target-data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

    In the end, we got to implement a highly scalable near realtime Change Data Replication service that "works" and deployed to production in a matter of few days!

MongoDB

The database for giant ideas
PROS OF MONGODB
  • Document-oriented storage (828) (see the sketch after this pros/cons list)
  • NoSQL (593)
  • Ease of use (553)
  • Fast (464)
  • High performance (410)
  • Free (255)
  • Open source (218)
  • Flexible (180)
  • Replication & high availability (145)
  • Easy to maintain (112)
  • Querying (42)
  • Easy scalability (39)
  • Auto-sharding (38)
  • High availability (37)
  • Map/reduce (31)
  • Document database (27)
  • Easy setup (25)
  • Full index support (25)
  • Reliable (16)
  • Fast in-place updates (15)
  • Agile programming, flexible, fast (14)
  • No database migrations (12)
  • Easy integration with Node.js (8)
  • Enterprise (8)
  • Enterprise Support (6)
  • Great NoSQL DB (5)
  • Support for many languages through different drivers (4)
  • Schemaless (3)
  • Aggregation Framework (3)
  • Drivers support is good (3)
  • Fast (2)
  • Managed service (2)
  • Easy to scale (2)
  • Awesome (2)
  • Consistent (2)
  • Good GUI (1)
  • ACID compliant (1)
CONS OF MONGODB
  • Very slow for connected models that require joins (6)
  • Not ACID compliant (3)
  • Proprietary query language (2)
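
To ground the "document-oriented storage" and flexible-schema pros above, here is a minimal sketch with the official Node.js driver; the connection string, database name, and fields are placeholders:

```typescript
import { MongoClient } from "mongodb";

async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const movies = client.db("demo").collection("movies");

  // Documents in the same collection can differ in shape: no migration needed.
  await movies.insertOne({ title: "Alien", year: 1979, tags: ["sci-fi"] });
  await movies.insertOne({ title: "Heat", year: 1995, rating: 9 });

  // Query by any field; an index on `year` keeps this fast at scale.
  const recent = await movies.find({ year: { $gte: 1990 } }).toArray();
  console.log(recent);

  await client.close();
}

main().catch(console.error);
```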

related MongoDB posts

Jeyabalaji Subramanian

(This is the same MongoDB-to-PostgreSQL replication post quoted in full under PostgreSQL above.)
    Robert Zuber

    We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

    As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
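
Rate-limiting outbound calls with Redis, as described above, usually comes down to an INCR plus a TTL per partner and time window. A minimal sketch with ioredis (the key naming and limit are invented):

```typescript
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

// Returns true if another request to `partner` is allowed in the
// current one-minute window.
async function allowRequest(partner: string, limit = 100): Promise<boolean> {
  const windowKey = `ratelimit:${partner}:${Math.floor(Date.now() / 60_000)}`;
  const count = await redis.incr(windowKey);
  if (count === 1) {
    await redis.expire(windowKey, 60); // the window cleans itself up
  }
  return count <= limit;
}

// Usage: gate a call to a partner API such as GitHub.
async function callPartner(): Promise<void> {
  if (await allowRequest("github")) {
    // ...make the API call
  }
}
callPartner().catch(console.error);
```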

    When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

Redis

Open source (BSD licensed), in-memory data structure store
PROS OF REDIS
  • Performance (886)
  • Super fast (542)
  • Ease of use (513)
  • In-memory cache (444)
  • Advanced key-value cache (324)
  • Open source (194)
  • Easy to deploy (182)
  • Stable (164)
  • Free (155)
  • Fast (121)
  • High-performance (42)
  • High availability (40)
  • Data structures (35)
  • Very scalable (32)
  • Replication (24)
  • Great community (22)
  • Pub/Sub (22)
  • "NoSQL" key-value data store (19)
  • Hashes (16)
  • Sets (13)
  • Sorted sets (11)
  • NoSQL (10)
  • Lists (10)
  • Async replication (9)
  • BSD licensed (9)
  • Bitmaps (8)
  • Integrates super easy with Sidekiq for Rails background (8)
  • Keys with a limited time-to-live (7)
  • Open source (7)
  • Lua scripting (6)
  • Strings (6)
  • Awesomeness for free (5)
  • Hyperloglogs (5)
  • Transactions (4)
  • Outstanding performance (4)
  • Runs server-side Lua (4)
  • LRU eviction of keys (4)
  • Feature rich (4)
  • Written in ANSI C (4)
  • Networked (4)
  • Data structure server (3)
  • Performance & ease of use (3)
  • Don't save data if no subscribers are found (2)
  • Automatic failover (2)
  • Easy to use (2)
  • Temporarily kept on disk (2)
  • Scalable (2)
  • Existing Laravel integration (2)
  • Channels concept (2)
  • Object [key/value] size each 500 MB (2)
  • Simple (2)
CONS OF REDIS
  • Cannot query objects directly (15)
  • No secondary indexes for non-numeric data types (3)
  • No WAL (1)

related Redis posts

    Russel Werner
    Lead Engineer at StackShare · 32 upvotes · 2.8M views

    StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a Server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed, and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku specifically with how processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in-sync with changes.

    Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The result of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!

    #StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 11.2M views

(This is the same DevOps stack post, including the Redis caching choice, quoted in full under PostgreSQL above.)