How Algolia Reduces Latency For 21B Searches Per Month

Developer-friendly hosted search service. API clients for all major frameworks and languages. REST, JSON & detailed documentation.

By Josh Dzielak, Developer Advocate at Algolia.

Algolia Paris meeting room

Algolia helps developers build search. At the core of Algolia is a built-from-scratch search engine exposed via a JSON API. In February 2017, we processed 21 billion queries and 27 billion indexing operations for 8,000+ live integrations. Some more numbers:

  • Query volume: 1B/day peak, 750M/day average (13K/s during peak hours)
  • Indexing operations: 10B/day peak, 1B/day average (spikes can be over 1M/s)
  • Number of API servers: 800+
  • Total memory in production: 64TB
  • Total I/O per day: 3.9PB
  • Total SSD storage capacity: 566TB

We’ve written about our stack before and are big fans of StackShare and the community here. In this post we’ll look at how our stack is designed from the ground up to reduce latency, and at the tools we use to monitor latency in production.

I’m Josh and I’m a Developer Advocate at Algolia, formerly the VP Engineering at Keen IO. Being a developer advocate is pretty cool. I get to code, write and speak. I also get to converse daily with developers using Algolia.

Frequently, I get asked what Algolia’s API tech stack looks like. Many people are surprised when I tell them:

  1. The Algolia search engine is written in C++ and runs inside of nginx. All searches start and finish inside of our nginx module.

  2. API clients connect directly to the nginx host where the search happens. There are no load balancers or network hops.

  3. Algolia runs on hand-picked bare metal. We use high-frequency CPUs like the 3.9GHz Intel Xeon E5-1650 v4 and load machines with 256GB of RAM.

  4. Algolia uses a hybrid-tenancy model. Some clusters are shared between customers and some are dedicated, so we can use hardware efficiently while providing full isolation to customers who need it.

  5. Algolia doesn’t use AWS or any cloud-based hosting for the API. We have our own servers spanning 47 datacenters in 15 global regions.

Algolia architecture diagram

Why this infrastructure?

The primary design goal for our stack is to aggressively reduce latency. For the kinds of searches that Algolia powers, built for consumers who are used to Google, Amazon and Facebook, latency is a UX killer. Search-as-you-type experiences, which have become the norm since Google launched Instant Search in 2010, have demanding requirements. Anything more than 100ms end-to-end can be perceived as sluggish, glitchy and distracting. But at 50ms or less the experience feels magical. We prefer magic.

Monitoring

Our monitoring stack helps us keep an eye on latency across all of our clusters. We use Wavefront to collect metrics from every machine. We like Wavefront because it’s simple to integrate (we have it plugged in to StatsD and collectd), provides good dashboards, and has integrated alerting.

We use PagerDuty to fire alerts for abnormalities like CPU saturation, resource exhaustion and long-running indexing jobs. For non-urgent alerts, like a single process crash, we dump and collect the core for further investigation. If the same non-urgent alert repeats more than a set number of times, it does trigger a PagerDuty alert. We keep only the last 5 core dumps to avoid filling up the disk.
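That core-dump retention policy is easy to script. Here is a minimal sketch (the dump directory and `core.*` naming are hypothetical, not our actual layout) that keeps only the five most recent dumps:

```python
from pathlib import Path

def prune_core_dumps(dump_dir, keep=5):
    """Delete all but the `keep` most recent core dumps in dump_dir.

    Returns the names of the dumps that were removed.
    """
    dumps = sorted(
        Path(dump_dir).glob("core.*"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    removed = []
    for stale in dumps[keep:]:
        stale.unlink()
        removed.append(stale.name)
    return removed
```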

When a query takes more than 1 second we send an alert into Slack. From there, someone on our Core Engineering Squad will investigate. On a typical day we might see one of these, or none at all, so Slack has been a good fit.

Probes

We have probes in 45 locations around the world to measure the latency and availability of our production clusters. The probes are hosted with 12 different providers, not necessarily the same ones that host our API servers. The results from these probes are publicly visible on our status page. We use a custom internal API to aggregate the large amount of data the probes fetch from each cluster and turn it into a single value per region.
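We haven't published the exact aggregation, but a median-of-medians is one robust way to collapse many probe samples into a single per-region value. This sketch assumes a simple nested dict of raw measurements, which is an illustration rather than our internal API's data model:

```python
from statistics import median

def region_latency(samples):
    """Collapse raw probe samples into one latency value per region.

    `samples` maps region -> cluster -> list of latency samples in ms.
    Taking the median per cluster first keeps a single noisy probe from
    dominating; the median across clusters then yields the region value.
    """
    return {
        region: median(median(values) for values in clusters.values())
        for region, clusters in samples.items()
    }
```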

Algolia probes

Downed Machines

Downed machines are detected within 30 seconds by a custom Ruby application. Once a machine is detected as down, we push a DNS change to take it out of the cluster. The upper bound of propagation for that change is 2 minutes (the DNS TTL). During this time, API clients use their built-in retry strategy to connect to the healthy machines in the cluster, so there is no customer impact.
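A minimal sketch of that client-side retry strategy, with a hypothetical transport callable standing in for the HTTP layer:

```python
import random

class RetryableClient:
    """Sketch of an API client's retry strategy: try each host of the
    three-machine cluster until one answers, so a downed machine (or a
    DNS record that hasn't propagated yet) is invisible to the caller.
    """

    def __init__(self, hosts, transport):
        self.hosts = list(hosts)
        self.transport = transport  # (host, query) -> response; raises ConnectionError

    def search(self, query):
        last_error = None
        # Randomize the order so load spreads across the cluster.
        for host in random.sample(self.hosts, k=len(self.hosts)):
            try:
                return self.transport(host, query)
            except ConnectionError as exc:
                last_error = exc  # host unreachable; fall through to the next one
        raise last_error
```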

Debugging Slow Queries

When a query takes abnormally long - more than 1 second - we dump everything about it to a file. We keep everything we need to rerun it, including the application ID, the index name and all of the query parameters. High-level profiling information is also stored; with it, we can figure out where time is spent in the heaviest 10% of query processing. The getrusage syscall reports the resource utilization of the calling process and its children.

For the kernel, we record the number of major page faults (ru_majflt), the number of block input operations, the number of context switches, the elapsed wall-clock time (measured with gettimeofday, so that we don't miss time spent blocked on I/O such as a major page fault, since we use memory-mapped files) and a variety of other statistics that help us determine the root cause.
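In C this is a call to getrusage(2); Python exposes the same struct rusage counters through its resource module, which makes the idea easy to sketch. The field names are the real kernel counters; the wrapper itself is illustrative:

```python
import resource
import time

def profile_query(run_query):
    """Run a query callable and capture the rusage counters named above.

    Wall-clock time comes from a separate clock (the engine uses
    gettimeofday) so time spent blocked on a major page fault against
    memory-mapped files is still counted.
    """
    before = resource.getrusage(resource.RUSAGE_SELF)
    start = time.time()
    result = run_query()
    wall = time.time() - start
    after = resource.getrusage(resource.RUSAGE_SELF)
    return result, {
        "wall_time_s": wall,
        "major_page_faults": after.ru_majflt - before.ru_majflt,
        "block_inputs": after.ru_inblock - before.ru_inblock,
        "voluntary_ctx_switches": after.ru_nvcsw - before.ru_nvcsw,
        "involuntary_ctx_switches": after.ru_nivcsw - before.ru_nivcsw,
    }
```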

With data in hand, the investigation proceeds in this order:

  1. The hardware
  2. The software
  3. Operating system and production environment

The Hardware

The easiest problems to detect are hardware issues. We see burned-out SSDs, broken memory modules and overheated CPUs. We automate the reporting of the most common failures, like failing SSDs, by alerting on S.M.A.R.T. data. For infrequent errors, we may need to run a suite of specialized tools to narrow down the root cause, like mbw for uncovering memory bandwidth issues. And of course there is always syslog, which logs most hardware failures.

Individual machine failures will not have a customer impact because each cluster has 3 machines. Where it’s possible in a given geographical region, each machine is located in a different datacenter and attached to a different network provider. This provides further insulation from network or datacenter loss.

The Software

We have close-to-zero-cost profiling information from the getrusage syscall. Sometimes that's enough to diagnose an issue in the engine code. If not, we need deeper profiling. We can't run a profiler in production for performance reasons, but we can do so after the fact.

We attach a profiler to an external binary that contains exactly the same code as the module running inside nginx. The profiler uses information obtained by google-perftools, a very accurate stack-sampling profiler, to simulate the exact conditions of the production machine.

OS / Environment

If we can rule out hardware and software failure, the problem might have been with the operating environment at that point in time. That means analyzing system-wide data in the hope of discovering an anomaly.

We once discovered that defragmentation of huge pages in the kernel could block our process for several hundred milliseconds. This defragmentation isn't necessary for us because, like nginx, we keep large memory pools of our own. Now we make sure it doesn't happen, to the benefit of more consistent latency for all of our customers.
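On Linux the defrag behavior is controlled through /sys/kernel/mm/transparent_hugepage/defrag, which lists the options on one line with the active one in brackets. A monitoring check might parse it like this; note that the expected value "never" is an assumption for illustration, not necessarily the setting we run:

```python
def active_thp_setting(contents):
    """Extract the active value from a transparent-hugepage sysfs file.

    The kernel reports the options on one line with the active one in
    brackets, e.g. "always defer [madvise] never".
    """
    for token in contents.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active setting found in %r" % contents)

def defrag_is_expected(expected="never",
                       path="/sys/kernel/mm/transparent_hugepage/defrag"):
    # True when the kernel's defrag setting matches what we expect.
    with open(path) as f:
        return active_thp_setting(f.read()) == expected
```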


Every Algolia application runs on a cluster of 3 machines for redundancy and increased throughput. Each indexing operation is replicated across the machines using a durable queue.

Clusters can be mirrored to other global regions across Algolia’s Distributed Search Network (DSN). Global coverage is critical for delivering low latency to users coming from different continents. You can think of DSN like a CDN without caching - every query is running against a live, up-to-date copy of the index.

Early Detection

When we release a new version of the code that powers the API, we do it in an incremental, cluster-aware way so we can roll back immediately if something goes wrong.

Automated by a set of custom deployment scripts, the order of the rolling deploy looks like this:

  • Testing machines
  • Staging machines
  • ⅓ of production machines
  • Another ⅓ of production machines
  • The final ⅓ of production machines
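The rolling order above can be sketched as follows, with hypothetical `deploy` and `healthy` callables standing in for our deployment scripts:

```python
def rolling_deploy(machines, deploy, healthy, groups=3):
    """Deploy to production in thirds, stopping at the first unhealthy
    group so a bad release never reaches the rest of the fleet.
    """
    size = -(-len(machines) // groups)  # ceiling division
    batches = [machines[i:i + size] for i in range(0, len(machines), size)]
    deployed = []
    for batch in batches:
        for machine in batch:
            deploy(machine)
        if not all(healthy(m) for m in batch):
            # In the real scripts this is where the rollback kicks in.
            return {"status": "rolled_back", "deployed": deployed}
        deployed.extend(batch)
    return {"status": "ok", "deployed": deployed}
```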

First, we test the new code with unit tests and functional tests on a host with an exact production configuration. During the API deployment process we use a custom set of scripts to run the tests; in other areas of our stack we use Travis CI.

One thing we guard against is a network issue that produces a split-brain partition during a rolling deployment. Our deployment strategy considers every new version as unstable until it has consensus from every server, and it will continue to retry the deploy until the network partition heals.

Before deployment begins, another process has encrypted our binaries and uploaded them to an S3 bucket. The S3 bucket sits behind CloudFlare to make downloading the binaries fast from anywhere.

We use a custom shell script to do deployments. The script launches the new binaries and then checks to make sure that the new process is running. If it’s not, the script assumes that something has gone wrong and automatically rolls back to the previous version. Even if the previous version also can’t come up, we still won’t have a customer impact while we troubleshoot because the other machines in the cluster can still service requests.

Scaling

For a search engine, there are two basic dimensions of scaling:

  • Search capacity - how many searches can be performed?
  • Storage capacity - how many records can the index hold?

To increase your search capacity with Algolia, you can replicate your data to additional clusters using the point-and-click DSN feature. Once a new DSN cluster is provisioned and brought up-to-date with data, it will automatically begin to process queries.

Scaling storage capacity is a bit more complicated.

Multiple Clusters

Today, Algolia customers who cannot fit on one cluster need to provision a separate cluster and write logic at the application layer to balance between them. This is often needed by SaaS companies whose customers grow at different rates; one customer can be 10x or 100x the size of the others and needs to be moved somewhere it can fit.

Soon we’ll be releasing a feature that moves this complexity behind the API. Algolia will automatically balance data across a customer’s available clusters based on a few key pieces of information. It works similarly to sharding, but without the limitation of shards being pinned to a specific node: shards can be moved between clusters dynamically. This avoids a very serious problem encountered by many search engines - if the original shard-key guess was wrong, the entire cluster has to be rebuilt down the road.
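We haven't published the balancing algorithm, so the following greedy sketch only illustrates the core idea: because shards aren't pinned, rebalancing is a reassignment plus a data copy, not a full rebuild. All names and the sizing model are hypothetical:

```python
def rebalance(shard_sizes, assignment, capacity):
    """Greedy illustration: while a cluster exceeds its capacity, move
    its smallest shard to the least-loaded cluster. Assumes total
    capacity is sufficient; a real planner would also weigh query load.
    """
    def load(cluster):
        return sum(shard_sizes[s] for s, c in assignment.items() if c == cluster)

    clusters = set(capacity)
    moves = []
    for cluster in sorted(clusters):
        while load(cluster) > capacity[cluster]:
            shard = min(
                (s for s, c in assignment.items() if c == cluster),
                key=lambda s: shard_sizes[s],
            )
            target = min(clusters - {cluster}, key=load)
            assignment[shard] = target
            moves.append((shard, cluster, target))
    return moves
```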


Our humans and our bots congregate on Slack. Last year we had some growing pains, but now we have a prefix-based naming convention that works pretty well. Our channels are named #team-engineering, #help-engineering, #notif-github, etc. The #team- channels are for members of a team, #help- channels are for getting help from a team, and #notif- channels collect automatic notifications.

Algolia Zoom Room

It would be hard to count the number of Zoom meetings we have on a given day. Our two main offices are in Paris and San Francisco, making 7am-10am PST the busiest time of day for video calls. We now have dedicated "Zoom Rooms" with iPads, high-resolution cameras and big TVs that make the experience really smooth. With new offices in New York and Atlanta, Zoom will become an even more important part of our collaboration stack which also includes Github, Trello and Asana.


When you're an API, performance and scalability are customer-facing features. The work that our engineers do directly affects the 15,000+ developers that rely on our API. Being developers ourselves, we’re very passionate about open source and staying active with our community.

Algolia values

We’re hiring! Come help us make building search a rewarding experience. Algolia teammates come from a diverse range of backgrounds and 15 different countries. Our values are Care, Humility, Trust, Candor and Grit. Employees are encouraged to travel to different offices - Paris, San Francisco, or now Atlanta - at least once a year, to build strong personal connections inside of the company.

See our open positions on StackShare.

Questions about our stack? We love to talk tech. Comment below or ask us on our Discourse forum.

Thanks to Julien Lemoine, Adam Surak, Rémy-Christophe Schermesser, Jason Harris and Raphael Terrier for their much-appreciated help on this post.
