How Algolia Reduces Latency For 21B Searches Per Month

Developer-friendly hosted search service. API clients for all major frameworks and languages. REST, JSON & detailed documentation.

By Josh Dzielak, Developer Advocate at Algolia.

Algolia Paris meeting room

Algolia helps developers build search. At the core of Algolia is a built-from-scratch search engine exposed via a JSON API. In February 2017, we processed 21 billion queries and 27 billion indexing operations for 8,000+ live integrations. Some more numbers:

  • Query volume: 1B/day peak, 750M/day average (13K/s during peak hours)
  • Indexing operations: 10B/day peak, 1B/day average (spikes can be over 1M/s)
  • Number of API servers: 800+
  • Total memory in production: 64TB
  • Total I/O per day: 3.9PB
  • Total SSD storage capacity: 566TB

We’ve written about our stack before and are big fans of StackShare and the community here. In this post we’ll look at how our stack is designed from the ground up to reduce latency, and at the tools we use to monitor latency in production.

I’m Josh and I’m a Developer Advocate at Algolia, formerly the VP Engineering at Keen IO. Being a developer advocate is pretty cool. I get to code, write and speak. I also get to converse daily with developers using Algolia.

Frequently, I get asked what Algolia’s API tech stack looks like. Many people are surprised when I tell them:

  1. The Algolia search engine is written in C++ and runs inside of nginx. All searches start and finish inside of our nginx module.

  2. API clients connect directly to the nginx host where the search happens. There are no load balancers or network hops.

  3. Algolia runs on hand-picked bare metal. We use high-frequency CPUs like the 3.9GHz Intel Xeon E5-1650 v4 and load machines with 256GB of RAM.

  4. Algolia uses a hybrid-tenancy model. Some clusters are shared between customers and some are dedicated, so we can use hardware efficiently while providing full isolation to customers who need it.

  5. Algolia doesn’t use AWS or any cloud-based hosting for the API. We have our own servers spanning 47 datacenters in 15 global regions.

Algolia architecture diagram

Why this infrastructure?

The primary design goal for our stack is to aggressively reduce latency. For the kinds of searches that Algolia powers, serving demanding consumers who are used to Google, Amazon and Facebook, latency is a UX killer. Search-as-you-type experiences, which have become the norm since Google announced Instant Search in 2010, have demanding requirements. Anything more than 100ms end-to-end can be perceived as sluggish, glitchy and distracting. But at 50ms or less the experience feels magical. We prefer magic.

Monitoring

Our monitoring stack helps us keep an eye on latency across all of our clusters. We use Wavefront to collect metrics from every machine. We like Wavefront because it’s simple to integrate (we have it plugged in to StatsD and collectd), provides good dashboards, and has integrated alerting.

We use PagerDuty to fire alerts for abnormalities like CPU depletion, resource exhaustion and long-running indexing jobs. For non-urgent alerts, like single process crashes, we dump and collect the core for further investigation. If the same non-urgent alert repeats more than a set number of times, we do trigger a PagerDuty alert. We keep only the last 5 core dumps to avoid filling up the disk.

When a query takes more than 1 second we send an alert into Slack. From there, someone on our Core Engineering Squad will investigate. On a typical day, we might see as few as 1 or even 0 of these, so Slack has been a good fit.
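To illustrate, a slow-query alert like this can be wired to a Slack incoming webhook with very little code. This is a sketch, not Algolia's actual tooling; the channel name and webhook URL are hypothetical.

```python
# Sketch of a slow-query alert posted to a Slack incoming webhook.
# The 1-second threshold matches the article; the channel name and
# webhook URL are made up for illustration.
import json
from urllib import request

SLOW_QUERY_THRESHOLD_MS = 1000

def build_alert(index_name, query, took_ms):
    """Build a Slack message for a query that exceeded the threshold."""
    return {
        "channel": "#help-engineering",  # hypothetical channel
        "text": (
            f"Slow query on index `{index_name}` took {took_ms}ms "
            f"(threshold {SLOW_QUERY_THRESHOLD_MS}ms): {query!r}"
        ),
    }

def maybe_alert(index_name, query, took_ms, webhook_url):
    """Post to Slack only when the query crossed the threshold."""
    if took_ms <= SLOW_QUERY_THRESHOLD_MS:
        return None
    payload = build_alert(index_name, query, took_ms)
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```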

Probes

We have probes in 45 locations around the world measuring the latency and availability of our production clusters. We host the probes with 12 different providers, not necessarily the same ones that host our API servers. The results from these probes are publicly visible on our status page. We use a custom internal API to aggregate the large amount of data the probes fetch from each cluster and turn it into a single value per region.

Algolia probes

Downed Machines

Downed machines are detected within 30 seconds by a custom Ruby application. Once a machine is detected to be down, we push a DNS change to take it out of the cluster. The upper bound of propagation for that change is 2 minutes (DNS TTL). During this time, API clients implement their internal retry strategy to connect to healthy machines in the cluster, so there is no customer impact.
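The client-side retry behavior described above can be sketched as follows. The hostnames and the injected `send` transport are illustrative stand-ins, not the actual Algolia API client implementation:

```python
# Sketch of an API client's retry strategy: try each host in the
# cluster until one answers. Hostnames are hypothetical.
HOSTS = [
    "cluster-1.example.net",
    "cluster-2.example.net",
    "cluster-3.example.net",
]

def request_with_retry(send, hosts=HOSTS):
    """Call `send(host)` on each host in turn and return the first
    successful response. `send` raises ConnectionError when a host
    is down (e.g. before the DNS change has propagated)."""
    last_error = None
    for host in hosts:
        try:
            return send(host)
        except ConnectionError as exc:
            last_error = exc  # host unhealthy; fall through to the next one
    # Every machine in the cluster failed, which the architecture is
    # designed to make very unlikely.
    raise last_error
```

Because the retry happens inside the client, a downed machine costs at most one failed connection attempt before the request lands on a healthy host.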

Debugging Slow Queries

When a query takes abnormally long - more than 1 second - we dump everything about it to a file. We keep everything we need to rerun it, including the application ID, index name and all query parameters. High-level profiling information is also stored - with it, we can figure out where time is spent in the heaviest 10% of query processing. The getrusage syscall reports the resource utilization of the calling process and its children.

For the kernel, we record the number of major page faults (ru_majflt), the number of block input operations (ru_inblock), the number of context switches, and the elapsed wall-clock time (measured with gettimeofday so that, since we’re using memory-mapped files, we don’t skip counting time spent blocked on I/O such as a major page fault), plus a variety of other statistics that help us determine the root cause.
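The engine collects these counters in C++ via the raw syscall; as a rough illustration, the same getrusage fields are reachable from Python's `resource` module (a thin wrapper over the same syscall, Unix-only):

```python
# Illustration of the getrusage-based accounting described above,
# combining a wall-clock measurement with before/after rusage deltas.
import resource
import time

def run_profiled(fn, *args):
    """Run `fn` and report kernel-side statistics: major page faults,
    block inputs, context switches and wall-clock time."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    start = time.monotonic()  # wall clock, so time blocked on I/O is counted
    result = fn(*args)
    elapsed = time.monotonic() - start
    after = resource.getrusage(resource.RUSAGE_SELF)
    stats = {
        "wall_clock_s": elapsed,
        "major_page_faults": after.ru_majflt - before.ru_majflt,
        "block_inputs": after.ru_inblock - before.ru_inblock,
        "voluntary_ctx_switches": after.ru_nvcsw - before.ru_nvcsw,
        "involuntary_ctx_switches": after.ru_nivcsw - before.ru_nivcsw,
    }
    return result, stats
```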

With data in hand, the investigation proceeds in this order:

  1. The hardware
  2. The software
  3. Operating system and production environment

Hardware

The easiest problems to detect are hardware issues. We see burned-out SSDs, broken memory modules and overheated CPUs. We automate the reporting of the most common failures, like SSD wear, by alerting on S.M.A.R.T. data. For infrequent errors, we might need to run a suite of specific tools to narrow down the root cause, like mbw for uncovering memory bandwidth issues. And of course, there is always syslog, which logs most hardware failures.

Individual machine failures will not have a customer impact because each cluster has 3 machines. Where it’s possible in a given geographical region, each machine is located in a different datacenter and attached to a different network provider. This provides further insulation from network or datacenter loss.

Software

We have some close-to-zero-cost profiling information obtained from the getrusage syscall. Sometimes that’s enough to diagnose an issue with the engine code. If not, we turn to profiling. We can’t run a profiler in production for performance reasons, but we can do so after the fact.

We attach a profiler to an external binary that contains exactly the same code as the module running inside of nginx. Using google-perftools, a very accurate stack-sampling profiler, we replay the query to simulate the exact conditions of the production machine.

OS / Environment

If we can rule out hardware and software failure, the problem might have been with the operating environment at that point in time. That means analyzing system-wide data in the hope of discovering an anomaly.

We once discovered that defragmentation of huge pages in the kernel could block our process for several hundred milliseconds. This defragmentation isn’t necessary for us because, like nginx, we keep large memory pools allocated. Now we make sure it doesn’t happen, to the benefit of more consistent latency for all of our customers.
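On a modern Linux box, transparent huge page defragmentation can be inspected and disabled through sysfs. The exact path and accepted values vary by kernel version, so treat this as a sketch rather than Algolia's exact configuration:

```shell
# Inspect the current transparent huge page defrag policy
cat /sys/kernel/mm/transparent_hugepage/defrag

# Disable synchronous defragmentation so allocations never stall on it
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
```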

Redundancy

Every Algolia application runs on a cluster of 3 machines for redundancy and increased throughput. Each indexing operation is replicated across the machines using a durable queue.

Clusters can be mirrored to other global regions across Algolia’s Distributed Search Network (DSN). Global coverage is critical for delivering low latency to users coming from different continents. You can think of DSN like a CDN without caching - every query is running against a live, up-to-date copy of the index.

Early Detection

When we release a new version of the code that powers the API, we do it in an incremental, cluster-aware way so we can rollback immediately if something goes wrong.

Automated by a set of custom deployment scripts, the order of the rolling deploy looks like this:

  • Testing machines
  • Staging machines
  • ⅓ of production machines
  • Another ⅓ of production machines
  • The final ⅓ of production machines

First, we test the new code with unit tests and functional tests on a host with an exact production configuration. During the API deployment process we use a custom set of scripts to run the tests, but in other areas of our stack we’re using Travis CI.

One thing we guard against is a network issue that produces a split-brain partition during a rolling deployment. Our deployment strategy considers every new version as unstable until it has consensus from every server, and it will continue to retry the deploy until the network partition heals.

Before deployment begins, another process has encrypted our binaries and uploaded them to an S3 bucket. The S3 bucket sits behind CloudFlare to make downloading the binaries fast from anywhere.

We use a custom shell script to do deployments. The script launches the new binaries and then checks to make sure that the new process is running. If it’s not, the script assumes that something has gone wrong and automatically rolls back to the previous version. Even if the previous version also can’t come up, we still won’t have a customer impact while we troubleshoot because the other machines in the cluster can still service requests.
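The launch-check-rollback logic of that script can be sketched as follows. The real tooling is a shell script; the `start` and `is_running` hooks here are hypothetical stand-ins for whatever process management it actually uses:

```python
# Sketch of the deploy script's rollback logic. `start` launches a
# given binary version and `is_running` checks the process health;
# both are hypothetical stand-ins injected by the caller.
def deploy(new_version, previous_version, start, is_running):
    """Launch the new binaries, verify the process came up, and roll
    back to the previous version automatically if it did not."""
    start(new_version)
    if is_running(new_version):
        return new_version  # deploy succeeded
    # Something went wrong: roll back to the known-good version.
    start(previous_version)
    if is_running(previous_version):
        return previous_version
    # Even now there is no customer impact: the other machines in the
    # cluster keep serving requests while we troubleshoot.
    return None
```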

Scaling

For a search engine, there are two basic dimensions of scaling:

  • Search capacity - how many searches can be performed?
  • Storage capacity - how many records can the index hold?

To increase your search capacity with Algolia, you can replicate your data to additional clusters using the point-and-click DSN feature. Once a new DSN cluster is provisioned and brought up-to-date with data, it will automatically begin to process queries.

Scaling storage capacity is a bit more complicated.

Multiple Clusters

Today, Algolia customers who cannot fit on one cluster need to provision a separate cluster and create logic at the application layer to balance between them. This is often needed by SaaS companies whose customers grow at different rates; sometimes one customer can be 10x or 100x the size of the others, so that customer needs to be moved somewhere it can fit.

Soon we’ll be releasing a feature that moves this complexity behind the API. Algolia will automatically balance data across a customer’s available clusters based on a few key pieces of information. The way it works is similar to sharding but without the limitation of shards being pinned to a specific node. Shards can be moved between clusters dynamically. This avoids a very serious problem encountered by many search engines - if the original shard key guess was wrong, the entire cluster will have to be rebuilt down the road.

Collaboration

Our humans and our bots congregate on Slack. Last year we had some growing pains, but now we have a prefix-based naming convention that works pretty well. Our channels are named #team-engineering, #help-engineering, #notif-github, etc. The #team- channels are for members of a team, #help- channels are for getting help from a team, and #notif- channels are for collecting automatic notifications.

Algolia Zoom Room

It would be hard to count the number of Zoom meetings we have on a given day. Our two main offices are in Paris and San Francisco, making 7am-10am PST the busiest time of day for video calls. We now have dedicated "Zoom Rooms" with iPads, high-resolution cameras and big TVs that make the experience really smooth. With new offices in New York and Atlanta, Zoom will become an even more important part of our collaboration stack which also includes Github, Trello and Asana.


When you're an API, performance and scalability are customer-facing features. The work that our engineers do directly affects the 15,000+ developers that rely on our API. Being developers ourselves, we’re very passionate about open source and staying active with our community.

Algolia values

We’re hiring! Come help us make building search a rewarding experience. Algolia teammates come from a diverse range of backgrounds and 15 different countries. Our values are Care, Humility, Trust, Candor and Grit. Employees are encouraged to travel to different offices - Paris, San Francisco, or now Atlanta - at least once a year, to build strong personal connections inside of the company.

See our open positions on StackShare.

Questions about our stack? We love to talk tech. Comment below or ask us on our Discourse forum.

Thanks to Julien Lemoine, Adam Surak, Rémy-Christophe Schermesser, Jason Harris and Raphael Terrier for their much-appreciated help on this post.
