
Elasticsearch

Open Source, Distributed, RESTful Search Engine

What is Elasticsearch?

Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).
Elasticsearch is a tool in the Search as a Service category of a tech stack.
Elasticsearch is an open source tool; its source repository is available on GitHub.
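
To make "RESTful search in near real time" concrete, here is a minimal sketch of the request bodies involved, built in Python without issuing the HTTP calls. The index name "articles" and the field names are illustrative assumptions, not from this page.

```python
import json

# Hypothetical document to index. Against a real cluster this body
# would be sent as: PUT /articles/_doc/1
doc = {"title": "Near real-time search", "tags": ["search", "analytics"]}

# A basic full-text query, sent as: POST /articles/_search
query = {"query": {"match": {"title": "search"}}}

print(json.dumps(doc))
print(json.dumps(query))
```

In practice these bodies would be sent with any HTTP client or one of the official language clients listed under Features below.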

Who uses Elasticsearch?

Companies
4099 companies reportedly use Elasticsearch in their tech stacks, including Uber, Shopify, and Udemy.

Developers
29341 developers on StackShare have stated that they use Elasticsearch.

Elasticsearch Integrations

Kibana, Logstash, Datadog, Contentful, and Couchbase are some of the popular tools that integrate with Elasticsearch. Here's a list of all 97 tools that integrate with Elasticsearch.
Pros of Elasticsearch
328
Powerful api
315
Great search engine
231
Open source
214
Restful
200
Near real-time search
98
Free
85
Search everything
54
Easy to get started
45
Analytics
26
Distributed
6
Fast search
5
More than a search engine
4
Great docs
4
Awesome, great tool
3
Highly Available
3
Easy to scale
2
Potato
2
Document Store
2
Great customer support
2
Intuitive API
2
Nosql DB
2
Great piece of software
2
Reliable
2
Fast
2
Easy setup
1
Open
1
Easy to get hot data
1
Github
1
Elasticsearch
1
Actively developing
1
Responsive maintainers on GitHub
1
Ecosystem
1
Not stable
1
Scalability
0
Community
Decisions about Elasticsearch

Here are some stack decisions, common use cases and reviews by companies and developers who chose Elasticsearch in their tech stack.

Simon Bettison
Managing Director at Bettison.org Limited · 8 upvotes · 831.6K views

In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up, in order to address some growing concerns about its long-term viability as a platform.

A full application rewrite is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and to refactor the database in order that it better matched the changes in core functionality.

PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend; it was available as a managed offering on AWS RDS, which reduced the overhead of having to configure it ourselves. In order to reduce read load on the primary database, we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.
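
The write-through-then-reindex pattern described above (primary write to PostgreSQL, asynchronous index sync via Sidekiq workers) can be sketched in simplified form. This is a Python stand-in for illustration only; the dicts stand in for PostgreSQL and the Elasticsearch index, and the queue stands in for Sidekiq/Redis.

```python
from queue import Queue

database: dict = {}       # stands in for the PostgreSQL primary
search_index: dict = {}   # stands in for the Elasticsearch index
index_queue: Queue = Queue()  # stands in for the Sidekiq/Redis job queue

def save_record(record_id: int, body: dict) -> None:
    """Synchronous primary write, then enqueue an async index sync."""
    database[record_id] = body
    index_queue.put(record_id)

def worker_drain() -> None:
    """Background worker: reindex each queued id from the source of truth."""
    while not index_queue.empty():
        rid = index_queue.get()
        search_index[rid] = database[rid]

save_record(1, {"title": "hello"})
worker_drain()
print(search_index[1])  # → {'title': 'hello'}
```

Reindexing from the database (rather than from the event payload) keeps the search index consistent with the primary even if jobs are retried out of order.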

Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server; nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.

We tried to strike a balance between having control over management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS and Amazon Route 53, all hosted securely inside Amazon VPC, of course!).

Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.

Needs advice on Elasticsearch, Fauna, and MongoDB

I would like to assess search functionality along with some analytical use cases like aggregating, faceting, etc. I would like to know which is the best database to go with among Elasticsearch, MongoDB and FaunaDB.
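
For the aggregating/faceting use case mentioned above, Elasticsearch expresses facets as aggregations alongside a query. A hedged sketch of such a request body (field names "description" and "category.keyword" are assumptions for illustration):

```python
import json

# Full-text match plus a terms aggregation that counts hits per
# category value: the classic faceted-search request shape.
request = {
    "query": {"match": {"description": "wireless headphones"}},
    "aggs": {
        "by_category": {
            "terms": {"field": "category.keyword", "size": 10}
        }
    },
}
print(json.dumps(request, indent=2))
```

The response would carry both the matching documents and, under "aggregations", the per-category counts used to render facet sidebars.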

Needs advice on Elasticsearch and PostgreSQL

Hi, I need advice on which Database tool to use in the following scenario:

I work with Cesium, and I need to save and load CZML snapshot and update objects for a recording program that saves files containing several entities (along with the time of the snapshot or update). I need to be able to easily load the files according to the corresponding timeline point (for example, if the update was recorded at 13:15, I should be able to easily load the update file when I click on the 13:15 point on the timeline). I should also be able to make geo-queries relatively easily.

I am currently thinking about Elasticsearch or PostgreSQL, but I am open to suggestions. I tried looking into time-series databases like TimescaleDB, but found them more powerful than my needs require, since the update time is a simple variable.

Thanks for your advice in advance!
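
One way to model this in Elasticsearch would be to store each snapshot/update as a document carrying its recording timestamp and a geo_point, so a timeline click becomes an exact-time filter and geo-queries use geo_distance. A sketch of such a query; the field names "recorded_at" and "location" and the coordinates are assumptions, not from the question:

```python
import json
from datetime import datetime, timezone

# Timeline click at 13:15 → exact-time filter; plus a geo filter.
snapshot_time = datetime(2023, 5, 1, 13, 15, tzinfo=timezone.utc)

query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"recorded_at": snapshot_time.isoformat()}},
                {"geo_distance": {
                    "distance": "10km",
                    "location": {"lat": 32.0, "lon": 34.8},
                }},
            ]
        }
    }
}
print(json.dumps(query))
```

PostgreSQL with PostGIS could express the same lookup as a WHERE clause on a timestamp column plus ST_DWithin, so the choice largely comes down to the rest of the stack.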

Nilesh Akhade
Technical Architect at Self Employed · 5 upvotes · 554.7K views

We have a Kafka topic with events of type A and type B. We need to perform an inner join on both types of events using some common field (primary key). The joined events are to be inserted into Elasticsearch.

In usual cases, type A and type B events (with the same key) are observed within up to 15 minutes of each other. But in some cases they may be far apart, say 6 hours. Sometimes an event of one of the types never arrives.

In all cases, we should be able to find joined events instantly after they are joined, and not-joined events within 15 minutes.
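
The join semantics described here can be illustrated with a minimal in-memory sketch: buffer whichever event type arrives first, emit the joined record as soon as its partner arrives, and expire lone events after a deadline. This is a toy model of the semantics, not production code; a real deployment would use something like a Kafka Streams windowed join with a persistent state store.

```python
import time

pending: dict = {}   # key -> (event_type, event, arrival_time)
joined: list = []    # records ready to be written to Elasticsearch

def on_event(key, event_type, event, now=None):
    """Join with a buffered partner of the other type, else buffer."""
    now = time.time() if now is None else now
    if key in pending and pending[key][0] != event_type:
        other_type, other_event, _ = pending.pop(key)
        joined.append({"key": key, event_type: event, other_type: other_event})
    else:
        pending[key] = (event_type, event, now)

def expire(max_age_s, now=None):
    """Drop buffered events whose partner never arrived in time."""
    now = time.time() if now is None else now
    for k in [k for k, v in pending.items() if now - v[2] > max_age_s]:
        del pending[k]

on_event("k1", "A", {"a": 1}, now=0)
on_event("k1", "B", {"b": 2}, now=5)
print(joined)  # → [{'key': 'k1', 'B': {'b': 2}, 'A': {'a': 1}}]
```

Expired (not-joined) events can be flushed to a separate index, which is how the "not-joined events visible within 15 minutes" requirement would be met.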

Sunil Chaudhari
Needs advice on Metricbeat and Prometheus

Hi, we have a situation where we are using Prometheus to get system metrics from the PCF (Pivotal Cloud Foundry) platform. We send these as time-series data to Cortex via a Prometheus server, and have built a dashboard using Grafana. There is another pipeline where we need to read metrics (CPU, memory, and disk) from a Linux server using Metricbeat. These will be sent to Elasticsearch, and Grafana will pull and show the data in a dashboard.

Is it OK to use Metricbeat for the Linux server, or can we use Prometheus?

What is the difference in system metrics sent by Metricbeat and Prometheus node exporters?

Regards, Sunil.
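
On the difference in the metrics themselves: a Prometheus node exporter exposes flat text samples that the server scrapes (pull model), while Metricbeat pushes structured JSON documents to Elasticsearch. A sketch of the two shapes; both payloads below are illustrative, not captured from real systems:

```python
import json

# Prometheus exposition format: one flat sample per line, scraped by
# the server. node_cpu_seconds_total is a real node_exporter metric.
prometheus_sample = 'node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6'

# Metricbeat system-module style document, pushed to Elasticsearch.
metricbeat_doc = {
    "@timestamp": "2021-06-01T10:00:00Z",
    "metricset": {"name": "cpu"},
    "system": {"cpu": {"idle": {"pct": 0.87}}},
}

metric_name, _labels = prometheus_sample.split("{", 1)
print(metric_name)                 # the scraped metric's name
print(json.dumps(metricbeat_doc))
```

So beyond pull vs push, the naming and units differ (counters in seconds vs normalised percentages), which matters if the two dashboards are meant to show comparable numbers.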


Blog Posts

May 21 2019 at 12:20AM · Elastic · Elasticsearch, Kibana, Logstash +4

Elasticsearch's Features

  • Distributed and Highly Available Search Engine
  • Multi Tenant with Multi Types
  • Various set of APIs including RESTful
  • Clients available in many languages including Java, Python, .NET, C#, Groovy, and more
  • Document oriented
  • Reliable, Asynchronous Write Behind for long term persistency
  • (Near) Real Time Search
  • Built on top of Apache Lucene
  • Per operation consistency
  • Inverted indices with finite state transducers for full-text querying
  • BKD trees for storing numeric and geo data
  • Column store for analytics
  • Compatible with Hadoop using the ES-Hadoop connector
  • Open Source under Apache 2 and Elastic License
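
The inverted-index feature above is the core idea behind full-text querying: map each term to the ids of the documents containing it, so a query term resolves to a posting list without scanning documents. A toy Python illustration (Lucene additionally compresses the term dictionary with finite state transducers, which this sketch omits):

```python
from collections import defaultdict

docs = {1: "fast distributed search", 2: "distributed analytics engine"}

# Build the inverted index: term -> set of document ids (posting list).
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

print(sorted(index["distributed"]))  # → [1, 2]
```

A multi-term query then reduces to set operations over posting lists, which is why lookups stay fast as the document count grows.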

Elasticsearch Alternatives & Comparisons

What are some alternatives to Elasticsearch?
Datadog
Datadog is the leading service for cloud-scale monitoring. It is used by IT, operations, and development teams who build and operate applications that run on dynamic or hybrid cloud infrastructure. Start monitoring in minutes with Datadog!
Solr
Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.
Lucene
Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Algolia
Our mission is to make you a search expert. Push data to our API to make it searchable in real time. Build your dream front end with one of our web or mobile UI libraries. Tune relevance and get analytics right from your dashboard.
See all alternatives

Elasticsearch's Followers
26883 developers follow Elasticsearch to keep up with related blogs and decisions.