Elasticsearch vs Scalyr: What are the differences?
Elasticsearch: Open source, distributed, RESTful search engine. Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats, and Logstash together form the Elastic Stack (sometimes called the ELK Stack). Scalyr: Cloud-based log aggregation, server monitoring, and real-time analysis tool. Scalyr bills itself as log search and management "so fast you actually use it": custom dashboards, graphs, alerts, and log parsers let you monitor what matters to you, and its customers include Business Insider, Opendoor, and Grab.
Elasticsearch belongs to the "Search as a Service" category of the tech stack, while Scalyr is primarily classified under "Log Management".
Some of the features offered by Elasticsearch are:
- Distributed and highly available search engine
- Multi-tenant, with support for multiple document types
- A broad set of APIs, including a RESTful HTTP API
On the other hand, Scalyr provides the following key features:
- Remote log monitoring
- Log aggregation
- Real-time reporting
"Powerful api" is the primary reason why developers consider Elasticsearch over the competitors, whereas "Speed of queries" was stated as the key factor in picking Scalyr.
Elasticsearch is an open source tool with 41.9K GitHub stars and 14K GitHub forks. Here's a link to Elasticsearch's open source repository on GitHub.
Instacart, Slack, and Stack Exchange are some of the popular companies that use Elasticsearch, whereas Scalyr is used by Codecademy, Property With Potential, and Zola. Elasticsearch has broader approval, being mentioned in 1976 company stacks & 936 developer stacks, compared to Scalyr, which is listed in 11 company stacks and 3 developer stacks.
Although we were using Elasticsearch in the beginning to power our in-app search, we moved this part of our processing over to Algolia a couple of months ago; this has proven to be a fantastic choice, letting us build search-related features with more confidence and speed.
Elasticsearch is only used for searching in internal tooling nowadays; hosting and running it reliably took up too much of our time in the past, and fine-tuning the results to reach a great user experience was never an easy task for us either. With Algolia we can flexibly change ranking methods on the fly and can instead focus our time on fine-tuning the experience within our app.
Memcached is used in front of most of the API endpoints to cache responses, in order to speed up response times and reduce server costs on our side.
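As an illustration (the controller and cache key here are hypothetical, not their actual code), this pattern in a Rails app boils down to wrapping the expensive work in Rails.cache.fetch, with Memcached configured as the cache store:

```ruby
# Hypothetical API endpoint; assumes Rails configured with
#   config.cache_store = :mem_cache_store, "memcached.internal:11211"
class ProductsController < ApplicationController
  def index
    # fetch returns the cached value if present; otherwise it runs the block,
    # stores the result in Memcached with a TTL, and returns it.
    payload = Rails.cache.fetch("api/products/page-#{params.fetch(:page, 1)}", expires_in: 5.minutes) do
      Product.limit(50).as_json
    end
    render json: payload
  end
end
```

The TTL keeps the cache self-healing: a stale entry simply expires rather than requiring explicit invalidation on every write.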
Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.
I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.
For the SaaS product, we kept on working with Rails, as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.
Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker, so each application would run within its own container, independently of the underlying host configuration.
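As a rough sketch of that combination (not their actual configuration; the base image and port are assumptions), Elastic Beanstalk's single-container Docker platform needs little more than a Dockerfile at the root of each service:

```dockerfile
# Illustrative Dockerfile for a Rails API service on Elastic Beanstalk's
# single-container Docker platform.
FROM ruby:2.6

WORKDIR /app

# Install gems first so this layer is cached between code-only changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# Beanstalk routes incoming traffic to the port the container exposes.
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]
```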
Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, then we migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, this is where you want a managed service your cloud provider handles for you.
Future improvements / technology decisions included:
- Caching: Amazon ElastiCache / Memcached
- CDN: Amazon CloudFront
- Systems Integration: Segment / Zapier
- Data warehousing: Amazon Redshift
- BI: Amazon Quicksight / Superset
- Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
- Monitoring: New Relic
As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating while delivering on business value.
One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it's a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
Elasticsearch is the engine that powers search on the site. From a high-level perspective, it's a Lucene wrapper that exposes Lucene's features via a RESTful API. It handles the distribution of data and simplifies scaling, among other things.
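For a feel of that RESTful surface, indexing and searching a document takes only a couple of calls; a minimal sketch using the official elasticsearch-ruby client (the index name and document are made up):

```ruby
require "elasticsearch"

client = Elasticsearch::Client.new(url: "http://localhost:9200")

# Index a document; Elasticsearch creates the "articles" index on the fly
# with a dynamically derived mapping.
client.index(index: "articles", id: 1, body: { title: "Scaling search", tags: ["elasticsearch"] })

# "Near real time" means newly indexed docs become searchable after a refresh
# (which happens every ~1s by default); force one here for the demo.
client.indices.refresh(index: "articles")

# Full-text query against the title field.
results = client.search(index: "articles", body: { query: { match: { title: "scaling" } } })
puts results["hits"]["hits"].first["_source"]
```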
Given that we are on AWS, we use an AWS cloud plugin for Elasticsearch that makes it easy to work in the cloud. It allows us to add nodes without much hassle. It will take care of figuring out if a new node has joined the cluster, and, if so, Elasticsearch will proceed to move data to that new node. It works the same way when a node goes down. It will remove that node based on the AWS cluster configuration.
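The behavior described matches the official discovery-ec2 plugin; a sketch of what its elasticsearch.yml configuration can look like (setting names follow recent Elasticsearch versions, and the cluster/tag names are made up):

```yaml
# elasticsearch.yml — sketch for EC2-based node discovery; assumes the
# discovery-ec2 plugin is installed (bin/elasticsearch-plugin install discovery-ec2)
cluster.name: my-es-cluster

# Ask the EC2 API for peer nodes instead of listing hosts statically.
discovery.seed_providers: ec2

# Only consider instances carrying this tag, so new nodes join the right cluster.
discovery.ec2.tag.es-cluster: my-es-cluster

# Expose the instance's availability zone as a node attribute,
# useful for zone-aware shard allocation.
cloud.node.auto_attributes: true
```

With this in place, a freshly booted, correctly tagged instance finds the cluster on its own, which is the "add nodes without much hassle" behavior described above.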
The very first version of the search was just a Postgres database query. It wasn't terribly efficient, so at some point we moved over to ElasticSearch, and since then Andrew has done a lot of work with it. ElasticSearch is amazing, but out of the box it doesn't come configured with all the nice things that are there; you spend a lot of time figuring out how to put it all together to add stemming, auto-suggestions, spelling adjustments, all kinds of different things, like tomato/tomatoes returning different results. So Andrew did a ton of work to make it really, really nice, and built a very simple Ruby gem called SearchKick.
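SearchKick hides most of that Elasticsearch plumbing behind a one-liner on the model; roughly (the model and option values here are illustrative):

```ruby
# Illustrative SearchKick usage on a hypothetical Product model.
class Product < ApplicationRecord
  searchkick  # indexes the model into Elasticsearch with sensible analyzers
end

# Stemming is on by default, so "tomatoes" also matches "tomato".
Product.search("tomatoes")

# Typo tolerance via the misspellings option...
Product.search("tomatoe", misspellings: { below: 5 })

# ...and prefix matching for autocomplete-style UIs.
Product.search("toma", match: :word_start)
```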
We use ElasticSearch for:
- Session Logs
We originally self-managed the ElasticSearch clusters, but due to our small ops team size we opted to move things to managed AWS services where possible.
The managed service, however, does not allow us to manage our own backups, and a restore actually requires us to open a support ticket with AWS. We ended up setting up our own nightly backups, since we use per-day indices for the logs/analytics.
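A sketch of what such a nightly job can boil down to with the elasticsearch-ruby client, assuming a snapshot repository named "nightly" (e.g. S3-backed) has already been registered on the cluster; the index naming scheme is an assumption:

```ruby
require "elasticsearch"

client = Elasticsearch::Client.new(url: ENV.fetch("ES_URL", "http://localhost:9200"))

# Per-day index naming, e.g. "logs-2019.06.01".
index = "logs-#{Time.now.utc.strftime('%Y.%m.%d')}"

# Snapshot just today's index into the pre-registered repository; the call
# returns immediately and the snapshot completes in the background.
client.snapshot.create(
  repository: "nightly",
  snapshot: "#{index}-snapshot",
  body: { indices: index, include_global_state: false },
  wait_for_completion: false
)
```

Per-day indices make this cheap: each night's snapshot only has to cover one small, freshly closed index rather than the whole cluster.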
Elasticsearch has good tooling and supports a large API that makes it ideal for denormalizing data. It has a simple-to-use aggregations API that tends to encompass most of what I need a BI tool to do, especially in the early going (when paired with Kibana). It's also handy when you just want to search some text.
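As an example of that aggregations API, a terms aggregation amounts to a group-by-and-count in a single query; a minimal sketch against a hypothetical "events" index (assuming "country" is mapped as a keyword field):

```ruby
require "elasticsearch"

client = Elasticsearch::Client.new(url: "http://localhost:9200")

# size: 0 skips the matching documents themselves; we only want the buckets.
response = client.search(
  index: "events",
  body: {
    size: 0,
    aggs: { by_country: { terms: { field: "country" } } }
  }
)

# Each bucket carries the distinct value and how many documents matched it.
response["aggregations"]["by_country"]["buckets"].each do |bucket|
  puts "#{bucket['key']}: #{bucket['doc_count']}"
end
```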