Elasticsearch vs Swagger UI

Elasticsearch vs Swagger UI: What are the differences?

Elasticsearch: Open Source, Distributed, RESTful Search Engine. Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats, and Logstash make up the Elastic Stack (sometimes called the ELK Stack); Swagger UI: dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generate beautiful documentation. Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generates beautiful documentation and a sandbox from a Swagger-compliant API.

Elasticsearch belongs to the "Search as a Service" category of the tech stack, while Swagger UI is primarily classified under "Documentation as a Service & Tools".

"Powerful api" is the primary reason why developers consider Elasticsearch over the competitors, whereas "Open Source" was stated as the key factor in picking Swagger UI.

Elasticsearch is an open source tool with 42.4K GitHub stars and 14.2K GitHub forks. Here's a link to Elasticsearch's open source repository on GitHub.

According to the StackShare community, Elasticsearch has broader approval, being mentioned in 2003 company stacks and 977 developer stacks, compared to Swagger UI, which is listed in 205 company stacks and 107 developer stacks.

What is Elasticsearch?

Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).
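To make the "store data and search it in near real time" model concrete, here is a minimal sketch using the official Elasticsearch Node.js client (@elastic/elasticsearch, v8-style API). The local node URL, the products index, and the document fields are all illustrative assumptions, not anything prescribed by Elasticsearch itself.

```typescript
import { Client } from '@elastic/elasticsearch';

// Assumes a locally running cluster; adjust the node URL for your setup.
const client = new Client({ node: 'http://localhost:9200' });

async function indexAndSearch() {
  // Index a document (the "products" index and its fields are illustrative).
  await client.index({
    index: 'products',
    document: { name: 'Wireless keyboard', price: 49.99 },
  });

  // Force a refresh so the document is visible to search immediately.
  await client.indices.refresh({ index: 'products' });

  // Full-text match query against the "name" field.
  const result = await client.search({
    index: 'products',
    query: { match: { name: 'keyboard' } },
  });

  console.log(result.hits.hits);
}

indexAndSearch().catch(console.error);
```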

What is Swagger UI?

Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generates beautiful documentation and a sandbox from a Swagger-compliant API.
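As a rough illustration of how Swagger UI is typically wired into a page, the sketch below assumes the swagger-ui npm package; the /openapi.json spec URL and the #api-docs container id are placeholders for your own setup.

```typescript
import SwaggerUI from 'swagger-ui';
import 'swagger-ui/dist/swagger-ui.css';

// Render interactive docs for a Swagger/OpenAPI spec into a page element.
// "/openapi.json" and "#api-docs" are placeholders for your own spec URL and container.
SwaggerUI({
  dom_id: '#api-docs',
  url: '/openapi.json',
  deepLinking: true, // allow linking directly to individual operations
});
```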

    What are some alternatives to Elasticsearch and Swagger UI?
    Solr
    Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.
    Lucene
    Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.
    MongoDB
    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
    Algolia
    Our mission is to make you a search expert. Push data to our API to make it searchable in real time. Build your dream front end with one of our web or mobile UI libraries. Tune relevance and get analytics right from your dashboard.
    Splunk
    Splunk Inc. provides the leading platform for Operational Intelligence. Customers use Splunk to search, monitor, analyze and visualize machine data.
    Decisions about Elasticsearch and Swagger UI
    Tim Specht, Co-Founder and CTO at Dubsmash
    Tools mentioned: Memcached, Algolia, Elasticsearch · #SearchAsAService

    Although we were using Elasticsearch in the beginning to power our in-app search, we moved this part of our processing over to Algolia a couple of months ago; this has proven to be a fantastic choice, letting us build search-related features with more confidence and speed.

    Elasticsearch is now only used for search in our internal tooling; hosting and running it reliably took up too much of our time in the past, and fine-tuning the results to deliver a great user experience was never easy for us either. With Algolia we can flexibly change ranking methods on the fly and instead focus our time on fine-tuning the experience within our app.

    Memcached is used in front of most of the API endpoints to cache responses, speeding up response times and reducing server costs on our side.
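    The caching described here is essentially the cache-aside pattern. The sketch below is a generic illustration using the memjs Memcached client, not Dubsmash's actual implementation; the key naming, TTL, and function names are arbitrary assumptions.

```typescript
import memjs from 'memjs';

// Connects to localhost:11211 (or MEMCACHIER_SERVERS) by default.
const cache = memjs.Client.create();

// Cache-aside: return the cached API response if present, otherwise
// compute it, store it with a short TTL, and return it.
async function cachedResponse(key: string, compute: () => Promise<string>): Promise<string> {
  const { value } = await cache.get(key);
  if (value) return value.toString();

  const fresh = await compute();
  await cache.set(key, fresh, { expires: 60 }); // TTL in seconds (illustrative)
  return fresh;
}
```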

    Noah Zoschke, Engineering Manager at Segment
    Tools mentioned: Swagger UI, ReadMe.io, Markdown, Postman · #QA #API #Documentation

    We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.

    Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like username, password and workspace_name so a user can fill their own values in before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.

    Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.

    This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.

    Postman’s powers don’t end there. You can automate Postman with “test scripts” and have it periodically run a collection's scripts as “monitors”. We now have #QA around all the APIs in our public docs to make sure they are always correct.
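    For context, Postman test scripts are small JavaScript snippets that run after each request inside Postman's scripting sandbox. The checks below are a generic example (not Segment's actual monitors), written with a declaration for the sandbox's global `pm` object so the snippet also type-checks as TypeScript.

```typescript
// Postman exposes a global `pm` object inside its scripting sandbox.
declare const pm: any;

// Runs after the request completes; failures show up in monitor runs.
pm.test('status code is 200', () => {
  pm.response.to.have.status(200);
});

pm.test('response body includes a workspace name', () => {
  const body = pm.response.json(); // field name is illustrative
  pm.expect(body.name).to.be.a('string');
});
```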

    Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.

    Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.

    Julien DeFrance, Full Stack Engineering Manager at ValiMail (decision at SmartZip)
    Tools mentioned: Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

    Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

    I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

    For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend using Rails API, so we would get independently scalable micro-services from then on.

    Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

    Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

    Future improvements / technology decisions included:

    • Caching: Amazon ElastiCache / Memcached
    • CDN: Amazon CloudFront
    • Systems Integration: Segment / Zapier
    • Data-warehousing: Amazon Redshift
    • BI: Amazon Quicksight / Superset
    • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
    • Monitoring: New Relic

    As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

    One of these innovations was to get into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic. Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
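    For readers unfamiliar with AWS Lambda on Node.js, a handler is just an exported async function. The sketch below is a generic, illustrative example assuming an API Gateway proxy-style trigger; the event fields and response shape are not SmartZip's actual functions.

```typescript
// Minimal Node.js Lambda handler (API Gateway proxy-style event, illustrative).
export const handler = async (event: { queryStringParameters?: Record<string, string> }) => {
  const name = event.queryStringParameters?.name ?? 'world';

  // Lambda bills per invocation and duration, so there are no idle servers to pay for.
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```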

    Tim Nolet, Founder, Engineer & Dishwasher at Checkly (ChecklyHQ)
    Tools mentioned: JavaScript, Node.js, hapi, Vue.js, Swagger UI, Slate

    Two weeks ago we released the public API for Checkly. We already had an API that was serving our frontend Vue.js app. We decided to create a new set of API endpoints rather than reuse the existing one. The blog post linked below details what parts we needed to refactor, what parts we added, and how we handled generating API documentation (a rough route sketch follows the list below). More specifically, the post dives into:

    • Refactoring the existing Hapi.js based API
    • API key based authentication
    • Refactoring models with Objection.js
    • Validating plan limits
    • Generating Swagger & Slate based documentation
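    As a rough sketch of the hapi + Swagger setup described above: this is illustrative only, not Checkly's actual code, and assumes the @hapi/hapi and hapi-swagger packages along with their @hapi/inert and @hapi/vision dependencies. The route path and API title are made up.

```typescript
import Hapi from '@hapi/hapi';
import Inert from '@hapi/inert';
import Vision from '@hapi/vision';
import HapiSwagger from 'hapi-swagger';

async function start() {
  const server = Hapi.server({ port: 3000 });

  // hapi-swagger generates a Swagger/OpenAPI document (and a Swagger UI page)
  // from routes tagged with 'api'.
  await server.register([
    Inert,
    Vision,
    { plugin: HapiSwagger, options: { info: { title: 'Checks API (example)', version: '1.0.0' } } },
  ]);

  server.route({
    method: 'GET',
    path: '/v1/checks', // illustrative endpoint, not the real Checkly API
    options: { tags: ['api'], description: 'List checks' },
    handler: () => ({ checks: [] }),
  });

  await server.start();
  console.log(`API docs available at ${server.info.uri}/documentation`);
}

start();
```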
    How developers use Elasticsearch and Swagger UI
    imgur uses Elasticsearch

    Elasticsearch is the engine that powers search on the site. From a high-level perspective, it's a Lucene wrapper that exposes Lucene's features via a RESTful API. It handles the distribution of data and simplifies scaling, among other things.

    Given that we are on AWS, we use an AWS cloud plugin for Elasticsearch that makes it easy to work in the cloud. It allows us to add nodes without much hassle. It will take care of figuring out if a new node has joined the cluster, and, if so, Elasticsearch will proceed to move data to that new node. It works the same way when a node goes down. It will remove that node based on the AWS cluster configuration.

    Instacart uses Elasticsearch

    The very first version of the search was just a Postgres database query. It wasn't terribly efficient, so at some point we moved over to Elasticsearch, and Andrew has done a lot of work with it since then. Elasticsearch is amazing, but out of the box it doesn't come configured with all the nice things that are there; you spend a lot of time figuring out how to put it all together to add stemming, auto-suggestions, and all kinds of other things, like spelling adjustments and tomato/tomatoes returning different results. So Andrew did a ton of work to make it really, really nice and built a very simple Ruby gem called SearchKick.

    AngeloR uses Elasticsearch

    We use Elasticsearch for:

    • Session Logs
    • Analytics
    • Leaderboards

    We originally self-managed the Elasticsearch clusters, but due to our small ops team size we opted to move things to managed AWS services where possible.

    The managed servers, however, do not allow us to manage our own backups, and a restore actually requires us to open a support ticket with them. We ended up setting up our own nightly backups, since we use per-day indexes for the logs/analytics.

    Brandon Adams uses Elasticsearch

    Elasticsearch has good tooling and supports a large API that makes it ideal for denormalizing data. It has a simple-to-use aggregations API that tends to cover most of what I need a BI tool to do, especially early on (when paired with Kibana). It's also handy when you just want to search some text.
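    To illustrate the kind of aggregation query being described, here is a generic sketch assuming the @elastic/elasticsearch v8 client; the events index and the status/duration_ms fields are hypothetical.

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function statusBreakdown() {
  // Count documents per status and average duration within each bucket.
  const result = await client.search({
    index: 'events', // hypothetical index
    size: 0,         // only aggregation results, no individual hits
    aggs: {
      by_status: {
        terms: { field: 'status.keyword' },
        aggs: { avg_duration_ms: { avg: { field: 'duration_ms' } } },
      },
    },
  });

  console.log(result.aggregations);
}

statusBreakdown().catch(console.error);
```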

    Ana Phi Sancho uses Elasticsearch

    Self-taught: knowledge acquired on my own initiative. Elasticsearch is an open source search and analytics engine based on Lucene, offering near real-time search. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

    p009922 uses Swagger UI

    Documentation tool for online docs of our REST services.

    dotmos uses Swagger UI

    Document our REST API.

    Minyoung Kim uses Swagger UI

    Managing REST API documentation.
