
HBase

The Hadoop database, a distributed, scalable, big data store

What is HBase?

Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable, as described in "Bigtable: A Distributed Storage System for Structured Data" by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.
HBase is a tool in the Databases category of a tech stack.
HBase is an open source tool with 5.2K GitHub stars and 3.3K GitHub forks; its source code is hosted in an open source repository on GitHub.
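
For a sense of the programming model, here is a minimal sketch using the HBase 2.x Java client. It assumes a running cluster, an hbase-site.xml on the classpath, and an existing table named demo with a column family cf (both names are placeholders, not part of HBase itself).

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseHelloWorld {
    public static void main(String[] args) throws Exception {
        // Picks up the cluster/ZooKeeper settings from hbase-site.xml on the classpath.
        org.apache.hadoop.conf.Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("demo"))) {

            // Write one cell: row key "row1", column family "cf", qualifier "greeting".
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
            table.put(put);

            // Random-access read of the same row by key.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("greeting"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```

Data is addressed by row key, column family, and qualifier, which is the Bigtable-style model the description above refers to.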

Who uses HBase?

Companies
82 companies reportedly use HBase in their tech stacks, including Pinterest, Hepsiburada, and Hubspot.

Developers
294 developers on StackShare have stated that they use HBase.

HBase Integrations

Apache Flink, Apache Hive, Apache Zeppelin, StreamSets, and Azure HDInsight are some of the 13 tools that integrate with HBase.
Pros of HBase
Performance (9)
OLTP (5)
Fast Point Queries (1)
Decisions about HBase

Here are some stack decisions, common use cases and reviews by companies and developers who chose HBase in their tech stack.

Needs advice on HBase, Milvus, and RocksDB

I am researching different querying solutions to handle ~1 trillion records of data (in the realm of a petabyte). The data is mostly textual. I have identified a few options: Milvus, HBase, RocksDB, and Elasticsearch. I was wondering if there is a good way to compare the performance of these options (or if anyone has already done something like this). I want to be able to compare the speed of ingesting and querying textual data from these tools. Does anyone have information on this or know where I can find some? Thanks in advance!
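
One rough way to compare ingest and point-query speed is a small timing harness that pushes the same synthetic textual records through each store's native client. Below is an illustrative sketch for the HBase side only; the table name bench, the row counts, and the payload generator are placeholders, and meaningful numbers for a trillion-record workload would of course require a properly sized cluster rather than this single-client loop.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class HBaseIngestQueryBench {
    static final byte[] CF = Bytes.toBytes("cf");
    static final byte[] QUAL = Bytes.toBytes("text");

    public static void main(String[] args) throws Exception {
        int rows = 100_000;      // start small and scale up
        int batchSize = 1_000;   // batched puts amortize RPC overhead

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("bench"))) {

            // Ingest: batched puts of synthetic textual records.
            long t0 = System.nanoTime();
            List<Put> batch = new ArrayList<>(batchSize);
            for (int i = 0; i < rows; i++) {
                Put p = new Put(Bytes.toBytes(String.format("row-%012d", i)));
                p.addColumn(CF, QUAL, Bytes.toBytes("some textual payload " + i));
                batch.add(p);
                if (batch.size() == batchSize) {
                    table.put(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) table.put(batch);
            long ingestMs = (System.nanoTime() - t0) / 1_000_000;
            System.out.printf("ingest: %d rows in %d ms (%.0f rows/s)%n",
                    rows, ingestMs, rows * 1000.0 / ingestMs);

            // Query: random point gets by row key.
            Random rnd = new Random(42);
            int gets = 10_000;
            t0 = System.nanoTime();
            for (int i = 0; i < gets; i++) {
                String key = String.format("row-%012d", rnd.nextInt(rows));
                table.get(new Get(Bytes.toBytes(key)));
            }
            long queryMs = (System.nanoTime() - t0) / 1_000_000;
            System.out.printf("point gets: %d in %d ms (%.0f gets/s)%n",
                    gets, queryMs, gets * 1000.0 / queryMs);
        }
    }
}
```

The same pattern (timed bulk writes followed by timed random reads of the same keys) can be repeated against the Milvus, RocksDB, and Elasticsearch clients to get numbers that are at least measured the same way.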

Needs advice on Amazon S3 and HBase

Hi, I'm building a machine learning pipeline that stores image bytes and image vectors in the backend.

So, when users request image data by key (random access), we return the image bytes and run machine learning model operations on them.

I'm currently considering Amazon S3 (in the future, maybe adding a Redis caching layer) as the backend system to store the information (S3 buckets with sharded prefixes).

S3 latency is 100-200 ms per get/put, and it offers high throughput of 3,500 puts/sec and 5,500 gets/sec for a given bucket prefix. If I need to reduce latency in the future, I can add a Redis cache.

Also, S3 costs are much lower than HBase's (running on Amazon EC2 instances with a 3x replication factor).

I have not personally used HBase before, so can someone tell me if I'm making the right choice here? I'm not familiar with HBase latencies, and I have learned that the MOB feature in HBase has to be turned on if we store image bytes in one of the column families, since the average image is about 240 KB.
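
For reference, MOB is configured per column family. Here is a minimal sketch using the HBase 2.x admin API; the table name image_store, the family name img, and the 100 KB threshold are illustrative assumptions, not recommendations.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateMobTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {

            // Column family for image bytes with MOB enabled: cells larger than
            // the threshold (here 100 KB) are stored in separate MOB files,
            // which keeps large values out of the normal compaction path.
            ColumnFamilyDescriptor images = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("img"))
                    .setMobEnabled(true)
                    .setMobThreshold(100 * 1024L)
                    .build();

            admin.createTable(TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("image_store"))
                    .setColumnFamily(images)
                    .build());
        }
    }
}
```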


Blog Posts

Pinterest, Jun 24 2020 at 4:42PM: Amazon S3, Kafka, HBase, +4 more
MySQL, Kafka, Apache Spark, +6 more

HBase Alternatives & Comparisons

What are some alternatives to HBase?
Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
Google Cloud Bigtable
Google Cloud Bigtable offers you a fast, fully managed, massively scalable NoSQL database service that's ideal for web, mobile, and Internet of Things applications requiring terabytes to petabytes of data. Unlike comparable market offerings, Cloud Bigtable doesn't require you to sacrifice speed, scale, or cost efficiency when your applications grow. Cloud Bigtable has been battle-tested at Google for more than 10 years—it's the database driving major applications such as Google Analytics and Gmail.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Druid
Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.

HBase's Followers
495 developers follow HBase to keep up with related blogs and decisions.