Citus vs RocksDB: What are the differences?
Developers describe Citus as "Worry-free Postgres for SaaS. Built to scale out": an extension to Postgres that distributes queries across any number of servers. Citus is available as open source, as on-prem software, and as a fully-managed service. RocksDB, on the other hand, is described as an "Embeddable persistent key-value store for fast storage, developed and maintained by Facebook Database Engineering Team". RocksDB can also be the foundation for a client-server database, but its current focus is on embedded workloads. It builds on LevelDB to scale to servers with many CPU cores, to use fast storage efficiently, to support IO-bound, in-memory, and write-once workloads, and to stay flexible enough to allow for innovation.
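To make "distributes queries across any number of servers" concrete, here is a minimal sketch of setting up a distributed table in Citus from Python. It assumes a running Citus coordinator, the psycopg2 driver, and an illustrative `events` table; `create_distributed_table()` is the Citus function that shards a table across worker nodes.

```python
# A minimal sketch, assuming a Citus coordinator at "coordinator" and psycopg2.
# Connection details and the events table are illustrative only.
import psycopg2

conn = psycopg2.connect("host=coordinator dbname=app user=app")
conn.autocommit = True
cur = conn.cursor()

# An ordinary Postgres table...
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        tenant_id bigint NOT NULL,
        event_id  bigint NOT NULL,
        payload   jsonb,
        PRIMARY KEY (tenant_id, event_id)
    );
""")

# ...becomes a distributed table, sharded across worker nodes by tenant_id.
cur.execute("SELECT create_distributed_table('events', 'tenant_id');")

# Subsequent queries are routed to (and parallelized across) the workers.
cur.execute("SELECT count(*) FROM events;")
print(cur.fetchone()[0])
```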
Citus and RocksDB can be categorized as "Databases" tools.
Some of the features offered by Citus are:
- Multi-Node Scalable PostgreSQL
- Built-in Replication and High Availability
- Real-time Reads/Writes On Multiple Nodes
On the other hand, RocksDB provides the following key features:
- Designed for application servers wanting to store up to a few terabytes of data on locally attached Flash drives or in RAM
- Optimized for storing small to medium size key-values on fast storage -- flash devices or in-memory
- Scales linearly with the number of CPUs so that it works well on processors with many cores
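For contrast, the sketch below shows RocksDB's embedded usage model: the store lives inside the application process and persists to local files, with no server to connect to. RocksDB's native API is C++; this example uses the third-party python-rocksdb binding purely for illustration, and the exact binding API may vary by version.

```python
# A hedged sketch using the python-rocksdb binding (assumed installed).
import rocksdb

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("example.db", opts)   # data lives in local files, no server process

db.put(b"user:42", b'{"name": "ada"}')  # keys and values are raw bytes
print(db.get(b"user:42"))

# Batched writes are applied atomically.
batch = rocksdb.WriteBatch()
batch.put(b"user:43", b'{"name": "grace"}')
batch.delete(b"user:42")
db.write(batch)
```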
"Multi-core Parallel Processing" is the top reason why over 3 developers like Citus, while over 2 developers mention "Very fast" as the leading cause for choosing RocksDB.
Citus and RocksDB are both open source tools. RocksDB, with 14.3K GitHub stars and 3.12K forks, appears to be more popular than Citus, which has 3.64K stars and 273 forks.
PostgreSQL was an easy early decision for the founding team. The relational data model fit the types of analyses they would be doing: filtering, grouping, joining, etc., and it was the database they knew best.
Shortly after adopting PG, they discovered Citus, a tool that makes it easy to distribute queries. Although it was a young project and a fork of Postgres at that point, Dan says the Citus team was very accessible and highly expert, and that it wouldn’t have been very difficult to move back to plain PG if they needed to.
The parts Citus forked were in query execution; the worker nodes could still be treated like regular PG instances. Citus also gave them a ton of flexibility to make queries fast, and, again, they felt the data model was the best fit for their application.
At Heap, we searched for an existing tool that would allow us to express the full range of analyses we needed, index the event definitions that made up the analyses, and was a mature, natively distributed system.
After coming up empty on this search, we decided to compromise on the “maturity” requirement and build our own distributed system around Citus and sharded PostgreSQL. It was at this point that we also introduced Kafka as a queueing layer between the Node.js application servers and Postgres.
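To illustrate that queueing layer, here is a minimal sketch (not Heap's actual code) of an application server handing each incoming event straight to Kafka, with the Postgres load deferred to downstream consumers. It assumes the kafka-python client; the broker address, topic name, and handle_event helper are illustrative.

```python
# A hedged sketch, assuming kafka-python and a broker at kafka:9092.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda e: json.dumps(e).encode("utf-8"),
    acks="all",  # don't acknowledge until Kafka has persisted the event
)

def handle_event(event: dict) -> None:
    # Persist to Kafka first; consumers load it into Postgres later.
    producer.send("raw-events", value=event).get(timeout=10)
```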
If we could go back in time, we probably would have started using Kafka on day one. One of the biggest benefits of adopting Kafka has been the peace of mind it brings. In an analytics infrastructure, it’s often possible to make data ingestion idempotent.
In Heap’s case, that means that if anything downstream from Kafka goes down, we won’t lose any data; it will just take a bit longer to reach its destination. Kafka is also a very good fit for an analytics tool, since it can absorb a huge number of incoming writes with relatively low latency, and it gives you the ability to “replay” the data flow: it’s like a commit log for your whole infrastructure. We also learned that you want the path between data hitting your servers and your initial persistence layer (in this case, Kafka) to be as short and simple as possible, since that is the surface area where a failure means you can lose customer data.
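As an illustration of idempotent ingestion (again, a hedged sketch rather than Heap's implementation), the consumer below reads events from Kafka and inserts them into Postgres keyed on an assumed per-event id, using ON CONFLICT DO NOTHING so that replays and retries never produce duplicate rows.

```python
# A hedged sketch, assuming kafka-python, psycopg2, and an events table with a
# unique (tenant_id, event_id) key; event field names are illustrative.
import json
import psycopg2
from psycopg2.extras import Json
from kafka import KafkaConsumer

conn = psycopg2.connect("host=coordinator dbname=app user=app")
consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="kafka:9092",
    group_id="pg-loader",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for msg in consumer:
    event = msg.value
    with conn, conn.cursor() as cur:
        # Re-delivering or replaying the same event is a no-op.
        cur.execute(
            """
            INSERT INTO events (tenant_id, event_id, payload)
            VALUES (%s, %s, %s)
            ON CONFLICT (tenant_id, event_id) DO NOTHING
            """,
            (event["tenant_id"], event["id"], Json(event)),
        )
```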
#MessageQueue #Databases #FrameworksFullStack