InfluxDB vs Kafka: What are the differences?

Introduction

In this article, I will provide an overview of the key differences between InfluxDB and Kafka. Both InfluxDB and Kafka are popular open-source technologies used for different purposes in modern data infrastructure setups. Understanding their differences is important for selecting the most suitable solution for specific use cases.

  1. Data Storage and Querying: InfluxDB is a time series database designed for storing and querying time-stamped data efficiently. It provides an optimized storage engine and indexing structure tailored for time series data, making it a suitable choice for applications that require high-performance data retrieval based on timestamps. On the other hand, Kafka is a distributed streaming platform that acts as a message broker and stores data in logs. While Kafka can store historical data for a certain period, it is not optimized for complex querying like InfluxDB.

  2. Data Processing Paradigm: InfluxDB is primarily focused on real-time data ingestion and analysis. It supports a variety of ingestion methods, such as the HTTP API, line protocol, and Telegraf agents, and provides a SQL-like query language, known as InfluxQL, for data retrieval and analysis (a minimal sketch follows this list). Kafka, on the other hand, is designed for streaming data processing. It enables real-time data streaming from various sources and supports stream processing frameworks like Apache Storm, Apache Flink, and Apache Spark Streaming for data transformations and analytics.

  3. Data Integration and Ecosystem: InfluxDB has a rich ecosystem of integrations and plugins that make it easy to connect with other tools. It provides official libraries and client drivers for popular programming languages, along with integrations such as Telegraf for data ingestion from external systems and Grafana for data visualization. Kafka, on the other hand, has its own ecosystem and is commonly used as a central data pipeline, integrating with numerous data sources, streaming frameworks, and data sinks. It offers reliable data delivery semantics and fault-tolerant messaging between producers and consumers.

  4. Scalability and Fault Tolerance: Open-source InfluxDB is designed to scale vertically, meaning it handles larger volumes of data by adding more resources to a single node; clustering for high availability and replication for fault tolerance are offered in the commercial edition. Kafka, by contrast, is designed for horizontal scalability, distributing data across multiple nodes and partitions, which allows for higher throughput and fault tolerance through replication and partitioning.

  5. Data Retention: InfluxDB provides built-in mechanisms for automated data retention, allowing data to be automatically purged or downsampled after a specified duration; it includes features like continuous queries and retention policies for managing time series data efficiently. Kafka also retains data for a configurable period, but retention is controlled by topic-level settings (time and size limits) and the disk capacity of the cluster; it offers no downsampling or query-based retention comparable to InfluxDB (see the sketches after this list).

  6. Use Cases: InfluxDB is commonly used in applications that require real-time monitoring, IoT sensor data collection, real-time analytics, and anomaly detection. It is well-suited for storing high-frequency, time-series data and performing real-time analysis on that data. Kafka, on the other hand, is commonly used for building real-time streaming data pipelines, event sourcing, log aggregation, messaging systems, and commit logs. It enables reliable and scalable data streaming across different components of a distributed system.
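
To make points 2 and 5 concrete, here is a minimal sketch of ingesting, querying, and applying a retention policy with InfluxDB. It assumes a local InfluxDB 1.x instance and the influxdb Python client package; the database name, measurement, and retention period are illustrative, not taken from the article.

```python
from influxdb import InfluxDBClient

# Minimal InfluxDB 1.x sketch: time-stamped points in, InfluxQL out.
# Assumes a local instance and the `influxdb` client package; names are illustrative.
client = InfluxDBClient(host="localhost", port=8086, database="telemetry")
client.create_database("telemetry")

# Ingest a point (the client serializes this to line protocol under the hood).
client.write_points([{
    "measurement": "cpu",
    "tags": {"host": "server01", "region": "us-west"},
    "fields": {"usage": 0.64},
}])

# Query with InfluxQL, the SQL-like language mentioned in point 2.
result = client.query(
    'SELECT MEAN("usage") FROM "cpu" WHERE time > now() - 1h GROUP BY time(10m)'
)
print(list(result.get_points()))

# Retention (point 5): keep data for 14 days, then let InfluxDB purge it automatically.
client.create_retention_policy("two_weeks", "14d", "1", default=True)
```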

In summary, InfluxDB is a time series database optimized for efficient storage and querying of time-stamped data, primarily used in real-time data analysis and monitoring. Kafka, on the other hand, is a distributed streaming platform that enables real-time data processing, streaming, and integration across various components of a data infrastructure.
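
For the streaming side of the comparison, here is an equally minimal Kafka producer/consumer sketch. It assumes a broker on localhost:9092 and the kafka-python package; the topic and consumer-group names are illustrative.

```python
from kafka import KafkaConsumer, KafkaProducer

# Minimal Kafka sketch, assuming a broker on localhost:9092 and the
# kafka-python package; topic and group names are illustrative.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", key=b"server01", value=b'{"usage": 0.64}')
producer.flush()

# Each consumer group reads the partitioned log independently; how long the
# records stay readable is a per-topic retention setting (e.g. retention.ms).
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.key, message.value, message.offset)
```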

Advice on InfluxDB and Kafka
Needs advice on Hadoop, InfluxDB, and Kafka

I have a lot of data currently sitting in a MariaDB database, with many tables weighing around 200 GB including indexes. Most of the large tables have a date column that is always filtered on, but there are usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is pretty slow.

Replies (1)
Recommends Druid

Druid could be an amazing solution for your use case. My understanding (and assumption) is that you are looking to export your data from MariaDB for analytical workloads. Druid can be used as a time series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:

  1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local and cloud storage, or databases like MySQL and Postgres); in your case MariaDB, which uses the same drivers as MySQL.
  2. It is a columnar database, so you can query just the fields that are required, which automatically makes queries faster.
  3. Druid intelligently partitions data based on time, and time-based queries are significantly faster than in traditional databases.
  4. Scale up or down by just adding or removing servers, and Druid rebalances automatically; its fault-tolerant architecture routes around server failures.
  5. It provides an amazing centralized UI to manage data sources, queries, and tasks.

Needs advice on InfluxDB, MongoDB, and TimescaleDB

We are building an IoT service with heavy write throughput and fewer reads (we need to downsample records). We want good reliability when it comes to data, and we prefer data retention based on policies.

So we are looking for the best underlying DB for ingesting a lot of data and querying it easily.

Replies (3)
Yaron Lavi
Recommends PostgreSQL

We had a similar challenge. We started with DynamoDB, Timescale, and even InfluxDB and Mongo, to eventually settle on PostgreSQL. Assuming the inbound data pipeline is queued (for example, Kinesis/Kafka -> S3 -> some Lambda functions), PostgreSQL gave us better performance by far.

Recommends Druid

Druid is amazing for this use case and is a cloud-native solution that can be deployed on any cloud infrastructure or on Kubernetes: easy to scale horizontally, column-oriented, SQL to query data, streaming and batch ingestion, and native search indexes. It can work as a time series DB or a data warehouse, and it has time-optimized partitioning.

Ankit Malik
Software Developer at CloudCover · 3 upvotes · 350.1K views
Recommends Google BigQuery

If you want a serverless solution with lots of storage and SQL-style querying, then Google BigQuery is the best solution for that.

Needs advice on Kafka, RabbitMQ, and Redis

We are going to develop a microservices-based application. It consists of AngularJS, ASP.NET Core, and MSSQL.

We have 3 types of microservices: Emailservice, Filemanagementservice, and Filevalidationservice.

I am a beginner with microservices. I have read about RabbitMQ, but have come to know that Redis and Kafka are also in the market. So I want to know which is best.

Replies (4)
Maheedhar Aluri
Recommends Kafka

Kafka is an enterprise messaging framework, whereas Redis is an enterprise cache broker and a high-performance in-memory database. Both have their own advantages, but they differ in usage and implementation. If you are creating microservices, check the user consumption volumes, the logs being generated, scalability, the systems to be integrated, and so on. For your scenario, I feel you can initially go with Kafka, and as throughput, consumption, and other factors scale, you can gradually add Redis accordingly.

Recommends Angular

First, I recommend that you choose Angular over AngularJS if you are starting something new; AngularJS is no longer getting enhancements, but perhaps you meant Angular. Regarding microservices, I recommend considering them when you have different development teams for each service that may want to use different programming languages and backend data stores. If it is all the same team, the same language, and the same data store, I would not use microservices. I might use a message queue, in which case RabbitMQ is a good one. But you may also be able to simply write your own, where you write a record to a table in MSSQL and one of your services reads that record from the table and processes it. The most challenging part of doing it yourself is writing a service that reads the queue reliably, without processing the same message multiple times or missing a message; that is where RabbitMQ can help. A rough sketch of the table-based approach follows.
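
A rough sketch of that table-based queue, under stated assumptions: SQL Server lock hints (READPAST/UPDLOCK/ROWLOCK), the pyodbc package, and a hypothetical dbo.message_queue table with id, payload, and status columns. The handle() call stands in for your own processing logic.

```python
import pyodbc

# Claim-a-row pattern for a hypothetical dbo.message_queue table in SQL Server.
# READPAST lets competing workers skip rows another worker has already locked,
# so the same message is not handed to two workers at once.
CLAIM_SQL = """
SET NOCOUNT ON;
WITH next_msg AS (
    SELECT TOP (1) id, payload
    FROM dbo.message_queue WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE status = 'pending'
    ORDER BY id
)
UPDATE next_msg
SET status = 'processing'
OUTPUT inserted.id, inserted.payload;
"""

def claim_and_process(conn_str: str) -> None:
    conn = pyodbc.connect(conn_str, autocommit=False)
    try:
        row = conn.cursor().execute(CLAIM_SQL).fetchone()
        if row is None:
            conn.rollback()  # nothing pending right now
            return
        msg_id, payload = row
        handle(payload)      # hypothetical business logic
        conn.cursor().execute(
            "UPDATE dbo.message_queue SET status = 'done' WHERE id = ?", msg_id
        )
        conn.commit()        # claim and completion succeed or fail together
    except Exception:
        conn.rollback()      # row reverts to 'pending' and will be retried
        raise
```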

Amit Mor
Software Architect at Payoneer · 3 upvotes · 815.2K views
Recommends Kafka

I think something is missing here, and you should consider answering it for yourself. You are building a couple of services. Why are you considering an event-sourcing architecture using message brokers such as the above? Won't a simple REST-based architecture suffice? Read about CQRS and the problems it entails (state vs. command impedance, for example). Do you need pub/sub or push/pull? Is queuing of messages enough, or would you need querying or filtering of messages before consumption? Also, someone would have to manage these brokers (unless you use a managed, cloud-provider-based solution), automate their deployment, take care of backups, clustering if needed, disaster recovery, and so on. I have good past experience with the manageability/DevOps of Kafka and Redis, not so much with RabbitMQ. Both are very performant. But also note that Redis is not a pure message broker (at the time of writing) but more of a general-purpose in-memory key-value store, and Kafka nowadays is much more than a distributed message broker. Long story short: in my taste, you should go with a minimalistic approach and try to avoid either of them if you can, especially if your architecture does not fall nicely into event sourcing. If not, I'd examine Kafka. If you need more capabilities, then I'd consider Redis and use it for all sorts of other things such as a cache.

Recommends NATS

We found that the CNCF landscape is a good advisor when going into the cloud/microservices space: https://landscape.cncf.io/fullscreen=yes. When choosing a technology, one important criterion for me is whether it is cloud native or not. Neither Redis, RabbitMQ, nor Kafka is cloud native. They try to adapt, but will eventually be replaced by technologies that are cloud native.

We have gone with NATS and have never looked back. We haven't spent a single minute on server maintenance in the last year, and the setup of a cluster is way too easy. With the new features NATS incorporates now (and the ones still on the roadmap), it already is, and will be, so much more than Redis, RabbitMQ, and Kafka are. It can replace service discovery, load balancing, global multi-clusters and failover, etc.

Your thought might be: but I don't need all of that! Well, at the same time it is much more lightweight than Redis, RabbitMQ, and especially Kafka.

Pramod Nikam
Co Founder at Usability Designs · 2 upvotes · 547.4K views
Needs advice on Apache Thrift, Kafka, and NSQ

I am looking into an IoT World Solution where we have an MQTT broker that sits in one of our data centers. We do a lot of alert- and alarm-related processing on that data, and we are currently looking for a solution that can do distributed persistence of logs/alerts, primarily on remote disk.

Our primary need is something lightweight, where operational complexity and maintenance costs can be significantly reduced. We want to do it on-premise, so we are not considering cloud solutions.

We looked into the following alternatives:

  • Apache Kafka - a great choice, but very complex to operate and maintain.
  • RabbitMQ - high availability is the issue.
  • Apache Pulsar - operational complexity.
  • NATS - absence of persistence.
  • Akka Streams - a big learning curve and operational complexity.

So we are looking for a lightweight library that can do distributed persistence, preferably with a publisher/subscriber model, and preferably on the JVM stack.

Replies (1)
Naresh Kancharla
Staff Engineer at Nutanix · 4 upvotes · 544.8K views
Recommends Kafka

Kafka is the best fit here. Its advantages include ACLs (security), schemas (Protobuf), scale, consumer-driven consumption, and no single point of failure.

Operational complexity is manageable with open source monitoring tools.

Needs advice on Kafka and RabbitMQ

Our backend application sends some external messages to a third-party application at the end of each backend (CRUD) API call (from the UI), and these external messages take too much extra time (message building, processing, sending to the third party, and logging success/failure). The UI application has no concern with these extra third-party messages.

So currently we send these third-party messages by creating a new child thread at the end of each REST API call, so the UI application doesn't wait for these extra third-party API calls.

I want to integrate Apache Kafka for these extra third-party API calls, so I can also retry failed third-party API calls from a queue (currently the third-party messages are sent from multiple threads at the same time, which uses too much processing and resources), add logging, etc.

Question 1: Is this a use case for a message broker?

Question 2: If it is, which is better: Kafka or RabbitMQ?

Replies (4)
Tarun Batra
Senior Software Developer at Okta · 7 upvotes · 765K views
Recommends RabbitMQ

RabbitMQ is great for queuing and retrying. You can send the requests to your backend, which will further queue them in RabbitMQ (or Kafka, too). The consumer on the other end can take care of processing. For a detailed analysis, check this blog about choosing between Kafka and RabbitMQ.

Trevor Rydalch
Software Engineer at InsideSales.com · 6 upvotes · 764.8K views
Recommends RabbitMQ

Well, first off, it's good practice to do as little non-UI work on the foreground thread as possible, regardless of whether the requests take a long time. You don't want the UI thread blocked.

This sounds like a good use case for RabbitMQ. Primarily because you don't need each message processed by more than one consumer. If you wanted to process a single message more than once (say for different purposes), then Apache Kafka would be a much better fit as you can have multiple consumer groups consuming from the same topics independently.

Have your API publish messages containing the data necessary for the third-party request to a Rabbit queue and have consumers reading off of it. If a call fails, you can either retry immediately, or publish to a dead-letter queue from which you can reprocess the messages whenever you want (shovel them back into the regular queue). A rough sketch of this flow follows.
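
A minimal sketch of that flow, assuming a local RabbitMQ broker and the pika package; the queue names and payload are illustrative, and call_third_party() stands in for the actual third-party client.

```python
import json
import pika

# Work queue plus dead-letter queue, assuming a local RabbitMQ broker and the
# pika package; queue names and payload are illustrative.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

channel.queue_declare(queue="third-party-dlq", durable=True)
channel.queue_declare(
    queue="third-party",
    durable=True,
    arguments={"x-dead-letter-exchange": "", "x-dead-letter-routing-key": "third-party-dlq"},
)

# Producer side: the API publishes the data needed for the third-party call.
channel.basic_publish(
    exchange="",
    routing_key="third-party",
    body=json.dumps({"order_id": 42, "action": "created"}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Consumer side: ack on success; reject without requeue on failure so the
# message lands in the dead-letter queue for later reprocessing.
def on_message(ch, method, properties, body):
    try:
        call_third_party(json.loads(body))  # hypothetical third-party client
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

channel.basic_consume(queue="third-party", on_message_callback=on_message)
channel.start_consuming()
```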

Recommends RabbitMQ

In my opinion, RabbitMQ fits your case better because you don't need ordering in the queue: you can process your messages in any order, and you don't need to store the data you send. Kafka is persistent storage, somewhat like a blockchain ledger; RabbitMQ is a message broker. Kafka is not a good fit for a system built around per-message delivery confirmations.

Guillaume Maka
Full Stack Web Developer · 2 upvotes · 764K views
Recommends RabbitMQ

As far as I understand, Kafka is like a persisted event-state manager where you can plug in various sources of data and transform/query them as events via a stream API. Regarding your use case, I would consider RabbitMQ if your intent is to implement service inter-communication. RabbitMQ is a good choice for one-to-one publisher/consumer setups, and I think you can also have multiple consumers by configuring a fanout exchange. RabbitMQ also provides message retries, message cancellation, durable queues, message requeueing, message ACKs, and so on.

Needs advice on Kafka, RabbitMQ, and Redis

Hello! [Client sends live video frames -> server computes and responds with the result.] Web clients send video frames from their webcam, and on the back end we need to run them through some algorithm and send the result back as a response. Since everything needs to work in live mode, we want something fast and also suitable for our case (as everyone does). Currently we are considering RabbitMQ for the purpose, but recently I noticed that there are Redis and Kafka too. Could you please help us choose among them, or anything more suitable beyond these? I think something similar to our product would be people using their webcam to get Snapchat masks on their faces: the calculated face points are returned from the server, and the client side draws the mask on the user's face. I hope this helps. Thank you!

Replies (3)
Jordi Martínez
Senior software architect at Bootloader · 3 upvotes · 714.1K views
Recommends Kafka

For your use case, the tool that fits best is definitely Kafka. RabbitMQ was not invented to handle data streams, but messages: plenty of them, of course, but individual messages. Redis is an in-memory database, which is what makes it so fast; it recently added features to handle data streams, but it cannot beat Kafka on this, at least not yet. Kafka is not only super fast, it also provides lots of features to help you create software that handles those streams.

Recommends RabbitMQ

I've used all of them, and Kafka is hard to set up and maintain; mostly it's a Java dinosaur, and I've used it with Storm, which is another big dinosaur. Redis is mostly for caching, and its queue mechanism is not very scalable across multiple processors. Given the speed and reliability you need, I would use RabbitMQ. You can store the frames (if they are too big) somewhere else and just pass a link to them; moving data through any of these brokers will increase the cost of transport (a sketch of this pointer-passing approach follows). With Rabbit, you can always have multiple consumers and check for redundancy. Hope it clears up your thoughts!
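
A minimal sketch of the "store the frame elsewhere, queue only a pointer" idea, assuming local Redis and RabbitMQ instances with the redis and pika packages; the key scheme, queue name, TTL, and detect_face_points() are illustrative.

```python
import json
import uuid

import pika
import redis

# Store the raw frame in Redis, publish only a small pointer message to RabbitMQ.
# Assumes local Redis and RabbitMQ; names, TTL, and the algorithm are illustrative.
frames = redis.Redis(host="localhost", port=6379)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="frames", durable=True)

def submit_frame(frame_bytes: bytes, client_id: str) -> None:
    key = f"frame:{uuid.uuid4()}"
    frames.set(key, frame_bytes, ex=60)  # keep the raw frame for 60 seconds
    channel.basic_publish(
        exchange="",
        routing_key="frames",
        body=json.dumps({"client_id": client_id, "frame_key": key}),
    )

def process_one(ch, method, properties, body):
    pointer = json.loads(body)
    frame = frames.get(pointer["frame_key"])  # fetch the raw frame by its key
    if frame is not None:
        detect_face_points(frame)             # hypothetical processing algorithm
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="frames", on_message_callback=process_one)
channel.start_consuming()
```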

Recommends RabbitMQ

For this kind of use case I would recommend either RabbitMQ or Kafka depending on the needs for scaling, redundancy and how you want to design it.

Kafka's true value comes into play when you need to distribute the streaming load over lots of resources. If you were passing the video frames directly through the queue then you'd probably want to go with Kafka, but if you can just pass a pointer to the frames then RabbitMQ should be fine and will be much simpler to run.

Bear in mind too that Kafka is a persistent log, not just a message bus so any data you feed into it is kept available until it expires (which is configurable). This can be useful if you have multiple clients reading from the queue with their own lifecycle but in your case it doesn't sound like that would be necessary. You could also use a RabbitMQ fanout exchange if you need that in the future.

Decisions about InfluxDB and Kafka
Benoit Larroque
Principal Engineer at Sqreen · 2 upvotes · 143.9K views

I chose TimescaleDB to be the backend system of our production monitoring system. We needed to be able to keep track of multiple high-cardinality dimensions.

The drawback of this decision is that our monitoring system is a bit more ad hoc than it used to be (New Relic Insights).

We are combining this with Grafana for display and Telegraf for data collection.

Pros of InfluxDB
  • Time-series data analysis (59)
  • Easy setup, no dependencies (30)
  • Fast, scalable & open source (24)
  • Open source (21)
  • Real-time analytics (20)
  • Continuous Query support (6)
  • Easy Query Language (5)
  • HTTP API (4)
  • Out-of-the-box, automatic Retention Policy (4)
  • Offers Enterprise version (1)
  • Free Open Source version (1)

Pros of Kafka
  • High-throughput (126)
  • Distributed (119)
  • Scalable (92)
  • High-Performance (86)
  • Durable (66)
  • Publish-Subscribe (38)
  • Simple-to-use (19)
  • Open source (18)
  • Written in Scala and Java, runs on the JVM (12)
  • Message broker + streaming system (9)
  • KSQL (4)
  • Avro schema integration (4)
  • Robust (4)
  • Supports multiple clients (3)
  • Extremely good parallelism constructs (2)
  • Partitioned, replayable log (2)
  • Simple publisher / multi-subscriber model (1)
  • Fun (1)
  • Flexible (1)


Cons of InfluxDB
  • Instability (4)
  • Proprietary query language (1)
  • HA or clustering is only in the paid version (1)

Cons of Kafka
  • Non-Java clients are second-class citizens (32)
  • Needs ZooKeeper (29)
  • Operational difficulties (9)
  • Terrible packaging (5)



What is InfluxDB?

InfluxDB is a scalable datastore for metrics, events, and real-time analytics. It has a built-in HTTP API so you don't have to write any server side code to get up and running. InfluxDB is designed to be scalable, simple to install and manage, and fast to get data in and out.
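
To illustrate the built-in HTTP API, here is a minimal write over HTTP, assuming a local InfluxDB 1.x instance, a database named mydb, and the requests package; the measurement and tags are illustrative.

```python
import requests

# Write one line-protocol point via InfluxDB's HTTP API, assuming a local
# InfluxDB 1.x instance and an existing database named "mydb".
line = "cpu,host=server01,region=us-west value=0.64"
resp = requests.post(
    "http://localhost:8086/write",
    params={"db": "mydb", "precision": "s"},
    data=line,
)
resp.raise_for_status()  # the server answers 204 No Content on success
```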

What is Kafka?

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.


What are some alternatives to InfluxDB and Kafka?
TimescaleDB
TimescaleDB: An open-source database built for analyzing time-series data with the power and convenience of SQL — on premise, at the edge, or in the cloud.
Redis
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).
Prometheus
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
See all alternatives