Kafka vs Starling: What are the differences?
Developers describe Kafka as "Distributed, fault tolerant, high throughput pub-sub messaging system". Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. On the other hand, Starling is detailed as "A light weight server for reliable distributed message passing". Starling is a powerful but simple messaging server that enables reliable distributed queuing with an absolutely minimal overhead. It speaks the MemCache protocol for maximum cross-platform compatibility. Any language that speaks MemCache can take advantage of Starling's queue facilities.
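Because Starling speaks the memcache protocol, enqueueing a message is a `set` on the queue's key and dequeueing is a `get`. The following is a minimal in-memory sketch of those semantics only (not a Starling client; the class and key names are illustrative):

```python
from collections import defaultdict, deque

class StarlingLikeQueue:
    """Illustrates Starling's queue semantics over the memcache verbs:
    set(key, value) enqueues, get(key) dequeues (FIFO per key)."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def set(self, key, value):
        # In real Starling this is a memcache 'set' command sent over TCP.
        self._queues[key].append(value)
        return True

    def get(self, key):
        # A memcache 'get' pops the oldest item, or returns None if the
        # queue is empty.
        q = self._queues[key]
        return q.popleft() if q else None

q = StarlingLikeQueue()
q.set("jobs", "resize-image-1")
q.set("jobs", "resize-image-2")
print(q.get("jobs"))  # → resize-image-1
print(q.get("jobs"))  # → resize-image-2
print(q.get("jobs"))  # → None
```

In a real deployment, any off-the-shelf memcache client pointed at the Starling server would play the role of this class, which is exactly why any language with a memcache library can use Starling.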
Kafka and Starling both belong to the "Message Queue" category of the tech stack.
Some of the features offered by Kafka are:
- Written at LinkedIn in Scala
- Used by LinkedIn to offload processing of all page and other views
- Defaults to using persistence, and relies on the OS disk cache for hot data (giving it higher throughput than comparable systems with persistence enabled)
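Kafka's "partitioned, replicated commit log" design means each partition is an append-only sequence of messages, and consumers track their own read offsets instead of the broker deleting messages on delivery. A toy in-memory sketch of that model (illustrative only, not the Kafka API):

```python
class CommitLogPartition:
    """Toy model of a Kafka partition: an append-only log where each
    message receives a monotonically increasing offset."""

    def __init__(self):
        self._log = []

    def append(self, message):
        # Producers append; the offset of the new message is returned.
        self._log.append(message)
        return len(self._log) - 1

    def read(self, offset, max_messages=10):
        # Consumers pull from an offset they track themselves; messages
        # are retained, so independent consumers can replay the log.
        return self._log[offset:offset + max_messages]

p = CommitLogPartition()
p.append("page_view:/home")
p.append("page_view:/about")
print(p.read(0))  # → ['page_view:/home', 'page_view:/about']
print(p.read(1))  # → ['page_view:/about']
```

This retained-log model is what lets LinkedIn offload page-view processing: multiple downstream jobs can consume the same stream at their own pace.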
On the other hand, Starling provides the following key features:
- Written by Blaine Cook at Twitter
- Starling is a Message Queue Server based on MemCached
- Written in Ruby
Kafka and Starling are both open source tools. Kafka, with 12.7K stars and 6.81K forks on GitHub, appears to be more popular than Starling, with 468 stars and 63 forks.
Front-end messages are logged to Kafka by our API and application servers. We have batch processing (on the middle-left) and real-time processing (on the middle-right) pipelines to process the experiment data. For batch processing, once the daily raw logs land in S3, we kick off our nightly experiment workflow to compute experiment user groups and experiment metrics. We use our in-house workflow management system, Pinball, to manage the dependencies among all these MapReduce jobs.
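The core of such a nightly job is a group-then-aggregate pass over the raw log events. A simplified sketch of that step, assuming hypothetical field names (`experiment`, `group`, `user_id`) as a stand-in for the real log schema:

```python
from collections import defaultdict

def experiment_metrics(events):
    """Group raw log events by (experiment, group) and count distinct
    users and total events -- a toy stand-in for the MapReduce jobs the
    nightly workflow runs."""
    users = defaultdict(set)
    counts = defaultdict(int)
    for e in events:
        key = (e["experiment"], e["group"])
        users[key].add(e["user_id"])
        counts[key] += 1
    return {k: {"users": len(users[k]), "events": counts[k]} for k in counts}

events = [
    {"experiment": "new_home", "group": "control", "user_id": "u1"},
    {"experiment": "new_home", "group": "treatment", "user_id": "u2"},
    {"experiment": "new_home", "group": "treatment", "user_id": "u2"},
]
print(experiment_metrics(events))
```

In the actual pipeline each of these aggregations would be a MapReduce job, with Pinball sequencing the jobs so metrics are only computed after the user groups are ready.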
Building out a real-time streaming server to present data insights to Coolfront Mobile customers and to internal sales and marketing teams.