
Amazon Redshift vs Kafka: What are the differences?

What is Amazon Redshift? Fast, fully managed, petabyte-scale data warehouse service. Redshift makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

What is Kafka? Distributed, fault-tolerant, high-throughput pub-sub messaging system. Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

Amazon Redshift belongs to the "Big Data as a Service" category of the tech stack, while Kafka is primarily classified under "Message Queue".

Some of the features offered by Amazon Redshift are:

  • Optimized for Data Warehousing: It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. Redshift has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.
  • Scalable: With a few clicks of the AWS Management Console or a simple API call, you can easily scale the number of nodes in your data warehouse up or down as your performance or capacity needs change (see the API sketch after this list).
  • No Up-Front Costs: You pay only for the resources you provision. You can choose On-Demand pricing with no up-front costs or long-term commitments, or obtain significantly discounted rates with Reserved Instance pricing.
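
For the "simple API call" scaling path, here is a minimal sketch using boto3, the AWS SDK for Python; the region, cluster identifier, and node count are hypothetical placeholders, and it assumes AWS credentials are already configured.

```python
# Minimal resize sketch with boto3; the cluster name, region, and node count
# are hypothetical placeholders, not values from the article.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Scale the cluster to four nodes; Redshift redistributes data across the
# new node count as part of the resize.
redshift.modify_cluster(
    ClusterIdentifier="analytics-cluster",
    NumberOfNodes=4,
)

# Block until the cluster returns to the "available" state.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="analytics-cluster")
```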

On the other hand, Kafka provides the following key features:

  • Written at LinkedIn in Scala
  • Used by LinkedIn to offload processing of all page and other views
  • Defaults to using persistence, relying on the OS disk cache for hot data, which gives it higher throughput than comparable systems with persistence enabled (see the producer/consumer sketch after this list)
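
As a concrete picture of the pub-sub model described above, here is a minimal producer/consumer sketch using the third-party kafka-python client against a local broker; the topic name, consumer group, and event payload are hypothetical.

```python
# Minimal pub-sub sketch with the kafka-python client; topic, group id, and
# payload are hypothetical placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("page-views", {"user": "u123", "path": "/pricing"})
producer.flush()  # ensure the record is appended to the commit log

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    group_id="analytics",            # consumers in a group share partitions
    auto_offset_reset="earliest",    # replay from the start of the retained log
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```

Because the broker persists every record, the consumer above can replay history from the earliest retained offset rather than only seeing messages sent after it connects.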

"Data Warehousing" is the primary reason why developers consider Amazon Redshift over the competitors, whereas "High-throughput" was stated as the key factor in picking Kafka.

Kafka is an open source tool with 12.5K GitHub stars and 6.7K GitHub forks; its source repository is hosted on GitHub. Amazon Redshift, being a fully managed AWS service, has no public GitHub repository.

According to the StackShare community, Kafka has broader approval, being mentioned in 501 company stacks and 451 developer stacks, compared to Amazon Redshift, which is listed in 267 company stacks and 63 developer stacks.

    What are some alternatives to Amazon Redshift and Kafka?
    Google BigQuery
    Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.
    Amazon Athena
    Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
    Amazon DynamoDB
    With it, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
    Amazon Redshift Spectrum
    With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 “data lake” -- without having to load or transform any data.
    Hadoop
    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
    Decisions about Amazon Redshift and Kafka
    Adam Rabinovitch
    Global Technical Recruiting Lead & Engineering Evangelist at Beamery · 3 upvotes · 156.7K views
    Tools: Kafka, Redis, Elasticsearch, MongoDB, RabbitMQ, Go, Node.js, Kubernetes · #Microservices

    Beamery runs a #microservices architecture in the backend on top of Google Cloud with Kubernetes. There are 100+ different microservices split between Node.js and Go. Data flows between the microservices over REST and gRPC and passes through Kafka and RabbitMQ as a message bus. Beamery stores data in MongoDB, with near-realtime replication to Elasticsearch. In addition, Beamery uses Redis for various memory-optimized tasks.
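
    The near-realtime MongoDB-to-Elasticsearch replication mentioned here is Beamery-internal, but the general pattern (consume change events from the message bus and index them into the search store) can be sketched roughly as below, assuming the kafka-python and elasticsearch (8.x) Python clients; the topic, index, and field names are hypothetical.

```python
# Illustrative pattern only, not Beamery's code: consume hypothetical change
# events from Kafka and index them into Elasticsearch.
import json
from kafka import KafkaConsumer
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
consumer = KafkaConsumer(
    "entity-updates",                  # hypothetical change-event topic
    bootstrap_servers="localhost:9092",
    group_id="search-indexer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:
    doc = event.value
    # Index (upsert) each document so the search index tracks the source store.
    es.index(index="entities", id=doc["id"], document=doc)
```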

    Conor Myhrvold
    Tech Brand Mgr, Office of CTO at Uber · 4 upvotes · 95.4K views
    Tools: Kafka Manager, Kafka, GitHub, Apache Spark, Hadoop

    Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

    Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse it to any sink, leveraging Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

    https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

    (Direct GitHub repo: https://github.com/uber/marmaray)
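
    Marmaray itself is a JVM framework, so the following is only a rough PySpark illustration of the same any-source-to-any-sink idea (here, Kafka in, Parquet out); the topic, paths, and broker address are hypothetical, and the Kafka source requires the spark-sql-kafka connector on the classpath.

```python
# Rough source-to-sink illustration in PySpark Structured Streaming;
# this is not Marmaray code, and all names/paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-datalake").getOrCreate()

# Source: stream raw events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "rider-events")
    .load()
)

# Sink: disperse the events to Parquet files in the data lake.
query = (
    events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream.format("parquet")
    .option("path", "/datalake/rider_events")
    .option("checkpointLocation", "/checkpoints/rider_events")
    .start()
)
query.awaitTermination()
```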

    Ankit Sobti
    CTO at Postman Inc · 10 upvotes · 60.9K views
    Tools: dbt, Amazon Redshift, Stitch, Looker

    Looker, Stitch, Amazon Redshift, dbt

    We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics as a lean team.

    For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data rather than managing it.
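
    The dbt models themselves are SQL files and aren't reproduced here, but the ELT idea (load raw data first, then transform inside the warehouse) can be sketched roughly as a transformation run directly against Redshift from Python, assuming the psycopg2 driver; the host, credentials, and table names are hypothetical placeholders.

```python
# Rough ELT-style sketch against Redshift with psycopg2; this is not how dbt
# is invoked, and all connection details and table names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # placeholder
)

with conn, conn.cursor() as cur:
    # Rebuild a derived model from the raw, already-loaded events table.
    cur.execute("DROP TABLE IF EXISTS analytics.daily_active_users")
    cur.execute(
        """
        CREATE TABLE analytics.daily_active_users AS
        SELECT DATE_TRUNC('day', event_time) AS day,
               COUNT(DISTINCT user_id)       AS active_users
        FROM raw.events
        GROUP BY 1
        """
    )
conn.close()
```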

    Roman Bulgakov
    Senior Back-End Developer, Software Architect at Chemondis GmbH · 3 upvotes · 10.5K views
    Tools: Kafka

    I use Kafka because it has almost infinite scalability in terms of processing events (it could be scaled to process hundreds of thousands of events) and great monitoring (all sorts of metrics are exposed via JMX).

    Downsides of using Kafka are:

      • you have to deal with ZooKeeper
      • you have to implement advanced routing yourself (compared to RabbitMQ, it has no advanced routing)

    Tools: RabbitMQ, Kafka

    The question about which message queue to use mentioned "availability, distributed, scalability, and monitoring". I don't think that this excludes many options already. It does not sound like you would take advantage of Kafka's strengths (replayability, based on an event sourcing architecture). You could pick one of the AMQP options.

    I would recommend the RabbitMQ message broker, which not only implements the AMQP standard 0.9.1 (it can support 1.x or other protocols as well) but also has several very useful extensions built in. It ticks the boxes you mentioned, and on top you will get a very flexible system that allows you to build the architecture and pick the options and trade-offs that suit your case best.

    For more information about RabbitMQ, please have a look at the linked markdown I assembled. The second half explains many configuration options. It also contains links to managed hosting and to libraries (though it is missing Python's - which should be Puka, I assume).
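
    As a concrete starting point for the AMQP 0.9.1 route recommended above, here is a minimal publish sketch using the pika Python client against a local RabbitMQ broker; the queue name and message body are hypothetical.

```python
# Minimal AMQP 0.9.1 publish sketch with pika; queue name and message body
# are hypothetical placeholders.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue: survives a broker restart.
channel.queue_declare(queue="task_queue", durable=True)

# delivery_mode=2 marks the message itself as persistent.
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body=b"resize image 42",
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```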

    Frédéric MARAND
    Core Developer at OSInet · 2 upvotes · 87.9K views
    Tools: RabbitMQ, Beanstalkd, Kafka

    I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex... and powerful than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.

    So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But due to the complexities involved, for any time where this level of scalability is not required, I would probably just use Beanstalkd for its simplicity.

    I tend to find RabbitMQ to be in an uncomfortable middle place between these two extremities.

    How developers use Amazon Redshift and Kafka
    Pinterest uses Kafka

    (Pipeline diagram: http://media.tumblr.com/d319bd2624d20c8a81f77127d3c878d0/tumblr_inline_nanyv6GCKl1s1gqll.png)

    Front-end messages are logged to Kafka by our API and application servers. We have batch processing (on the middle-left) and real-time processing (on the middle-right) pipelines to process the experiment data. For batch processing, after the daily raw logs get to S3, we start our nightly experiment workflow to figure out experiment user groups and experiment metrics. We use our in-house workflow management system, Pinball, to manage the dependencies of all these MapReduce jobs.

    Olo uses Amazon Redshift

    Aggressive archiving of historical data to keep the production database as small as possible. Using our in-house soon-to-be-open-sourced ETL library, SharpShifter.

    Coolfront Technologies uses Kafka

    Building out real-time streaming server to present data insights to Coolfront Mobile customers and internal sales and marketing teams.

    ShareThis uses Kafka

    We are using Kafka as a message queue to process our widget logs.

    Christopher Davison uses Kafka

    Used for communications and triggering jobs across ETL systems

    theskyinflames uses Kafka

    Used as an integration middleware for message interchange.

    Christian Moeller uses Amazon Redshift

    Connected to BI (Pentaho)

    Kovid Rathee uses Amazon Redshift

    OLAP and BI
