What is Databricks and what are its top alternatives?
Databricks is a unified analytics platform that combines data engineering and data science capabilities. It lets users provision distributed infrastructure and run data workflows without managing the underlying cluster plumbing. Key features include collaborative notebooks, machine learning support, real-time data processing, and integration with popular data sources. However, Databricks can be costly, especially for large-scale usage, and it offers limited customization and control over the underlying infrastructure.
- Apache Spark: Apache Spark is an open-source distributed computing system for processing large-scale data sets. Key features include in-memory processing, support for multiple programming languages, and a rich set of libraries. Pros include its high performance and extensibility, while cons include a steeper learning curve than Databricks; a minimal PySpark sketch follows this list.
- Google Cloud Dataproc: Google Cloud Dataproc is a managed Spark and Hadoop service that allows users to run big data analytics and machine learning workloads. Features include scalability, easy integration with other Google Cloud services, and cost-effectiveness. Pros include seamless integration with the Google Cloud ecosystem, while limitations include less control than Databricks.
- AWS EMR: Amazon EMR is a managed big data platform on AWS that allows users to process large amounts of data using Apache Spark and other big data frameworks. Key features include flexibility, scalability, and seamless integration with other AWS services. Pros include deep integration with AWS, while cons may involve complex setup and maintenance compared to Databricks.
- Alteryx: Alteryx is a self-service analytics platform that offers data blending, advanced analytics, and machine learning capabilities. Features include a drag-and-drop interface, automation of data workflows, and predictive analytics. Pros include ease of use and comprehensive analytics functionality, while cons include less emphasis on big data processing than Databricks.
- Cloudera: Cloudera is a big data platform that provides tools for data engineering, data warehousing, and machine learning. Key features include scalability, security, and support for a variety of data processing frameworks. Pros include comprehensive big data solutions, while cons could be complexity and setup overhead compared to Databricks.
- IBM Watson Studio: IBM Watson Studio is an integrated environment where data scientists, developers, and domain experts collaborate on data and build and train models at scale. Features include visual modeling tools, automatic model generation, and seamless data integration. Pros include IBM's cognitive capabilities and enterprise-grade security, while cons include a steeper learning curve for beginners than Databricks.
- Talend: Talend is a cloud data integration and data integrity platform that enables users to connect, cleanse, and combine data from different sources. Key features include data quality tools, real-time data integration, and self-service data preparation. Pros include ease of use and flexibility in data integration, while cons may involve less focus on advanced analytics compared to Databricks.
- Qubole: Qubole is a cloud-native, self-service big data platform that enables users to quickly process and analyze big data workloads. Features include auto-scaling, integrations with popular data processing engines, and self-service data exploration. Pros include ease of use and cost-effectiveness, while cons may involve limited customization options compared to Databricks.
- Snowflake: Snowflake is a cloud-based data platform that provides data warehousing, data lake, and data sharing capabilities. Key features include scalability, performance, and ease of use. Pros include simplicity in managing data and querying, while cons may involve less emphasis on advanced analytics and machine learning compared to Databricks.
- H2O.ai: H2O.ai is an open-source machine learning platform that offers automatic machine learning, model management, and interpretable machine learning. Features include scalability, ease of use, and support for popular machine learning algorithms. Pros include a strong focus on machine learning capabilities, while cons may involve less comprehensive data engineering tools compared to Databricks.
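To make the comparison concrete, here is a minimal PySpark sketch of the kind of batch job the Spark-based options above (Databricks, Dataproc, EMR) all run; the bucket path and column names are hypothetical placeholders.

```python
# A minimal PySpark job: read, aggregate, write. The S3 paths and the
# columns (user_id, amount) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

events = spark.read.parquet("s3a://my-bucket/events/")  # placeholder input

totals = (
    events
    .groupBy("user_id")
    .agg(F.sum("amount").alias("total_amount"))
)

totals.write.mode("overwrite").parquet("s3a://my-bucket/totals/")  # placeholder output
spark.stop()
```

The script itself runs largely unchanged on any of these platforms; what differs is how the cluster underneath is provisioned, scaled, and billed.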
Top Alternatives to Databricks
- Snowflake
Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn. A minimal Python connection sketch follows this list. ...
- Azure Databricks
Accelerate big data analytics and artificial intelligence (AI) solutions with Azure Databricks, a fast, easy and collaborative Apache Spark–based analytics service. ...
- Domino
Use our cloud-hosted infrastructure to securely run your code on powerful hardware with a single command — without any changes to your code. If you have your own infrastructure, our Enterprise offering provides powerful, easy-to-use cluster management functionality behind your firewall. ...
- Confluent
It is a data streaming platform based on Apache Kafka: a full-scale streaming platform, capable of not only publish-and-subscribe, but also the storage and processing of data within the stream. ...
- Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...
- Azure HDInsight
It is a cloud-based service from Microsoft for big data analytics that helps organizations process large amounts of streaming or historical data. ...
- Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...
- Qubole
Qubole is a cloud based service that makes big data easy for analysts and data engineers. ...
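As noted in the Snowflake entry above, here is a minimal sketch of its "no knobs" model using the snowflake-connector-python package; the account, credentials, warehouse, and table are all hypothetical placeholders.

```python
# Minimal Snowflake query via the official Python connector
# (pip install snowflake-connector-python). Every identifier below
# is a hypothetical placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    user="MY_USER",
    password="MY_PASSWORD",
    account="xy12345.us-east-1",  # placeholder account locator
    warehouse="MY_WAREHOUSE",
    database="MY_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM my_table")
    print(cur.fetchone()[0])
finally:
    conn.close()
```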
Databricks alternatives & related posts
Snowflake
- Public and Private Data Sharing
- Multicloud
- Good Performance
- User Friendly
- Great Documentation
- Serverless
- Economical
- Usage based billing
- Innovative
related Snowflake posts
I'm wondering if any Cloud Firestore users might be open to sharing some input and challenges encountered when trying to create a low-cost, low-latency data pipeline to their analytics warehouse (e.g. Google BigQuery, Snowflake, etc.).
I'm working with a platform called Estuary.dev, an ETL/ELT tool, and we are conducting some research on the pain points here to see if there are drawbacks to the Firestore->BQ extension and/or if users are seeking easy ways of getting NoSQL data into fine-grained tabular form.
Please feel free to drop some knowledge/wish list stuff on me for a better pipeline here!
I use Google BigQuery because it makes it super easy to query and store data for analytics workloads. If you're using GCP, you're likely using BigQuery. However, data viz tools connected directly to BigQuery run pretty slow. They recently announced BI Engine, which will hopefully compete well against big players like Snowflake when it comes to concurrency.
What's nice too is that it has SQL-based ML tools, and it has great GIS support!
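For anyone weighing the BigQuery side of a pipeline like the one above, here is a minimal query sketch with the official google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment (for example GOOGLE_APPLICATION_CREDENTIALS).

```python
# Minimal BigQuery query using the official client library
# (pip install google-cloud-bigquery). The table reference is a
# hypothetical placeholder.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT user_id, COUNT(*) AS event_count
    FROM `my_project.analytics.events`
    GROUP BY user_id
    ORDER BY event_count DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.user_id, row.event_count)
```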
related Azure Databricks posts
Domino
related Domino posts
Confluent
- Free for casual use
- No hypercloud lock-in
- Dashboard for Kafka insight
- Easily scalable
- Zero DevOps
- Proprietary
related Confluent posts
I have recently started using Confluent/Kafka Cloud. We want to do some stream processing. As I was going through Kafka I came across Kafka Streams and KSQL. Both seem to be a good fit for stream processing, but I could not work out which one should be used and whether one has any advantage over the other. We will be using the Confluent/Kafka managed cloud instance. In the near future, our producers and consumers will run on premise and we will be interacting with Confluent Cloud.
Also, Confluent Cloud Kafka has a primitive interface; is there a better UI for managing a Kafka cloud cluster?
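Kafka Streams and ksqlDB are the JVM-side processing answers, but for a feel of what talking to a Confluent Cloud cluster involves, here is a minimal producer sketch using the confluent-kafka Python package; the bootstrap server, API key/secret, and topic are placeholders.

```python
# Minimal producer against a Confluent Cloud cluster
# (pip install confluent-kafka). The endpoint, credentials, and
# topic are hypothetical placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "MY_API_KEY",
    "sasl.password": "MY_API_SECRET",
})

def on_delivery(err, msg):
    # Called once per message with the broker's delivery result.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

producer.produce("orders", key="order-1", value=b'{"total": 42}', callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```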
Apache Spark
- Open-source
- Fast and flexible
- One platform for every big data problem
- Great for distributed SQL-like applications
- Easy to install and to use
- Works well for most data science use cases
- Interactive query
- Machine learning libraries, streaming in real time
- In-memory computation
- Speed
related Apache Spark posts
How Uber developed the open-source, end-to-end distributed tracing system Jaeger, now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
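For a sense of what instrumenting a service with Jaeger looks like, here is a minimal sketch using the jaeger-client Python package (since archived in favor of OpenTelemetry); the service name, operation name, and tag are hypothetical.

```python
# Minimal Jaeger tracing setup (pip install jaeger-client). The
# service and operation names are hypothetical placeholders.
import time
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="checkout-service",
)
tracer = config.initialize_tracer()

with tracer.start_span("charge-card") as span:
    span.set_tag("order.id", "order-1")
    time.sleep(0.1)  # stand-in for real work

time.sleep(2)   # give the background reporter time to flush
tracer.close()
```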
The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).
At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.
For more info:
- Our Algorithms Tour: https://algorithms-tour.stitchfix.com/
- Our blog: https://multithreaded.stitchfix.com/blog/
- Careers: https://multithreaded.stitchfix.com/careers/
#DataScience #DataStack #Data
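As one concrete slice of a stack like this, here is a minimal sketch of the kind of ad hoc Presto query the post mentions, using the presto-python-client package; the host, catalog, schema, and table are hypothetical placeholders.

```python
# Minimal ad hoc Presto query (pip install presto-python-client).
# Host, catalog, schema, and table are hypothetical placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto.internal.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT ship_date, COUNT(*) FROM shipments GROUP BY ship_date")
for row in cur.fetchall():
    print(row)
```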
related Azure HDInsight posts
Splunk
- API for searching logs, running reports
- Alert system based on custom query results
- Splunk language supports string, date manipulation, math, etc.
- Dashboarding on any log contents
- Custom log parsing as well as automatic parsing
- Query engine supports joining, aggregation, stats, etc.
- Rich GUI for searching live logs
- Ability to style search results into reports
- Granular scheduling and time window support
- Query any log as key-value pairs
- Splunk query language is rich, so there is lots to learn
related Splunk posts
I am designing a Django application for my organization which will be used as an internal tool. The infra team said that I will not have SSH access to the production server and I will have to log all my backend application messages to Splunk. I have no knowledge of Splunk, so these are the approaches I am considering:
Approach 1: Create an hourly cron job that uploads the server log file to some Splunk storage for later analysis. Is this possible?
Approach 2: Is it possible just to stream the logs to some Splunk endpoint? (If yes, I feel network usage and communication overhead will be a pain point for my application.)
Is there any better or standard approach? Thanks in advance.
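On Approach 2: Splunk's HTTP Event Collector (HEC) is the usual way to stream logs without SSH access. Below is a minimal sketch using the requests library, assuming the infra team provides an HEC endpoint and token (both placeholders here); a production setup would batch events behind a logging handler rather than send one HTTP request per message.

```python
# Minimal log shipping to Splunk's HTTP Event Collector (HEC).
# The host, port, token, and sourcetype are hypothetical placeholders
# that an infra team would normally provide.
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def log_to_splunk(message, level="INFO"):
    payload = {
        "event": {"message": message, "level": level},
        "sourcetype": "django_app",  # placeholder sourcetype
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
        json=payload,
        timeout=5,
    )
    resp.raise_for_status()

log_to_splunk("user login succeeded")
```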
I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.
Qubole
- Simple UI and autoscaling clusters
- Feature to use AWS Spot pricing
- Optimized Spark, Hive, Presto, Hadoop 2, and HBase clusters
- Real-time data insights through Spark Notebook
- Hyper-elastic and scalable
- Easy to manage costs
- Easy to configure, deploy, and run Hadoop clusters
- Backed by Amazon
- Gracefully scale up & down with zero human intervention
- All-in-one platform
- Backed by Azure
related Qubole posts
By mid-2014, around the time of the Series F, Pinterest users had already created more than 30 billion Pins, and the company was logging around 20 terabytes of new data daily, with around 10 petabytes of data in S3. To drive personalization for its users, and to empower engineers to build big data applications quickly, the data team built a self-serve Hadoop platform.
To start, they decoupled compute from storage, which meant teams would have to worry less about loading or synchronizing data, allowing existing or future clusters to make use of the data across a single shared file system.
A centralized Hive metastore acts as the source of truth. They chose Hive for most of their Hadoop jobs “primarily because the SQL interface is simple and familiar to people across the industry.”
Dependency management takes place across three layers: Baked AMIs, which are large, slow-loading dependencies pre-loaded on images; Automated Configurations (Masterless Puppets), which allows Puppet clients to “pull their configuration from S3 and set up a service that’s responsible for keeping S3 configurations in sync with the Puppet master;” and Runtime Staging on S3, which creates a working directory at runtime for each developer that pulls down its dependencies directly from S3.
Finally, they migrated their Hadoop jobs to Qubole, which “supported AWS/S3 and was relatively easy to get started on.”
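To illustrate the “simple and familiar” SQL interface the team cites, here is a minimal sketch of querying a Hive table through HiveServer2 with the PyHive package; the host, username, and table are hypothetical placeholders.

```python
# Minimal Hive query over HiveServer2 (pip install 'pyhive[hive]').
# Host, username, and table are hypothetical placeholders.
from pyhive import hive

conn = hive.Connection(host="hiveserver.example.com", port=10000, username="analyst")
cur = conn.cursor()
cur.execute("SELECT board_id, COUNT(*) FROM pins GROUP BY board_id LIMIT 10")
for row in cur.fetchall():
    print(row)
conn.close()
```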