What is Sqoop and what are its top alternatives?
Sqoop is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Its top alternatives are listed below.
Top Alternatives to Sqoop
- Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...
- Apache Flume
It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple, extensible data model that allows for online analytic applications. ...
- Talend
It is an open source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms. ...
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...
- Apache Impala
Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. ...
- Slick
It is a modern database query and access library for Scala. It allows you to work with stored data almost as if you were using Scala collections, while at the same time giving you full control over when database access happens and which data is transferred; a short sketch appears in the Slick section below. ...
- Spring Data
It makes it easy to use data access technologies, relational and non-relational databases, map-reduce frameworks, and cloud-based data services. This is an umbrella project which contains many subprojects that are specific to a given database. ...
- DataGrip
A cross-platform IDE that is aimed at DBAs and developers working with SQL databases. ...
Sqoop alternatives & related posts
Pros of Apache Spark
- Open-source (60)
- Fast and flexible (48)
- Great for distributed SQL-like applications (8)
- One platform for every big data problem (8)
- Easy to install and use (6)
- Works well for most data science use cases (3)
- In-memory computation (2)
- Interactive query (2)
- Machine learning library, streaming in real time (2)
Cons of Apache Spark
- Speed (3)
related Apache Spark posts
The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).
At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan gives our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.
For more info:
- Our Algorithms Tour: https://algorithms-tour.stitchfix.com/
- Our blog: https://multithreaded.stitchfix.com/blog/
- Careers: https://multithreaded.stitchfix.com/careers/
#DataScience #DataStack #Data
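For readers unfamiliar with the pattern described above, here is a minimal sketch of the kind of S3-to-S3 Spark batch job such an architecture runs; the bucket names, paths, and column names are hypothetical, not Stitch Fix's actual pipeline:

```scala
import org.apache.spark.sql.SparkSession

object SnapshotEtl {
  def main(args: Array[String]): Unit = {
    // Cluster settings (YARN master, executors) come from spark-submit;
    // only the application name is set here.
    val spark = SparkSession.builder().appName("snapshot-etl").getOrCreate()

    // Read a day of event data from S3 (hypothetical bucket and path).
    val events = spark.read.parquet("s3a://example-warehouse/events/dt=2019-01-01/")

    // A trivial transformation: count events per type.
    val counts = events.groupBy("event_type").count()

    // Write the result back to S3, again to a hypothetical location.
    counts.write.mode("overwrite").parquet("s3a://example-warehouse/aggregates/event_counts/")

    spark.stop()
  }
}
```

Because both input and output live in S3 rather than on cluster-local HDFS, a job like this can run on any short-lived YARN cluster, which is what makes the elastic scaling described above possible.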
Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:
Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse it to any sink, leveraging Apache Spark. The name Marmaray comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:
https://eng.uber.com/marmaray-hadoop-ingestion-open-source/
(Direct GitHub repo: https://github.com/uber/marmaray)
Apache Flume
related Apache Flume posts
related Talend posts
Kafka
Pros of Kafka
- High-throughput (126)
- Distributed (119)
- Scalable (90)
- High-performance (85)
- Durable (65)
- Publish-subscribe (37)
- Simple to use (19)
- Open source (17)
- Written in Scala and Java; runs on the JVM (11)
- Message broker + streaming system (8)
- Avro schema integration (4)
- Robust (4)
- KSQL (4)
- Supports multiple clients (2)
- Partitioned, replayable log (2)
- Flexible (1)
- Extremely good parallelism constructs (1)
- Simple publisher / multi-subscriber model (1)
- Fun (1)
Cons of Kafka
- Non-Java clients are second-class citizens (30)
- Needs ZooKeeper (28)
- Operational difficulties (8)
- Terrible packaging (3)
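To make the publish-subscribe and partitioned-log points above concrete, here is a minimal producer sketch using the standard Kafka Java client from Scala; the broker address, topic name, key, and payload are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Placeholder broker address; serializers turn keys/values into bytes.
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // The record is appended to one partition of the topic's commit log;
    // any number of consumer groups can replay it at their own offsets.
    producer.send(new ProducerRecord[String, String]("events", "user-42", """{"action":"click"}"""))
    producer.flush()
    producer.close()
  }
}
```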
related Kafka posts
As we've evolved or added infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data; this is made highly available with Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.
Pros of Apache Impala
- Super fast (11)
- Load balancing (1)
- Replication (1)
- Scalability (1)
- Distributed (1)
- High performance (1)
- Massively parallel processing (1)
- Open source (1)
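Because Impala speaks SQL over standard interfaces, one common way to run the SELECT/JOIN/aggregate queries mentioned above is plain JDBC. The sketch below is hedged: the driver class name and port are assumptions matching the Cloudera Impala JDBC 4.1 driver and the default Impala daemon port, and the events table is hypothetical.

```scala
import java.sql.DriverManager

object ImpalaQuerySketch {
  def main(args: Array[String]): Unit = {
    // Assumed driver class for the Cloudera Impala JDBC 4.1 driver;
    // adjust the class name, host, and port to your driver and cluster.
    Class.forName("com.cloudera.impala.jdbc41.Driver")
    val conn = DriverManager.getConnection("jdbc:impala://impala-host:21050/default")
    try {
      // An aggregate query over a hypothetical events table.
      val rs = conn.createStatement().executeQuery(
        "SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type")
      while (rs.next()) println(s"${rs.getString(1)}: ${rs.getLong(2)}")
    } finally conn.close()
  }
}
```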
related Apache Impala posts
I have been working on a Java application to demonstrate the latency of select/insert/update operations on Kudu storage using the Apache Kudu API (the Java-based client). I have a few questions about using the Apache Kudu API:
- Is there a JDBC wrapper around the Apache Kudu API for getting connections to Kudu masters, with a connection pool mechanism and all DB operations?
- Does the Apache Kudu API support ORDER BY, GROUP BY, and aggregate functions? If yes, how do we implement these using the Kudu APIs?
- Can we add Kudu predicates to a Kudu update operation? If yes, how?
- Does the Apache Kudu API support batch insertion, i.e. executing a Kudu insert for multiple rows in one go instead of row by row (like KuduSession.apply(List))?
- Does the Apache Kudu API support joins on tables?
- Which tool is preferred over the other (Apache Impala or the Kudu API) for read and update/insert DB operations?
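On the batch-insertion question specifically, a common approach with the Kudu Java client (shown here from Scala; the same calls work from Java) is to put the session into MANUAL_FLUSH mode, apply many operations, and flush once. The master address, table name, and columns below are hypothetical:

```scala
import org.apache.kudu.client.{KuduClient, SessionConfiguration}

object KuduBatchInsert {
  def main(args: Array[String]): Unit = {
    // Hypothetical master address and table; adjust to your cluster.
    val client  = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
    val table   = client.openTable("metrics")
    val session = client.newSession()
    // MANUAL_FLUSH buffers applied operations until flush(), giving batch semantics.
    session.setFlushMode(SessionConfiguration.FlushMode.MANUAL_FLUSH)

    for (i <- 0 until 1000) {
      val insert = table.newInsert()
      val row = insert.getRow
      row.addLong("id", i.toLong)
      row.addString("name", s"row-$i")
      session.apply(insert) // buffered locally, not yet sent
    }
    session.flush() // sends the whole batch in one round of RPCs
    session.close()
    client.close()
  }
}
```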
related Slick posts
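To illustrate the collection-like query style described in the Slick entry above, here is a minimal, self-contained sketch against an in-memory H2 database; the users table and its columns are invented for the example:

```scala
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import slick.jdbc.H2Profile.api._

object SlickSketch {
  // Hypothetical mapping for a "users" table with id and name columns.
  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id   = column[Long]("id", O.PrimaryKey)
    def name = column[String]("name")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  def main(args: Array[String]): Unit = {
    val db = Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver")
    try {
      // Queries compose like Scala collections; nothing touches the
      // database until db.run executes the combined action.
      val action = for {
        _     <- users.schema.create
        _     <- users += ((1L, "Ada"))
        names <- users.filter(_.id === 1L).map(_.name).result
      } yield names
      println(Await.result(db.run(action), 10.seconds))
    } finally db.close()
  }
}
```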
Spring Data
related Spring Data posts
I need some advice on choosing an engine for generating web pages from a Spring Boot app. Which technology is the best solution today? 1) JSP + JSTL 2) Apache FreeMarker 3) Thymeleaf. Or you can suggest other promising tools. I am using Spring Boot, Spring Web, Spring Data, Spring Security, PostgreSQL, and Apache Tomcat in my project. I have already tried generating pages using JSP and JSTL, and it went well. However, I had huge problems carrying already created static pages over to JSP format because of the syntax. Thanks.
Pros of DataGrip
- Works on Linux, Windows, and macOS (4)
- Code analysis (2)
- Diff viewer (2)
- Wide range of DBMS support (2)
- Generate ERDs (1)
- Quick fixes using keyboard shortcuts (1)
- Database introspection on 21 different DBMSs (1)
- Export data in a variety of formats via an open API (1)
- Import data (1)
- Code completion (1)