What is Alooma and what are its top alternatives?
Alooma is a cloud-based data integration platform that helps businesses streamline moving data from various sources into data warehouses for analysis. It offers features such as data transformation, automated mapping, and real-time data monitoring. However, Alooma has limitations, including limited support for complex data transformations and pricing that is high for small businesses; following its 2019 acquisition by Google, its focus also shifted toward Google Cloud destinations, which is a common reason teams evaluate alternatives.
- Stitch: Stitch is a cloud-based data integration platform that offers real-time data loading into various data warehouses. Key features include support for over 100 data sources, simple setup, and an easy-to-use interface. Pros include ease of use and fast setup, while cons include limited data transformation capabilities compared to Alooma.
- Segment: Segment is a customer data platform that helps businesses collect, clean, and control their customer data. Key features include data routing to various destinations, analytics, and privacy compliance tools. Pros include robust data governance features, while cons include the complexity of setting up custom data pipelines.
- Fivetran: Fivetran is an automated data integration platform that replicates data from various sources to data warehouses. Key features include support for over 150 data sources, automatic schema migrations, and fully managed pipelines. Pros include a high level of automation, while cons include limited data transformation capabilities compared to Alooma.
- Talend: Talend is an open-source data integration platform that offers a suite of tools for data integration, data quality, and big data. Key features include support for both batch and real-time data processing, extensive data transformation tools, and collaboration features. Pros include open-source flexibility, while cons include a steeper learning curve compared to Alooma.
- Matillion: Matillion is a cloud-native data integration platform that is specifically designed for cloud data warehouses such as Snowflake, BigQuery, and Redshift. Key features include a visual interface for data transformation, pre-built connectors, and scalability. Pros include being optimized for cloud data warehouses, while cons include limited support for on-premise data sources.
- Xplenty: Xplenty is a cloud-based data integration platform that offers ETL and ELT capabilities for moving data between various sources and destinations. Key features include a drag-and-drop interface, support for multiple data sources, and data security compliance. Pros include ease of use, while cons include limited support for complex data transformations.
- Aginity: Aginity is a platform that specializes in analytics and data science workflows, offering capabilities for data management, analytics, and collaboration. Key features include a SQL IDE, a data catalog, and advanced analytics tools. Pros include its focus on analytics workflows, while cons include limited support for real-time data integration compared to Alooma.
- Hevo Data: Hevo Data is a no-code data pipeline platform that helps businesses integrate data from various sources to data warehouses and other destinations. Key features include real-time data integration, support for over 40 data sources, and auto-mapping of data fields. Pros include ease of use, while cons include limited support for complex data transformations.
- Skyvia: Skyvia is a cloud data integration platform that offers ETL, data synchronization, and backup solutions. Key features include a visual interface for designing workflows, support for various data sources, and automation of data processing tasks. Pros include affordability, while cons include limited scalability compared to Alooma.
- Onna: Onna is a platform that specializes in enterprise data governance by providing capabilities for eDiscovery, compliance, and data integration. Key features include federated search across multiple data sources, data processing and enrichment, and collaboration tools. Pros include its focus on enterprise data governance, while cons include limited support for real-time data integration and data warehouses.
Top Alternatives to Alooma
- Stitch
Stitch is a simple, powerful ETL service built for software developers. Stitch evolved out of RJMetrics, a widely used business intelligence platform. When RJMetrics was acquired by Magento in 2016, Stitch was launched as its own company. ...
- Segment
Segment is a single hub for customer data. Collect your data in one place, then send it to more than 100 third-party tools, internal systems, or Amazon Redshift with the flip of a switch. ...
- Datadog
Datadog is the leading service for cloud-scale monitoring. It is used by IT, operations, and development teams who build and operate applications that run on dynamic or hybrid cloud infrastructure. Start monitoring in minutes with Datadog! ...
- Talend
It is an open-source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms. ...
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...
- AWS Glue
A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. ...
- Matillion
It offers a modern, browser-based UI with powerful, push-down ETL/ELT functionality. With a fast setup, you are up and running in minutes. ...
- Airflow
Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. ...
Alooma alternatives & related posts
Stitch
Pros of Stitch
- 3 minutes to set up (8)
- Super simple, great support (4)
related Stitch posts
Looker, Stitch, Amazon Redshift, dbt
We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics as a lean team.
For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data, rather than managing it.
Hello! For security and strategic reasons, we are migrating our apps from AWS/Google to a cloud provider with more security certifications and fewer features, named Outscale. So far we have been using Google BigQuery as our data warehouse with ELT workflows (using Stitch and dbt), and we need to migrate our data ecosystem to this new cloud provider.
We are setting up a Kubernetes cluster in our new cloud provider for our apps. Regarding the data warehouse, it's not clear whether there are advantages or disadvantages to setting it up on Kubernetes (apart from having to create node groups and tolerations with more RAM/CPU). We are also not sure which open-source or on-premise tool is best to use. The main requirement is that data must remain in the secure cluster, and no external entity (especially US-based) can have access to it. We have a dev cluster/environment and a production cluster/environment on this cloud.
Regarding the actual DWH usage:
- Today we have ~1.5TB in BigQuery in production. We're going to run our initial tests with ~50-100GB of data for our test cluster.
- Most of our data comes from other databases, so in most cases we already have replicated sources somewhere, and there are only a handful of collections whose source is directly in the DWH (such as snapshots, some external data we've fetched at some point, Google Analytics, etc.) and needs an appropriate level of replication.
- We are a team of 30-ish people, we do not have critical needs regarding analytics speed, and we do not need real time. We rebuild our dbt models 2-3 times a day, and this usually proves enough.
Apart from PostgreSQL, I haven't really found open-source or on-premise alternatives for setting up a data warehouse and running transformations with dbt. There is also the question of data ingestion: I've selected Airbyte and Meltano, and I have trouble understanding whether one of the two is better, but Airbyte seems to have a bigger community.
What do you suggest regarding the data warehouse and the ELT workflows?
- Kubernetes or not?
- PostgreSQL or something else? If Postgres, what are the important configs you'd have in mind?
- Airbyte/dbt or something else?
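For readers weighing the PostgreSQL option above, here is a minimal sketch of what landing replicated data into Postgres for dbt to transform might look like. It assumes pandas and SQLAlchemy, plus placeholder connection details, schema, and table names; none of these come from the post.

```python
# Minimal sketch: land a replicated table into PostgreSQL so dbt can transform it
# downstream. Connection string, schema, and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

# Connection to the in-cluster PostgreSQL instance (placeholder credentials).
engine = create_engine("postgresql+psycopg2://analytics:secret@postgres.internal:5432/warehouse")

# Pretend this frame came from one of the replicated source databases.
orders = pd.DataFrame(
    {"order_id": [1, 2], "amount": [19.99, 5.00], "created_at": ["2023-01-01", "2023-01-02"]}
)

# Append into a "raw" schema (assumed to exist); dbt models would then build on raw.orders.
orders.to_sql("orders", engine, schema="raw", if_exists="append", index=False)
```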
Segment
Pros of Segment
- Easy to scale and maintain 3rd party services (86)
- One API (49)
- Simple (39)
- Multiple integrations (25)
- Cleanest API (19)
- Easy (10)
- Free (9)
- Mixpanel Integration (8)
- Segment SQL (7)
- Flexible (6)
- Google Analytics Integration (4)
- Salesforce Integration (2)
- SQL Access (2)
- Clean Integration with Application (2)
- Own all your tracking data (1)
- Quick setup (1)
- Clearbit integration (1)
- Beautiful UI (1)
- Integrates with Apptimize (1)
- Escort (1)
- Woopra Integration (1)
Cons of Segment
- Not clear which events/options are integration-specific (2)
- Limitations with integration-specific configurations (1)
- Client-side events are separated from server-side (1)
related Segment posts
Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.
I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.
For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. We've however broken up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API so we'd get independently scalable micro-services from now on.
Our various applications could now be deployed using AWS Elastic Beanstalk so we wouldn't waste any more efforts writing time-consuming Capistrano deployment scripts for instance. Combined with Docker so our application would run within its own container, independently from the underlying host configuration.
Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.
Future improvements / technology decisions included:
- Caching: Amazon ElastiCache / Memcached
- CDN: Amazon CloudFront
- Systems Integration: Segment / Zapier
- Data warehousing: Amazon Redshift
- BI: Amazon QuickSight / Superset
- Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
- Monitoring: New Relic
As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.
One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
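To make the Lambda + DynamoDB pattern above concrete, here is a minimal Python sketch (the post notes only Node.js was available at the time, so the language here is our choice). The table name and payload fields are hypothetical, not taken from the post.

```python
# Minimal sketch of a fully serverless handler: API Gateway -> Lambda -> DynamoDB.
# Table name and payload fields are hypothetical.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("prospect-events")  # hypothetical table


def handler(event, context):
    """Persist an incoming event so the whole call chain stays serverless."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),
        "territory": body.get("territory", "unknown"),
        "event_type": body.get("event_type", "page_view"),
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"id": item["id"]})}
```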
Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.
We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.
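For illustration, here is a minimal sketch of the kind of server-side tracking call that feeds trackers consolidated through Segment, using Segment's analytics-python library. The write key, user ID, event name, and properties are placeholders.

```python
# Minimal sketch of sending a server-side event to Segment, which can then be
# routed to a destination such as Amplitude or a warehouse. Values are placeholders.
import analytics  # provided by the `analytics-python` package

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"

analytics.track(
    user_id="user_123",
    event="Report Viewed",
    properties={"report": "weekly_active_users", "source": "backend"},
)

analytics.flush()  # make sure queued events are delivered before the process exits
```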
Datadog
Pros of Datadog
- Monitoring for many apps (databases, web servers, etc.) (139)
- Easy setup (107)
- Powerful UI (87)
- Powerful integrations (84)
- Great value (70)
- Great visualization (54)
- Events + metrics = clarity (46)
- Notifications (41)
- Custom metrics (41)
- Flexibility (39)
- Free & paid plans (19)
- Great customer support (16)
- Makes my life easier (15)
- Adapts automatically as I scale up (10)
- Easy setup and plugins (9)
- Super easy and powerful (8)
- AWS support (7)
- In-context collaboration (7)
- Rich in features (6)
- Docker support (5)
- Cost (4)
- Full visibility of applications (4)
- Monitor almost everything (4)
- Cute logo (4)
- Automation tools (4)
- Source control and bug tracking (4)
- Simple, powerful, great for infra (4)
- Easy to analyze (4)
- Better than others (4)
- Best in the field (3)
- Expensive (3)
- Good for startups (3)
- Free setup (3)
- APM (2)
Cons of Datadog
- Expensive (20)
- No errors exception tracking (4)
- If the external network goes down, you won't be logging (2)
- Complicated (1)
related Datadog posts
Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.
We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.
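As a companion to the monitoring setup described above, here is a minimal sketch of emitting custom metrics to a local Datadog Agent over DogStatsD with the datadog Python package. The metric names, tags, and agent address are placeholders, not details from the post.

```python
# Minimal sketch of emitting custom metrics to a locally running Datadog Agent
# via DogStatsD. Metric names and tags are placeholders.
from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

# Count an event (e.g. an unhandled exception forwarded from error-handling code).
statsd.increment("app.unhandled_exceptions", tags=["service:web", "env:production"])

# Record a gauge that a dashboard could chart alongside other metrics.
statsd.gauge("app.active_sessions", 42, tags=["service:web"])
```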
Hey there! We are looking at Datadog, Dynatrace, AppDynamics, and New Relic as options for our web application monitoring.
Current Environment: .NET Core Web app hosted on Microsoft IIS
Future Environment: Web app will be hosted on Microsoft Azure
Tech Stacks: IIS, RabbitMQ, Redis, Microsoft SQL Server
Requirement: Infra Monitoring, APM, Real-User Monitoring (user activity monitoring, i.e., time spent on a page, most active page, etc.), Service Tracing, Root Cause Analysis, and Centralized Log Management.
Please advise on the above. Thanks!
related Talend posts
Kafka
Pros of Kafka
- High-throughput (126)
- Distributed (119)
- Scalable (92)
- High-performance (86)
- Durable (66)
- Publish-subscribe (38)
- Simple to use (19)
- Open source (18)
- Written in Scala and Java; runs on the JVM (12)
- Message broker + streaming system (9)
- KSQL (4)
- Avro schema integration (4)
- Robust (4)
- Supports multiple clients (3)
- Extremely good parallelism constructs (2)
- Partitioned, replayable log (2)
- Simple publisher / multi-subscriber model (1)
- Fun (1)
- Flexible (1)
Cons of Kafka
- Non-Java clients are second-class citizens (32)
- Needs ZooKeeper (29)
- Operational difficulties (9)
- Terrible packaging (5)
related Kafka posts
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.
Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
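To illustrate the "reads off of a Kafka pipeline" step, here is a minimal consumer sketch using kafka-python. The library choice, topic, broker address, group id, and message shape are assumptions for illustration, not details from the interview.

```python
# Minimal sketch of a service reading from a Kafka pipeline, using kafka-python.
# Topic, broker address, and group id are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "content-updates",                     # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="graphql-backing-store",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # In a real service this would update the datastore the GraphQL server reads from.
    print(record.get("id"), record.get("type"))
```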
To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of a total of 1,600+ Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
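Singer is Pinterest's internal logging agent, so as a stand-in here is a minimal sketch of the submitted/finished event pattern using kafka-python. The topic name, field names, broker address, and query id are placeholders.

```python
# Minimal sketch of logging query lifecycle events to a Kafka topic.
# Pinterest uses its internal Singer agent for this; kafka-python is used here
# purely as a stand-in, and all names are placeholders.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)


def log_query_event(query_id: str, state: str) -> None:
    """Emit one event when a query is submitted and one when it finishes."""
    producer.send(
        "presto_query_events",  # hypothetical topic
        {"query_id": query_id, "state": state, "ts": time.time()},
    )


log_query_event("20240101_000000_00001_abcde", "submitted")
log_query_event("20240101_000000_00001_abcde", "finished")
producer.flush()
```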
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
AWS Glue
Pros of AWS Glue
- Managed Hive Metastore (9)
related AWS Glue posts
Will Dataflow be the right replacement for AWS Glue? Are there any unforeseen gaps, such as certain proprietary transformations, the connector ecosystem, or data quality and data cleansing features not being supported in Google Cloud Dataflow?
Also, how about Google Cloud Data Fusion as a replacement, in terms of no-code/low-code? (Since basic use cases in Glue are supported through the UI, CDF may be the right choice in that case.)
What would be the best choice?
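To ground the comparison, here is a minimal Apache Beam pipeline of the kind Dataflow runs, written with the Python SDK. The bucket paths and transformation are made up, and this is only a shape comparison with a Glue job, not a claim about feature parity.

```python
# Minimal sketch of a Beam pipeline of the kind Dataflow runs, for comparison
# with a Glue ETL job. Paths and the transformation are placeholders; run on
# Dataflow by passing --runner=DataflowRunner plus project/region options.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # populate from command-line args in a real job

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/events.csv")
        | "KeepValid" >> beam.Filter(lambda line: line and not line.startswith("#"))
        | "Normalize" >> beam.Map(str.lower)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/events")
    )
```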
Hi,
We are currently storing the data in Amazon S3 in Apache Parquet format. We use Presto to query the data from S3 and catalog it using the AWS Glue Data Catalog. We have Metabase sitting on top of Presto, where our reports live. Currently, Presto is becoming too costly for us, and we are looking for alternatives to it, but we want to keep as much of the remaining setup (S3, Metabase) as possible. Please suggest alternative approaches.
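For context, here is a minimal sketch of querying a Presto + Glue Data Catalog setup like the one described, using the presto-python-client package. The host, user, schema, and table are placeholders.

```python
# Minimal sketch of querying an existing Presto deployment from Python with
# presto-python-client. Host, schema, and table names are placeholders; the
# Glue Data Catalog is typically exposed through Presto's hive catalog.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto.internal.example.com",
    port=8080,
    user="metabase",
    catalog="hive",
    schema="analytics",
)

cur = conn.cursor()
cur.execute("SELECT event_date, count(*) FROM events GROUP BY event_date LIMIT 10")
for event_date, n in cur.fetchall():
    print(event_date, n)
```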
related Matillion posts
Airflow
Pros of Airflow
- Features (53)
- Task dependency management (14)
- Beautiful UI (12)
- Cluster of workers (12)
- Extensibility (10)
- Open source (6)
- Complex workflows (5)
- Python (5)
- Good API (3)
- Apache project (3)
- Custom operators (3)
- Dashboard (2)
Cons of Airflow
- Observability is not great when the DAGs exceed 250 (2)
- Running it on a Kubernetes cluster is relatively complex (2)
- Open source - provides minimum or no support (2)
- Logical separation of DAGs is not straightforward (1)
related Airflow posts
Data science and engineering teams at Lyft maintain several big data pipelines that serve as the foundation for various types of analysis throughout the business.
Apache Airflow sits at the center of this big data infrastructure, allowing users to “programmatically author, schedule, and monitor data pipelines.” Airflow is an open source tool, and “Lyft is the very first Airflow adopter in production since the project was open sourced around three years ago.”
There are several key components of the architecture. A web UI allows users to view the status of their queries, along with an audit trail of any modifications to the query. A metadata database stores things like job status and task instance status. A multi-process scheduler handles job requests and triggers the executor to execute those tasks.
Airflow supports several executors, though Lyft uses the CeleryExecutor to scale task execution in production. Airflow is deployed to three Amazon Auto Scaling Groups, each associated with a Celery queue.
Audit logs supplied to the web UI are powered by the existing Airflow audit logs as well as Flask signals.
Datadog, Statsd, Grafana, and PagerDuty are all used to monitor the Airflow system.
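To make the "DAGs of tasks" idea concrete, here is a minimal DAG sketch in the Airflow 2.x style. The DAG id, tasks, and commands are placeholders unrelated to Lyft's actual pipelines; the scheduler hands tasks like these to whichever executor (such as the CeleryExecutor mentioned above) the deployment is configured with.

```python
# Minimal sketch of an Airflow DAG: two placeholder tasks with a dependency,
# picked up by the scheduler and run by the configured executor.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'pull data from source'")
    load = BashOperator(task_id="load", bash_command="echo 'load into warehouse'")

    extract >> load  # load runs only after extract succeeds
```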
We are a young start-up with 2 developers and a team in India looking to choose our next ETL tool. We have a few processes in Azure Data Factory but are looking to switch to a better platform. We were debating Trifacta and Airflow, or even staying with Azure Data Factory. The use case will be to feed data to front-end APIs.