Alternatives to Altiscale

Amazon Redshift, Google BigQuery, Snowflake, Amazon EMR, and Stitch are the most popular alternatives and competitors to Altiscale.

What is Altiscale and what are its top alternatives?

Altiscale runs Apache Hadoop for you: they not only deploy Hadoop, they monitor, manage, fix, and update it for you. Then they take it a step further: they monitor your jobs, notify you when something's wrong with them, and can help with tuning.
Altiscale is a tool in the Big Data as a Service category of a tech stack.

Top Alternatives to Altiscale

  • Amazon Redshift

    It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions. ...

  • Google BigQuery

    Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python. ...

  • Snowflake

    Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn. ...

  • Amazon EMR

    It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics. ...

  • Stitch

    Stitch is a simple, powerful ETL service built for software developers. Stitch evolved out of RJMetrics, a widely used business intelligence platform. When RJMetrics was acquired by Magento in 2016, Stitch was launched as its own company. ...

  • Cloudera Enterprise

    Cloudera Enterprise includes CDH, the world’s most popular open source Hadoop-based platform, as well as advanced system management and data management tools plus dedicated support and community advocacy from our world-class team of Hadoop developers and experts. ...

  • Dremio

    Dremio, the data lake engine, operationalizes your data lake storage and speeds your analytics processes with a high-performance and high-efficiency query engine, while also democratizing data access for data scientists and analysts. ...

  • Azure Synapse

    It is an analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. It brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs. ...

Altiscale alternatives & related posts

Amazon Redshift

Fast, fully managed, petabyte-scale data warehouse service

PROS OF AMAZON REDSHIFT
  • Data Warehousing (37)
  • Scalable (27)
  • SQL (17)
  • Backed by Amazon (14)
  • Encryption (5)
  • Cheap and reliable (1)
  • Isolation (1)
  • Best Cloud DW Performance (1)
  • Fast columnar storage (1)

    related Amazon Redshift posts

    Julien DeFrance, Principal Software Engineer at Tophatter

    Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

    I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel and build around managed/scalable services.

    For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend using Rails API, so we'd have independently scalable micro-services from then on.

    Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts. Combined with Docker, each application would run within its own container, independently of the underlying host configuration.

    Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side, we started with Amazon RDS / MySQL and ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, what you want here is a managed service that your cloud provider handles for you.

    Future improvements / technology decisions included:

    • Caching: Amazon ElastiCache / Memcached
    • CDN: Amazon CloudFront
    • Systems Integration: Segment / Zapier
    • Data warehousing: Amazon Redshift
    • BI: Amazon Quicksight / Superset
    • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
    • Monitoring: New Relic

    As our usage grew, patterns changed, and/or our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

    One of these innovations was to get ourselves into Serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
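
The post doesn't show the actual handlers (and its serverless work predates Ruby support on Lambda), so the following is only a minimal, hypothetical sketch of what such a fully serverless write path into Amazon DynamoDB can look like. Python with boto3, the `prospect-events` table name, and the SQS-style trigger shape are all assumptions for illustration:

```python
import json
import os

import boto3

# Hypothetical table name; a real deployment would inject this via environment config.
TABLE_NAME = os.environ.get("EVENTS_TABLE", "prospect-events")
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def handler(event, context):
    """Persist each incoming (SQS-style) record into DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        item = json.loads(record["body"])  # assumes the message body is a JSON document
        table.put_item(Item=item)
    return {"statusCode": 200, "processed": len(records)}
```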

    Ankit Sobti

    Looker, Stitch, Amazon Redshift, dbt

    We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics with a lean team.

    For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data, rather than managing it.

Google BigQuery

Analyze terabytes of data in seconds

PROS OF GOOGLE BIGQUERY
  • High Performance (27)
  • Easy to use (24)
  • Fully managed service (21)
  • Cheap Pricing (19)
  • Process hundreds of GB in seconds (16)
  • Full table scans in seconds, no indexes needed (11)
  • Big Data (11)
  • Always on, no per-hour costs (8)
  • Good combination with fluentd (5)
  • Machine learning (4)
CONS OF GOOGLE BIGQUERY
  • You can't unit test changes in BQ data (1)

    related Google BigQuery posts

    Context: I wanted to create an end-to-end IoT data pipeline simulation in Google Cloud IoT Core and other GCP services. I never touched Terraform meaningfully until working on this project, and it's been one of the best explorations of my development career. The documentation and syntax are incredibly human-readable and friendly. I'm used to building infrastructure through the Google APIs via Python, but I'm so glad past Sung did not make that decision. I was tempted to use Google Cloud Deployment Manager, but the templates were a bit convoluted on first impression. I'm glad past Sung did not make this decision either.

    Solution: Leveraging Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage, and Google Compute Engine along with some other fun tools, I can deploy over 40 GCP resources using Terraform!

    Check Out My Architecture: CLICK ME

    Check out the GitHub repo attached

    Tim Specht, Co-Founder and CTO at Dubsmash

    In order to accurately measure & track user behaviour on our platform, we quickly moved from the initial solution using Google Analytics to a custom-built one due to resource & pricing concerns we had.

    While this does sound complicated, it’s as easy as clients sending JSON blobs of events to Amazon Kinesis from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it’s available), we can use almost-standard-SQL to simply query for data while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients are aggregating events internally and, once a certain threshold is reached or the app is going to the background, sending the events as a JSON blob into the stream.

    In the past we had workers running that continuously read from the stream and would validate and post-process the data and then enqueue them for other workers to write them to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions would automatically be triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them, and persist the events to BigQuery. While this approach had a couple of bumps on the road, like re-triggering functions asynchronously to keep up with the stream and proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.
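
Dubsmash's actual functions aren't public, so purely as an illustration of the shape described above (a Kinesis-triggered Lambda that decodes records, pre-aggregates them, and pushes one batched message to SQS for a downstream BigQuery writer), a hedged Python/boto3 sketch with a hypothetical queue URL might look like this:

```python
import base64
import json
from collections import Counter

import boto3

sqs = boto3.client("sqs")
# Hypothetical queue URL; the real pipeline's queue is not public.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pre-aggregated-events"


def handler(event, context):
    """Decode base64-encoded Kinesis records, count events per name,
    and enqueue the aggregate as a single SQS message."""
    counts = Counter()
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        counts[payload.get("event_name", "unknown")] += 1
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(dict(counts)))
    return {"aggregated_events": sum(counts.values())}
```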

    #ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService

Snowflake

The data warehouse built for the cloud

PROS OF SNOWFLAKE
  • Public and Private Data Sharing (4)
  • Good Performance (3)
  • Serverless (2)
  • Multicloud (2)
  • Great Documentation (2)
  • User Friendly (2)
  • Usage based billing (1)
  • Innovative (1)
  • Economical (1)

    related Snowflake posts

    Shared insights on Google BigQuery and Snowflake

    I use Google BigQuery because it makes it super easy to query and store data for analytics workloads. If you're using GCP, you're likely using BigQuery. However, running data viz tools directly connected to BigQuery will run pretty slow. They recently announced BI Engine, which will hopefully compete well against big players like Snowflake when it comes to concurrency.

    What's nice too is that it has SQL-based ML tools, and it has great GIS support!
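
For a sense of how little plumbing a BigQuery query takes, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

# Assumes application-default credentials; project/dataset/table are made up for illustration.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT event_name, COUNT(*) AS events
    FROM `my-analytics-project.analytics.events`
    WHERE event_date >= '2023-01-01'
    GROUP BY event_name
    ORDER BY events DESC
    LIMIT 10
"""

# query() starts the job; result() blocks until the rows are ready.
for row in client.query(query).result():
    print(row.event_name, row.events)
```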

    Shared insights on Snowflake, Hadoop, and MarkLogic

    For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We're trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions/best practices for when to use each and what data to store in MarkLogic versus Snowflake versus Hadoop, or are all three of these platforms redundant with one another?

Amazon EMR

Distribute your data and processing across Amazon EC2 instances using Hadoop

PROS OF AMAZON EMR
  • On demand processing power (15)
  • Don't need to maintain Hadoop Cluster yourself (12)
  • Hadoop Tools (7)
  • Elastic (6)
  • Backed by Amazon (4)
  • Flexible (3)
  • Economic - pay as you go, easy to use CLI and SDKs (3)
  • Don't need a dedicated Ops group (2)
  • Great support (1)
  • Massive data handling (1)


Stitch

All your data. In your data warehouse. In minutes.

PROS OF STITCH
  • 3 minutes to set up (7)
  • Super simple, great support (4)

Cloudera Enterprise

Enterprise Platform for Big Data



Dremio

The data lake engine

PROS OF DREMIO
  • Nice GUI to enable more people to work with Data (3)
  • Connect NoSQL databases with RDBMS (2)
  • Easier to Deploy (2)


Azure Synapse

Analytics service that brings together enterprise data warehousing and Big Data analytics

PROS OF AZURE SYNAPSE
  • Security (3)
  • ETL (2)
  • Serverless (2)
