Amazon Redshift

Needs advice on Amazon Redshift and MySQL

Please help me understand the difference in syntax between MySQL and Amazon Redshift. Is the syntax different, or is it completely the same?

Needs advice on Airflow and AWS Lambda

I have data stored in Amazon S3 bucket in parquet file format.

I want this data to be copied from S3 to Amazon Redshift, so I use the COPY command to do this. But I have to run it manually. I want to automate it so that whenever a new file lands in S3, it is copied to the required table in Redshift. Can you suggest what different approaches I could use?
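
For reference, a minimal sketch of what such a manual COPY step might look like from Python with psycopg2; the cluster endpoint, table name, bucket path, and IAM role below are placeholders, not real values:

```python
# Rough sketch of a manual Parquet load into Redshift (all names are placeholders).
import psycopg2

COPY_SQL = """
    COPY analytics.events
    FROM 's3://my-bucket/incoming/events.parquet'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="<password>",
)
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)  # Redshift pulls the Parquet file from S3 and loads it
```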

Replies (1)
Backend Software Engineer
Recommends AWS SNS and Amazon S3

Hello Aditya, I haven't tried this myself, but theoretically you can harness the power of Amazon S3 event notifications to generate an event whenever there is a CRUD operation on your parquet files in S3. First, you'd have to generate the event: https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html

Then you would want to subscribe to the event and use the event info to pull the file into Redshift: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-event-notifications.html#working-with-event-notifications-subscribe
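
As a rough, untested sketch of how a Lambda handler could tie the S3 event notification to a Redshift COPY (the cluster, table, and IAM role names below are placeholders):

```python
# Sketch of a Lambda handler triggered by an S3 "object created" notification.
# Cluster, database, table, and IAM role names are placeholders, not real values.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # Each record describes one object that was created in the bucket
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        copy_sql = (
            f"COPY analytics.events "
            f"FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
            f"FORMAT AS PARQUET;"
        )

        # The Redshift Data API runs the statement asynchronously, so the Lambda
        # doesn't have to hold a database connection open
        redshift_data.execute_statement(
            ClusterIdentifier="my-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql=copy_sql,
        )
```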

-Scott


Hi all,

Currently, we need to ingest data from Amazon S3 into a database, either Amazon Athena or Amazon Redshift. The problem with the data is that it is in .PSV (pipe-separated values) format and its size is above 200 GB. Query performance in Athena/Redshift is not up to the mark: queries are too slow compared to Google BigQuery, and some time out. How can I optimize the performance and the query result time? Can anyone please help me out?

Replies (4)

You can use the AWS Glue service to convert your pipe-separated data to Parquet format, which also gives you data compression. Since your data is very large, you should choose Redshift to copy it into. To manage your data, partition it in the S3 bucket and also distribute it across the Redshift cluster.
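
A minimal sketch of that conversion using PySpark (the same approach a Glue ETL job would take); the bucket paths and the "event_date" partition column are assumptions, not values from the question:

```python
# Convert pipe-separated files in S3 to compressed, partitioned Parquet.
# Paths and the partition column are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("psv-to-parquet").getOrCreate()

# Read the .PSV files as CSV with a custom "|" delimiter
df = (
    spark.read
    .option("header", "true")
    .option("sep", "|")
    .csv("s3://my-bucket/raw-psv/")
)

# Write Snappy-compressed Parquet, partitioned so Redshift/Athena scan less data
(
    df.write
    .mode("overwrite")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("s3://my-bucket/curated-parquet/")
)
```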

Data Technologies Manager at SDG Group Iberia
Recommends Amazon Redshift

First of all, you should choose between Redshift and Athena based on your use case, since they are two very different services: Redshift is an enterprise-grade MPP data warehouse, while Athena is a SQL layer on top of S3 with limited performance. If performance is a key factor, users are going to execute unpredictable queries, and direct and management costs are not a problem, I'd definitely go for Redshift. If performance is not so critical and queries will be somewhat predictable, I'd go for Athena.

Once you select the technology, you'll need to optimize your data so that queries execute as fast as possible. In both cases you may need to adapt the data model to better fit your queries. If you go for Athena, you'd also probably need to change your file format to Parquet or Avro and review your partition strategy based on your most frequent type of query. If you choose Redshift, you'll need to ingest the data from your files into it and maybe carry out some tuning tasks for performance gains.

I'd recommend Redshift for now, since it can address a wider range of use cases, but we could give you better advice if you described your use case in more depth.
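
To illustrate the kind of Redshift tuning tasks mentioned above, here is a purely hypothetical sketch of a table defined with a distribution key and sort key, then loaded from the Parquet files; the table, columns, cluster, and IAM role are invented for the example:

```python
# Illustrative only: table design choices (DISTKEY/SORTKEY) plus a COPY load.
# All identifiers below are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

DDL = """
    CREATE TABLE IF NOT EXISTS analytics.sales (
        sale_id      BIGINT,
        customer_id  BIGINT,
        sale_date    DATE,
        amount       DECIMAL(12, 2)
    )
    DISTKEY (customer_id)   -- co-locate rows joined on customer_id across nodes
    SORTKEY (sale_date);    -- speed up range filters on sale_date
"""

LOAD = """
    COPY analytics.sales
    FROM 's3://my-bucket/curated-parquet/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

for sql in (DDL, LOAD):
    redshift_data.execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=sql
    )
```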
