CTO at Harvested Financial

I'm trying to build a way to read financial data really, really fast, at low cost. We are write/update-light (in this arena) and read-heavy. Google BigQuery, being serverless, keeps costs extremely low, but query times are always a few seconds, I think because of the lack of indexing and no way to exploit the structure of our common queries. I have tried various partitioning schemes on BigQuery to speed things up, with some success but nothing extraordinary. I have never used Google Cloud Bigtable, but I understand how it works conceptually, and I believe it would make date-range queries markedly faster. The question is: are there ways to take advantage of date ranges in BigQuery, or does it make sense to just shift to Bigtable for mega-fast reads? I'd love to get sub-50ms.
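To make the partitioning part concrete, here is a rough sketch of the kind of date-partitioned, symbol-clustered layout I've been experimenting with (project, dataset, table, and column names below are just illustrative, not our real schema):

# Rough sketch only: a date-partitioned, symbol-clustered BigQuery table
# and a typical date-range query against it. Names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses default project credentials

# Partition on the trade date and cluster on the symbol so date-range
# queries for one symbol scan far fewer blocks.
client.query("""
    CREATE TABLE IF NOT EXISTS market.prices (
        symbol      STRING,
        trade_date  DATE,
        close       NUMERIC
    )
    PARTITION BY trade_date
    CLUSTER BY symbol
""").result()

# A typical read: one symbol over a date range. Partition pruning plus
# clustering keeps scanned bytes (and cost) small, but end-to-end latency
# is still seconds, not the sub-50ms I'm after.
job = client.query("""
    SELECT trade_date, close
    FROM market.prices
    WHERE symbol = 'AAPL'
      AND trade_date BETWEEN '2023-01-01' AND '2023-03-31'
    ORDER BY trade_date
""")
for row in job.result():
    print(row.trade_date, row.close)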

Replies (1)
Sanjeev Singh · Lead Data Engineer at BharatPe

As a data warehouse solution, Google BigQuery is meant more for large-scale data analysis than for real-time, low-latency serving. You can go with Bigtable instead of BigQuery, but be prepared for the higher cost. In most data solutions, if you are looking for heavy real-time write/update or very low-latency reads, you have to put some extra cost into the solution. For more detail you can check this link: https://cloud.google.com/blog/products/gcp/in-memory-query-execution-in-google-bigquery
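If you do move to Bigtable, the usual way to get fast date-range reads is to encode the entity and the date into the row key and scan a contiguous key range. A minimal sketch with the Python client follows; the project, instance, table, and column-family names are assumptions for illustration, not your actual setup:

# Minimal sketch: date-range scan in Bigtable by encoding symbol and date
# into the row key, e.g. "AAPL#2023-01-15". Project/instance/table/column
# family names are assumptions, not an actual setup.
from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")
table = instance.table("prices")

# Rows for one symbol sort together by date, so a date range is a single
# contiguous scan over the key space.
row_set = RowSet()
row_set.add_row_range_from_keys(
    start_key=b"AAPL#2023-01-01",
    end_key=b"AAPL#2023-03-31",
    end_inclusive=True,
)

for row in table.read_rows(row_set=row_set):
    close = row.cells["quotes"][b"close"][0].value
    print(row.row_key.decode(), close.decode())

Designing the row key around your dominant read pattern (symbol first, then date, or the reverse if you mostly read all symbols for a day) is what makes this kind of scan fast; it is the access pattern Bigtable is built for.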
