I'm trying to build a way to read financial data really, really fast, at low cost. The workload (in this arena) is write/update-light and read-heavy.

Google BigQuery, being serverless, keeps costs very low, but query times are always a few seconds, which I suspect is due to the lack of indexing and the missed opportunity to exploit the structure of our common queries. I have tried various partitioning schemes on BigQuery to speed things up, with some success but nothing extraordinary (a simplified sketch of what I've tried is below).

I have never used Google Cloud Bigtable, but I get how it works conceptually: rows are stored sorted by key, so with a date-prefixed row key I believe date-range queries would be markedly faster (second sketch below).

The question is: are there ways to take advantage of date ranges in BigQuery, or does it make sense to just shift to Bigtable for mega-fast reads? I'd love to get sub-50ms.
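For reference, here is roughly the kind of partitioning I've tried, as a simplified sketch. The dataset, table, and column names (`mydataset.market_data`, `trade_date`, `ticker`) are placeholders for our actual schema.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Date-partitioned (and clustered) table, so BigQuery only scans the
# partitions that fall inside the requested date range.
client.query("""
    CREATE TABLE IF NOT EXISTS mydataset.market_data (
        trade_date DATE,
        ticker     STRING,
        close      FLOAT64
    )
    PARTITION BY trade_date
    CLUSTER BY ticker
""").result()

# A typical read: filtering on the partitioning column lets BigQuery
# prune partitions, but latency is still seconds, not milliseconds.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start", "DATE", "2023-01-01"),
        bigquery.ScalarQueryParameter("end", "DATE", "2023-03-31"),
        bigquery.ScalarQueryParameter("ticker", "STRING", "AAPL"),
    ]
)
rows = client.query(
    """
    SELECT trade_date, close
    FROM mydataset.market_data
    WHERE trade_date BETWEEN @start AND @end
      AND ticker = @ticker
    ORDER BY trade_date
    """,
    job_config=job_config,
).result()
```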
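And this is the access pattern I imagine Bigtable serving, sketched without having used it: the `ticker#date` row-key scheme, the project/instance/table IDs, and the `prices` column family are all hypothetical.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("market_data")

# Bigtable stores rows sorted lexicographically by key, so keys like
# "AAPL#2023-01-15" turn a per-ticker date range into one contiguous scan.
rows = table.read_rows(
    start_key=b"AAPL#2023-01-01",
    end_key=b"AAPL#2023-03-31",
    end_inclusive=True,
)

for row in rows:
    # Hypothetical "prices" column family with a "close" qualifier.
    cell = row.cells["prices"][b"close"][0]
    print(row.row_key.decode(), cell.value)
```

If that mental model is right, a narrow key-range scan like this seems like it could plausibly hit the sub-50ms target, but I'd like to know whether BigQuery can be pushed there first.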