How to get faster query speeds

I recently installed TimescaleDB locally using Docker, and I’m slightly disappointed with the performance. I tried retrieving 500K records (roughly 45 MB) from a hypertable I created following the Getting Started guide, loading the result into memory with ConnectorX, which is supposed to be very fast.

At any rate, it takes 5 s to load into memory from TimescaleDB via ConnectorX. I was previously using pd.read_csv(), which takes about 0.5 s to load the same data into memory. Of course I don’t expect the database to be anywhere near as fast as that, but I also didn’t expect it to take 10x longer given that it’s on the same computer and the same disk.

Five seconds is not really workable for my use case. Is there anything I can do to reduce this time?
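For reference, the load is roughly equivalent to this sketch (the connection string, table, and column names here are placeholders, not my exact setup):

```python
# Rough sketch of the ConnectorX load described above; the connection
# string and table/column names are placeholders for illustration.
QUERY = """
select *
from mytable
where
  close_time > now() - interval '500 day'
  and symbol = 'abc'
"""

def load_trades(conn_str: str):
    """Materialize the query result as a pandas DataFrame via ConnectorX."""
    import connectorx as cx  # imported lazily; requires `pip install connectorx`
    return cx.read_sql(conn_str, QUERY, return_type="pandas")

# Example (assumes a local TimescaleDB container):
# df = load_trades("postgresql://postgres:password@localhost:5432/tsdb")
```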

Can you give a bit more background? What’s the chunk size? When you say the data set from the Getting Started guide, do you mean the stocks example?

What query did you execute? Did it include any aggregations? The main use case for TimescaleDB isn’t just reading all the data back out of the database, but using the data to run aggregations and analytics inside the database.

Apart from that, I’ve never used ConnectorX, so I can’t give any hints on that side.

Sure. ConnectorX doesn’t appear to use a chunk size; AFAIK it uses some sort of streaming. The dataset is financial data similar to the one provided in the example.

The query doesn’t include any aggregations:

select *
from mytable
where
  close_time > now() - interval '500 day'
  and symbol = 'abc'
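If I instead pushed an aggregation into the database as suggested above, I guess the call would look something like this sketch (time_bucket() and last() are TimescaleDB functions; the close column and table name are assumptions for illustration):

```python
# Sketch: push a daily aggregation into TimescaleDB instead of pulling
# all raw rows out. time_bucket() and last() are TimescaleDB functions;
# the table and column names here are assumptions for illustration.
AGG_QUERY = """
select
  time_bucket('1 day', close_time) as bucket,
  symbol,
  last(close, close_time) as daily_close
from mytable
where
  close_time > now() - interval '500 day'
  and symbol = 'abc'
group by bucket, symbol
order by bucket
"""

def load_daily_closes(conn_str: str):
    """Fetch pre-aggregated daily closes rather than 500K raw rows."""
    import connectorx as cx  # requires `pip install connectorx`
    return cx.read_sql(conn_str, AGG_QUERY, return_type="pandas")
```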