Recommendations / ideas for backfilling data with compression enabled and lots of data

We want to migrate old data (currently not in hypertables) into our Timescale instance, where we use compression, so some of the existing data there is already compressed.
While the approach described here seems to work fine, our challenge is twofold: there is a lot of data to migrate, and a lot of data is already compressed.
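For reference, the documented backfill flow is roughly: pause the compression policy, decompress the chunks the backfill touches, insert the historical rows, then recompress. A minimal sketch, assuming a hypothetical hypertable `conditions`, a staging table `legacy_conditions`, and an illustrative cutoff date (all names and intervals are placeholders to adapt to your schema):

```sql
-- 1. Pause the compression policy so it does not recompress chunks mid-backfill.
SELECT remove_compression_policy('conditions');

-- 2. Decompress the chunks that overlap the backfill range.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('conditions', older_than => DATE '2023-01-01') AS c;

-- 3. Insert the historical data (e.g. via COPY or INSERT ... SELECT).
INSERT INTO conditions SELECT * FROM legacy_conditions;

-- 4. Recompress the affected chunks and re-enable the policy.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('conditions', older_than => DATE '2023-01-01') AS c;
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```

The `if_compressed` / `if_not_compressed` flags make the chunk calls idempotent, so re-running the script after a partial failure is safe.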

I did some testing, and even with around 2 billion rows already in place, reading with the time_bucket function while inserting the old data seems to work, though with noticeably longer query durations. So the approach described in the documentation would be our way forward for now.
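The concurrent reads we tested were of this shape (hypothetical table and column names, purely illustrative):

```sql
-- Hourly averages over recent data, run while the backfill inserts were ongoing.
SELECT time_bucket(INTERVAL '1 hour', time) AS bucket,
       avg(value) AS avg_value
FROM conditions
WHERE time > now() - INTERVAL '7 days'
GROUP BY bucket
ORDER BY bucket;
```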

Has anyone had to deal with such a scenario and can share some insights?
Are there better ways to do such a migration of old data with compression enabled?

Our team is quite new to Timescale, so any input is appreciated.

Welcome @BilledTrain380, this is a recurring need for anyone using or migrating to Timescale.

You can now use hypershift, which can handle compression too. Check it out and let us know if you have any ideas on how to make it work better for you!

Hi @jonatasdp

Thank you very much for your input. A quick question about hypershift: does it also work when the target database already contains data, and that data is already compressed?

Yes! As far as I remember, compressed data is transferred as well.