These days I am converting a lot of regular tables to hypertables. In some cases we want a retention policy, so I delete the old data before converting to a hypertable; in other cases we want to preserve all existing data.
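For context, the first case looks roughly like this (a sketch; the table name `conditions`, its `time` column, and the 90-day window are placeholders for my actual schema):

```sql
-- Drop rows outside the retention window, then convert in place.
DELETE FROM conditions WHERE time < now() - INTERVAL '90 days';
SELECT create_hypertable('conditions', 'time', migrate_data => TRUE);

-- Ongoing retention can then be automated on the hypertable:
SELECT add_retention_policy('conditions', INTERVAL '90 days');
```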
Typically I use migrate_data => TRUE for an in-place migration. This can take many minutes, sometimes up to an hour. I wonder what faster alternatives exist, if any. Would creating a standalone index on the candidate partitioning column beforehand help, or not at all? What about a more involved approach: creating an empty hypertable and loading it from a CSV using timescaledb-parallel-copy?
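The dump-and-reload alternative I have in mind would look something like this (a sketch only; `mydb`, `conditions`, the file path, and the worker count are placeholders, and I have not benchmarked it against migrate_data):

```shell
# Export the existing table, create an empty hypertable, reload in parallel.
psql -d mydb -c "\COPY conditions TO 'conditions.csv' CSV"
psql -d mydb -c "CREATE TABLE conditions_new (LIKE conditions INCLUDING ALL);"
psql -d mydb -c "SELECT create_hypertable('conditions_new', 'time');"
timescaledb-parallel-copy --db-name mydb --table conditions_new \
  --file conditions.csv --workers 4 --copy-options "CSV"
```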
I’m somewhat presuming that your extension, written in C, is about as efficient as it gets, but I’d love confirmation from the authors.