Backup and restore without being the owner of the database


I am trying to back up (pg_dump) my entire database (500GB), but when I restore it there is no data in the TimescaleDB tables :frowning:

I read the documentation and found this: Timescale Documentation | Migrate the entire database at once, which says that I should first run SELECT timescaledb_pre_restore();, then pg_restore ..., then SELECT timescaledb_post_restore();, and finally ANALYZE;.
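For reference, here is a dry-run sketch of that documented sequence (the database name and dump path are illustrative). The run helper only prints each step, so the order can be reviewed before anything is actually executed:

```shell
# Dry-run sketch of the documented restore flow; DB name and dump path
# are illustrative. run() only records and prints each step -- replace
# it with direct execution once the commands look right for your setup.
DB=tsdb
DUMP=/tmp/tsdb.bak
PLAN=""

run() {
    PLAN="$PLAN $*"
    echo "$@"
}

run pg_dump -Fc -f "$DUMP" "$DB"                                   # custom-format dump
run psql -d "$DB" -c "CREATE EXTENSION IF NOT EXISTS timescaledb;" # on the target
run psql -d "$DB" -c "SELECT timescaledb_pre_restore();"           # enter restore mode
run pg_restore -Fc -d "$DB" "$DUMP"
run psql -d "$DB" -c "SELECT timescaledb_post_restore();"          # leave restore mode
run psql -d "$DB" -c "ANALYZE;"                                    # refresh statistics
```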

The problem is that SELECT timescaledb_pre_restore(); can only be run by the owner of the database, and I use a managed database from Scaleway, which means I am not the owner and therefore can’t run SELECT timescaledb_pre_restore();.

The only alternative I see is to dump the data into CSV files and then restore it as described here: Timescale Documentation | Migrate schema and data separately
But that is a lot more complex than what I expected when choosing TimescaleDB.

Do you have any advice?

Best Regards

Hi Carlos, the easiest fix is on the user-permission side, but if you only have a single hypertable, you could build a simple script to export/import it as a CSV file, or whatever works for you.
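As a rough sketch of such a script (the table name conditions and the $SOURCE_DB/$TARGET_DB placeholders are hypothetical), the two \copy commands could look like this; they are printed here so you can review them before running them through psql:

```shell
# Per-hypertable CSV round trip; "conditions" and the connection
# placeholders are hypothetical -- adjust for your own setup.
TABLE=conditions
CSV=/tmp/$TABLE.csv

EXPORT_CMD="\\copy (SELECT * FROM $TABLE) TO '$CSV' WITH (FORMAT csv)"
IMPORT_CMD="\\copy $TABLE FROM '$CSV' WITH (FORMAT csv)"

# Printed for review; feed each to psql -c "..." to actually run it
echo "psql -d \$SOURCE_DB -c \"$EXPORT_CMD\""
echo "psql -d \$TARGET_DB -c \"$IMPORT_CMD\""
```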

Could you fix it in the dump output itself, by piping through an extra | sed 's/old_role/new_role/g' or otherwise injecting the role/user you need to make it work?
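For example, on a small hypothetical sample (the role names tsdbadmin and my_role are made up), the substitution would look like:

```shell
# Two sample dump lines; in practice the input is your real plain-text dump
cat > /tmp/dump_sample.sql <<'EOF'
ALTER TABLE public.metrics OWNER TO tsdbadmin;
GRANT ALL ON TABLE public.metrics TO tsdbadmin;
EOF

# Replace the dump's role with the role your managed user actually has
sed 's/tsdbadmin/my_role/g' /tmp/dump_sample.sql > /tmp/dump_fixed.sql
cat /tmp/dump_fixed.sql
```

The same sed works in a pipeline on a plain-text dump, e.g. pg_dump mydb | sed 's/old_role/new_role/g' > dump.sql.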

Thank you for your response.

Regarding the user permission fix, I do not follow you.

Are you saying that I should be able to restore a big database (500GB) without running SELECT timescaledb_pre_restore(); and SELECT timescaledb_post_restore();?

And that, in order to do that, I only need to change some roles?

How would I identify those roles?

Best regards


I mean, you can identify the roles by opening the dump file and looking at the GRANT ... commands that change the permissions. Once you figure out the user, you can inject or replace the username.

You can run something like grep GRANT dump.sql and you’ll see the occurrences that refer to the username. You may also see statements like

ALTER ... OWNER TO tsdbadmin;

I think you can change it to a user that is valid on your side.
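To illustrate on a hypothetical three-line dump, you can pull out every role that appears after OWNER TO or TO and de-duplicate the list; the same grep works on your real dump.sql:

```shell
# Sample dump lines; role names are hypothetical
cat > /tmp/dump_roles.sql <<'EOF'
ALTER SCHEMA public OWNER TO tsdbadmin;
GRANT SELECT ON TABLE public.metrics TO readonly_user;
ALTER TABLE public.metrics OWNER TO tsdbadmin;
EOF

# Extract the role after each OWNER TO / TO and keep unique names
grep -oE '(OWNER TO|TO) [A-Za-z_]+' /tmp/dump_roles.sql \
    | awk '{print $NF}' | sort -u > /tmp/roles.txt
cat /tmp/roles.txt
```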

On the other hand, since you mention CSV: separating the schema from the data dump gives you the flexibility to append only the remaining records instead of always reloading the entire database. Splitting schema creation, data loading, and data sync is a fast way to stay up to date.
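A minimal sketch of that incremental idea (the table metrics and its time column are hypothetical): keep the last exported timestamp in a state file, export only newer rows to CSV, and import just that delta. The \copy commands are printed for review rather than executed:

```shell
# Incremental CSV sync sketch; "metrics" and "time" are hypothetical.
# The high-water mark lives in a state file so each run copies only
# new rows instead of re-dumping the whole 500GB.
STATE=/tmp/metrics.last_sync
LAST=$(cat "$STATE" 2>/dev/null || echo '1970-01-01 00:00:00')

EXPORT_CMD="\\copy (SELECT * FROM metrics WHERE time > '$LAST') TO '/tmp/delta.csv' WITH (FORMAT csv)"
IMPORT_CMD="\\copy metrics FROM '/tmp/delta.csv' WITH (FORMAT csv)"

# Printed for review; run them with psql -d <source>/<target> -c "..."
echo "$EXPORT_CMD"
echo "$IMPORT_CMD"

# Record the new high-water mark for the next run
date -u '+%Y-%m-%d %H:%M:%S' > "$STATE"
```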