Background worker issues while migrating from a single node to an HA cluster

Hi,
We have been running a single-node Docker image of TimescaleDB 1.7 for some time now, and we wish to move the data to a self-managed HA cluster; we have around 30 GB of data in the databases. We are using the official timescaledb-single Helm chart, version 0.7. We were able to back up and restore the data using pg_dump and pg_restore, but after the restore the logs are full of these warnings:

05 "Compress Chunks Background Job": failed to start a background worker
2022-11-15 05:08:21 UTC [96]: [63728f16.60-36554] @,app= [01000] WARNING: failed to launch job 1 "Telemetry Reporter": failed to start a background worker
2022-11-15 05:08:22 UTC [98]: [63728f16.62-36543] @,app= [01000] WARNING: failed to launch job 1 "Telemetry Reporter": failed to start a background worker
2022-11-15 05:08:22 UTC [95]: [63728f16.5f-36549] @,app= [01000] WARNING: failed to launch job 1 "Telemetry Reporter": failed to start a background worker
2022-11-15 05:08:22 UTC [97]: [63728f16.61-139833] @,app= [01000] WARNING: failed to launch job 1000 "Compress Chunks Background Job": failed to start a background worker
2022-11-15 05:08:22 UTC [97]: [63728f16.61-139834] @,app= [01000] WARNING: failed to launch job 1001 "Compress Chunks Background Job": out of background workers
2022-11-15 05:08:22 UTC [94]: [63728f16.5e-36551] @,app= [01000] WARNING: failed to launch job 1 "Telemetry Reporter": failed to start a background worker

So my questions are:

  1. How can we stop the Telemetry Reporter in this Helm chart? We don't need it (see the first sketch after this list for what I think might work).
  2. How many background workers do we get by default, and why are we getting these compress chunk warnings? Is it because of the huge volume of data being restored and there not being enough workers to compress it? If so, what can be done to make the migration smooth? (The second sketch after this list shows the settings I suspect are involved.)
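
For question 1, here is a minimal sketch of what I think might work. I am assuming telemetry is controlled by the timescaledb.telemetry_level setting and that turning it off also stops the "Telemetry Reporter" job from being scheduled:

```sql
-- Check the current telemetry setting.
SHOW timescaledb.telemetry_level;

-- Disable telemetry cluster-wide (needs superuser), then reload the config.
ALTER SYSTEM SET timescaledb.telemetry_level = 'off';
SELECT pg_reload_conf();
```

In the Helm chart I would expect the same setting to be settable through the Patroni PostgreSQL parameters in values.yaml, but I have not verified that.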
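
For question 2, my current understanding (possibly wrong) is that TimescaleDB runs its jobs from a pool capped by timescaledb.max_background_workers (I believe the default is 8 on 1.7), and those workers also compete with parallel query workers for the max_worker_processes slots, so "out of background workers" suggests one of those limits is too small. The numbers below are illustrative guesses, not recommendations:

```sql
-- Inspect the settings I suspect are involved.
SHOW timescaledb.max_background_workers;  -- TimescaleDB job workers (default 8, I believe)
SHOW max_worker_processes;                -- total worker slots for the whole instance
SHOW max_parallel_workers;                -- parallel query workers use the same slots

-- Raise the limits; both of these require a server restart.
-- Rule of thumb I have seen: max_worker_processes >=
--   timescaledb.max_background_workers + max_parallel_workers + a few spare.
ALTER SYSTEM SET timescaledb.max_background_workers = 16;
ALTER SYSTEM SET max_worker_processes = 27;
```

Would raising these be the right way to absorb the post-restore compression backlog, or is there a better approach?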

Thanks


Regards.