Compressed chunks are being created automatically without any compression policy


I have enabled compression on my Timescale hypertable in order to manually compress some old chunks. As you can see in the output below, only two chunks are compressed:

SELECT * FROM hypertable_compression_stats('public.ts_kv');

-[ RECORD 1 ]------------------+-----------
total_chunks                   | 745
number_compressed_chunks       | 2
before_compression_table_bytes | 607010816
before_compression_index_bytes | 894181376
before_compression_toast_bytes | 16384
before_compression_total_bytes | 1501208576
after_compression_table_bytes  | 1376256
after_compression_index_bytes  | 327680
after_compression_toast_bytes  | 36184064
after_compression_total_bytes  | 37888000
node_name                      |


But, unexpectedly, once I compressed these chunks, the following chunks started being created as compressed hyper chunks…

All of this is happening without any compression policy. The only commands I used were:

ALTER TABLE public.ts_kv SET (timescaledb.compress, timescaledb.compress_orderby = 'ts DESC', timescaledb.compress_segmentby = 'entity_id');

SELECT compress_chunk('_timescaledb_internal._hyper_1_10_chunk');
SELECT compress_chunk('_timescaledb_internal._hyper_1_11_chunk');

Please, could someone explain to me why this is happening and how I could stop the automatic compression?

Thank you!


Hi @ingegaizka, can you check what you have in the view timescaledb_information.compression_settings?

Do you have a single hypertable? Maybe it's related to another hypertable.
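That check could be run with something like this (a sketch; column names follow the TimescaleDB 2.x information views):

```sql
-- List the compression settings for every hypertable in the database.
-- If more than one hypertable shows up here, the extra compressed
-- chunks may belong to a different table than expected.
SELECT hypertable_schema,
       hypertable_name,
       attname,
       segmentby_column_index,
       orderby_column_index
FROM timescaledb_information.compression_settings;
```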

If you think it's a bug, let's try to isolate the case: create a new database, create the extension, then a hypertable, and follow your steps. If you can reproduce it reliably, it would be worth reporting as an issue on GitHub.
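A minimal reproduction script along those lines might look like the following (a sketch; the table, column names, and data are placeholders, not from the original report):

```sql
-- Fresh database: install the extension and build a small hypertable.
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE ts_kv_repro (
    ts        TIMESTAMPTZ NOT NULL,
    entity_id INT         NOT NULL,
    value     DOUBLE PRECISION
);
SELECT create_hypertable('ts_kv_repro', 'ts');

-- Insert data spanning several chunks.
INSERT INTO ts_kv_repro
SELECT now() - (i || ' hours')::interval, i % 10, random()
FROM generate_series(1, 10000) AS i;

-- Enable compression and compress a single chunk manually, then
-- observe whether any other chunks get compressed without a policy.
ALTER TABLE ts_kv_repro SET (
    timescaledb.compress,
    timescaledb.compress_orderby = 'ts DESC',
    timescaledb.compress_segmentby = 'entity_id'
);

SELECT compress_chunk(c)
FROM show_chunks('ts_kv_repro') AS c
LIMIT 1;
```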

Hi Jonatan!

Here is the content of my timescaledb_information.compression_settings view:

I have a single hypertable, which is automatically divided into chunks depending on ts (a timestamp). But it is true that this view seems to show another kind of mechanism being applied…

I do not know if it is a bug, but I have been using this database for 2 years and it holds hundreds of GB of data. I think creating a new database is a problematic option.

Might it be related to the compression_settings view's info?

Thank you so much!



Can someone help me with this? It would be great :wink:

Thank you! :slight_smile:

Hello, @ingegaizka !
For each chunk that is compressed, a compressed sub-chunk is created. If you run show_chunks('your_hypertable_name'), you will see only the names of the main chunks. But when you run an EXPLAIN plan of a query, you will see the compressed sub-chunks.
Don't worry about that and use the tsdb calmly :)
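A quick way to see both sides of this, sketched for a hypertable named public.ts_kv:

```sql
-- The user-facing chunk list: compressed chunks still appear here
-- under their original _hyper_* names.
SELECT show_chunks('public.ts_kv');

-- The chunks view reports whether each chunk is compressed; the
-- internal compressed sub-chunks themselves are not listed here.
SELECT chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'ts_kv';
```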


Hello @LuxCore !
Thank you for your response. But there is something that I do not understand. I have a process that checks the growth of the database size every day, and it was usually around 300 MB. Now, after enabling compression in TimescaleDB (without activating automatic compression), it is less than 1 MB. That means current data is being compressed directly when it is inserted, which is neither what I want nor what I configured.
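For reference, a daily size check like the one described could be done with something like this (a sketch; hypertable_size is a TimescaleDB function, and the table name is assumed):

```sql
-- Total size of the current database, human-readable.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Size of the hypertable alone, including all of its chunks.
SELECT pg_size_pretty(hypertable_size('public.ts_kv'));
```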

How could you explain that?

Thank you so much.



That's really weird @ingegaizka. If you can put together reproducible steps, it would be great to file a bug.

Have you checked if any older jobs are running? Anything in timescaledb_information.job_stats?
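Those checks might look like this (a sketch; the remove_compression_policy call is only needed if a policy actually shows up, and 'public.ts_kv' is assumed to be the hypertable):

```sql
-- Any background job that compresses chunks shows up here.
SELECT job_id, proc_name, hypertable_name, schedule_interval
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';

-- Per-job run history (last status, next start, run counts).
SELECT job_id, last_run_status, next_start, total_runs
FROM timescaledb_information.job_stats;

-- If an unexpected compression policy exists, it can be removed:
-- SELECT remove_compression_policy('public.ts_kv');
```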

This is what I have in timescaledb_information.job_stats:

What do you mean by reproducible steps? I enabled compression for the hypertable and then manually compressed 2 old chunks. Nothing more. That was the only thing I did.