There are compressed chunks that prevent changing the existing compression configuration

I am running an upgrade scenario where I add a column to an existing table. This works fine for new installations where there is no data and no chunks, but during an upgrade I get the error mentioned below.

ALTER TABLE timeseries SET (timescaledb.compress=false);
ALTER TABLE eventvalues ADD COLUMN IF NOT EXISTS insertiontime timestamp DEFAULT CURRENT_TIMESTAMP;
UPDATE timeseries SET insertiontime = NULL WHERE insertiontime IS DISTINCT FROM NULL;

Existing compression policy:

SELECT set_chunk_time_interval('timeseries', INTERVAL '8 hours');
ALTER TABLE timeseries SET (timescaledb.compress, timescaledb.compress_segmentby = 'objectid', timescaledb.compress_orderby = 'timestamp DESC');
SELECT add_compression_policy('timeseries', INTERVAL '8 hours');

Error:
ERROR: cannot change configuration on already compressed chunks
DETAIL: There are compressed chunks that prevent changing the existing compression configuration.
CONTEXT: SQL statement "ALTER TABLE timeseries SET (timescaledb.compress=false)"
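One way around this particular error (a sketch, assuming the hypertable is named `timeseries` as above) is to decompress the existing chunks before changing the compression settings, since the configuration cannot be changed while compressed chunks exist. Note that decompressing every chunk can be slow and temporarily increases disk usage:

```sql
-- Decompress every chunk of the hypertable first;
-- the second argument (if_compressed => true) skips chunks
-- that are not currently compressed instead of erroring.
SELECT decompress_chunk(c, true) FROM show_chunks('timeseries') c;

-- With no compressed chunks left, the configuration can be changed.
ALTER TABLE timeseries SET (timescaledb.compress = false);
```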

If chunks are already created and we want to add a column to the hypertable, is it required to drop the hypertable to insert a new column?

Yes, you can add the column. Input:

drop table data cascade;
CREATE TABLE data (
    time TIMESTAMP with time zone NOT NULL,
    value FLOAT NOT NULL
);

SELECT create_hypertable('data', 'time');
INSERT INTO data (time, value) VALUES
('2023-03-28 12:00:00', 1.0),
('2023-03-28 12:01:00', 2.0),
('2023-03-28 12:02:00', 3.0),
('2023-03-28 12:03:00', 4.0),
('2023-03-28 12:04:00', 5.0),
('2023-03-28 12:05:00', 6.0);

ALTER TABLE data SET (
    timescaledb.compress = true
);

SELECT compress_chunk(i) FROM show_chunks('data') i;

ALTER TABLE data ADD COLUMN new_column INTEGER;

SELECT * FROM data;

Output

DROP TABLE
CREATE TABLE
┌─────────────────────┐
│  create_hypertable  │
├─────────────────────┤
│ (802,public,data,t) │
└─────────────────────┘
(1 row)

INSERT 0 6
ALTER TABLE
┌────────────────────────────────────────────┐
│               compress_chunk               │
├────────────────────────────────────────────┤
│ _timescaledb_internal._hyper_802_760_chunk │
└────────────────────────────────────────────┘
(1 row)

ALTER TABLE
┌────────────────────────┬───────┬────────────┐
│          time          │ value │ new_column │
├────────────────────────┼───────┼────────────┤
│ 2023-03-28 12:05:00-03 │     6 │            │
│ 2023-03-28 12:04:00-03 │     5 │            │
│ 2023-03-28 12:03:00-03 │     4 │            │
│ 2023-03-28 12:02:00-03 │     3 │            │
│ 2023-03-28 12:01:00-03 │     2 │            │
│ 2023-03-28 12:00:00-03 │     1 │            │
└────────────────────────┴───────┴────────────┘
(6 rows)

@jonatasdp But with the given approach there will be loss of data, since we need to create a new hypertable. Is there any way to insert a new column into compressed chunks without disturbing the existing data?

I don’t think Timescale can handle multiple chunks from the same hypertable using different schemas.
Is it too slow to add the new column?
How would it lose data? I don’t understand how you are going to lose data.

Please ignore the previous comment.

While running the command "ALTER TABLE data ADD COLUMN new_column INTEGER;" I am getting the following error:

ERROR: cannot add column with non-constant default expression to a hypertable that has compression enabled
SQL state: 0A000
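On versions that reject a non-constant default on a compressed hypertable, one common workaround is to split the operation: add the column without a default (existing rows simply get NULL), then attach the default in a separate statement so it only applies to newly inserted rows. This is a sketch, assuming the goal is the `insertiontime` column from the first post; whether each statement is accepted still depends on the TimescaleDB version:

```sql
-- Step 1: add the column with no default; no rows are rewritten,
-- and existing (compressed) rows read back as NULL.
ALTER TABLE timeseries ADD COLUMN IF NOT EXISTS insertiontime timestamptz;

-- Step 2: attach the default separately; it applies only to rows
-- inserted from this point on, so compressed chunks are untouched.
ALTER TABLE timeseries ALTER COLUMN insertiontime SET DEFAULT CURRENT_TIMESTAMP;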

Please update your timescaledb extension to the latest version, where this is allowed. I have 2.9 here and it’s already allowed.
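To check which version is installed and upgrade in place, you can use the standard extension commands (a sketch; per the TimescaleDB docs, `ALTER EXTENSION ... UPDATE` should be the first command run in a fresh session):

```sql
-- Check the currently installed extension version
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';

-- Upgrade the extension to the latest version available on the server
ALTER EXTENSION timescaledb UPDATE;
```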

Let me try and come back, Thank you @jonatasdp
