Tuple decompression limit exceeded by operation

Hi! I ran into error 53400, which corresponds to this issue:

SQL Error [53400]: ERROR: tuple decompression limit exceeded by operation
  Details: current limit: 100000, tuples decompressed: 106000
  Hint: Consider increasing timescaledb.max_tuples_decompressed_per_dml_transaction or set to 0 (unlimited).

I thought I could upsert the data with LIMIT/OFFSET in batches of 10,000 records and commit in a for-loop after each batch, but I still ran into the error.

What can I do to upsert the data correctly without hitting this error?
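
For reference, the limit mentioned in the hint is a regular session-level setting; a quick way to inspect its current value (assuming the single-prefix parameter name shown in the hint):

-- Reports the per-transaction decompression limit (100000 by default).
SHOW timescaledb.max_tuples_decompressed_per_dml_transaction;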

Hi Denis, I asked Ante (PR owner) and the explanation was:

doing it in limit-offset doesn’t accomplish more than just setting the limit to unlimited (0)

Can you confirm you’re using the segmentby and orderby columns for filtering the tuples?

Feel free to share your query too; it may help us understand.
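
For illustration, a minimal, hypothetical sketch of what such filtering looks like (the table and column names below are made up): predicates on the compress_segmentby and compress_orderby columns can let TimescaleDB decompress only the matching compressed batches rather than whole chunks.

-- seg_col is assumed to be a compress_segmentby column and ts the
-- compress_orderby column of a hypothetical compressed hypertable.
UPDATE my_compressed_hypertable
SET value = 0
WHERE seg_col = 'A'                             -- segmentby filter
  AND ts >= '2024-01-01' AND ts < '2024-01-02'; -- orderby filter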

No, I don't use a WHERE clause because the data I want to upsert is already prepared (approximately 2 million rows).
Most of the inserted data falls within the last 3 to 5 days; the rest (~100,000+ rows) is spread out over the period going back to 2023-07.

Below is the script for the table and the upsert (names are changed):

CREATE TABLE schema.my_hypertable(
	id NUMERIC,
	col1 NUMERIC,
	segment_by_1 VARCHAR(1),
	time_column TIMESTAMP,
	col4 NUMERIC,
	col5 INTEGER,
	segment_by_2 INTEGER,
	col7 NUMERIC,
	PRIMARY KEY (id, time_column)
);

SELECT * FROM api_bet.create_hypertable(
	relation => 'schema.my_hypertable',
	time_column_name => 'time_column',
	chunk_time_interval => INTERVAL '1 month'
	);

ALTER TABLE schema.my_hypertable
SET (
	timescaledb.compress,
	timescaledb.compress_orderby = 'id',
	timescaledb.compress_segmentby = 'segment_by_1, segment_by_2'
);

DO $$
DECLARE
	C_LIMIT INTEGER := 10000;

	v_loop_count INTEGER := 0;
	v_offset INTEGER := 0;
	i INTEGER := 0;
BEGIN
	v_loop_count := (SELECT COUNT(*) / C_LIMIT FROM tt_modified_rows);

	FOR i IN 0..v_loop_count
	LOOP
		v_offset := i * C_LIMIT;

		INSERT INTO schema.my_hypertable
		SELECT * FROM tt_modified_rows
		ORDER BY id
		OFFSET v_offset LIMIT C_LIMIT
		ON CONFLICT (id, time_column) DO UPDATE SET
			col1 = EXCLUDED.col1,
			segment_by_1 = EXCLUDED.segment_by_1,
			col4 = EXCLUDED.col4,
			col5 = EXCLUDED.col5,
			segment_by_2 = EXCLUDED.segment_by_2,
			col7 = EXCLUDED.col7;

		COMMIT;
	END LOOP;
END;$$;

Thanks for sharing the full example.

Can you try to set it to zero?
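
That is, something like the following in the same session, before running the upsert (a minimal sketch; it affects the current session only):

-- 0 disables the per-transaction decompression limit.
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;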


Thanks @jonatasdp! But first I want to understand the new feature, so I tried another test that throws the error anyway.

-- DROP TABLE hypertable;

CREATE TABLE hypertable(id int, type_id int, ts timestamp, PRIMARY KEY(id, ts));

SELECT create_hypertable(relation => 'hypertable', dimension => by_range('ts', INTERVAL '1 day'));

DO $$
BEGIN
	FOR i IN 0..4 LOOP
		INSERT INTO hypertable(id, type_id, ts)
		SELECT gs.id, floor(random() * 3 + 1), TIMESTAMP '2023-12-31' + make_interval(days => (i + 1))
		FROM generate_series(i * 110000 + 1, 110000 * (i + 1), 1) AS gs(id);
	END LOOP;
END;$$;

ALTER TABLE hypertable
SET (
	timescaledb.compress,
	timescaledb.compress_orderby = 'id',
	timescaledb.compress_segmentby = 'type_id'
);

SELECT compress_chunk(c.chunk_schema || '.' || c.chunk_name)
FROM timescaledb_information.chunks AS c
WHERE hypertable_name = 'hypertable';

-- DROP TABLE tt;

CREATE TEMP TABLE tt AS
SELECT id FROM hypertable
WHERE ts >= '2024-01-05' AND ts < '2024-01-06'
LIMIT 50000;

UPDATE hypertable AS tgt
SET type_id = floor(random() * 3 + 1)
FROM tt AS src
WHERE ts >= '2024-01-05' AND ts < '2024-01-06' AND tgt.id = src.id
;

-- Output
-- SQL Error [53400]: ERROR: tuple decompression limit exceeded by operation
--   DETAILS: current limit: 100000, tuples decompressed: 110000
--   HINT: Consider increasing timescaledb.max_tuples_decompressed_per_dml_transaction or set to 0 (unlimited).

How can I update the table so that fewer than 100,000 tuples are decompressed and the error does not occur?
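
Following the earlier hint about filtering on the segmentby and orderby columns, one possible variant (an untested sketch) would be to restrict each UPDATE to a single segment, so that fewer compressed batches should need decompression:

-- type_id is the compress_segmentby column and id the compress_orderby
-- column, so both predicates can narrow the set of compressed batches
-- that have to be decompressed. Repeat for each type_id value.
UPDATE hypertable AS tgt
SET type_id = floor(random() * 3 + 1)
FROM tt AS src
WHERE tgt.ts >= '2024-01-05' AND tgt.ts < '2024-01-06'
  AND tgt.type_id = 1
  AND tgt.id = src.id;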

I don't know the exact calculation. Have you tried setting it to zero?

@jonatasdp, SET timescaledb.timescaledb.max_tuples_decompressed_per_dml_transaction TO 0; helped only once: when I set this option to 0, the upsert worked. Then I set the option back to 100,000 and the error occurred, as expected. Then I changed the option to 0 a second time and the error occurred again, which was not expected. Subsequent attempts to set this option did not give the needed result. I reset the session and reset the connection, but it was all in vain.

Should setting this option at the session level be enough? Or does it need to be set in some way at the table level?
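
In case it helps with debugging, the value actually in effect for the current session can be checked with plain PostgreSQL (a sketch; the second argument just avoids an error if the parameter is not defined):

-- Show the setting the current session is really using.
SELECT current_setting('timescaledb.max_tuples_decompressed_per_dml_transaction', true);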


For now, the question is closed for me:

  1. This option works at the session level.
  2. If many or all of the tuples have already been decompressed, later changes to this option no longer matter, because nothing needs to be decompressed anymore. For the setting to take effect again (and for the error to be thrown), you need to recompress the chunk(s); see the sketch after this list.
  3. The option has a double timescaledb. prefix in the GitHub docs and did not work properly for me, but the single timescaledb. prefix from the site docs works well.
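
For completeness, a recompression sketch based on the compress_chunk() loop used earlier in this thread; depending on the TimescaleDB version, recompress_chunk() may be needed instead for chunks that are only partially compressed:

-- Recompress so the decompression limit applies again on the next DML.
-- if_not_compressed => true skips chunks that are still fully compressed.
SELECT compress_chunk(c.chunk_schema || '.' || c.chunk_name, if_not_compressed => true)
FROM timescaledb_information.chunks AS c
WHERE c.hypertable_name = 'hypertable';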