Entry-Id updating/editing in timescale-hypertable fails "randomly"

Hey folks,
I have come across a problem that I need your help with.

First the basic facts:

  • I’m using a dockerized TimescaleDB for testing, image version: timescale/timescaledb:2.11.2-pg14
  • I created the database with the following script:
create table if not exists my_table
(
    time_stamp timestamp with time zone,
    created_at timestamp with time zone DEFAULT NOW(),
    id int not null,
    value double precision not null,
    UNIQUE (time_stamp, id, value)
);

SELECT create_hypertable('my_table', 'time_stamp', 'id', 2, migrate_data => true);
SELECT set_chunk_time_interval('my_table', INTERVAL '48 hours');
SELECT add_retention_policy('my_table', INTERVAL '1 months');

Now to the problem:
I have a table that is filled with measurement data from devices. Devices are identified by an id. One device id has changed in my system, and I want to apply this change retroactively in the hypertable so that the history stays complete.
I tried to change this id with the following statement:

UPDATE my_table
SET id = 10000
WHERE id = 1;

But this fails with the following message:

[23514] ERROR: new row for relation "_hyper_13_6_chunk" violates check constraint "constraint_7" Detail: Failing row contains (2023-09-08 07:33:00+00, 2023-09-08 07:34:05.301663+00, 10000, 50.25).

The inexplicable thing for me is that some int values work without an error and others do not. None of the new ids exist in the DB yet, so I can rule out duplicate-key problems.

A few random examples of new ids I tried:

  • 777, 11567, 100000 are successful
  • 13200, 10000, 20000 fail with the above message

If I look up the constraint from the error message (constraint_7) in the pg_catalog, it points to a hash function:

pg_proc -> oid=18258 -> proname="get_partition_hash" -> prosrc="ts_get_partition_hash" -> probin="$libdir/timescaledb-2.11.2"
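To dig further I can inspect the CHECK constraints that TimescaleDB puts on each chunk for the hash dimension, and compute the hash of the old and new ids myself. A sketch (the internal schema name `_timescaledb_internal` for `get_partition_hash` is an assumption based on my 2.11 installation):

```sql
-- Show the hash-range CHECK constraints on the chunks
SELECT c.relname AS chunk,
       con.conname,
       pg_get_constraintdef(con.oid) AS definition
FROM pg_constraint con
JOIN pg_class c ON c.oid = con.conrelid
WHERE c.relname LIKE '_hyper_%chunk'
  AND con.contype = 'c';

-- Compare the partition hash of the old and a new id
SELECT _timescaledb_internal.get_partition_hash(1)     AS old_id_hash,
       _timescaledb_internal.get_partition_hash(10000) AS new_id_hash;
```

If the old and new id hash into different partition ranges, the updated row no longer satisfies the chunk's CHECK constraint, which would explain why some new ids succeed and others fail.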

Is there a traceable explanation for this? To me it seems very random.
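For the moment I can work around the failing UPDATE by deleting and re-inserting the rows in one statement, so the new rows are routed to the correct partition instead of being updated in place. A sketch, not yet verified on production data:

```sql
-- Move rows from id 1 to id 10000 via delete + re-insert,
-- avoiding an in-place UPDATE that would cross partition boundaries
BEGIN;
WITH moved AS (
    DELETE FROM my_table
    WHERE id = 1
    RETURNING time_stamp, created_at, value
)
INSERT INTO my_table (time_stamp, created_at, id, value)
SELECT time_stamp, created_at, 10000, value
FROM moved;
COMMIT;
```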

One cause I have identified is the second dimension (the hash partitioning column) that I create initially; without it there seems to be no problem. But I already have this dimension in production, and TimescaleDB doesn’t provide a built-in command to remove a partitioning dimension.

So with that I mean if I set up the table with only the small change of:

SELECT create_hypertable('my_table', 'time_stamp', migrate_data => true);

Then all tests pass. However, I do not know how to carry this over to my production DB without having to copy a massive amount of data into a newly created table.

Thanks already for your help

Hi @Scheerinio, dealing with large datasets always means a long wait :smiley:

I don’t know of any pathway that avoids a rewrite. Since your tables will have different partition constraints, you’ll need to migrate the data manually.
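A manual migration to a time-only hypertable could look roughly like this (a sketch; the table name `my_table_new` is made up, and for a large table you would batch the INSERT by time range rather than copy everything in one statement):

```sql
-- New hypertable partitioned only by time (no hash dimension)
CREATE TABLE my_table_new
    (LIKE my_table INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
SELECT create_hypertable('my_table_new', 'time_stamp',
                         chunk_time_interval => INTERVAL '48 hours');

-- Copy the data (batch this by time range for very large tables)
INSERT INTO my_table_new SELECT * FROM my_table;

-- Swap the tables once the copy is verified
BEGIN;
ALTER TABLE my_table RENAME TO my_table_old;
ALTER TABLE my_table_new RENAME TO my_table;
COMMIT;

-- Re-create the retention policy on the new table
SELECT add_retention_policy('my_table', INTERVAL '1 months');
```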
