Hypertable insert causing chunk creation conflict

We are running into a very strange error with an insert query.

Context:
We are running PostgreSQL 15 and the timescaledb extension 2.15.3.
We have a readings hypertable with a chunk interval of 1 month and a secondary (space) dimension on an id column with 50 partitions.
CREATE TABLE rdg.readings (
    "time" timestamptz NOT NULL,
    usage_point_id int8 NOT NULL,
    reading_type_id int4 NOT NULL,
    res_spec_id int4 NULL,
    data_channel_id int8 NOT NULL,
    variant_id int4 NOT NULL,
    meter_id int8 NOT NULL,
    value numeric(20, 6) NULL,
    vee_state_id int4 NOT NULL,
    v_fail_codes jsonb NULL,
    version_id int4 NOT NULL,
    inserted_time timestamptz NOT NULL,
    batch_id int8 NULL,
    CONSTRAINT readings_pk PRIMARY KEY ("time", usage_point_id, reading_type_id, data_channel_id, variant_id),
    CONSTRAINT rdg_batch_fk FOREIGN KEY (batch_id) REFERENCES rdg.reading_batches(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_dch_fk FOREIGN KEY (data_channel_id) REFERENCES mdm.data_channels(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_res_spec_fk FOREIGN KEY (res_spec_id) REFERENCES mdm.resource_specs(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_rt_fk FOREIGN KEY (reading_type_id) REFERENCES mdm.reading_types(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_up_fk FOREIGN KEY (usage_point_id) REFERENCES mdm.usage_points(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_variant_fk FOREIGN KEY (variant_id) REFERENCES cmn.reference_list(id) ON DELETE RESTRICT ON UPDATE RESTRICT,
    CONSTRAINT rdg_vee_fk FOREIGN KEY (vee_state_id) REFERENCES cmn.reference_list(id) ON DELETE RESTRICT ON UPDATE RESTRICT
);

CREATE INDEX readings_batch_id_idx ON rdg.readings USING btree (batch_id);
CREATE INDEX readings_time_idx ON rdg.readings USING btree ("time" DESC);
CREATE INDEX readings_usage_point_id_time_idx ON rdg.readings USING btree (usage_point_id, "time" DESC);

SELECT create_hypertable('RDG.readings', 'time', partitioning_column => 'usage_point_id', chunk_time_interval => INTERVAL '1 month', number_partitions => 50);
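For reference, the resulting dimension layout can be checked with the standard timescaledb_information.dimensions view (just a diagnostic sketch, not part of our application code):

-- Confirm the time and space dimension configuration of the hypertable.
SELECT hypertable_schema,
       hypertable_name,
       column_name,
       dimension_type,
       time_interval,
       num_partitions
FROM timescaledb_information.dimensions
WHERE hypertable_schema = 'rdg'
  AND hypertable_name = 'readings';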

We have an insert query that inserts 5k records. These records all fall within the same month, but they span 2 time slices since our chunks do not start at the beginning of the month.
It runs fine for almost all records, but there is a set of 2 usage points that returns this error:
SQL Error [23505]: ERROR: duplicate key value violates unique constraint "dimension_slice_dimension_id_range_start_range_end_key"
Detail: Key (dimension_id, range_start, range_end)=(20, 1760936552, 1789569705) already exists.
This is the dimension_id for the usage_point_id field.
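A lookup along these lines against the internal catalog (stock _timescaledb_catalog tables; the dimension id and ranges are taken from the error detail above) confirms which dimension that slice belongs to:

-- Show the dimension and the already existing slice from the error detail.
SELECT d.id AS dimension_id,
       d.column_name,
       ds.id AS dimension_slice_id,
       ds.range_start,
       ds.range_end
FROM _timescaledb_catalog.dimension d
JOIN _timescaledb_catalog.dimension_slice ds ON ds.dimension_id = d.id
WHERE d.id = 20
  AND ds.range_start = 1760936552
  AND ds.range_end = 1789569705;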

I checked, and the values that generate the error all fall on the same usage_point_id dimension_slice, spread across 2 time dimension_slices.

If I drop the chunk associated with the dimension_slice mentioned in the error (there is only one), I can successfully run a query containing all the values that belong to the same time dimension_slice. It does not matter which time dimension_slice I pick: that query will run and the other will not. If I drop the chunk again, I can run either of the 2 queries successfully, but then the other one will fail.

For some reason, TimescaleDB tries to insert the records of whichever query I run second into the wrong chunk, or tries to create the second chunk and re-create the already existing dimension_slice, which causes the error.
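To see which chunk already references that space slice (and which time slices it is bound to), a catalog query like this can be used, again assuming the stock _timescaledb_catalog layout:

-- List the chunk(s) attached to the conflicting space slice, together with
-- every dimension slice each of those chunks is constrained by.
SELECT c.id AS chunk_id,
       c.schema_name,
       c.table_name,
       ds.dimension_id,
       ds.range_start,
       ds.range_end
FROM _timescaledb_catalog.chunk_constraint cc
JOIN _timescaledb_catalog.chunk c ON c.id = cc.chunk_id
JOIN _timescaledb_catalog.chunk_constraint cc2 ON cc2.chunk_id = c.id
JOIN _timescaledb_catalog.dimension_slice ds ON ds.id = cc2.dimension_slice_id
WHERE cc.dimension_slice_id = (
        SELECT id
        FROM _timescaledb_catalog.dimension_slice
        WHERE dimension_id = 20
          AND range_start = 1760936552
          AND range_end = 1789569705
      )
ORDER BY c.id, ds.dimension_id;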

An example of the 2 queries I mentioned:
QUERY 1
INSERT INTO rdg.readings (
time, usage_point_id, reading_type_id, res_spec_id, data_channel_id,
variant_id, meter_id, value, vee_state_id, v_fail_codes, version_id,
inserted_time, batch_id
) VALUES
('2022-11-24 00:45:00', 1320502, 69, 5008, 3582164, 1001, 225165, 2.353, 1, NULL, 1, current_timestamp, 581349);

QUERY 2
INSERT INTO rdg.readings (
time, usage_point_id, reading_type_id, res_spec_id, data_channel_id,
variant_id, meter_id, value, vee_state_id, v_fail_codes, version_id,
inserted_time, batch_id
) VALUES
('2022-11-24 01:00:00', 1320502, 69, 5008, 3582164, 1001, 225165, 1.503, 1, NULL, 1, current_timestamp, 581349);

A lot of the data is not relevant to the issue, but I honestly do not know how to reproduce the error elsewhere, as I still do not understand what is happening.
So if I run query 1, I cannot run query 2, as the chunk creation for its time dimension_slice will cause the error. If I run query 2 first, it works, but then query 1 causes the same error on chunk creation.
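Both rows should hash into the same space slice and differ only in their time slice. Assuming the default partitioning function (get_partition_hash, which lives in the _timescaledb_functions schema on recent 2.x releases, previously _timescaledb_internal), and that time ranges in the catalog are stored as microseconds since the Unix epoch for timestamptz dimensions, a check like this illustrates what I mean:

-- Does usage_point_id 1320502 hash into the space slice [1760936552, 1789569705)
-- reported in the error? (Default partitioning function assumed.)
SELECT _timescaledb_functions.get_partition_hash(1320502::int8) AS up_hash,
       _timescaledb_functions.get_partition_hash(1320502::int8) >= 1760936552
   AND _timescaledb_functions.get_partition_hash(1320502::int8) < 1789569705 AS in_conflicting_slice;

-- Which time slices (if any) already cover the two example timestamps?
SELECT ds.id AS time_slice_id, ds.range_start, ds.range_end
FROM _timescaledb_catalog.dimension_slice ds
JOIN _timescaledb_catalog.dimension d ON d.id = ds.dimension_id
JOIN _timescaledb_catalog.hypertable h ON h.id = d.hypertable_id
WHERE h.schema_name = 'rdg'
  AND h.table_name = 'readings'
  AND d.column_name = 'time'
  AND ds.range_end > (extract(epoch FROM timestamptz '2022-11-24 00:45:00') * 1000000)::bigint
  AND ds.range_start <= (extract(epoch FROM timestamptz '2022-11-24 01:00:00') * 1000000)::bigint;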

Edit: I should also mention that this hypertable already contains several billion records, and that this case has presented itself thousands of times before without causing an issue. In fact, I manually tried a couple of similar inserts for other dimension_slices without any issue.