Can I save new data only to the new node

Hi,
Can I save new data only to the new node? I want the old node to stop receiving new data and serve queries only.
Also, how can I control how data is distributed across the different data nodes?

Welcome to the Timescale forums @loveworldlovesky!

I'm extrapolating a bit without more detail, but it sounds like you have a multi-node TimescaleDB setup with a distributed hypertable. Assuming that's true, you can set the node(s) that a distributed hypertable will use for storage, and modify this over time, using attach_data_node() and detach_data_node().

Assuming data is currently being saved to the distributed hypertable, you'd want to attach the new data node to the hypertable and then detach the old node. Doing this allows the data on the old node to continue being queried, but new data will not be sent to it.
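As a minimal sketch (the node and table names here are placeholders, not taken from your setup):

```sql
-- Route new chunks to the new node and stop using the old one.
-- 'dn_new', 'dn_old', and 'conditions' are illustrative names.
SELECT attach_data_node('dn_new', hypertable => 'conditions');
SELECT detach_data_node('dn_old', 'conditions');
```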

Please note, however, that this change (and nearly all changes to nodes and chunk configuration) takes effect at the next chunk creation. The data is not immediately cut off from the old node. Instead, once data arrives that requires the creation of a new chunk, that's when new data would start being sent to the new data node.
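One way to watch this happen (assuming TimescaleDB 2.x, where the informational views report which data nodes hold each chunk; 'conditions' is a placeholder table name):

```sql
-- List each chunk and the data node(s) it lives on; new chunks should
-- start appearing on the newly attached node after the next chunk boundary.
SELECT chunk_name, range_start, range_end, data_nodes
  FROM timescaledb_information.chunks
 WHERE hypertable_name = 'conditions';
```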

Let me know if that doesn't make sense.

Thank you for your suggestion, but I failed to implement it.
I have 3 nodes (dn1, dn2, dn3), and the data is distributed across all three.
When I execute:

SELECT detach_data_node('dn3', 'dtdemo1');

(dtdemo1 is my distributed hypertable), I get this error:
ERROR: insufficient number of data nodes
DETAIL: Distributed hypertable "dtdemo1" would lose data if data node "dn3" is detached.
HINT: Ensure all chunks on the data node are fully replicated before detaching it.

It seems that distributed hypertables do not allow a data node to be detached until its chunks are replicated on other nodes.

Official document description:
Detaching a node is not permitted:

  • If it would result in data loss for the hypertable due to the data node containing chunks that are not replicated on other data nodes
  • If it would result in under-replicated chunks for the distributed hypertable (without the force argument)

Thanks for the feedback. I haven't tried to set this kind of configuration up in a while, and it feels like something has changed that I wasn't up to speed on… or I'm simply misremembering how I tested this at one point. My memory is that detaching the node as a destination didn't remove the references to the chunks on that node (for query purposes). There has been a lot of work and discussion around how replication works with multi-node, and this might have changed along the way. But it's probably more likely that I missed something.

Out of curiosity, when you created your distributed hypertable, did you modify the replication_factor setting? And actually, maybe a quick show of your hypertable schema (or close example) and your create_distributed_hypertable command in full would be helpful.
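For context, replication_factor is set at creation time; with a value above 1, each chunk is written to that many data nodes, which is what the detach check is asking for. A hypothetical example (table and column names are illustrative):

```sql
-- With replication_factor => 2, every chunk is stored on two of the
-- attached data nodes, so one node can later be detached without data loss.
SELECT create_distributed_hypertable('conditions', 'time', 'location',
    replication_factor => 2);
```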

I'll try to confirm any path forward today or early next week and get back to you.

Thank you for your reply.
The statements I used to create it have no special configuration.

SELECT create_distributed_hypertable('dtdemo1', 'time', 'location',
    chunk_time_interval => INTERVAL '1 day',
    data_nodes => '{"dn1"}');

Insert some data, and then:

SELECT attach_data_node('dn2', hypertable => 'dtdemo1', repartition => true);
SELECT attach_data_node('dn3', hypertable => 'dtdemo1', repartition => true);

Continue inserting some data.
Then, when I want to detach one node:

SELECT detach_data_node('dn3', 'dtdemo1');

I encountered this error:

insufficient number of data nodes
DETAIL: Distributed hypertable "dtdemo1" would lose data if data node "dn3" is detached.
HINT: Ensure all chunks on the data node are fully replicated before detaching it.

The official documentation does not yet recommend using replication_factor > 1,
so I don't know how to configure this now.

I see that you asked this question a different way in a new thread and Dimitry gave you some input. It seems that I was remembering the discussion around these new functions and thinking it was done a while ago through "detach" - so my apologies for the incorrect guidance.

Thanks all the same.
I'll try the two new functions next, and I'll let you know the results.

I tried, and these two experimental functions did take effect. Using them, I can control whether data continues to be written to a given node.

SELECT "timescaledb_experimental"."block_new_chunks"('dn1', 'dtdemo1');
SELECT "timescaledb_experimental"."allow_new_chunks"('dn1', 'dtdemo1');
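A sketch of how the effect might be verified (the INSERT assumes a simple (time, location) schema like the create statement above; adjust it to the real table):

```sql
-- After blocking dn1, data far enough in the future forces a new chunk,
-- which should now be created only on the remaining nodes.
SELECT "timescaledb_experimental"."block_new_chunks"('dn1', 'dtdemo1');
INSERT INTO dtdemo1 (time, location) VALUES (now() + INTERVAL '7 days', 'loc1');
SELECT chunk_name, data_nodes
  FROM timescaledb_information.chunks
 WHERE hypertable_name = 'dtdemo1';
```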

Thank you all the same.
