Question: time_partitioning_func

Hi!

Question regarding time_partitioning_func described here: Timescale Docs

We want to use timestamp9 as the time column and everything works, except add_compression_policy (because it manipulates the time column type and doesn’t know about the timestamp9 extension). We thought it would be interesting to use time_partitioning_func to convert timestamp9 to bigint and then express chunk intervals and compression policies in bigints, roughly as in the sketch below.
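
For illustration, a minimal sketch of the idea (the table, column, and function names are ours, and we assume timestamp9 can be cast to bigint nanoseconds; substitute the appropriate conversion if it can’t):

```sql
-- Hypothetical schema; assumes the timestamp9 extension is installed.
CREATE TABLE readings (
    ts    timestamp9 NOT NULL,
    value double precision
);

-- IMMUTABLE mapping from the custom time type to bigint nanoseconds,
-- so chunk intervals (and, hopefully, policies) can be plain bigints.
CREATE FUNCTION ts9_to_ns(t timestamp9) RETURNS bigint
    LANGUAGE sql IMMUTABLE AS $$ SELECT t::bigint $$;

SELECT create_hypertable(
    'readings', 'ts',
    time_partitioning_func => 'ts9_to_ns',
    chunk_time_interval    => 86400 * 1000000000::bigint  -- 1 day in ns
);
```

Whether the policy machinery then accepts bigint intervals for such a hypertable is exactly the open question.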

Question: does this time_partitioning_func functionality affect only chunk management/policies, and not the performance of regular queries?

P.S. Currently, we are choosing between this approach and adding custom jobs (replicating the add_compression_policy behaviour) for timestamp9 tables.
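
For completeness, the custom-job alternative would look roughly like this. This is only a sketch: the procedure name, the 7-day cutoff, and the use of range_end_integer from timescaledb_information.chunks are our assumptions and may need adjusting depending on the TimescaleDB version.

```sql
-- Custom job that mimics add_compression_policy for a timestamp9
-- hypertable: compress chunks whose end is older than 7 days.
CREATE OR REPLACE PROCEDURE compress_timestamp9_chunks(job_id int, config jsonb)
LANGUAGE plpgsql AS $$
DECLARE
    -- cutoff expressed in nanoseconds since epoch
    cutoff_ns bigint := (extract(epoch FROM now() - INTERVAL '7 days') * 1e9)::bigint;
    chunk     regclass;
BEGIN
    -- config is unused in this sketch
    FOR chunk IN
        SELECT format('%I.%I', chunk_schema, chunk_name)::regclass
        FROM timescaledb_information.chunks
        WHERE hypertable_name = 'readings'
          AND NOT is_compressed
          AND range_end_integer < cutoff_ns   -- bigint (ns) chunk ranges
    LOOP
        PERFORM compress_chunk(chunk);
    END LOOP;
END
$$;

-- Run it once a day, like the built-in policy would.
SELECT add_job('compress_timestamp9_chunks', schedule_interval => INTERVAL '1 day');
```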

With warm regards,
Nursan Valeyev


I haven’t seen any reply to this post yet and am wondering if there has been any progress.

I ran into the same need for a timestamp with nanosecond precision. TimescaleDB recommends using timestamptz, but what is the recommended solution for storing nanoseconds? Is timestamp9 recommended, and what is the suggested approach to using it?

@Tindarid I would really appreciate it if you could share your findings.

I spent some time investigating this problem and submitted a related bug report: [Bug]: Compression job failed for custom time types (e.g. timestamp9) · Issue #6537 · timescale/timescaledb (github.com)

Thanks.
