Best Practices for (Time-)Series Metadata Tables

Written by Chris Engelbert

Collecting time-related data, like IoT or observability metrics, events, logs, or similar datasets, comes with massive amounts of data split into many distinct series.

One of the many lessons I learned at my previous startup is that understanding the corresponding attributes of each data series is essential and must be available quickly for many use cases.

In the last two installments of the best practices series [Narrow, Medium, or Wide Table Layout and Single vs. Multiple Table(s)], we looked at how to design your hypertables. Now, we want to start looking into how we can make things available fast, starting with separate tables for metadata and series lookups.

We’ll cover the following best practices:

  • Querying meta information

  • Referencing the hypertable data

  • Further specifying the series in the metadata table

Why These Metadata Tables Matter

The interesting thing about these metadata tables is that we can easily and quickly resolve all available data series for a specific user and time range. We'd also be able to present overlapping series, which could still be distinct, like two temperature sensors in the same device (yes, that may make sense with external, cable-connected sensors in different rooms—been there, done that 😉).

What's a Series Metadata Table?

First and foremost, it is a term I came up with while at my startup. Our use case was as simple as it was complicated. We collected IoT data from a large set of devices, but devices could have their sensors exchanged (they were pluggable) at any given point in time.

A series metadata table is a special kind of table, often used in time-series databases, that stores metadata about the data series. This metadata could include information such as series names, labels, the start and end times of a series, units of measurement, or any other descriptive details that provide context to the series.

For developers dealing with time-series data, the series metadata table helps improve query efficiency and data organization. By storing this metadata separately from the actual time-series data, you can keep your database structure more compact and make it easier to manage and search through your data. It also provides a way to annotate your data with additional context, which can be useful when analyzing or visualizing the data.

Metadata table vs. series metadata table

A metadata table, in general, stores information (metadata) about other data in the database. This could be information about the data's structure, such as the names and types of columns in a table, or it could be more descriptive information about the data itself, such as the source of the data, when it was last updated, etc.

So, the main difference between a metadata table and a series metadata table is what they store information about: a generic metadata table can store information about any kind of data, while a series metadata table specifically stores information about time-series data. The purpose of both tables is to provide additional context and organization to the data stored in the database, which can make it easier to manage and analyze the data.

In a time-series database, like TimescaleDB, you might store the actual time-series data in a Timescale hypertable (essentially a partitioned table that is divided into smaller partitions called "chunks") and then use a series metadata table to store additional information about that data.

Example: Implementing a Series Metadata Table

After this series's first two blog posts, we already know what the table layouts can look like and how we can implement a system for GDPR or similar regulations. That's "easy" by now. The more complicated part of the design is telling the user which data series are available in any given building at any given time.

To clarify a bit: a single building may have multiple devices with temperature sensors, let's say four of them. Let's also assume that at least one of the devices had its temperature sensor exchanged due to damage during the given time range.

Our set of available data series could look like the following example:

postgres=# SELECT * FROM series_metadata;
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         1 |         1 |         1 | 2023-01-01 00:00:00+00 |                        | 2023-02-01 00:00:00+00
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         3 |         3 |         3 | 2023-01-15 17:00:00+00 | 2023-01-17 09:00:00+00 | 2023-01-17 09:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(4 rows)

We can see that we have three devices (device_id) and four sensors (sensor_id). Next, two columns (ts_start and ts_end) define the available time range for each data series. A NULL value in ts_end represents an open series: an ongoing series for which new data is still expected.

Given that, we can see that the third device had its sensor exchanged only two days after its first series started. The last column (ts_last_seen) records the timestamp of the most recent row for that specific data series. We'll come back to this later.
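
For reference, a minimal table definition for series_metadata could look like the following sketch (a hypothetical reconstruction; bigserial mirrors the sequence-backed IDs used in the other examples, and your actual definition may differ):

CREATE TABLE series_metadata (
    series_id    bigserial PRIMARY KEY,
    device_id    bigint      NOT NULL,
    sensor_id    bigint      NOT NULL,
    ts_start     timestamptz NOT NULL,
    ts_end       timestamptz,              -- NULL marks an open, ongoing series
    ts_last_seen timestamptz NOT NULL
);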

Querying Meta Information

In our example, we want to show all available data series for a customer and building. For that, we need two more tables.

The initial request

For simplicity, we use one buildings table (with a customer_id field) and an assignment table for the devices in a building.

postgres=# \d buildings
                                   Table "public.buildings"
   Column    |  Type  | Collation | Nullable |                    Default
-------------+--------+-----------+----------+------------------------------------------------
 building_id | bigint |           | not null | nextval('buildings_building_id_seq'::regclass)
 customer_id | bigint |           | not null |
Indexes:
    "buildings_pk" PRIMARY KEY, btree (building_id)

postgres=# \d assignment_buildings_devices
                                                      Table "public.assignment_buildings_devices"
             Column              |  Type  | Collation | Nullable |                                       Default
---------------------------------+--------+-----------+----------+--------------------------------------------------------------------------------------
 assignment_buildings_devices_id | bigint |           | not null | nextval('assignment_buildings_devices_assignment_buildings_devices_i_seq'::regclass)
 building_id                     | bigint |           |          |
 device_id                       | bigint |           |          |
Indexes:
    "assignment_buildings_devices_pk" PRIMARY KEY, btree (assignment_buildings_devices_id)
    "assignment_buildings_devices_building_id_device_id_uindex" UNIQUE, btree (building_id, device_id)
    "assignment_buildings_devices_device_id_uindex" UNIQUE, btree (device_id)

I omit the actual data in those two tables but give a quick overview as a combined table.

 Customer | Building | Device(s)
----------+----------+-----------
        1 |        1 | 1
        2 |        2 | 2, 3

With those three tables, we can search all available series with a simple query and only get the available data series of customer 2.

postgres=# SELECT sm.*
postgres-# FROM series_metadata sm
postgres-# RIGHT JOIN assignment_buildings_devices abd ON sm.device_id = abd.device_id
postgres-# RIGHT JOIN buildings b ON abd.building_id = b.building_id
postgres-# WHERE b.customer_id = 2;
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         3 |         3 |         3 | 2023-01-15 17:00:00+00 | 2023-01-17 09:00:00+00 | 2023-01-17 09:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(3 rows)

That's already awesome. The alternative would be to crawl through all the rows stored in our hypertables and try to find those values on the fly. Doesn't sound like a great alternative.

Anyhow, that was just part of the initial request. 

Finding data series in a certain time range

The second part is to find data series in a certain time range. And here is where this approach really shines. We can quickly filter our series with a small helper function (which I highly recommend) and the magic of PostgreSQL's tstzrange data type.

CREATE OR REPLACE FUNCTION series_in_range(
        series series_metadata,
        rangeStart timestamptz, rangeEnd timestamptz
)
    RETURNS bool
    LANGUAGE SQL
    PARALLEL SAFE
    STABLE
AS $$
    SELECT NOT isempty(
        tstzrange(rangeStart, rangeEnd) *
        tstzrange(series.ts_start, series.ts_end, '[]')
    );
$$;

This function checks whether a given series candidate falls into a requested time range. As a note, the function is defined as LANGUAGE SQL to give the query planner the option to inline it and fully optimize it within the query we build.
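
To see the range intersection at work in isolation, here is a quick standalone example with an open-ended series range (the literals are borrowed from the example data above):

-- An open-ended series (NULL upper bound) intersected with a query window
SELECT isempty(
    tstzrange('2023-01-18', '2023-02-15') *
    tstzrange('2023-01-17 09:10:00+00', NULL, '[]')
);
-- Returns false: the two ranges overlap, so the intersection is not empty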

With all that in place, we can now extend the query above with a potential time range (let's say everything after 2023-01-18), and it returns only the matching rows. It also automatically treats NULL values in ts_end as open-ended.

postgres=# SELECT sm.*
postgres-# FROM series_metadata sm
postgres-# RIGHT JOIN assignment_buildings_devices abd ON sm.device_id = abd.device_id
postgres-# RIGHT JOIN buildings b ON abd.building_id = b.building_id
postgres-# WHERE b.customer_id = 2
postgres-#   AND series_in_range(sm, '2023-01-18', now());
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(2 rows)

To simplify querying later on, I add one more tidbit to the query above and resolve the open-ended start and end times to concrete timestamps that can be passed back to the database later, making follow-up queries easier. I drop all of that into a function:

CREATE OR REPLACE FUNCTION series_in_range_by_customer(
    customer_id int8,
    rangeStart timestamptz, rangeEnd timestamptz
)
    RETURNS SETOF series_metadata
    LANGUAGE SQL
    PARALLEL SAFE
    STABLE
AS $$
    SELECT sm.series_id, sm.device_id, sm.sensor_id,
           greatest(sm.ts_start, rangeStart) AS ts_start,
           least(sm.ts_end, rangeEnd) AS ts_end,
           sm.ts_last_seen
    FROM series_metadata sm
    RIGHT JOIN assignment_buildings_devices abd
        ON sm.device_id = abd.device_id
    RIGHT JOIN buildings b
        ON abd.building_id = b.building_id
    WHERE b.customer_id = series_in_range_by_customer.customer_id
      AND series_in_range(sm, rangeStart, rangeEnd);
$$;

Again, make sure this function is defined as language sql for the query planner to fully optimize it in place.

The result changed slightly: each data series's start and end times are now clamped to align with our requested time window.

postgres=# SELECT * FROM series_in_range_by_customer(2, '2023-01-18', now());
 series_id | device_id | sensor_id |        ts_start        |            ts_end             |      ts_last_seen
-----------+-----------+-----------+------------------------+-------------------------------+------------------------
         2 |         2 |         2 | 2023-01-18 00:00:00+00 | 2023-02-01 20:00:00+00        | 2023-02-01 20:00:00+00
         4 |         3 |         4 | 2023-01-18 00:00:00+00 | 2023-03-31 08:12:44.033446+00 | 2023-02-01 00:00:00+00
(2 rows)

Referencing the Hypertable Data

Now that we know what the external definition looks like, we need to understand how we reference the actual hypertable values. Without going into the actual layout of the hypertable (if you're interested in this, please find more information in the Narrow, Medium, or Wide Table Layout article), let's go with the medium table layout style.

Our hypertable may look like the following:

postgres=# \d temperatures
                                           Table "public.temperatures"
   Column    |           Type           | Collation | Nullable |                     Default
-------------+--------------------------+-----------+----------+-------------------------------------------------
 metric_id   | bigint                   |           | not null | nextval('temperatures_metric_id_seq'::regclass)
 series_id   | bigint                   |           | not null |
 recorded_at | timestamp with time zone |           | not null |
 value       | double precision         |           | not null |
Indexes:
    "temperatures_pk" PRIMARY KEY, btree (metric_id)
    "temperatures_series_id_recorded_at_uindex" UNIQUE, btree (series_id DESC, recorded_at DESC)

The hypertable stores data according to the series_id and the value's timestamp. The ordering of the index temperatures_series_id_recorded_at_uindex is interesting, though, because it is deliberately chosen to be DESC on both columns.

In most cases, you want the latest series and the latest data. That said, building the index in the order of your most common query patterns is a real speed-up!
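
Putting the pieces together, a query like the following sketch could use the resolved ranges from series_in_range_by_customer to fetch the actual readings, ordered to match the index definition:

-- Fetch the readings for all series of customer 2 in the window,
-- scanning in the same DESC order the unique index provides
SELECT t.series_id, t.recorded_at, t.value
FROM series_in_range_by_customer(2, '2023-01-18', now()) s
JOIN temperatures t
    ON t.series_id = s.series_id
   AND t.recorded_at BETWEEN s.ts_start AND s.ts_end
ORDER BY t.series_id DESC, t.recorded_at DESC;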

Further Specifying the Series in the Metadata Table

The other really interesting element about these types of external series definitions is that we can extend series_metadata with additional columns, helping us further specify the series. The possibilities are endless. One additional piece of meta information is the ts_last_seen column, which is updated every time we write a new row to our data hypertable.
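
There are multiple ways to keep ts_last_seen current; one possible sketch piggybacks the update onto the ingest statement with a data-modifying CTE (the literal values are just placeholders):

-- Insert a new reading and bump ts_last_seen in a single round trip
WITH ins AS (
    INSERT INTO temperatures (series_id, recorded_at, value)
    VALUES (4, now(), 21.5)
    RETURNING series_id, recorded_at
)
UPDATE series_metadata sm
SET ts_last_seen = ins.recorded_at
FROM ins
WHERE sm.series_id = ins.series_id
  AND sm.ts_last_seen < ins.recorded_at;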

At my startup, clevabit, we used it to quickly find all series that could be closed after a given window of inactivity (for us, it was seven days). To achieve that, we had a small cron job running every hour or so, selecting all series with ts_last_seen smaller than now() - interval '7 days' and ts_end being NULL, and updating each row with ts_end set to ts_last_seen.
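
The core statement of such a job could look like this sketch (not the original clevabit code):

-- Close every series that has been inactive for at least seven days
UPDATE series_metadata
SET ts_end = ts_last_seen
WHERE ts_end IS NULL
  AND ts_last_seen < now() - interval '7 days';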

Alternative approach: incorporating series metrics 

An alternative could be a field providing information about the metric the series stores, such as "cpu_usage" or "mem_free." If you choose the multi-table approach, it could also provide information about which hypertable the series is stored in. As mentioned earlier, the possibilities are extensive.
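
As a sketch, that extension could be as simple as two additional columns (the column names here are made up for illustration):

ALTER TABLE series_metadata
    ADD COLUMN metric text,            -- e.g., 'cpu_usage' or 'mem_free'
    ADD COLUMN hypertable_name text;   -- which hypertable stores the series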

There are quite a few more reasons why you'd want to use this setup. Let's look at them as questions you can answer or optimizations you can gain:

  • Does a device have a specific sensor in time range [x, y]? (See the sketch after this list.)

  • Does a device have a specific metric in time range [x, y]?

  • Does a building have a specific metric in time range [x, y]?

  • To provide information about similar parallel but distinct data series in time range [x, y].

  • To optimize queries by knowing the exact available time ranges for each series instead of just using the provided search window.

  • To require less space for metadata since the alternative to the above approach would be to store all necessary metadata with every row in the hypertable, massively duplicating information.

  • To have a quick lookup of available data series time ranges.

  • To find previous assignments of devices and sensors (in case you have a use case where, for example, sensor demo units are sent to different customers for testing).
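
As promised, here is a sketch answering the first question with the helper function from earlier (the device and sensor IDs are borrowed from the example data):

-- Does device 3 have sensor 4 somewhere in the requested time range?
SELECT EXISTS (
    SELECT 1
    FROM series_metadata sm
    WHERE sm.device_id = 3
      AND sm.sensor_id = 4
      AND series_in_range(sm, '2023-01-18', now())
) AS sensor_present;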

So, Should I Do It?

Finally, the question you've been waiting for: should you do it?

My answer is: yes, do it.

There are very few use cases where you wouldn't be interested in the disk space savings or in the ability to find distinct data series quickly. Because the answer is so clear-cut, it also didn't make sense to provide a decision table for ease of use, as I did in the previous episodes.

Next Steps for Metadata Table Optimization

For more table optimizations impacting your performance, I recommend you check out PostgreSQL Performance Tuning: Designing and Implementing Your Database Schema.

On that quick ending note, I hope you found this installment interesting. If you're eager to try it out now, create a free Timescale account or install TimescaleDB on-prem.
