Using continuous aggregates to downsample Prometheus metrics

In Promscale, you can use continuous aggregates to downsample your Prometheus metrics, a method that is more timely and accurate than recording rules in many circumstances.

Why use continuous aggregates for downsampling?

Downsampling with Prometheus recording rules works well in certain scenarios, but it also has some limitations:

  • Timeliness. With recording rules, users only see the results of a query once the rules engine has run the materialization, not as soon as data comes in. This might not be a big deal for 5-minute aggregates (although it could be), but for hourly or daily aggregates it can be a significant limitation. Continuous aggregates automatically combine the materialized results with a query over the newest not-yet-materialized data to give us an accurate up-to-the-second view of our data.

  • Rollups. Downsampling is defined for particular time-bucket granularities (e.g. 5 minutes). But, when performing analysis, we may want to look at longer aggregates (e.g. 1 hour). With recording rules, this is sometimes possible (a minimum of many minimums is the same as the minimum of the samples) but often it isn’t (the median of many medians is not the same as the median of the underlying samples). Continuous aggregates solve this issue by storing the intermediate state of an aggregate in the materialization, making further rollups possible.

  • Query flexibility for retrospective analysis. Once a query for a recording rule is defined, the resulting metric is sufficient to answer only that one query. However, when using continuous aggregates, we can use multi-purpose aggregates. For instance, Timescale’s toolkit extension has aggregates that support percentile queries on any percentile, and statistical aggregates that support multiple summary statistics. The aggregates we define when configuring the materialization are much more flexible in what data we can derive at query time.

  • Backfilling. Prometheus recording rules only downsample data collected after the recording rule is created. The Prometheus community created a tool to backfill data, but it requires an additional manual step and has a number of limitations that make it complex to use regularly or to automate. Continuous aggregates automatically downsample all available data, including past data, so we can start benefiting from the performance improvements the aggregated metric brings as soon as it is created.
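
To illustrate the rollup point above, here is a sketch of how an aggregate built with the toolkit’s percentile functions can be rolled up to a coarser granularity at query time. The view, metric, and column names are hypothetical, and the example assumes the timescaledb_toolkit extension is installed:

-- Hypothetical hourly aggregate storing the intermediate percentile state
CREATE MATERIALIZED VIEW response_time_1hour
WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) as bucket,
        percentile_agg(value) as pct_agg
    FROM prom_data."demo_response_time"  -- hypothetical metric
    GROUP BY bucket;

-- Roll the hourly states up to daily buckets and extract any percentile
SELECT time_bucket('1 day', bucket) as day,
    approx_percentile(0.95, rollup(pct_agg)) as p95
FROM response_time_1hour
GROUP BY day;

Because the materialization stores the intermediate percentile state rather than a final number, the daily rollup is computed correctly, something a recording rule that stored only a precomputed percentile could not do.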

How to use this method: example

In Promscale, Prometheus metrics are stored in a hypertable containing the following columns, corresponding to the Prometheus data model:

  • time, which stores the timestamp of the reading;
  • value, which stores the sample reading as a float;
  • series_id, which stores a foreign key to the table that defines the series (label set) of the reading.
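
For instance, the raw samples for a metric can be inspected directly. Note that the metric name must be double-quoted because of its mixed case; the LIMIT is just for illustration:

SELECT time, value, series_id
FROM prom_data."node_memory_MemFree"
ORDER BY time DESC
LIMIT 5;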

Let’s now imagine we have some metric called node_memory_MemFree. In this example, we will create a continuous aggregate to derive some summary statistics (min, max, average) about the reading on an hourly basis, and we will use this continuous aggregate as a way to downsample our data.

First, we would run the following query on the underlying TimescaleDB database to define the continuous aggregate:

CREATE MATERIALIZED VIEW node_memfree_1hour
WITH (timescaledb.continuous) AS
    SELECT series_id,
        time_bucket('1 hour', time) AT TIME ZONE 'UTC' + '1 hour'
            as time,
        min(value) as min,
        max(value) as max,
        avg(value) as avg
    FROM prom_data."node_memory_MemFree"
    GROUP BY time_bucket('1 hour', time), series_id;

Once defined, this continuous aggregate can immediately be queried via SQL.
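
For example, a query like the following (illustrative only) reads the downsampled statistics straight from the view:

SELECT time, series_id, min, max, avg
FROM node_memfree_1hour
ORDER BY time DESC
LIMIT 5;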

We can also make it available to PromQL queries. If you are interested in doing so, register the continuous aggregate as a PromQL metric view:

SELECT register_metric_view('public', 'node_memfree_1hour');

Querying the data

The new aggregated metric can be queried like any other metric in Promscale, using SQL. For example:

SELECT time, jsonb(labels) as metric, avg
FROM node_memfree_1hour m
INNER JOIN prom_series.node_memory_MemFree s 
    ON (m.series_id=s.series_id)
ORDER BY time ASC;

You could also use PromQL, if you have registered the metric view as shown above.

Data retention for downsampled data

Now that our data is effectively downsampled through the continuous aggregate we just defined, we can decide to keep this metric around for longer, dropping our original raw data after a certain period of time. This allows us to do long-term analysis without incurring the storage and performance costs of raw data.
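
For example, the raw metric could be kept for only 30 days. The interval is illustrative, and this sketch assumes the same three-argument form of the function used below also applies to raw metrics in the prom_data schema:

-- Drop raw node_memory_MemFree samples after 30 days (illustrative interval)
SELECT set_metric_retention_period('prom_data', 'node_memory_MemFree', INTERVAL '30 days');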

To enable this, all we need to do is set a longer retention period for our new metric. For example, the command below would increase the retention period of our continuous aggregate to a full year, even if the underlying metric data on which it was based has been deleted:

SELECT set_metric_retention_period('public', 'node_memfree_1hour', INTERVAL '365 days');

This information was originally published in this blog post. Check it out for further insights on downsampling with continuous aggregates!