No-Fuss Alerting: Introducing Prometheus Alerts in Promscale

⚠️ While part of this content may still be up to date, we regret to inform you that we decided to sunset Promscale in February 2023. Read our reasons why and other FAQs.

This is KubeCon Europe week, and at Timescale, we’re celebrating it the way we know best: with a new edition of #AlwaysBeLaunching! 🐯 🎊 During this week, we are releasing new features and content every day, all focused on Promscale and observability. Do not miss any of it by following our hashtag #PromscaleKubecon on Twitter—and if you’re at KubeCon, visit us at booth G3! We have awesome demos (and swag) to give you!

Our launch week continues! Today, we’re excited to introduce you to a new capability in Promscale: the native support for alerts.

Alerting is a crucial component of monitoring cloud-native architectures, as it allows developers to automatically identify anomalies and take action before they become problems. Prometheus is the most popular open-source tool for metrics monitoring and alerting. It gives users great flexibility and robust functionality for enabling alerting rules on their metrics using PromQL.

Now, you can use the same standard to define alerts directly within Promscale. You can load the same PromQL alerting rules you would use in Prometheus into Promscale, even using the same YAML configuration files. This will allow you to collect metrics using Prometheus Agent Mode, which needs fewer resources to run than a separate Prometheus instance. Plus, it also opens the door to streaming metrics from OpenTelemetry directly into Promscale and raising alerts on them.

To learn how you can configure alerts in Promscale, keep reading. To install Promscale, click here. It’s 100% free!

✨ Promscale is the observability backend built on PostgreSQL and TimescaleDB. It has native support for Prometheus, 100% PromQL compliance, native support for OpenTelemetry tracing, and seamless integration with Grafana and Jaeger. If you’re new to Promscale, check out our website and documentation.

Configuring Alerting Rules in Promscale

In Prometheus, alerting rules allow you to define conditions for your metrics using PromQL, Prometheus’ query language. For example, you can specify a rule with the condition “alert me if a node has been using more than 90 % of its storage for five minutes,” or perhaps, “alert me if a queue is full and new items are being dropped.” Prometheus continuously evaluates these conditions, marking the alerting rule as “firing” if they are met.
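As a rough sketch, the first of those conditions could be expressed as a rule like the one below. The rule name, threshold, and labels are illustrative, and it assumes the node exporter’s node_filesystem_* metrics are available in your setup:

groups:
  - name: example
    rules:
      - alert: NodeStorageAlmostFull
        # Fire if less than 10% of the filesystem is free...
        expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) < 0.10
        # ...and the condition has held for five minutes
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node storage is over 90% used"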

Once a condition is met and an alert is fired, Prometheus sends the alert to one (or multiple) Alertmanager instances. These are responsible for managing the alert lifecycle (grouping, suppression, deduplication, routing), including sending notifications via the medium of your choice (e.g., Slack, email, PagerDuty) when you need to notify a real user.
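That lifecycle is driven by Alertmanager’s own configuration. As a minimal sketch (the receiver name, grouping labels, channel, and webhook URL below are placeholders), an alertmanager.yml that groups alerts and notifies a Slack channel might look like:

route:
  # Group related alerts into a single notification
  group_by: ["alertname", "env"]
  group_wait: 30s
  repeat_interval: 4h
  receiver: slack-notifications

receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ  # placeholder webhook
        channel: "#alerts"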

Alerting rules are configured in Promscale in the same way as in Prometheus: using a YAML file that identifies the Alertmanagers and links to YAML files containing the alerting rules to load. In fact, the file formats are identical, so if you have a working Prometheus configuration that includes an alerting: block and some alerting rules, you can point Promscale at that config and things will work.

So what does that config look like? A basic example of the config needed would be a prometheus.yml with the following content:

# Alerting settings
alerting:
  # Add an env="production" label to every alert before it is sent out
  alert_relabel_configs:
    - replacement: "production"
      target_label: "env"
      action: "replace"
  # Alertmanager instances to send alerts to
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

# Rules and alerts are read from the specified file(s)
rule_files:
  - alerts.yml

This will configure Promscale to send all alerts to a single Alertmanager, adding an env: production label to each alert on the way out.

Rules will be read from alerts.yml, which contains a list of PromQL alerts to load and evaluate. An example of the file with a Watchdog alert that will always be firing would be:

groups:
- name: alerts
  rules:
  - alert: Watchdog
    annotations:
      description: >
        This is a Watchdog meant to ensure that the entire Alerting
        pipeline is functional. It is always firing.
      summary: Alerting Watchdog
    expr: vector(1)

Once we have the config files, Promscale can be started with either the metrics.rules.prometheus-config argument or the PROMSCALE_METRICS_RULES_PROMETHEUS_CONFIG environment variable pointing at the prometheus.yml file.
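For example, starting Promscale could look like the sketch below. The db.uri connection flag and its value are illustrative placeholders; check the Promscale documentation for the connection options that apply to your version and deployment:

# Point Promscale at the Prometheus-style config via the CLI flag...
promscale -db.uri="postgres://postgres:password@localhost:5432/postgres" \
  -metrics.rules.prometheus-config=prometheus.yml

# ...or via the equivalent environment variable
PROMSCALE_METRICS_RULES_PROMETHEUS_CONFIG=prometheus.yml promscale \
  -db.uri="postgres://postgres:password@localhost:5432/postgres"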

Check Out an Example

If you’d like to give this a go, you can use the Docker Compose file in the Promscale GitHub repository to see an alert working.

If you execute the following commands from a shell:

git clone https://github.com/timescale/promscale
cd promscale/docker-compose
docker-compose up

You should see the stack come up. After waiting for it to stabilize, you can browse to the Alertmanager web interface (available at localhost:9093). After a short time, you will see an alert called Watchdog, which is always firing (it’s normally used to make sure the alerting pipeline is working).

Example of an alert configured in Promscale ("Watchdog" alert) in the AlertManager UI.
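If you prefer the command line, you can also confirm the alert reached Alertmanager by querying its API (a quick sketch; jq is only used here to make the output easier to read):

# List the alerts currently held by Alertmanager; the Watchdog alert
# should appear in the output once the pipeline is working.
curl -s http://localhost:9093/api/v2/alerts | jq '.[].labels.alertname'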

And For Our Next Trick...

We are really excited to provide this functionality to our Promscale users, allowing them to choose where they produce alerts and giving them the option of not running a full Prometheus instance at the edge. If you are not yet a user, you can install Promscale for free here or get started now with Promscale on Timescale (free 30-day trial, no credit card required), with up to 94% cost savings compared to managed Prometheus.

But while supporting alerting rules in PromQL is a significant step forward, that's only the start of Promscale’s alerting journey. In the future, we are planning to support sending alerts to Alertmanagers based on pure SQL queries. This will allow many new use cases outside of metric alerts—alerts on logs or alerts on trace content will be possible as we continue to grow our OpenTelemetry support.

If you’ve got any ideas or questions on alerting, feel free to reach out! You can interact with the team building Promscale in our Community Slack (make sure to join the #promscale channel). And for hands-on technical questions, you can also post in the Timescale Forum.

See you at the next launch!

           
