
[New Webinar]: How to analyze your Prometheus data in SQL: 3 queries you need to know


Join us on March 25th to learn how to build the ultimate long-term store for Prometheus metrics, complete with demos and queries you can use to analyze your monitoring data.

Over the last year, we've met developers at events like PromCon EU, All Things Open, and AWS re:Invent, and most recently demoed all things Prometheus + Grafana + TimescaleDB at devopsdays NYC. At every one of these events, we meet developers looking for a better way to store, analyze, and visualize their monitoring metrics.

In my own experience, I've run into issues with traditional monitoring setups: they're either too rigid to give me the custom insights I'm looking for or too cost-prohibitive to use in my projects. I heard the same from the developers and community members I met at devopsdays NYC, and I see it again and again on social media and in the Timescale Community Slack.

The good news is that there’s a better way!

In my upcoming webinar, I’ll show you how to use open-source software to “roll your own” monitoring solution, allowing you to keep your Prometheus metrics around forever (almost), never run out of disk space, and use SQL to write custom queries.

Join me on March 25th at 10am PT/1pm ET/4pm GMT as I use the scenario of monitoring a database to:

  • Demo using Timescale and PostgreSQL to store and analyze your monitoring data
  • Spin up Grafana dashboards to visualize trends
Quick example of the Grafana dashboards we'll build to visualize our monitoring data and queries.

We’ll start with a quick architecture overview and why it’s important to have a long-term data store for your metrics, then get straight into the code and ways you can customize and add metrics that give you more insight (e.g., query latency).
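To make the "custom metrics" idea concrete, here's a minimal sketch of the kind of SQL you can write once Prometheus samples land in PostgreSQL. It assumes a pg_prometheus-style layout where samples are exposed through a `metrics` view with `time`, `name`, `value`, and a JSONB `labels` column; the metric name shown is hypothetical and would vary with your exporter.

```sql
-- Average query latency per instance over the last day, in 5-minute buckets.
-- Assumes a "metrics" view (time, name, value, labels jsonb) populated from
-- Prometheus; 'pg_stat_statements_mean_time_seconds' is a hypothetical metric.
SELECT time_bucket('5 minutes', time) AS bucket,
       labels->>'instance'            AS instance,
       avg(value)                     AS avg_latency
FROM metrics
WHERE name = 'pg_stat_statements_mean_time_seconds'
  AND time > now() - INTERVAL '1 day'
GROUP BY bucket, instance
ORDER BY bucket;
```

Because it's plain SQL, you can join, filter on labels, and bucket however you like — exactly the flexibility that's hard to get from a fixed dashboard.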

During the session, you’ll:

  • Learn why PostgreSQL + Timescale is the ultimate long-term store for Prometheus metrics (and why you need a long-term store in the first place)
  • Set up aggregates to roll up hourly and daily summaries of your metrics
  • Create automated downsampling rules to keep aggregated metrics around longer, without wasting disk space
  • See 3 common monitoring queries that you can use and customize right away
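As a preview of the rollup and downsampling steps, here's a hedged sketch using TimescaleDB's continuous aggregates and retention policies. It assumes the same hypothetical `metrics` hypertable as above (`time`, `name`, `value`); names and intervals are illustrative, not the exact ones from the session.

```sql
-- Sketch only: assumes a hypertable "metrics"(time, name, value).
-- Hourly rollup maintained automatically as a continuous aggregate.
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       name,
       avg(value) AS avg_value,
       max(value) AS max_value
FROM metrics
GROUP BY bucket, name;

-- Drop raw samples after 30 days; the hourly rollups stick around,
-- so you keep long-term trends without the raw-data disk footprint.
SELECT add_retention_policy('metrics', INTERVAL '30 days');
```

This is the core of the "keep your metrics around (almost) forever without running out of disk" story: raw data ages out, aggregates stay.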

RSVP here.

As always, other Timescale experts and I will be available to answer questions throughout the session and share ample resources and technical documentation.

Sign up even if you're unable to attend live, and I'll make sure you receive the recording, slides, and resources, and answer any questions you may have along the way.

See you soon!
