Recommendations for Setting Up Your Architecture With AWS & TimescaleDB

Learn more about common implementations and find the option that works best for your project, team, or organization

With AWS re:Invent right around the corner, we thought it was a good time to recap how you can use AWS services in conjunction with TimescaleDB.

Many of our users build their architecture on AWS services and add TimescaleDB to their stack to manage, store, and analyze all their time-series data at scale. Fortunately, since TimescaleDB integrates seamlessly with many AWS offerings, there are several ways to create a flexible stack that works for you.

In this post, we will discuss different ways to set up your architecture and provide recommendations to help you choose the best option for your use case.

Option #1: Timescale

The first option that comes to mind is Timescale’s managed service offering, Timescale. This path allows you to host your TimescaleDB instance on AWS and use Virtual Private Cloud (VPC) peering to connect it to the rest of your AWS infrastructure.

The primary benefit of the managed service is that you can be hands-off in the day-to-day management of the system: we handle updates and upgrades, along with backups and high availability (HA). You also get the flexibility to customize compute and storage configurations based on your needs, and to grow, shrink, or migrate your workloads with just a few clicks.

Here is an overview of what this setup would look like:

If you are interested in running TimescaleDB as a managed service on AWS, this is the option to explore. This configuration gives you the benefits of the managed service while running on the AWS platform, and lets you connect it to the rest of your cloud stack through VPC peering.
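
Once the service is provisioned and VPC peering is in place, your applications talk to it like any other PostgreSQL database. Here is a minimal sketch in Python using psycopg2; the hostname, credentials, and table below are placeholders, not values from a real service:

```python
# Minimal sketch: connect to a TimescaleDB instance and create a hypertable.
# Host, credentials, and table/column names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-service.a1b2c3.tsdb.cloud.timescale.com",  # placeholder hostname
    port=5432,
    dbname="tsdb",
    user="tsdbadmin",
    password="your-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # A simple metrics table, then promote it to a hypertable partitioned on time.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT        NOT NULL,
            temperature DOUBLE PRECISION
        );
    """)
    cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

conn.close()
```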

Summary:

  • Best for users who prefer a hands-off approach and need flexibility when it comes to managing their data.
  • Not ideal for users who want to be heavily involved in the day-to-day management, or who need granular control over the environment.

Option #2: EC2 Instance

If you’re looking for more granular control over your instance and how it runs, you can always spin up an EC2 instance directly on AWS and run TimescaleDB on it.

To help facilitate this setup, there are a number of community Amazon Machine Images (AMIs) to help you get up and running quickly, and we cover this option in our installation docs.

A quick search of the community AMIs for Timescale will surface several pre-built images.

(See our AMI installation docs for step-by-step instructions)

Using an AMI makes it a little easier and faster to spin up an instance, since TimescaleDB comes preinstalled. Alternatively, you can spin up your own EC2 instance and install TimescaleDB within a custom, tailored environment. For example, if you have specific requirements around the operating system or the configuration of the virtual machine (e.g., security software), you will want to start from a more generic image and customize your installation rather than using a pre-built AMI.
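
As a rough sketch of how this might look programmatically, here is one way to search the community AMIs and launch an instance with boto3. The name filter, region, instance type, key pair, and security group are placeholder assumptions; adjust them for your environment.

```python
# Sketch: locate a community TimescaleDB AMI and launch an EC2 instance from it.
# The name filter, region, key pair, and security group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Search public community images whose name mentions TimescaleDB.
images = ec2.describe_images(
    ExecutableUsers=["all"],
    Filters=[{"Name": "name", "Values": ["*timescaledb*"]}],
)["Images"]
latest = max(images, key=lambda img: img["CreationDate"])

# Launch a single instance from the newest matching AMI.
response = ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="t3.medium",                    # placeholder size
    KeyName="my-key-pair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```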

When you select this option, you gain operational control over your instance, but you also take on the ongoing maintenance that a managed cloud offering would otherwise handle.

Summary:

  • Best for users who want a lot of control over how their instance is configured and run.
  • Not ideal for users who don’t want to be heavily involved with setup and ongoing management.

Option #3: AWS Elastic Kubernetes Service via Helm Charts

The third option is to deploy Timescale via Kubernetes, using the AWS Elastic Kubernetes Service (EKS). Here, Timescale offers a set of Helm charts (freely available in our GitHub repository) to facilitate this deployment.

Here is an overview of what this setup would look like:

This option gives you the ability to deploy TimescaleDB as a cloud-native application, adding the time-series database functionality to your microservices deployment.
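
As a sketch of what that deployment might look like from a workstation already pointed at your EKS cluster: the repository URL, chart name, and release name below follow the public helm-charts README and may change, so treat them as assumptions.

```python
# Sketch: deploy TimescaleDB to an EKS cluster via Timescale's Helm charts.
# Assumes helm and kubectl are configured for your EKS cluster; the repo URL,
# chart name (timescaledb-single), and release name are assumptions taken from
# the public helm-charts README and may differ for your setup.
import subprocess

def run(*cmd: str) -> None:
    """Run a command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# Register Timescale's chart repository and refresh the local index.
run("helm", "repo", "add", "timescale", "https://charts.timescale.com")
run("helm", "repo", "update")

# Install a single-node TimescaleDB release into the cluster.
run("helm", "install", "my-timescaledb", "timescale/timescaledb-single")

# The generated superuser credentials land in a Kubernetes secret;
# see the chart's README for how to retrieve them.
```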

Summary:

  • Best for users who have embraced a microservices architecture.
  • Not ideal for users who rely on legacy deployment models.

Option #4: Amazon Elastic Container Service

If you would like to run TimescaleDB via Amazon Elastic Container Service (ECS), Timescale offers a Docker image to get you started.

To illustrate how this works in a real-world scenario, let’s take a look at how we might deploy this in production. Say we want to collect monitoring data from Prometheus and store it in TimescaleDB to identify trends and make future predictions.

Here is an overview of what this setup would look like:

Here, we’re using Prometheus to monitor a cluster running on-premises (single or multiple locations), and as we collect metrics, they’re written to the TimescaleDB/Prometheus adapter.

In this example, we set up the adapter as a container in ECS in order to give this small, yet critical, part of the monitoring stack the availability it requires.

After setting up the adapter as a container in ECS, we connect to Timescale via VPC peering (our instance runs on AWS and also includes our AWS-hosted Grafana instance). This allows ECS to ensure high availability of the TimescaleDB/Prometheus adapter while Timescale manages the availability of our TimescaleDB and Grafana instances.

(TimescaleDB also offers a pre-built Docker image with the Prometheus adapter installed. You simply add it to your container registry and deploy it from there.)
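
As a rough sketch, registering that adapter container as an ECS task definition with boto3 might look like the following. The image reference, port, and sizing are illustrative assumptions; consult the adapter’s documentation for the exact configuration it expects.

```python
# Sketch: register the TimescaleDB/Prometheus adapter as an ECS task definition.
# The image reference, port, and sizing below are illustrative assumptions.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="prometheus-timescaledb-adapter",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "adapter",
            # Placeholder: point this at the adapter image in your own registry.
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/prometheus-postgresql-adapter:latest",
            "essential": True,
            # Expose whichever port your adapter build listens on for
            # Prometheus remote read/write requests.
            "portMappings": [{"containerPort": 9201, "protocol": "tcp"}],
            # Database connection settings (host, user, password) go here as
            # environment variables or command arguments, per the adapter's docs.
        }
    ],
)
```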

Summary:

  • Best for users looking to set up a simple, yet powerful application monitoring stack.
  • Not ideal for users that aren’t collecting a lot of monitoring data and don’t need to leverage Prometheus.

Option #5: AWS CloudWatch, Lambda, and TimescaleDB

To add to the use case above, a lot of our users combine on-premises or private cloud resources with AWS cloud resources. So, let's talk about consolidating monitoring data.

In our previous example, we collect Prometheus metrics directly from our on-premises assets, which leaves us with a question: how can we correlate that data with our cloud-based monitoring data?

The answer: use an AWS CloudWatch Logs subscription filter to send events directly to an AWS Lambda function, which then writes each event to our Timescale instance (the same one from our previous example).

Here is an overview of what this setup would look like:

We’re now storing and analyzing log events, metrics, and/or traces from both our on-premises and cloud environments.
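
Here is a minimal sketch of what that Lambda function might look like in Python. The table, columns, and environment variables are placeholder assumptions, and psycopg2 would need to be packaged with the function or provided as a layer.

```python
# Sketch of a Lambda handler for a CloudWatch Logs subscription filter.
# The payload arrives base64-encoded and gzip-compressed; we decode it and
# insert each log event into TimescaleDB. Table/column names and connection
# settings are placeholders.
import base64
import gzip
import json
import os

import psycopg2

def handler(event, context):
    # CloudWatch Logs delivers the batch under event["awslogs"]["data"].
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],      # placeholder environment variables
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        for log_event in payload["logEvents"]:
            cur.execute(
                # 'cloudwatch_logs' is a hypothetical hypertable with
                # (time, log_group, log_stream, message) columns.
                "INSERT INTO cloudwatch_logs (time, log_group, log_stream, message) "
                "VALUES (to_timestamp(%s / 1000.0), %s, %s, %s)",
                (
                    log_event["timestamp"],
                    payload["logGroup"],
                    payload["logStream"],
                    log_event["message"],
                ),
            )
    conn.close()
    return {"ingested": len(payload["logEvents"])}
```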

For another example of how to combine AWS Lambda and Timescale, check out Using AWS Lambda with Timescale for IoT Data.

Summary:

  • Best for users who are operating production environments or who’d like to consolidate monitoring data in a single place.
  • Not ideal for users who aren’t collecting a lot of monitoring data yet.

Next Steps

While this is not a complete list of the ways you can use TimescaleDB with AWS services, we’ve covered the most common use cases (and their high-level implementations) to help you navigate your options.

Brand new to Timescale? Sign up for a Timescale account or view all available installation options here.

As always, we encourage you to join our Community Slack channel to chat with the team, ask questions, and see what others are working on.
