Understanding RDS Cost

Amazon RDS is a fully managed database service from Amazon Web Services (AWS). With PostgreSQL among its supported engines, it is promoted as an easy way to set up, operate, and scale a relational database in the cloud. And unsurprisingly, such promises draw many PostgreSQL fans. At least, that is, until their database grows, and their RDS cost grows with it.

At Timescale, we’ve seen many users migrating from Amazon RDS for PostgreSQL after their RDS costs grew uncontrollably. Let’s just say that after hearing so many stories, we looked into it and learned a thing or two about RDS pricing.

In this blog post, we’ll help you understand why the cost of your RDS database may be getting out of hand and how to help your team get it back under control.

How Much Does RDS Cost? (And Why It's Hard to Control)

RDS can be expensive, with many engineering teams blaming it for the lion's share of their AWS bill. At Timescale, we’ve heard this story over and over again, and many of our clients migrated to our cloud database platform to find relief in a more predictable and simplified billing scheme.

One of the reasons why it’s so difficult to keep track of your RDS for PostgreSQL spending is that the RDS pricing formula encompasses several different variables:

  • RDS instances
  • RDS storage
  • Backup storage
  • Snapshot export
  • Data transfer
  • Technical support
  • Multi-Availability Zone (AZ) deployments
  • Other extra features (like Amazon RDS Proxy for connection pooling and Amazon RDS Performance Insights for performance diagnostics)
  • Billing options: on-demand and reserved instances

With so many components at stake, you and your cloud engineering team should keep an eye on what you spend regularly. An unplanned spike in usage can cause your team to scramble and add capacity quickly to solve the problem. The result? You could end up adding capacity you ultimately don’t need, making your costs climb through the roof.
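To make these variables concrete, here is a minimal sketch of how the main components add up to a monthly bill. All prices here are illustrative assumptions, not current AWS rates, so treat it as a back-of-the-envelope model rather than a billing tool:

```python
# Rough monthly RDS bill estimator. Every rate below is an
# illustrative assumption; check the AWS pricing pages for real ones.
HOURS_PER_MONTH = 730

def estimate_monthly_cost(
    instance_hourly_rate,    # on-demand $/hour for the instance class
    storage_gb,              # provisioned storage
    storage_rate_per_gb=0.115,
    backup_gb_over_quota=0,  # backup storage beyond the free allowance
    backup_rate_per_gb=0.095,
    transfer_out_gb=0,       # data sent out to the Internet
    transfer_rate_per_gb=0.09,
    multi_az=False,          # Multi-AZ roughly doubles instance + storage
):
    compute = instance_hourly_rate * HOURS_PER_MONTH
    storage = storage_gb * storage_rate_per_gb
    if multi_az:
        compute *= 2
        storage *= 2
    backup = backup_gb_over_quota * backup_rate_per_gb
    transfer = transfer_out_gb * transfer_rate_per_gb
    return round(compute + storage + backup + transfer, 2)

# Example: an instance at a hypothetical $0.178/hour with 500 GB storage
# and 100 GB of Internet egress per month.
print(estimate_monthly_cost(0.178, 500, transfer_out_gb=100))
```

Note how turning on a single flag like `multi_az` roughly doubles two of the terms at once, which is exactly how bills jump without anyone adding a new line item.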

How RDS Costs Spike

Broadly speaking, Amazon RDS pricing is based on the instance type you choose, your region, and the amount of storage and data transfer you use. Let’s see what can make your costs soar.

Compute capacity

One of the pricing models available for Amazon RDS for PostgreSQL is on-demand instances. With on-demand instances, you pay for the compute capacity you use by the hour. This option suits applications with short-term or unpredictable workloads or for testing and development environments.

However, a spike in demand may lead you to scale up your instance size and leave it scaled, resulting in wasted spend. Or you may not be using the most efficient instance type for your workload, meaning you’re paying for resources you don’t need.

Increased data transfer

Data transfer costs are incurred when data is transferred between your database and other AWS services or the Internet. Data transfer to the Internet comes with a price per GB depending on the zones you select and the data transferred.

You can reduce data transfer costs by choosing a region close to your users and other AWS services. For example, you may incur higher data transfer costs if you have users in Europe and your database is hosted in the U.S.

In this case, hosting your database in Europe may be more cost-effective. It is also essential to consider how much data you transfer, as prices vary with volume. To minimize costs, reduce the data and queries you send over the Internet wherever possible.
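A quick sketch of why region placement matters for egress spend. The per-GB rates below are hypothetical and exist only to illustrate the comparison:

```python
# Compare monthly Internet egress cost across regions.
# Rates are illustrative assumptions, not current AWS prices.
EGRESS_RATE_PER_GB = {
    "us-east-1": 0.09,
    "eu-west-1": 0.09,
    "ap-southeast-1": 0.12,
}

def monthly_egress_cost(region, gb_out):
    """Estimated monthly cost of sending gb_out GB to the Internet."""
    return round(EGRESS_RATE_PER_GB[region] * gb_out, 2)

# Serving 2 TB/month to end users from each candidate region:
for region in EGRESS_RATE_PER_GB:
    print(region, monthly_egress_cost(region, 2048))
```

Even with identical per-GB rates, hosting close to your users also avoids the cross-region hops that add their own transfer charges.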

Provisioned IOPS spend

Besides region-dependent data transfer costs, RDS charges for its two main storage options: General Purpose Storage and Provisioned IOPS Storage. As we’ve seen in our previous article, Provisioned IOPS, a storage type designed for I/O-intensive applications, can be quite expensive.

As an example, you’ll pay $0.10 per provisioned IOPS per month for a single-AZ deployment in the U.S. East (Ohio) region. This is why we highly recommend first testing the performance of General Purpose gp3 storage (which performs much better than gp2 volumes), as only highly I/O-intensive workloads need provisioned IOPS.
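To see how fast provisioned IOPS spend grows, here is a sketch comparing the two storage options for the same workload. The gp3 rates are illustrative assumptions (the io1 per-IOPS rate matches the $0.10 figure above):

```python
# Compare monthly storage cost of gp3 vs. io1 (Provisioned IOPS).
# Assumed illustrative rates, single-AZ:
#   gp3: $0.08/GB-month, 3,000 IOPS included, $0.005 per extra IOPS-month
#   io1: $0.125/GB-month plus $0.10 per provisioned IOPS-month

def gp3_cost(gb, iops):
    extra_iops = max(0, iops - 3000)
    return round(gb * 0.08 + extra_iops * 0.005, 2)

def io1_cost(gb, iops):
    return round(gb * 0.125 + iops * 0.10, 2)

gb, iops = 400, 6000
print("gp3:", gp3_cost(gb, iops))
print("io1:", io1_cost(gb, iops))
```

For this hypothetical 400 GB, 6,000 IOPS workload the gap is more than an order of magnitude, which is why testing gp3 first is almost always worth the afternoon it takes.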


Runaway environments

And since we’ve mentioned testing, make sure you monitor your team’s non-production environment spend, as you could be wasting a lot on AWS charges you don’t need.

Reducing Amazon RDS Costs

Now that you know what makes your RDS service's cost unbearable, it’s time to get your team together and see where and how you can implement optimizations or judicious cuts.

Here are a few tips.

Get visibility into the problem

Given its complexity and numerous hidden costs, it really takes a village to start saving money on your RDS service bill. This is why you should build awareness among your team, namely to ensure you’re right-sizing your RDS instance types.

In fact, if you are not using over 40-50 percent of available networking, throughput, etc., you may be able to downsize your instance types and save money.

Another vital task is to optimize and tune your usage (more on that later) based on actual performance. The concept is simple: to properly optimize your RDS spending, you need to tag your resources and learn where the money is actually going. Effectively tagging and tracking your database resource usage helps you assign billing to where it is needed most.


With cost allocation tags and AWS Cost Explorer, you can get expense reports broken down by tag and decide how to manage your resources and who can access them.

You can tag the following resources:

  • Database instances
  • Database clusters
  • Read replicas
  • Database snapshots
  • Database cluster snapshots
  • Reserved database instances
  • Event subscriptions
  • And more

To tag your instance, use the Tags tab in the AWS RDS console and add tags to categorize your resources and begin tracking their consumption.
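Once resources are tagged, the payoff is being able to roll spend up by tag. A minimal sketch of that aggregation, using hypothetical per-resource cost data rather than a real billing export:

```python
from collections import defaultdict

# Hypothetical monthly costs per RDS resource, with cost-allocation tags.
resources = [
    {"id": "db-prod-1",    "cost": 412.50, "tags": {"environment": "production"}},
    {"id": "db-prod-2",    "cost": 398.10, "tags": {"environment": "production"}},
    {"id": "db-staging-1", "cost": 120.00, "tags": {"environment": "staging"}},
    {"id": "db-orphan",    "cost": 95.00,  "tags": {}},  # untagged: worth a look
]

def cost_by_tag(resources, key):
    """Sum costs grouped by the value of one tag key."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(key, "untagged")] += r["cost"]
    return dict(totals)

print(cost_by_tag(resources, "environment"))
```

The “untagged” bucket is often the most interesting line: it surfaces the resources nobody is accounting for.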

Setting up AWS billing alarms is another effective way to save on RDS costs. Enable billing alerts in the Billing and Cost Management console, then use CloudWatch to monitor your estimated charges using billing metric data.

To create the billing alarm, sign in to your AWS account and go to the CloudWatch console (billing metrics are only available in the US East (N. Virginia) region). Select Alarms, then All alarms, and Create alarm. Click Select metric, and in Browse choose Billing, then Total Estimated Charge.

From there, set your threshold and integrate the alarm with Amazon Simple Notification Service (SNS) to create the alert. CloudWatch monitors your usage and sends email notifications when the alarm is triggered.
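If you prefer to manage the alarm as code instead of clicking through the console, the same alarm can be declared in CloudFormation. This is a sketch: the $100 threshold and the `BillingAlertTopic` SNS topic are placeholders for your own values.

```yaml
# CloudWatch billing alarm (billing metrics live in us-east-1).
BillingAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: rds-monthly-spend
    Namespace: AWS/Billing
    MetricName: EstimatedCharges
    Dimensions:
      - Name: Currency
        Value: USD
    Statistic: Maximum
    Period: 21600              # evaluate every six hours
    EvaluationPeriods: 1
    Threshold: 100             # alert when estimated charges exceed $100
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref BillingAlertTopic # an AWS::SNS::Topic defined elsewhere
```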


Optimize your non-production environments

We already mentioned the importance of monitoring your non-production environments. For example, you will not need Provisioned IOPS Storage in development and staging, so make sure you’re not paying for it until you really need it.

To do this, you can use automation tools, such as AWS Systems Manager, to find and enforce constraints on RDS instances in non-production stacks. You can find these via an enforced tagging scheme: simply add an environment tag to all your RDS databases (and flag or terminate any instances that lack one).

On non-prod stacks, you can also issue violations or automate corrections to instances using extra large instance sizes, provisioned IOPS, or other costly services outside production.

Use the correct instance sizes

Size definitely matters when it comes to RDS instances. If you’re looking to bring down costs, you need to right-size your instance so you’re not paying for resources you don’t need or experiencing performance issues.

Here are the steps you should take to use the correct RDS instance size:

  • Right-size your instance: There are five types of instance series in RDS: T and M (general-purpose instances) and R, X, and Z (memory-optimized instances). To choose the suitable instance series, you need to know your database memory use, CPU, EBS Bandwidth, and the network performance supported by your instance type. You can get this information by monitoring your database using CloudWatch metrics. With this information, you can choose the right instance family.

  • Determine the instance's performance: Instances with low CPU utilization (less than 40 percent over four weeks) can be downsized to a smaller instance class. Monitoring CPU consumption and read IOPS with CloudWatch can reveal whether your instance is memory-bound rather than CPU-bound. In that case, you can optimize costs by switching from an M-series to an R-series instance, getting the same memory with half the vCPUs. Since CPU is more expensive than memory, this can save you tons of money.

  • Turn off idle instances: You can save money by turning off your RDS instances when they are not in use. For that, monitor when your instances are actually being used and turn off any that are no longer needed. Keep in mind that you will still pay for storage and snapshots, and that RDS automatically restarts a stopped instance after seven days.
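The downsizing rule of thumb above can be sketched as a simple check over CloudWatch-style utilization samples. The data and the 40 percent threshold here are illustrative:

```python
# Decide whether an instance is a downsizing candidate based on
# four weeks of (hypothetical) CloudWatch CPU utilization samples.
def downsize_candidate(cpu_samples, threshold=40.0):
    """True if CPU utilization never reached `threshold` percent."""
    return max(cpu_samples) < threshold

# Sampled daily CPU peaks over the window, in percent (abridged):
cpu_peaks = [12.5, 18.0, 22.3, 9.8, 31.0, 15.2, 27.9]
print(downsize_candidate(cpu_peaks))  # peaks never reach 40%
```

Using the peak rather than the average is deliberately conservative: an instance whose average is low but whose peaks saturate the CPU is not a safe candidate.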

Speaking of snapshots, another way to save on RDS costs is to remove manual snapshots. You can back up your database using manual snapshots, which are retained even after you delete a database instance. They can take up a lot of storage and lead to charges at Amazon’s backup storage rate. Make sure you periodically review these manual snapshots and delete them when no longer needed.
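A periodic review like that is easy to script. This sketch works over hypothetical snapshot records; in practice you would list snapshots via the RDS API and delete the stale ones only after human review:

```python
from datetime import date, timedelta

def stale_snapshots(snapshots, today, keep_days=90):
    """IDs of manual snapshots older than the retention window."""
    cutoff = today - timedelta(days=keep_days)
    return [s["id"] for s in snapshots if s["created"] < cutoff]

# Hypothetical manual snapshots:
snapshots = [
    {"id": "pre-migration-2023", "created": date(2023, 1, 15)},
    {"id": "weekly-fresh",       "created": date(2024, 5, 1)},
]
print(stale_snapshots(snapshots, today=date(2024, 5, 10)))
```

A 90-day window is just the assumption baked into this sketch; your retention policy should come from your organization’s backup requirements, not from a default.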

Purchase reserved instances

A fuss-free way to save money on RDS usage is to pay for reserved instances upfront. These are one- or three-year contracts that can shave off costs (starting at around 42 percent for one year) compared to the on-demand option, especially if you opt for the longer contract and pay upfront.
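The arithmetic behind that discount is straightforward. A sketch using the 42 percent figure above and a hypothetical hourly rate:

```python
# Compare yearly on-demand vs. reserved cost. The 42% discount matches
# the one-year example above; the hourly rate is an illustrative assumption.
HOURS_PER_YEAR = 8760

def yearly_cost(hourly_rate, discount=0.0):
    return round(hourly_rate * HOURS_PER_YEAR * (1 - discount), 2)

on_demand = yearly_cost(0.178)
reserved = yearly_cost(0.178, discount=0.42)
print(on_demand, reserved, round(on_demand - reserved, 2))
```

The flip side of the commitment: the discount only pays off for instances that actually run most of the year, which is why right-sizing should come first.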

Still, unless you take the time to right-size your instances, your bill won’t reflect such savings.

Plan your data transfer routes carefully

As we’ve seen earlier, using AWS RDS has numerous scenarios associated with data transfer costs. But you can bypass some of these fees by planning your data transfer routes and regions very carefully:

  • Use Multi-AZ only when necessary: Multi-AZ deployment provides high availability and automatic failover in case of a database instance failure. However, it also comes at an additional cost. If high availability is not critical for your workload, consider using single deployment to reduce cost.

  • Understand what Multi-AZ actually costs: Multi-AZ creates a secondary database instance in another AZ that synchronously replicates data from the primary. This increases availability in the event of a failure: when the primary goes down, RDS fails over to the secondary. But because you’re effectively running double the hardware, the costs are equally high. In a development environment, disabling the feature is probably a good idea to save money.

  • Remove backups for non-critical RDS: Review the automated backups Amazon RDS creates during your database instance’s backup window. Automated backups and manual database snapshots are stored in Amazon RDS backup storage in each AWS region, and AWS charges for backup storage that exceeds your total provisioned database storage in a region. If you don’t need a backup, removing it can save costs. If you do need backups, remove automatically created snapshots according to your organization’s needs while retaining manual backups for critical RDS instances.

  • Reduce your data transfers: Cut traffic to the Internet as much as possible. Inbound traffic is typically free, but outbound traffic incurs charges. Minimize data flow between AZs and across regions by keeping data within the same AZ or region, where transfer is free. You can also use a cheaper region or AZ, reduce or aggregate data in your application layer rather than sending it raw to the Internet, and compress it before sending it out.

  • Consider using CloudFront for sending data out to the Internet: Why? Its transfer costs are cheaper. Serving your most active assets from AWS edge locations also delivers them faster to end users. Data transfer into CloudFront from the Internet or from AWS is free, while data transfer from CloudFront out to the Internet incurs charges.
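As the bullets above suggest, compressing payloads before they leave your network directly cuts billable egress bytes. A quick sketch with Python’s standard library, using a synthetic repetitive payload of the kind an API might otherwise return raw:

```python
import gzip
import json

# Build a repetitive JSON payload, e.g. a sensor-readings API response.
rows = [{"sensor": "temp-01", "value": 21.5, "unit": "celsius"}
        for _ in range(1000)]
payload = json.dumps(rows).encode()

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

Egress charges scale with bytes sent, so for compressible data the compression ratio translates almost directly into transfer-cost savings; the CPU spent compressing is usually far cheaper than the bandwidth saved.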

Optimize database performance

Optimal database performance can keep your RDS instances (and bill) small. You can achieve this by optimizing indexes and performing routine maintenance (such as vacuuming) to reduce I/O. Read replicas can also help heavy read workloads perform better and serve reads from your database faster, providing improved scalability and availability.

Should you host yourself on EC2?

Some teams might be inclined to host their own databases on EC2 because it gives them complete control and flexibility over their system, including the OS and database. You can install any database engine and version of your choice and have control over the updates, patches, and maintenance windows.

You can also choose whether to run one or multiple instances on the same EC2 instance and the ports used. However, you’ll also have to implement the management support RDS gives you, thereby increasing daily overhead.

Controlling Costs for Large Workloads (Like Time-Series Projects)

If you use Amazon RDS to manage time-series data, Timescale provides a better alternative with the best price performance.

We engineered PostgreSQL to make time-series data calculations simpler and more cost-effective. Want to learn more? Reach out and ask about our pricing estimator.

Plus, by choosing Timescale, you’ll enjoy a simpler, more predictable billing scheme.

Sign up for a free Timescale trial today and experience our PostgreSQL platform's simplicity, efficiency, and outstanding performance—without breaking the bank.
