Building Columnar Compression for Large PostgreSQL Databases
Scaling a Postgres database is a rite of passage for growing applications. As your tables expand to millions or even billions of rows, your once-snappy queries start to lag, and rising infrastructure costs begin to cast a long shadow over your bottom line. You're caught in a conundrum: you don't want to part with your beloved PostgreSQL, but it seems you're going to need a more effective way of dealing with your growing datasets.
In this article, we’ll tell you the story of how we built a flexible, high-performance columnar compression mechanism for PostgreSQL to improve its scalability. By combining columnar storage with specialized compression algorithms, we’re able to achieve impressive compression rates (95%+) unparalleled in any other relational database.
By compressing your dataset, you can grow your PostgreSQL databases further. As we’ll see across this article, this highly effective compression design allows you to reduce the size of your large PostgreSQL tables up to 10-20x. You can store much more data on smaller disks (a.k.a. saving money) while improving query performance. Timescale compression is also fully mutable, making database management and operations easy: you can add, alter, and drop columns in compressed tables, and you can INSERT, UPDATE, and DELETE data directly.
Welcome to a more scalable PostgreSQL!
Why PostgreSQL Needs Database Compression
But before getting into the details of how we built compression, let's spend a couple of minutes answering this question: why is it even necessary to add a new database compression mechanism to PostgreSQL?
Let’s first understand the needs of modern applications and a bit of software history.
We love Postgres: we believe it’s the best foundation for building applications, since its combination of reliability, flexibility, and rich ecosystem is very hard for any other database to match. But Postgres was born decades ago, and this robustness doesn’t come without downsides.
Today, developers are using PostgreSQL for much more than the traditional OLTP (OnLine Transaction Processing) use case it’s best known for. Many data-intensive, demanding applications—running 24/7 and handling ever-growing volumes of data—are powered by PostgreSQL:
- PostgreSQL databases are being used to ingest vast amounts of sensor data streaming from traffic management systems, utility networks, and public safety monitors.
- Energy companies are using PostgreSQL to store and analyze metrics from smart grids and renewable energy sources.
- In the financial sector, PostgreSQL is at the core of systems tracking market tick data in real time.
- E-commerce platforms are using PostgreSQL to track and analyze events generated by user interactions.
- Postgres is even being used as a vector database to power the new wave of AI applications.
Postgres tables are growing very quickly as a result, and tables getting to billions of rows is the new normal in production.
Unfortunately, PostgreSQL is natively ill-equipped to deal with this volume of data: query performance starts lagging, and database management becomes painful. To address these limitations, we built TimescaleDB, an extension that scales PostgreSQL’s performance for demanding applications via automatic partitioning, continuous aggregation, query planner improvements, and many more features.
Building a highly performant compression mechanism for PostgreSQL was a similarly important unlock. These ever-growing datasets are not only challenging for good performance; their accumulation also leads to ever-larger disks and higher storage bills. PostgreSQL needed a solution.
But what about TOAST?
But what about PostgreSQL’s existing TOAST method? Despite its amazing name 🍞😋, TOAST is not effective in systematically reducing the size of your large PostgreSQL databases.
TOAST is the automatic mechanism that PostgreSQL uses to store and manage large values that do not fit within individual database pages. While TOAST incorporates compression as one of its techniques to achieve this, TOAST’s primary role isn't to optimize storage space across the board.
For example, if you have a 1 TB database made up of small tuples, TOAST won’t help you systematically turn that 1 TB into 80 GB, no matter how much fine-tuning you try. TOAST will automatically compress oversized attributes in a row as they exceed the threshold of 2 KB, but TOAST doesn’t help for small values (tuples), nor does it let you apply more advanced, user-configurable policies, such as compressing all data older than one month in a specific table. TOAST's compression is strictly based on the size of individual column values, not on broader table or dataset characteristics.
TOAST can also introduce significant I/O overhead, especially for large tables with frequently accessed oversized columns. In such cases, PostgreSQL needs to retrieve the out-of-line data from the TOAST table, which is a separate I/O operation from accessing the main table, as PostgreSQL must follow pointers from the main table to the TOAST table to read the complete data. This typically leads to worse performance.
Lastly, TOAST's compression is not designed to provide especially high compression ratios, as it uses one standard algorithm for all data types.
Why Isn’t Compression Native to PostgreSQL? An Introduction to Row vs. Column-Oriented Databases
This quick mention of TOAST also helps us understand PostgreSQL’s limitations in compressing data effectively. As we just saw, TOAST’s compression handles data row-by-row, but this row-oriented architecture scatters the homogeneity that compression algorithms thrive on, imposing a fundamental ceiling on how effective compression can be. This is a fundamental reason why relational databases (like native Postgres) often fall short when it comes to storage optimization.
Let’s break this down. Traditionally, databases fall into one of two categories:
- Row-oriented databases organize data by rows, with each row containing all the data for a particular record. They are optimized for transactional processing where insertions, updates, and deletions of records are frequent, and they’re efficient for OLTP systems where operations involve individual records or small subsets of data (e.g., retrieving all information about a specific customer).
- Column-oriented (a.k.a. “columnar”) databases, on the other hand, organize data by columns. Each column stores all the data for a particular attribute across multiple records. They’re typically optimized for OLAP systems (OnLine Analytical Processing), where queries often involve aggregations and operations across many records.
Let’s illustrate this with an example. Say we have a users table with four columns: user_id, name, logins, and last_login. If this table stores the data for one million users, it will effectively have one million rows and four columns, physically storing each user’s data (i.e., each row) contiguously on disk.
In this row-oriented setup, the entire row for user_id = 500,000 is stored contiguously, making retrieval fast. As a result, shallow-and-wide queries will be faster on a row store (e.g., “fetch all data for user X”):
-- Create table
CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    logins INT DEFAULT 0,
    last_login TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Assume we have inserted 1M user records into the 'users' table

-- Shallow-and-wide query example (faster in row store)
SELECT * FROM users WHERE user_id = 500000;
By contrast, a columnar store will store all of the user_ids together, all of the names together, all the login values together, and so forth, so that each column’s data is stored contiguously on disk. This database architecture favors deep-and-narrow queries, e.g., “calculate the average number of logins for all users”:
-- Deep-and-narrow query example (faster in column store)
SELECT AVG(logins) FROM users;
Columnar stores do particularly well with narrow queries over wide data. In a columnar database, only the logins column data needs to be read to calculate the average, which can be done without having to load the entire dataset for each user from disk.
As you might have guessed by now, storing data in rows vs columns also has an influence on how well data can be compressed. In a columnar database, individual columns of data are typically the same type and are often drawn from a more limited domain or range.
As a consequence, columnar stores typically compress better than row-oriented databases. For example, our logins column from before would all be of an integer type and would probably consist of just a small range of numeric values (and thus have low entropy, which compresses well). Compare that to a row-oriented format, where an entire wide row of data comprises many different data types and ranges.
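To see why column-contiguous layouts compress better, here is a small experiment using Python's zlib as a stand-in for a generic LZ-based compressor. It encodes the same (user_id, logins) data twice, once row-major and once column-major, and compares the compressed sizes; the exact numbers depend on the compressor, so this is illustrative only:

```python
import struct
import zlib

# Simulate 1,000 rows of (user_id, logins) where logins spans a small range.
rows = [(i, i % 5) for i in range(1000)]

# Row-oriented layout: values from different columns interleaved on disk.
row_bytes = b"".join(struct.pack("<ii", uid, logins) for uid, logins in rows)

# Column-oriented layout: each column's values stored contiguously.
col_bytes = (b"".join(struct.pack("<i", uid) for uid, _ in rows)
             + b"".join(struct.pack("<i", logins) for _, logins in rows))

row_compressed = zlib.compress(row_bytes, 9)
col_compressed = zlib.compress(col_bytes, 9)

# The low-entropy logins column compresses far better when stored contiguously.
print(len(row_compressed), len(col_compressed))
```

Both layouts hold exactly the same 8,000 bytes of raw data; only the arrangement differs, yet the column-major layout compresses noticeably smaller because the repetitive logins values sit next to each other.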
But even if they show advantages in OLAP-style queries and compressibility, columnar stores are not without trade-offs:
- Queries retrieving individual rows are much less performant (sometimes even impossible to run).
- Their architecture is not as well suited for traditional ACID transactions.
- It is often not possible to do updates in columnar stores.
- It is easier for row-based stores to take advantage of an index (e.g., B-tree) to quickly find the appropriate records.
- With a row store, it is easier to normalize your dataset, such that you can more efficiently store related datasets in other tables.
Row vs. Column-Oriented Databases: What to Choose?
So, what’s better: row-oriented or columnar?
Traditionally, you would evaluate the trade-offs between both depending on your workload. If you were running a typical OLTP use case, you would probably pick a row-oriented relational database like PostgreSQL; if your use case was clearly OLAP, you might lean towards a columnar store like ClickHouse.
But what if your workload is actually a mix of both?
- Your application queries might generally be shallow-and-wide, with an individual query accessing many columns of data as well as data across many different devices/servers/items. For example, you might be powering a user-facing visualization that requires displaying the last recorded temperature and humidity for all sensors in a specific manufacturing plant. Such a query would need to access multiple columns across all the rows that match the building criteria, potentially spanning thousands or millions of records.
- But some of your queries might also be deep-and-narrow, with an individual query selecting a smaller number of columns for a specific sensor across a longer time period. For example, you might also need to analyze the temperature trend for a specific device over the past month to inspect anomalies. This type of query would focus on a single column (temperature) but would need to retrieve this information from a large number of rows that correspond to each time interval over the target period.
- Your application might also be data-intensive and insert (append) heavy. As we discussed previously, dealing with hundreds of thousands of writes per second is the new normal. Your datasets are probably very granular too, e.g., you might be collecting data every second. Continuing with the previous example, your database would need to serve these heavy writes together with constant reads to power your user-facing visualization in real-time.
- Your data is append-mostly, but not necessarily append-only. You might need to occasionally update old records or possibly record late-arriving or out-of-order data.
This workload is neither OLTP nor OLAP in the traditional sense. Instead, it includes elements of both. So what to do?
Building Columnar Storage on a Row-Oriented Database
To serve a workload like the previous example, a single database would have to include the following:
- The ability to sustain high-insert rates, easily in the hundreds of thousands of writes per second
- Support for inserting late or out-of-order data, as well as modifying existing data
- Enough flexibility to efficiently process both shallow-and-wide and deep-and-narrow queries across a large dataset
- A compression mechanism able to reduce database sizes considerably to improve storage efficiency
This is what we aimed to achieve when adding columnar compression to TimescaleDB (and hence, to PostgreSQL).
Turning PostgreSQL into a hybrid row-columnar store
As we mentioned in a previous section, we built TimescaleDB to expand PostgreSQL with more performance and scalability, making it suitable for demanding workloads like time series data. TimescaleDB is implemented as a PostgreSQL extension: in doing so, it inherits everything that’s great about PostgreSQL, like full SQL, huge query and data model flexibility, battle-tested reliability, an ardent developer and user base, and one of the largest database ecosystems around.
In theory, this means that TimescaleDB is also locked into PostgreSQL’s row-oriented storage format, with its modest compressibility. In reality, there’s nothing that a little engineering cannot solve.
Two observations. First, most large PostgreSQL workloads actually have a time-series-like structure, that is, they are append-heavy (versus update-heavy) with a loosely sequential main key, such as a timestamp or serial event ID. Second, such datasets are regularly queried via scans or rollups, not just point queries. With those observations in hand, we architected a novel columnar storage feature for TimescaleDB (which we’ll cover in detail in the next section) that allows us to achieve unparalleled levels of compressibility.
In fact, this row-to-column transformation doesn’t need to be applied to your entire database. As a Timescale user, you can transform your PostgreSQL tables into hybrid row-columnar stores, selecting exactly which data to compress in columnar form via our simple API and benefiting from both storage architectures as your application requires.
Let’s illustrate how this works practically with an example. Imagine a temperature monitoring system collecting readings every second from multiple devices, storing data such as timestamp, device ID, status code, and temperature.
To access the most recent temperature data efficiently, particularly for operational queries where you might want to analyze the latest readings from different devices, you could keep the most recent data (e.g., the last week) in the traditional uncompressed, row-oriented PostgreSQL structure. This supports high ingest rates and is also great for point queries about recent data:
-- Find the most recent data from a specific device
SELECT * FROM temperature_data
WHERE device_id = 'A'
ORDER BY timestamp DESC
LIMIT 1;

-- Find all devices in the past hour that are above a temperature threshold
SELECT device_id, MAX(temperature)
FROM temperature_data
WHERE timestamp > NOW() - INTERVAL '1 hour'
  AND temperature > 40.0
GROUP BY device_id;
But once this data is a few days old, shallow-and-wide queries like the previous ones are not frequently run anymore: instead, deep-and-narrow analytical queries become more common. So to improve storage efficiency and query performance for this type of query, you could choose to automatically transform all data older than one week into a highly compressed columnar format. To do so in Timescale, you would define a compression policy such as this:
-- Add a compression policy to compress temperature data older than 1 week
SELECT add_compression_policy('temperature_data', INTERVAL '7 days');
Once your data is compressed, running deep-and-narrow analytical queries on the temperature data (whether on a specific device or across many devices) would show optimal query performance.
-- Find daily max temperature for a specific device across the past year
SELECT time_bucket('1 day', timestamp) AS day, MAX(temperature)
FROM temperature_data
WHERE timestamp > NOW() - INTERVAL '1 year'
  AND device_id = 'A'
GROUP BY day
ORDER BY day;

-- Find monthly average temperatures across all devices
SELECT device_id, time_bucket('1 month', timestamp) AS month, AVG(temperature)
FROM temperature_data
WHERE timestamp < NOW() - INTERVAL '2 weeks'
GROUP BY device_id, month
ORDER BY month;
How do we represent the “shift” from a row to a column format? Timescale’s hypertables serve to partition data into “chunks” based on a partitioning key, such as a timestamp or other serial ID column. Each chunk then stores the records corresponding to a certain range of timestamps or other values for that partitioning key. In the above example, the temperature data would be partitioned by week so that the latest chunk remains in row format, and all older weeks are converted to columnar format.
This hybrid row-columnar storage engine is an incredibly powerful tool to optimize query performance in large PostgreSQL databases while dramatically reducing the storage footprint. As we’ll see later in this article, by transforming data to a columnar format and applying specialized compression mechanisms, we’re not only able to speed up your analytical queries, but we also achieve up to 98 % compression rates. Imagine what this does to your storage bill!
Behind the scenes: From row data to compressed columnar arrays
Before getting into the details about query performance and storage savings, let’s first cover how this mechanism works under the hood: how the transformation from row to columns is actually performed and how compression is applied to the columnar data.
When the compression policy kicks in, it essentially transforms what were traditionally numerous individual records in the original PostgreSQL hypertable—imagine 1,000 densely packed rows—into a singular, more compact row structure. Within this compacted form, each attribute or column no longer stores singular entries from each row. Instead, it encapsulates a continuous, ordered sequence of all the corresponding values from these 1,000 rows. Let’s refer to these 1,000 rows as a batch.
To illustrate it, let’s imagine a table like this:
| Timestamp | Device ID | Status Code | Temperature |
|-----------|-----------|-------------|-------------|
| 12:00:01  | A         | 0           | 70.11       |
| 12:00:01  | B         | 0           | 69.70       |
| 12:00:02  | A         | 0           | 70.12       |
| 12:00:02  | B         | 0           | 69.69       |
| 12:00:03  | A         | 0           | 70.14       |
| 12:00:03  | B         | 4           | 69.70       |
To prepare this data for compression, Timescale would first transform this tabular data into a columnar store. Given a batch of data (~1,000 rows), each column's data is aggregated into an array, with each array element corresponding to the value from one of the original rows. The process results in a single row, with each column storing an array of values from that batch.
| Timestamp                   | Device ID          | Status Code        | Temperature                   |
|-----------------------------|--------------------|--------------------|-------------------------------|
| [12:00:01, 12:00:01, 12...] | [A, B, A, B, A, B] | [0, 0, 0, 0, 0, 4] | [70.11, 69.70, 70.12, 69....] |
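The batching step just described can be sketched in a few lines of Python. This is an illustrative model, not Timescale's actual C implementation, and `rows_to_columnar_batch` is a hypothetical helper name:

```python
def rows_to_columnar_batch(rows, columns):
    """Collapse a batch of row tuples into one 'row' of per-column arrays."""
    batch = {col: [] for col in columns}
    for row in rows:
        for col, value in zip(columns, row):
            batch[col].append(value)
    return batch

columns = ["timestamp", "device_id", "status_code", "temperature"]
rows = [
    ("12:00:01", "A", 0, 70.11),
    ("12:00:01", "B", 0, 69.70),
    ("12:00:02", "A", 0, 70.12),
]

batch = rows_to_columnar_batch(rows, columns)
print(batch["device_id"])   # each column is now a contiguous array
```

Each column's array can then be compressed independently with an algorithm suited to its type, as described below.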
Even before applying compression algorithms, this format immediately saves storage by greatly reducing PostgreSQL’s internal per-row overhead. PostgreSQL typically adds ~27 bytes of overhead per row (e.g., for Multi-Version Concurrency Control, or MVCC). So even without any compression, if each row in our schema above is, say, 32 bytes, then the 1,000 rows of data from a batch that previously took [1,000 * (32 + 27)] ~= 59 kilobytes now take [1,000 * 32 + 27] ~= 32 kilobytes in this format.
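A quick sanity check of that arithmetic, with the 32-byte payload and ~27-byte per-row overhead taken as assumed figures from the example above:

```python
ROWS_PER_BATCH = 1000
ROW_PAYLOAD_BYTES = 32   # assumed tuple payload size from the example
PER_ROW_OVERHEAD = 27    # approximate PostgreSQL per-tuple header (MVCC, etc.)

# Row store: every row pays the per-tuple overhead.
row_store_bytes = ROWS_PER_BATCH * (ROW_PAYLOAD_BYTES + PER_ROW_OVERHEAD)

# Columnar batch: 1,000 rows collapse into one row, paying the overhead once.
columnar_bytes = ROWS_PER_BATCH * ROW_PAYLOAD_BYTES + PER_ROW_OVERHEAD

print(row_store_bytes, columnar_bytes)   # 59000 32027
```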
[Aside: This notion of “grouping” a larger table into smaller batches and then storing the columns of each batch contiguously (rather than those of the whole table) is actually a similar approach to the “row groups” in Apache Parquet file format. Although we only realized that similarity after the fact!]
But the big advantage of this transformation is that now, given a format where similar data (timestamps, device IDs, temperature readings, etc.) is stored contiguously, we can apply type-specific compression algorithms so that each array is compressed separately. This is how Timescale achieves impressive compression rates.
Timescale automatically employs the following compression algorithms. All of these algorithms are “lossless,” so we don’t throw away precision or introduce inaccuracies through our compression; any resulting decompression perfectly reconstructs the original values.
- Gorilla compression for floats
- Delta-of-delta + Simple-8b with run-length encoding compression for timestamps and other integer-like types
- Whole-row dictionary compression for columns with a few repeating values (+ LZ compression on top)
- LZ-based array compression for all other types
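As a rough illustration of why delta-of-delta works so well on timestamps, here is a minimal Python sketch. It omits the Simple-8b bit packing and run-length encoding that Timescale layers on top; the function names are hypothetical:

```python
def delta_of_delta_encode(values):
    """Encode as [first value, first delta, then deltas of deltas]."""
    if len(values) < 2:
        return list(values)
    deltas = [b - a for a, b in zip(values, values[1:])]
    dods = [deltas[0]] + [b - a for a, b in zip(deltas, deltas[1:])]
    return [values[0]] + dods

def delta_of_delta_decode(encoded):
    """Reverse the encoding, perfectly reconstructing the original values."""
    if len(encoded) < 2:
        return list(encoded)
    values = [encoded[0]]
    delta = 0
    for dod in encoded[1:]:
        delta += dod
        values.append(values[-1] + delta)
    return values

# Regularly spaced timestamps collapse to mostly zeros, which pack tightly.
ts = [1000, 2000, 3000, 4000, 5001, 6001]
encoded = delta_of_delta_encode(ts)
print(encoded)   # [1000, 1000, 0, 0, 1, -1]
assert delta_of_delta_decode(encoded) == ts
```

Because timestamps collected at a fixed interval produce long runs of zeros after this transform, a bit-packing scheme like Simple-8b can then store them in a handful of bits each, and the encoding is fully lossless.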
We extended Gorilla and Simple-8b to handle decompressing data in reverse order, allowing us to speed up queries that use backward scans.
We have found this type-specific compression quite powerful: in addition to higher compressibility, some of the techniques like Gorilla and delta-of-delta can be up to 40x faster than LZ-based compression during decoding, leading to much-improved query performance.
📚 Interested in a technical deep dive on how these compression algorithms actually work? See our previous blog post on Time-Series Compression Algorithms, Explained.
When decompressing data, Timescale can operate on these individual compressed batches, decompressing them batch by batch and only on the requested columns. So if the query engine can determine that only 20 batches (corresponding to 20,000 original rows of data) need to be processed from a table chunk that originally included one million rows of data, then the query can execute much faster, as it's reading and decompressing a lot less data. Let’s see how it does that.
Efficiently querying compressed data
The previous array-based format presents a challenge: namely, which rows should the database fetch and decompress in order to resolve a query?
Let's take our temperature data example again. Several natural types of queries emerge again and again: selecting and ordering data by time ranges or selecting data based on its device ID (either in the WHERE clause or via a GROUP BY). How might we efficiently support such queries?
Now, if we need data from the last day, the query has to navigate through timestamp data, which is now part of a compressed array. So should the database decompress entire chunks (or even the whole hypertable) to locate the data for the recent day?
And even if we could identify the individual “batches” that are grouped into a compressed array (described above), data from different devices may be interspersed, so we would need to decompress the entire array to find whether it includes data about a specific device. While this simpler approach could still yield good compressibility, it would not be nearly as efficient from a query performance standpoint.
To solve the challenge of efficiently locating and decompressing data for specific queries in its columnar format, Timescale introduces the concept of "segment by" and “order by” columns.
Grouping commonly queried data by segmentby columns
Recall that data in Timescale is initially converted to compressed columnar form on a chunk-by-chunk basis. To enhance the efficiency of queries that filter based on a specific column (e.g., frequently querying by device_id), you have the option to define this particular column as a “compress_segmentby” column. This approach is highly beneficial for organizing the compressed data.
segmentby columns are used to logically partition the data within each compressed chunk. Rather than building a compressed array of arbitrary values as shown above, the compression engine first groups together all values that share the same segmentby key. So, 1,000 rows of data about device_id A are densely packed before being stored in a single compressed row, then 1,000 rows about device_id B, and so forth. In other words, if device_id is chosen as the segmentby column, each compressed row includes compressed columnar batches of data about a specific device ID, which itself is stored uncompressed in that row. Timescale additionally builds an index on these segmentby values within the compressed chunk.
| Device ID | Timestamp                      | Status Code | Temperature           |
|-----------|--------------------------------|-------------|-----------------------|
| A         | [12:00:01, 12:00:02, 12:00:03] | [0, 0, 0]   | [70.11, 70.12, 70.14] |
| B         | [12:00:01, 12:00:02, 12:00:03] | [0, 0, 4]   | [69.70, 69.69, 69.70] |
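The grouping that produces this layout can be modeled in a few lines of Python (illustrative only; `group_by_segmentby` is a hypothetical helper, not Timescale's internal API):

```python
from collections import defaultdict

def group_by_segmentby(rows, columns, segmentby):
    """Group rows by the segmentby column, then build per-column arrays
    so each compressed row holds data for exactly one segment value."""
    seg_idx = columns.index(segmentby)
    groups = defaultdict(list)
    for row in rows:
        groups[row[seg_idx]].append(row)
    compressed_rows = {}
    for seg_value, seg_rows in groups.items():
        compressed_rows[seg_value] = {
            col: [row[i] for row in seg_rows]
            for i, col in enumerate(columns) if i != seg_idx
        }
    return compressed_rows

columns = ["timestamp", "device_id", "status_code", "temperature"]
rows = [
    ("12:00:01", "A", 0, 70.11), ("12:00:01", "B", 0, 69.70),
    ("12:00:02", "A", 0, 70.12), ("12:00:02", "B", 0, 69.69),
]
batches = group_by_segmentby(rows, columns, "device_id")
print(batches["A"]["temperature"])   # [70.11, 70.12]
```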
This contiguous storage of data greatly enhances the efficiency of queries filtered by the segmentby column. When running a query filtered by device_id, where device_id is the segmentby column, Timescale can quickly select (via an index) all the compressed rows in the chunk that have the specified device ID(s), and it can quickly skip over (and avoid decompressing) data unrelated to the requested devices.
For example, in this query, Timescale will efficiently locate and process only those compressed rows containing data for device_id A:

SELECT AVG(temperature)
FROM temperature_data
WHERE device_id = 'A'
  AND timestamp >= '2023-01-01'
  AND timestamp < '2023-02-01';
Additionally, Timescale hypertables store metadata associated with each chunk specifying the range of values the chunk covers. So if a hypertable is timestamp partitioned by week, then when the query planner runs the above query, it knows to only process those 4-5 chunks covering the month of January, further improving query performance.
You can specify which columns to use for segmentby when first enabling a hypertable’s compression. The choice of which column to use should be based on which column or columns are often used in your queries. In fact, you can use multiple columns to segment by: for example, rather than grouping together batches by device_id, you can (say) group those batches that have both the same tenant_id and device_id together.
Still, be careful not to overdo the selectivity: defining too many segmentby columns will diminish the efficiency of compression since each additional segmentby column effectively splits the data into smaller and smaller batches.
If you can no longer create 1,000 record batches of data but instead only have five records that have the specified segmentby keys within a specific chunk, it’s not going to compress well at all!
But once you’ve identified which columns you want to segment by, they are straightforward to configure when enabling compression in your hypertable:
ALTER TABLE temperature_data SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
Advanced fine-tuning via compress_orderby
TimescaleDB enhances query performance on compressed data through strategic data ordering within each chunk, dictated by the compress_orderby parameter. While the default setting of ordering by the timestamp (the typical partitioning key in time-series data) is suitable for most scenarios, understanding this optimization can be valuable. Read on for an even deeper technical perspective.
Consider again the example of weekly chunks and a query that only requests data about a single day. In a regular table with a timestamp index, the query could efficiently walk this index to find the day’s data.
However, the situation is different with compressed data: timestamps are compressed and can't be accessed without decompressing entire batches. Creating an index on each individual timestamp would be counterproductive, as it could negate the benefits of compression by becoming excessively large.
Timescale addresses this by basically “sorting” the data to be batched according to its timestamp. It then records metadata about the minimum and maximum timestamps for each batch. When a query is executed, this metadata enables the query engine to quickly identify which compressed rows (batches) are relevant to the query's time range, thus reducing the need for full decompression.
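This metadata-based pruning can be modeled simply (illustrative only; `prune_batches` and the dictionary layout are hypothetical, not Timescale's internal representation):

```python
def prune_batches(batches, query_start, query_end):
    """Return only batches whose [min_ts, max_ts] range overlaps the query."""
    return [b for b in batches
            if b["max_ts"] >= query_start and b["min_ts"] <= query_end]

# Each compressed batch records the min/max timestamp it covers.
batches = [
    {"id": 1, "min_ts": 0,   "max_ts": 99},
    {"id": 2, "min_ts": 100, "max_ts": 199},
    {"id": 3, "min_ts": 200, "max_ts": 299},
]

# A query over [150, 250] only needs to decompress batches 2 and 3.
hits = prune_batches(batches, 150, 250)
print([b["id"] for b in hits])   # [2, 3]
```

Because the min/max metadata is tiny compared to the batches themselves, this check is cheap, and every batch it excludes is a batch the engine never has to read or decompress.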
This methodology plays nicely with the use of segmentby columns. During the compression process, data is first grouped by the segmentby column, then ordered based on the orderby parameter, and finally divided into smaller, timestamp-ordered “mini-batches,” each containing up to 1,000 rows.
The combination of TimescaleDB’s segmentation and ordering significantly enhances the performance of common time-series and analytical queries. This optimization across both time (via orderby) and space (via segmentby) ensures that TimescaleDB effectively manages and queries large-scale time-series data, offering an optimized balance between compression and accessibility.
The Evolution of Timescale's Compression
A first version of our compression design was released in 2019 with TimescaleDB 1.5. Many releases later, Timescale compression has come a long way.
One of the main limitations of our initial release was that once data was compressed, we did not allow any further modifications to it (e.g., INSERTs, UPDATEs, DELETEs) without first manually decompressing the whole hypertable chunk in which it resided.
Given that we were optimizing for data-intensive use cases based on analytical and time-series data, which are primarily insert-heavy and not update-heavy, this was much less of a limitation than it would have been in a traditional OLTP use case where data is frequently updated (e.g., a customer information table). However, as we discuss in this article, there are situations in which backfilling is necessary, and this significantly complicated developer pipelines that used TimescaleDB.
Another limitation of our initial compression release was that we didn’t allow schema modifications on tables containing compressed data. This meant that developers couldn’t evolve their data structure without decompressing the entire table, such as adding new columns to fit new metrics or new devices.
Today, all these limitations are removed. Timescale now allows you to perform full Data Manipulation Language (DML) and Data Definition Language (DDL) operations over compressed data:
- You can INSERT data over compressed chunks (with great performance).
- You can do UPDATEs, UPSERTs, and DELETEs.
- You can add columns, including with default values.
- You can rename and drop columns.
To automate data modification over compressed data (making it seamless for our users), we changed our compression approach by introducing a “staging area”—essentially, an overlapping chunk that remains uncompressed and in which we perform the operations over uncompressed data under the hood.
As a user, you don’t have to do anything manually: you can modify your data directly while our engine takes care of everything automatically under the covers. The ability to make changes to compressed data makes Timescale’s hybrid row-columnar storage engine much more flexible.
This design via the staging area makes INSERTs as fast as inserting into uncompressed chunks since this is really what is happening (when you insert into a compressed chunk, you’re now writing to the staging area). It also allowed us to support UPDATEs, UPSERTs, and DELETEs directly: when a value needs to be altered, the engine moves a relevant portion of compressed data to the staging area, decompresses it, does the alteration, and (asynchronously) moves it again to the main table in its compressed form.
(This region of data typically operates on the scale of the compressed “mini-batches” of up to 1,000 values that comprise a “row” in the underlying PostgreSQL storage to minimize the amount of data that needs to be uncompressed in order to support modifications.)
This “staging area” still has regular transactional semantics, and your queries see these values as soon as they are inserted into it. In other words, the query planner is smart enough to understand how to properly query across these row-based “staging” chunks and the regular columnar storage.
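A toy model of this design can make the flow concrete. This is purely illustrative (a Python sketch, not Timescale's actual storage engine), showing inserts landing in an uncompressed staging buffer, a flush step moving full batches into compressed form, and scans reading both:

```python
class HybridChunk:
    """Toy model: new rows land in an uncompressed staging area;
    a background flush moves them into compressed columnar batches."""
    BATCH_SIZE = 1000

    def __init__(self):
        self.staging = []             # uncompressed rows
        self.compressed_batches = []  # lists standing in for compressed arrays

    def insert(self, row):
        # Inserts are as fast as row-store inserts: they just hit staging.
        self.staging.append(row)

    def flush(self):
        # Asynchronously compress full batches out of the staging area.
        while len(self.staging) >= self.BATCH_SIZE:
            batch = self.staging[:self.BATCH_SIZE]
            self.staging = self.staging[self.BATCH_SIZE:]
            self.compressed_batches.append(batch)  # real engine compresses here

    def scan(self):
        # Queries see staging rows immediately, plus all compressed batches.
        for batch in self.compressed_batches:
            yield from batch
        yield from self.staging

chunk = HybridChunk()
for i in range(1500):
    chunk.insert(i)
chunk.flush()
print(len(chunk.compressed_batches), len(chunk.staging))   # 1 500
```

In the real engine, the staging chunk participates in normal transactional semantics, which is why freshly inserted rows are visible to queries before they are ever compressed.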
The End Result: Faster Queries, Less Storage Footprint For Large PostgreSQL Databases
At this point, the next logical question to ask is: what’s the end result? How does compression impact query performance, and how much disk size can I save by using it?
Query performance before vs. after compression
As we’ve been discussing in this article, columnar stores generally don’t do very well for queries retrieving individual rows, but they tend to do much better for analytical queries looking at aggregated values. This is precisely what we see in Timescale: deep-and-narrow queries involving averages see significant performance improvements when using compression.
Let’s illustrate this by running a couple of queries over the NYC taxi dataset, one of the example datasets we provide in Timescale. This dataset contains information on taxi trips, including pick-up and drop-off times, locations, distances, fares, and more.
Consider the following query, asking for the highest fare amount from a subset of the taxi dataset within a specific timeframe:
```sql
SELECT max(fare_amount)
FROM demo.yellow_compressed_ht
WHERE tpep_pickup_datetime >= '2019-09-01'
  AND tpep_pickup_datetime <= '2019-12-01';
```
When run against the uncompressed dataset, the query takes 4.7 seconds. We’re using a small, unoptimized testing service and querying many millions of rows, so this is far from peak performance. But after compressing the data, the response time drops to 77.074 milliseconds:
Let’s share another example. This query counts the number of trips with a specific rate code within a given timeframe:
```sql
SELECT COUNT(*)
FROM demo.yellow_compressed_ht
WHERE tpep_pickup_datetime >= '2019-09-01'
  AND tpep_pickup_datetime <= '2019-12-01'
  AND "RatecodeID" = 99;
```
When executed against the uncompressed data, this query takes 1.6 seconds to complete. The same query against compressed data finishes in just 18.953 milliseconds. Once again, we see an immediate improvement! These are just quick examples, but they illustrate how much compression can speed up your queries.
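To reproduce numbers like these, compression first has to be enabled on the hypertable. Here is a hedged sketch using TimescaleDB’s documented options; the `segmentby` and `orderby` choices below are illustrative assumptions for this dataset, not the exact settings used in our benchmark:

```sql
-- Enable columnar compression on the hypertable.
-- segmentby/orderby choices here are assumptions for illustration.
ALTER TABLE demo.yellow_compressed_ht SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = '"RatecodeID"',
  timescaledb.compress_orderby   = 'tpep_pickup_datetime DESC'
);

-- Automatically compress chunks once their data is older than one week.
SELECT add_compression_policy('demo.yellow_compressed_ht', INTERVAL '7 days');
```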
How Timescale’s compression reduces PostgreSQL storage size: Real-world examples
Let’s not forget what brought us here in the first place: we needed a way to reduce the size of our large PostgreSQL databases so we could scale PostgreSQL further. To show how effective Timescale compression can be for this task, the table below includes some actual examples of compression rates seen among Timescale customers.
These storage savings translate directly into saving money: the Timescale platform uses usage-based pricing for storage, so if your storage footprint shrinks, your bill shrinks proportionally as well.
The compression rate you will ultimately achieve depends on several factors, including your data type and access patterns. But as you can see, Timescale compression can be extremely efficient—we even heavily use it internally to power our customer-facing Insights product with 96 % compression.
Our team can help you fine-tune compression to save you as much money as possible, so don’t hesitate to reach out.
“With compression, we’ve seen on average a 97 percent reduction [in disk size].” (Michael Gagliardo, Ndustrial)
“We have found Timescale’s compression ratio to be absolutely phenomenal! We’re currently at a compression ratio of over 26, drastically reducing the disk space required to store all our data.” (Nicolas Quintin, Octave)
“Timescale’s compression was as good as advertised, which gave us +90 % [disk] space savings in our underlying hypertable.” (Paolo Bergantino, METER Group)
Compression and Tiered Storage: Timescale storage lifecycle
Lastly, we couldn’t wrap up this article without a mention of Timescale’s Tiered Storage, which we just launched in General Availability.
On top of compression, you now have another tool to help you scale your PostgreSQL databases even further in the Timescale platform: you can tier your older, infrequently accessed data to a low-cost object storage tier while still being able to access it via standard SQL.
This low-cost storage tier has a flat price of $0.021 per GB/month for data—cheaper than Amazon S3—allowing you to keep many TBs in your PostgreSQL database for a fraction of the cost.
This is how our Tiered Storage backend works on the Timescale platform and how the low-storage tier plays along with compression:
- Your most recent data is written into a high-performance storage tier optimized for fast queries and high ingests. In this tier, you can enable Timescale columnar compression to shrink your database size and speed up your analytical queries, as we’ve been discussing in this article. For example, you can define a compression policy that compresses your data after 1 week.
- Once your application no longer frequently accesses that data, you can automatically tier it to a lower-cost object storage tier by setting up a tiering policy. The data in the low-cost storage tier remains fully queryable within your database, and there’s no limit to the amount of data you can store—up to hundreds of TBs or more. For example, you can define a tiering policy that moves all your data older than six months to the low-cost storage tier.
- Once you don’t have to keep this data in your database anymore, you can drop it via a retention policy. For example, you can delete all data after five years.
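The three lifecycle steps above can be sketched as three policies on a hypothetical hypertable `metrics`. Function names follow the Timescale platform’s documented policy API, and the intervals are the examples from the text; check the current docs before relying on them:

```sql
-- 1. Compress data older than one week in the high-performance tier.
SELECT add_compression_policy('metrics', INTERVAL '7 days');

-- 2. Move data older than six months to the low-cost object storage tier,
--    where it remains queryable via standard SQL.
SELECT add_tiering_policy('metrics', INTERVAL '6 months');

-- 3. Drop data older than five years entirely.
SELECT add_retention_policy('metrics', INTERVAL '5 years');
```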
Stay With PostgreSQL
We gave Postgres an effective database compression mechanism by adding columnar compression capabilities. This is an essential feature to scale PostgreSQL databases in today’s data-intensive world: compression allows for huge savings in disk usage (storing more data for cheaper) and performance improvements (running analytical queries over large volumes in milliseconds).
Timescale’s compression design achieves impressive compression rates by combining best-in-class compression algorithms with a novel method for creating hybrid row/columnar storage within PostgreSQL. This capability puts Timescale’s (and thus PostgreSQL’s) storage footprint on par with custom-built, more limited columnar databases.
But unlike many columnar engines, Timescale supports ACID transactional semantics and direct modifications (INSERTs, UPDATEs, UPSERTs, DELETEs) on compressed columnar data. Many modern applications run workloads that fit both patterns, which makes the old model of “one database for transactional workloads, another for analytical” obsolete. So why maintain two separate databases when you can do it all in PostgreSQL?
Timescale allows you to start on PostgreSQL, scale with PostgreSQL, stay with PostgreSQL.
Create a free account and try Timescale today—it only takes a couple of seconds, no credit card required.