Built for speed, scale, and savings
Our engineering team has been hard at work solving some of the thorniest problems people encounter when developing data-intensive and time-series applications in PostgreSQL:
Performance slows as your tables grow, but manually partitioning data can be challenging. These long, mammoth-like tables mean slow queries and inserts, and changing the structure of a large table can lock you out of it for long periods of time, unable to query, while smaller tables can be changed incrementally with shorter locks. The result: you need to allocate time and resources to manually partition and maintain large tables, or your performance will suffer.

With Timescale, simply set a range criterion (e.g., one day) and let Timescale create the partitions automatically. The result: worry-free, automatic partitioning with small, responsive hypertables that speed up your queries and work exactly like regular PostgreSQL tables.

"Both of our SELECT and INSERT performance results flatly outperformed InfluxDB... We saw an average of ~40 ms improvement in both types of queries."
From the Dev Q&A with Adam Inoue, Software Engineer at Messari
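As a sketch of what this looks like in practice (the table name and columns here are hypothetical), you create an ordinary PostgreSQL table and convert it into a hypertable with a one-day chunk interval:

```sql
-- Hypothetical sensor-readings table used for illustration
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);

-- Turn it into a hypertable partitioned by time,
-- with one chunk (partition) per day
SELECT create_hypertable('conditions', 'time',
                         chunk_time_interval => INTERVAL '1 day');
```

From here on, you read and write `conditions` exactly as you would any PostgreSQL table; Timescale routes rows to the right chunk for you.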
Many applications show users' data aggregated over time (operational analytics, dashboards, etc.). The way to speed up these queries is to pre-compute the aggregates and materialize the results. But PostgreSQL materialized views are stale almost as soon as they are created, and bringing them up to date is incredibly expensive, requiring a recomputation of the aggregates over your entire dataset. The result: you simply cannot provide users with fast, up-to-date aggregates over long time periods.

With Timescale, create a continuous aggregate and set up a refresh policy (how often you want your materialized view to be refreshed). Continuous aggregates are incrementally and automatically updated materialized views: in the background, they update only the aggregates that have changed. The result: your queries are faster, and you can speed them up even further with continuous aggregates on top of continuous aggregates. Oh, and once you no longer need your raw dataset, you can drop it and enjoy the storage savings!

"We use it not just for the continuous aggregates of count data and other metrics [...] but the bucketing, the things that are so complicated if you push them to application code."
From the Dev Q&A with Shane Steidley, Director of Software at Density
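For example, here is a continuous aggregate over a hypothetical `conditions` hypertable, bucketing raw readings into hourly averages, plus a refresh policy (the table, view name, and intervals are all illustrative choices):

```sql
-- Continuous aggregate: hourly average temperature per device
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Refresh policy: every 30 minutes, re-materialize only the
-- window of data between 3 hours and 1 hour old
SELECT add_continuous_aggregate_policy('conditions_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '30 minutes');
```

Queries then hit the small, pre-computed `conditions_hourly` view instead of scanning the raw data.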
PostgreSQL doesn't offer many ways to compress your data, so your storage is bloating (and your costs soaring!). You can only compress data using block-level compression or TOAST, and neither technique compresses well; both are also slow to query. The result: as you ingest more data, your data volume grows, and your performance degrades over time.

With Timescale, simply enable compression on individual hypertables. You can also create a compression policy to compress data older than a certain age, saving you space and money. Timescale compresses hypertables incrementally, chunk by chunk, using columnar compression with algorithms optimized for each column's type, which makes your data much smaller and faster to query. The result: enable compression and watch as our built-in job scheduler converts your uncompressed rows into compressed columns. (Okay, you don't have to watch it happen; enjoy a cup of ☕!) Your chunk sizes can decrease by more than 90 percent.

"For one of our larger customers, we normally store about 64 GB of uncompressed data per day. With compression, we've seen, on average, a 97 percent reduction."
From the Dev Q&A with Michael Gagliardo, Software Architect at Ndustrial.io
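Sketched against the same hypothetical `conditions` hypertable, enabling compression and adding a policy looks like this (the segment-by column and seven-day threshold are illustrative choices):

```sql
-- Enable columnar compression on the hypertable,
-- grouping compressed batches by device
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);

-- Policy: automatically compress chunks whose data
-- is older than 7 days
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```

Recent, frequently updated chunks stay in row format for fast inserts, while older chunks are converted to the compact columnar format in the background.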
Our resource efficiency means you need less compute and storage for your workloads. We've also engineered more cost savings with usage-based storage (only pay for what you store), a low-cost storage tier for rarely accessed data, and scalable compute.