TimescaleDB vs. InfluxDB: Purpose-built differently for time-series data
An in-depth look into how two leading time-series databases stack up in terms of data model, query language, reliability, performance, ecosystem, operational management, and company/community support.
✨ As of September 2021, "Timescale Cloud" is now Managed Service for TimescaleDB. Updated name; same great, reliable, rock-solid database 🎉
Note: This study was originally published in August 2018, updated in June 2019 and last updated on 3 August 2020.
Time-series data is emerging in more and more applications, including IoT, DevOps, Finance, Retail, Logistics, Oil and Gas, Manufacturing, Automotive, Aerospace, SaaS, even machine learning and AI. In the past, the focus of time-series databases has been narrowly on metrics and monitoring; today it’s become clear that software developers really need a true time-series database designed for a variety of operational workloads.
If you are investing in a time-series database, that likely means you already have a meaningful amount of time-series data piling up quickly and need a place to store and analyze it. You may also recognize that the survival of your business will depend on the database you choose.
How to choose a time-series database
There are several factors to consider when evaluating a time-series database for your workloads:
- Data model
- Query language
- Operational management
- Company and community support
Typically, database comparisons focus on performance benchmarks. Yet performance is just part of the overall picture: it doesn't matter how well a database performs in benchmarks if it lacks the data model, query language, or reliability required for your production workloads. With that in mind, we begin by comparing TimescaleDB and InfluxDB across three qualitative dimensions (data model, query language, and reliability) before diving deeper with performance benchmarks. We then round out the comparison with database ecosystem, operational management, and company/community support.
Yes, we are the developers of TimescaleDB, so you might quickly disregard our comparison as biased. But if you let the analysis speak for itself, you'll find that we stay objective.
Also, this comparison isn’t a purely theoretical activity for us. Our company began as an IoT platform, where we first used InfluxDB to store our sensor data. However, owing to most of the differences listed below, we found InfluxDB unsatisfactory. So, we built TimescaleDB as the first time-series database that satisfied our needs, and then discovered others who needed it as well, which is when we decided to open source the database.
Today, just over 3 years later, the TimescaleDB developer community has come a long way, with tens of millions of downloads and over 500,000 active databases all over the world. This community includes organizations like AppDynamics, Bosch, Cisco, Comcast, Fujitsu, IBM, Schneider Electric, Samsung, Siemens, Uber, Warner Music, and thousands of others.
In the end, our goal is to help you decide which is the best time-series database for your needs.
Data model
Databases are opinionated. The way a database chooses to model and store your data determines what you can do with it.
When it comes to data models, TimescaleDB and InfluxDB have two very different opinions: TimescaleDB is a relational database, while InfluxDB is more of a custom, NoSQL, non-relational database. What this means is that TimescaleDB relies on the relational data model, commonly found in PostgreSQL, MySQL, SQL Server, Oracle, etc. On the other hand, InfluxDB has developed its own custom data model, which, for the purpose of this comparison, we’ll call the tagset data model.
Relational data model
The relational data model has been in use for several decades now. With the relational model in TimescaleDB, each time-series measurement is recorded in its own row, with a time field followed by any number of other fields, which can be floats, ints, strings, booleans, arrays, JSON blobs, geospatial dimensions, date/time/timestamps, currencies, binary data, or even more complex data types. One can create indexes on any one field (standard indexes) or multiple fields (composite indexes), or on expressions like functions, or even limit an index to a subset of rows (partial index). Any of these fields can be used as a foreign key to secondary tables, which can then store additional metadata.
An example is below:
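For instance, a sensor-readings table in this model might look like the following (a hypothetical schema; table and column names are illustrative):

```sql
-- Metadata lives in its own table and can be updated at any time
CREATE TABLE devices (
  id       INTEGER PRIMARY KEY,
  location TEXT,
  firmware TEXT
);

-- Each measurement is a row: a time field followed by readings of varied types
CREATE TABLE sensor_data (
  time        TIMESTAMPTZ      NOT NULL,
  device_id   INTEGER          REFERENCES devices (id),  -- foreign key to metadata
  temperature DOUBLE PRECISION,
  mem_free    BIGINT,
  attributes  JSONB                                      -- schemaless blob for fast iteration
);

-- A composite index for per-device time queries, and a partial index
-- covering only the rows with no free memory
CREATE INDEX ON sensor_data (device_id, time DESC);
CREATE INDEX ON sensor_data (time) WHERE mem_free = 0;
```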
The advantage of this approach is that it is quite flexible. One can choose to have:
- A narrow or wide table, depending on how much data and metadata to record per reading
- Many indexes to speed up queries or few indexes to reduce disk usage
- Denormalized metadata within the measurement row, or normalized metadata that lives in a separate table, either of which can be updated at any time (although it is easier to update in the latter case)
- A rigid schema that validates input types or a schemaless JSON blob to increase iteration speed
- Check constraints that validate inputs, for example checking for uniqueness or non-null values
The disadvantage of this approach is that to get started, one generally needs to choose a schema and explicitly decide whether or not to have indexes.
Note: In the past several years it’s been popular to criticize the relational model by claiming that it is not scalable. However, as we have already shown, this is simply not true: relational databases can indeed scale very well for time-series data.
Tagset data model
With the InfluxDB tagset data model, each measurement has a timestamp, and an associated set of tags (tagset) and set of fields (fieldset). The fieldset represents the actual measurement reading values, while the tagset represents the metadata to describe the measurements.
Field data types are limited to floats, ints, strings, and booleans, and cannot be changed without rewriting the data. Tagset values are indexed, while fieldset values are not. Also, tagset values are always represented as strings, and cannot be updated.
An example is below:
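In InfluxDB's line protocol, such a measurement is written as one line: the measurement name, a comma-separated tagset, a comma-separated fieldset, and a nanosecond timestamp (tag and field names below are illustrative):

```
measurement,tag1=value1,tag2=value2 field1=value1,field2=value2 timestamp
cpu,hostname=host_0,region=us-west usage_user=23.2,usage_system=41.1 1596206400000000000
```

Note that `hostname` and `region` are indexed string tags, while `usage_user` and `usage_system` are unindexed field values.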
The advantage of this approach is that if one’s data naturally fits the tagset model, then it is quite easy to get started, as one doesn’t have to worry about creating schemas or indexes.
Conversely, the disadvantage of this model is that it is quite rigid and limited, with no ability to create additional indexes, indexes on continuous fields (e.g., numerics), update metadata after the fact, enforce data validation, etc.
In particular, even though this model may feel “schemaless”, there is actually an underlying schema that is auto-created from the input data, which may differ from the desired schema.
Data model summary
The tagset data model in InfluxDB is more limiting and thus might be easier to get started with for some. However, the relational model in TimescaleDB is more versatile and offers more functionality, flexibility, and control. This is especially important as your application evolves. When planning your system you should consider both its current and future needs.
Note: It’s also possible to create relational schemas that are equivalent to the tagset model for specific use cases, such as Prometheus metrics. For more on this, see the Timescale-Prometheus GitHub repository.
Query language
Generally in the world of database query languages, there have been two extremes: full SQL support on one end, and completely custom languages (sometimes known as “NoSQL”) on the other.
From the beginning, TimescaleDB has firmly existed at the SQL end of the spectrum, fully embracing the language from day 1, and later further extending it to simplify time-series analysis. This has enabled TimescaleDB to have a minimal learning curve for new users, and allowed it to inherit the entire SQL ecosystem of 3rd party tools, connectors, and visualization options, which is larger than that of any other time-series database.
In contrast, InfluxDB began with a “SQL-like” query language (called InfluxQL), placing it in the middle of the spectrum, and then made a marked move towards the “custom” end with a new query language called Flux. (Read the Flux announcement on Hacker News, and our comparison of SQL vs. Flux.)
At a high-level, here’s how the two language syntaxes compare, using a Flux query that performs math across measurements as an example:
For most use cases, we believe that SQL is the right query language for a time-series database. SQL has a rich tradition and history, including familiarity among millions of developers and a vibrant ecosystem of tutorials, training, and community leaders. In short, choosing SQL means you’re never alone.
While Flux may make some tasks easier, there are significant trade-offs to adopting a custom query language. New query languages introduce significant overhead, reduce readability, force a greater learning curve onto new users, and suffer from a scarcity of compatible tools and community support.
And they may not even be a viable option: rebuilding a system and re-educating a company to write and read a new query language is often not practically possible, particularly if the company is already using SQL-compatible tools on top of the database, such as Tableau for visualization.
This is also why SQL is making a comeback as the query language of choice for data infrastructure in general. Indeed, SQL is well-documented and is the third-most commonly used programming language among developers.
Reliability
Another cardinal rule for a database: it cannot lose or corrupt your data. This is a dimension where there is a stark difference in the approaches TimescaleDB and InfluxDB have taken, which has implications for reliability.
At its start, InfluxDB sought to write an entire database from scratch in Go. In fact, it doubled down on this decision with its 0.9 release, which completely rewrote the backend storage engine. With InfluxDB 2.0 (in beta at the time of publishing), it’s (at least) the second complete rewrite attempted by the InfluxData team.
These design decisions have significant implications that affect reliability. First, InfluxDB has to implement the full suite of fault-tolerance mechanisms, including replication, high availability, and backup/restore. Second, InfluxDB is responsible for its on-disk reliability, e.g., to make sure all its data structures are both durable and resist data corruption across failures (and even failures during the recovery of failures).
We made a dramatically different architectural decision when building TimescaleDB: build on PostgreSQL. TimescaleDB relies on the 25+ years of hard, careful engineering work that the entire PostgreSQL community has done to build a rock-solid database that supports millions of mission-critical applications worldwide.
In fact, this was at the core of our co-founder’s launch post about TimescaleDB: When Boring is Awesome. Stateless microservices may crash and reboot, or trivially scale up and down; indeed, this is the entire “recovery-oriented computing” philosophy, as well as the thinking behind the new “serverless” design pattern. But your database needs to actually persist data, and should not wake you up at 3am because it’s in some broken state.
So let us return to these two aspects of reliability.
First, programs can crash, servers can encounter hardware or power failures, disks can fail or experience corruption. You can mitigate this risk (e.g., robust software engineering practices, uninterrupted power supplies, disk RAID, etc.), but not eliminate it completely; it’s a fact of life for systems. In response, databases have been built with an array of mechanisms to further reduce such risk, including streaming replication to replicas, full-snapshot backup and recovery, streaming backups, robust data export tools, etc.
Given TimescaleDB’s design, it’s able to leverage the full spectrum of tools that the Postgres ecosystem offers and has rigorously tested: streaming replication for high availability and read-only replicas, pg_dump and pg_restore for full database snapshots, pg_basebackup and log shipping / streaming for incremental backups and arbitrary point-in-time recovery, pgBackRest or WAL-E for continuous archiving to cloud storage, and robust COPY FROM and COPY TO tools for quickly importing/exporting data in a variety of formats.
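For example, PostgreSQL can bulk-export and re-import an entire table in a single statement each (table name and path are hypothetical; from a remote client, psql's `\copy` variant writes to the local filesystem instead):

```sql
-- Export everything to CSV, then load it back
COPY sensor_data TO '/tmp/sensor_data.csv' WITH (FORMAT csv, HEADER);
COPY sensor_data FROM '/tmp/sensor_data.csv' WITH (FORMAT csv, HEADER);
```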
InfluxDB, on the other hand, has had to build all of these tools from scratch, and it doesn’t offer many of these capabilities even today. It initially offered replication and high availability in its open-source version, but subsequently pulled this capability out of open source and into its enterprise product. InfluxDB’s backup tools can perform a full snapshot and restore to that point in time, and only recently added some support for a manual form of incremental backups.
That said, InfluxDB’s approach of performing incremental backups based on database time ranges seems quite risky from a correctness perspective, given that timestamped data may arrive out of order: an incremental backup since some time period would not reflect this late data. Its ability to easily and safely export large volumes of data is also quite limited. We’ve heard from many users (including Timescale engineers in their past careers) that they had to write custom scripts to safely export data; asking for more than a few tens of thousands of data points would cause the database to run out of memory and crash.
The pain of trying to export data out of InfluxDB gave rise to Outflux, a tool to migrate data from InfluxDB to TimescaleDB with a single command.
Second, databases need to provide strong on-disk reliability and durability, so that once a database has committed to storing a write, it is safely persisted to disk. In fact, for very large data volumes, the same argument even applies to indexing structures, which could otherwise take hours or days to recover; there’s good reason that file systems have moved from painful fsck recovery to journaling mechanisms.
In TimescaleDB, we made the conscious decision not to change the lowest levels of PostgreSQL storage (even in implementing hybrid row/columnar storage in TimescaleDB native compression), nor interfere with the proper function of its write-ahead log (WAL). The WAL ensures that as soon as a write is accepted, it gets written to an on-disk log to ensure safety and durability, even before the data is written to its final location and all its indexes are safely updated. These data structures are critical for ensuring consistency and atomicity; they prevent data from becoming lost or corrupted, and ensure safe recovery. This is something the database community (and PostgreSQL) has worked hard on: what happens if your database crashes (and will subsequently try to recover) while it’s already in the middle of recovering from another crash?
InfluxDB had to design and implement all recovery, reliability and durability functionality from scratch. This is a notoriously hard problem in databases that typically takes many years or even decades to get correct. Some metrics stores might be okay with occasionally losing data, but we see TimescaleDB being used in settings where this is not acceptable. InfluxDB forums, on the other hand, are rife with such complaints: “DB lost after restart”, “data loss during high ingest rate”, “data lost from InfluxDB databases”, “unresponsive due to corruption after disk disaster”, “data messed up after restoring multiple databases”, and so on.
These challenges and problems are not unique to InfluxDB, and every developer of a reliable, stateful service must grapple with them. Every database goes through a period when it sometimes loses data because it's really, really hard to get all the corner cases right. And eventually, all those corner cases come to haunt some operator. But, PostgreSQL went through this period in the 1990s, while InfluxDB is still figuring these things out today.
Performance
Now, let’s get into some hard numbers with a quantitative comparison of the two databases across a variety of insert and read workloads. Given how common high-cardinality datasets are within time-series, we will first take a look at how InfluxDB and TimescaleDB handle this issue.
For comparing both insert and read latency performance, we used the following setup:
- Version: TimescaleDB version 1.7.1, community edition, with PostgreSQL 12, InfluxDB version 1.8.0 Open Source Edition (the latest non-beta releases for both databases at the time of publishing).
- 1 remote client machine, 1 database server, both in the same cloud datacenter
- Instance size: Both client and database server ran on DigitalOcean virtual machines (droplets) with 32 vCPU and 192GB Memory each.
- OS: Both server and client machines ran Ubuntu 18.04.3
- Disk Size: 4TB of block storage in a raid0 configuration (EXT4 filesystem), plus 800GB of local SSD storage.
- Deployment method: Database servers were deployed using Docker images, using images pulled from the official Docker hubs of Influx Data and Timescale respectively.
For insert performance, we used the following datasets and configuration:
- Dataset: 100-10,000,000 simulated devices generated 10 CPU metrics every 10 seconds for ~100M reading intervals. Intervals used for each configuration are as follows: 31 days for 100 devices, 4 days for 4,000 devices, 3 hours for 100,000 devices and 3 minutes for 1,000,000 and 10,000,000 devices.
The datasets were created using the Time Series Benchmark Suite (TSBS), using the cpu-only use case.
- Batch size: Inserts were made using a batch size of 10,000 for both InfluxDB and TimescaleDB
- Additional database configurations: For TimescaleDB, we set the chunk time depending on the data volume, aiming for 7-16 chunks in total for each configuration (more on chunks here). For InfluxDB, we enabled the TSI (time series index). All other parameters were kept as default.
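Concretely, the chunk time is set when creating the hypertable. For the 4,000-device configuration (4 days of data, ~8 chunks), that call would look something like the following (table name and interval are illustrative):

```sql
-- Turn the plain 'cpu' table into a hypertable partitioned into 12-hour chunks
SELECT create_hypertable('cpu', 'time', chunk_time_interval => INTERVAL '12 hours');
```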
On insert performance as the cardinality of the dataset increases, the results are fairly clear:
For workloads with extremely low cardinality, like the configuration with 100 devices, InfluxDB offers better insert performance than TimescaleDB.
However, as cardinality increases, InfluxDB performance drops dramatically due to its reliance on time-structured merge trees (which, like the log-structured merge trees they are modeled after, suffer with higher-cardinality datasets).
This should be no surprise, as high cardinality is a well-known Achilles’ heel for InfluxDB (source: GitHub, Forums). In comparison, TimescaleDB sustains its performance as cardinality increases, with only a moderate drop-off in absolute insert rate, very quickly surpassing InfluxDB in insert performance for the configurations of 4,000, 100,000, 1 million, and 10 million devices.
That said, it is worth doing an honest analysis of your insert needs. If your insert performance is far below these benchmarks (e.g., if it is 2,000 rows / second), then insert performance will not be your bottleneck, and this comparison becomes moot.
Insert performance summary
- For workloads with extremely low cardinality, the databases are comparable, with InfluxDB slightly outperforming TimescaleDB.
- As cardinality increases, InfluxDB insert performance drops off dramatically faster than TimescaleDB’s.
- For workloads with high cardinality, TimescaleDB has ~3.5x the insert performance as InfluxDB.
- If your insert performance is far below these benchmarks (e.g., if it is 2,000 rows / second), then insert performance will not be your bottleneck.
For benchmarking read latency, we used the following setup for each database (the machine configuration is the same as the one used in the insert comparison):
- Dataset: 100–4,000 simulated devices generated 1–10 CPU metrics every 10 seconds for 4 full days (100M+ reading intervals, 1B+ metrics)
- A batch size of 10,000 was used for inserts on both databases
- For TimescaleDB, we set the chunk time to 12 hours, resulting in 8 total chunks (more on chunk time here).
- We also enabled native compression on TimescaleDB, a feature introduced in TimescaleDB 1.5. We compressed all data older than 12 hours, resulting in 7 compressed chunks and 1 uncompressed chunk (data from the last 12 hours). This is a commonly recommended configuration, since raw data is kept for recent time periods and older data is compressed, enabling greater query efficiency and the ability to handle out-of-order data (see our compression docs for more). The parameters we used to enable compression are as follows: we segmented by the `tags_id` and `hostname` columns, and ordered by `time` descending and `usage_user`.
- For InfluxDB, we enabled the TSI (Time Series Index)
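In TimescaleDB 1.x, those compression settings are expressed as follows (the `add_compress_chunks_policy` call shown here was renamed `add_compression_policy` in TimescaleDB 2.0):

```sql
-- Enable compression, segmented and ordered as described above
ALTER TABLE cpu SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'tags_id, hostname',
  timescaledb.compress_orderby   = 'time DESC, usage_user'
);

-- Compress chunks once all of their data is older than 12 hours
SELECT add_compress_chunks_policy('cpu', INTERVAL '12 hours');
```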
On read (i.e., query) latency, the results are more complex. Unlike inserts, which primarily vary on cardinality size (and perhaps batch size), the universe of possible queries is essentially infinite, especially with a language as powerful as SQL. Often, the best way to benchmark read latency is to do it with the actual queries you plan to execute. For this case, we use a broad set of queries to mimic the most common query patterns.
The results shown below are the average from 1000 queries for each query type. Latencies in this chart are all shown as milliseconds, with an additional column showing the relative performance of TimescaleDB compared to InfluxDB (highlighted in orange when TimescaleDB is faster, in blue when InfluxDB is faster).
For simple rollups (i.e., groupbys), when aggregating one metric across a single host for 1 or 12 hours, or multiple metrics across one or multiple hosts (either for 1 hour or 12 hours), TimescaleDB generally outperforms InfluxDB at both low and high cardinality. In particular, TimescaleDB exhibited 460% the performance of InfluxDB on configurations with 100 and 4,000 devices with 10 unique metrics being generated every read interval.
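In SQL, a simple rollup of this kind is one GROUP BY using TimescaleDB's time_bucket function (the host name and interval below are illustrative):

```sql
-- Average of one metric for one host, bucketed by minute, over the last 12 hours
SELECT time_bucket('1 minute', time) AS minute, avg(usage_user)
FROM cpu
WHERE hostname = 'host_0' AND time > now() - INTERVAL '12 hours'
GROUP BY minute
ORDER BY minute;
```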
When calculating a simple aggregate for 1 device, performance is comparable between both TimescaleDB and InfluxDB across any number of devices. But TimescaleDB significantly outperforms InfluxDB when it's necessary to aggregate more than 1 metric. In our benchmark, TimescaleDB demonstrates 168% the performance of InfluxDB when aggregating 8 metrics across 100 devices, and 156% when aggregating 8 metrics across 4000 devices. Once again, TimescaleDB outperforms InfluxDB for high-end scenarios.
For double rollups aggregating metrics by time and another dimension (e.g., GROUP BY time, deviceId): when aggregating one metric, InfluxDB shows better performance, with TimescaleDB only 54% as fast for the 4,000-device configuration. However, as the number of metrics being aggregated increases, TimescaleDB achieves 188% the performance of InfluxDB.
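A double rollup simply adds the second dimension to the GROUP BY (again illustrative):

```sql
-- Max of one metric per hour and per device, over the last 12 hours
SELECT time_bucket('1 hour', time) AS hour, hostname, max(usage_user)
FROM cpu
WHERE time > now() - INTERVAL '12 hours'
GROUP BY hour, hostname
ORDER BY hour, hostname;
```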
When selecting rows based on a threshold, TimescaleDB outperforms InfluxDB. Timescale demonstrates between 350-860% the performance of InfluxDB when computing thresholds for a single device and 175-258% the performance of InfluxDB when computing thresholds for all devices for a random time window.
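A threshold query of this kind is a plain filter (illustrative):

```sql
-- Readings from the last hour where user CPU exceeded 90%
SELECT time, hostname, usage_user
FROM cpu
WHERE usage_user > 90.0
  AND time > now() - INTERVAL '1 hour';
```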
For complex queries that go beyond rollups or thresholds, the comparison is much more clear cut: TimescaleDB vastly outperforms InfluxDB here (in some cases over thousands of times faster). The absolute difference in performance here is actually quite stark: While InfluxDB might be faster by a few milliseconds or tens of milliseconds for some of the single-metric rollups, that difference is mostly indistinguishable to human-facing applications.
Yet for these more complex queries, TimescaleDB provides real-time responses (e.g., 10–100s of milliseconds), while InfluxDB sees significant human-observable delays (tens of seconds).
It’s worth noting that there were several other complex queries that we couldn’t test because of lack of support from InfluxDB: e.g., joins, window functions, geospatial queries, etc.
Notice that Timescale exhibits 340-7100% the performance of Influx on these complex queries, many of which are common to historical analysis and monitoring.
Read latency performance summary
- For simple queries, TimescaleDB generally outperforms InfluxDB.
- For aggregates and double rollups, TimescaleDB also generally outperforms InfluxDB. However, when rolling up just a single metric, InfluxDB can sometimes outperform TimescaleDB.
- When selecting rows based on a threshold, TimescaleDB outperforms InfluxDB by a significant margin, being up to 414% faster.
- For complex queries, TimescaleDB vastly outperforms InfluxDB and supports a broader range of query types; the difference here is often in the range of seconds to tens of seconds, with Timescale delivering 344-7100% the performance of InfluxDB.
Stability issues during benchmarking
We had several operational issues benchmarking InfluxDB as our datasets grew, even with the Influx Time-series Index (TSI) enabled. In particular, as we experimented with higher cardinality data sets (100K+ tags), we ran into trouble with both inserts and queries on InfluxDB (but not on TimescaleDB).
While we were able to insert batches of 10K into InfluxDB at lower cardinalities, once we got to 100K devices we would experience timeouts and errors with batch sizes that large. The most common errors were write errors caused by exceeding the maximum cache memory size, timeouts, and fatal out-of-memory errors, all of which occurred during runtime.
Solving these errors required a combination of increasing the maximum cache size (from the default 1GB to between 4GB and 64GB as we went from 100,000 to 10,000,000 devices), decreasing the batch size from 10,000 to between 5,000 and 1,000, and using client-side code to deal with the backpressure incurred at higher cardinalities. We had to force our client code to sleep for up to 30 seconds after requests received errors writing the batches.
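That client-side workaround can be sketched as a batching loop with capped exponential backoff (a hypothetical sketch; `write_fn` stands in for whatever HTTP client actually writes a batch to the database):

```python
import time

def write_batches(points, write_fn, batch_size=5000, max_retries=5, base_delay=1.0):
    """Write points in batches, backing off when the server signals overload.

    write_fn(batch) should raise on failure (e.g., a timeout or a
    "cache-max-memory-size exceeded" error from the server).
    """
    for start in range(0, len(points), batch_size):
        batch = points[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                write_fn(batch)
                break  # batch accepted, move on to the next one
            except Exception:
                # Exponential backoff, capped at the 30-second sleep we used
                time.sleep(min(base_delay * (2 ** attempt), 30))
        else:
            raise RuntimeError(f"batch at offset {start} failed after {max_retries} retries")
```
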
In contrast, with TimescaleDB, we were able to write large batches at higher cardinality without issue and with no additional configuration.
Moreover, starting at 100K cardinality, we also experienced problems with some of our read queries on InfluxDB. Our InfluxDB HTTP connection would error out with a cryptic ‘End of File’ message. When we investigated the InfluxDB server, we found that InfluxDB had consumed all available memory to run the query and subsequently crashed with an out-of-memory error. Since PostgreSQL allows us to limit memory usage with settings like shared_buffers and work_mem, this generally was not an issue for TimescaleDB, even at higher cardinalities.
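The relevant PostgreSQL settings are ordinary postgresql.conf parameters (the values below are illustrative, not the ones we benchmarked with):

```
# postgresql.conf: cap PostgreSQL/TimescaleDB memory usage
shared_buffers = 48GB   # shared page cache, commonly sized around 25% of RAM
work_mem = 256MB        # memory per sort/hash operation within a query
```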
InfluxDB and the TSI
High-cardinality datasets are a significant weakness for InfluxDB. This is because of how the InfluxDB developers have architected their system, starting with their Time-series Index (TSI).
The InfluxDB TSI is a home-grown, log-structured-merge-tree-based system composed of various data structures, including hashmaps and bitsets: an in-memory log (“LogFile”) that gets periodically flushed to disk when it exceeds a threshold (5MB) and compacted into an on-disk memory-mapped index (“IndexFile”), and a file (“SeriesFile”) that contains the set of all series keys across the entire database. (Described here in the InfluxDB documentation.)
The performance of the TSI is limited by the complex interactions of all of these data structures.
The design decisions behind the TSI also lead to several other limitations with performance implications:
- InfluxDB’s total cardinality limit is around 30 million (although, based on the graph above, InfluxDB starts to perform poorly well before that), far below what is often required in time-series use cases like IoT and IT monitoring.
- InfluxDB indexes tags but not fields, which means that queries that filter on fields cannot perform better than full scans. For example, to search for all rows where there was no free memory (e.g., something like `SELECT * FROM sensor_data WHERE mem_free = 0`), one could not do better than a full linear scan (i.e., O(n) time) to identify the relevant data points.
- The set of columns included in the index is completely fixed and immutable. Changing what columns in your data are indexed (tagged) and what things are not requires a full rewrite of your data.
- InfluxDB is only able to index discrete, and not continuous, values due to its reliance on hashmaps. For example, to search all rows where temperature was greater than 90 degrees (e.g., something like `SELECT * FROM sensor_data WHERE temperature > 90`), one would again have to fully scan the entire dataset.
- Your cardinality on InfluxDB is affected by your cardinality across all time, even if some fields/values are no longer present in your dataset. This is because the SeriesFile stores all series keys across the entire dataset.
TimescaleDB and B-trees
In contrast, TimescaleDB is a relational database that relies on a proven data structure for indexing data: the B-tree. This decision leads to its ability to scale to high cardinalities.
First, TimescaleDB partitions your data by time, with one B-tree mapping time-segments to the appropriate partition (“chunk”). All of this partitioning happens behind the scenes and is hidden from the user, who is able to access a virtual table (“hypertable”) that spans all of their data across all partitions.
Next, TimescaleDB allows for the creation of multiple indexes across your dataset (e.g., for equipment_id, sensor_id, firmware_version, site_id). These indexes are then created on every chunk, by default in the form of a B-tree. (One can also create indexes using any of the built-in PostgreSQL index types: Hash, GiST, SP-GiST, GIN, and BRIN.)
This approach has a few benefits for high-cardinality datasets:
- The simpler approach leads to a clearer understanding of how the database performs. As long as the indexes and data for the dataset we want to query fit inside memory, which is something that can be tuned, cardinality becomes a non-issue.
- In addition, since the secondary indexes are scoped at the chunk level, the indexes themselves only get as large as the cardinality of the dataset for that range of time.
- You have control over which columns to index, including the ability to create compound indexes over multiple columns. You can also add or delete indexes anytime you want, for example if your query workloads change. Unlike in InfluxDB, changing your indexing structure in TimescaleDB does not require you to rewrite the entire history of your data.
- You can create indexes on discrete and continuous fields, particularly because B-trees work well for comparisons using any of the following operators: `<`, `<=`, `=`, `>=`, `>`, `BETWEEN`, `IN`, `IS NULL`, `IS NOT NULL`. Our example queries from above (`SELECT * FROM sensor_data WHERE mem_free = 0` and `SELECT * FROM sensor_data WHERE temperature > 90`) will run in logarithmic, i.e., O(log n), time.
- The other supported index types can come in handy in other scenarios, e.g., GIST indexes for “nearest neighbor” searches.
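For example, secondary indexes on a hypothetical sensor_data hypertable can be added or dropped at any time (the DROP uses PostgreSQL's default generated index name):

```sql
-- Speed up per-device time queries and firmware lookups
CREATE INDEX ON sensor_data (device_id, time DESC);
CREATE INDEX ON sensor_data (firmware_version);

-- Query workloads changed? Drop it again -- no data rewrite required
DROP INDEX sensor_data_firmware_version_idx;
```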
The database can only do so much; at some point, one typically turns to the broader 3rd party ecosystem for additional capabilities. This is where the size and scope of the ecosystem make a large difference.
TimescaleDB’s approach of embracing SQL pays large dividends, as it allows TimescaleDB to speak with any tool that speaks SQL. In contrast, the non-SQL strategy chosen by InfluxDB isolates the database, and limits how InfluxDB can be used by its developers.
Having a broad ecosystem makes deployment easier. For example, if one is already using Tableau to visualize data, or Apache Spark for data processing, TimescaleDB can plug right into the existing infrastructure due to its compatible connectors.
Here is a non-exhaustive list of 1st-party (e.g., the components of the InfluxData TICK stack) and 3rd-party tools that connect with either database, to show the relative difference between the two database ecosystems.
- Official support refers to when tool makers themselves support the database – for example, the visualization tool Grafana has official support for both TimescaleDB and InfluxDB.
- Official support is given 3 checkmarks.
- Unofficial support refers to when toolmakers do not support the database natively in the tool, but a connector or library is available.
- For tools which give either database unofficial support, we differentiate the quality of those tools based on the number of GitHub stars they’ve received. Unofficial tools with less than 100 GitHub stars are given 1 checkmark, but those with 100 stars or more are given two checkmarks.
For the open-source projects below, to reflect their popularity, we included the number of GitHub stars each had as of publication in parentheses, e.g., Apache Kafka (9k+). Notably, many of the unofficial projects supporting InfluxDB were either very early-stage (very few stars) or inactive (no updates in months or years).
Operational Management
Even if a database satisfies all of the above needs, it still needs to work, and someone needs to operate it.
Based on our experience, operational management requirements typically boil down to three categories: high availability, resource consumption (memory, disk, CPU), and general tooling.
No matter how reliable the database, at some point your node will go down: hardware errors, disk errors, or some other unrecoverable issue.
Thus, high availability of your database goes from a value-added feature to a requirement for production deployments. At that point, you will want to ensure you have a standby available for failover with no loss of data.
While high availability for InfluxDB is only offered in its paid Enterprise version, TimescaleDB supports high availability for free in both its open-source and Community editions, via PostgreSQL streaming replication (as explained in this tutorial). This is yet another benefit TimescaleDB inherits from the rock-solid foundation of PostgreSQL.
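As a sketch of what this involves, PostgreSQL streaming replication boils down to a few configuration settings. The fragment below assumes PostgreSQL 12 or later and a pre-created replication user; it is illustrative, not a complete setup (the standby additionally needs a base backup and a standby.signal file):

```
# postgresql.conf on the primary
wal_level = replica            # emit enough WAL for a standby to replay
max_wal_senders = 5            # allow replication connections

# postgresql.conf on the standby
primary_conninfo = 'host=primary.example.com user=replicator'
hot_standby = on               # allow read-only queries on the standby
```

With a standby in place, failover tools such as Patroni (mentioned below) can automate promotion when the primary goes down.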
InfluxDB offers native compression using a variety of techniques: Snappy for strings, delta encoding, scaling and simple-8b with run length encoding for timestamps, Gorilla delta encoding for floats, bits for booleans and delta encoding for integers.
TimescaleDB offers native compression using a novel hybrid row/columnar storage approach: Gorilla compression for floats; delta-of-delta and simple-8b with run-length encoding for timestamps and integer-like types; whole-row dictionary compression for columns with a few repeating values, with LZ compression on top; and LZ-based array compression for all other types. (See here for more on how TimescaleDB’s native compression works.)
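For reference, enabling TimescaleDB's native compression is itself done in SQL. A minimal sketch, assuming the same illustrative sensor_data hypertable (the policy function name may differ across TimescaleDB versions):

```sql
-- Mark the hypertable as compressible, segmenting compressed data
-- by device and ordering it by time within each segment
ALTER TABLE sensor_data SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Add a policy that automatically compresses chunks older than seven days
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');
```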
Using a combination of datasets for the insert and query benchmarks above, and compressing all chunks for TimescaleDB, here’s how the two databases fared at varying cardinalities:
Note: Numbers do not include WAL size, as that is configurable by the user.
Thanks to its column-oriented structure, InfluxDB is able to achieve better compression ratios overall. However, the gap is on the order of megabytes at the low end, and narrows at higher cardinalities. Considering TimescaleDB’s low memory usage compared to InfluxDB, and the fact that memory is often an order of magnitude more expensive than disk, we have found this gap in storage not to be an issue.
When operating TimescaleDB, one inherits all of the battle-tested tools that exist in the PostgreSQL ecosystem: pg_dump and pg_restore for backup/restore, HA/failover tools like Patroni, load balancing tools for clustering reads like Pgpool, etc. Since TimescaleDB looks and feels like PostgreSQL, there are minimal operational learning curves. TimescaleDB “just works”, as one would expect from PostgreSQL.
For operating InfluxDB, one is limited to the tools that the Influx team has built: backup, restore, internal monitoring, etc.
Company and Community Support
Finally, when investing in an open source technology primarily developed by a company, you are implicitly also investing in that company’s ability to serve you, whether you’re a paying customer or not. With that in mind, let’s note the differences between Timescale and InfluxData, the companies behind TimescaleDB and InfluxDB.
Timescale continues to invest in the community with its free self-managed versions, while also actively developing its hosted and managed offering, Managed Service for TimescaleDB. Timescale announced that multi-node scale-out will be available for free. (See this post to learn more about TimescaleDB multi-node design and availability, as well as more ways Timescale is investing in its community.)
In contrast, while Influx does have an open-source offering alongside its licensed InfluxDB Enterprise, its more advanced features, like clustering, remain gated behind an enterprise license. The company also appears to be de-prioritizing its open-source product, focusing instead on its SaaS offering, Influx Cloud.
Moreover, for technical products, support and resources often come not just from the company building the technology, but the community of developers who use it. InfluxData is building their community from scratch, while Timescale is able to inherit and build on PostgreSQL’s community. This means that InfluxData community support is a walled garden: inherently more closed, less varied, and smaller, compared to the open diversity of expertise present in the vibrant 20+ year-old PostgreSQL ecosystem.
Furthermore, because TimescaleDB operates just like PostgreSQL, much of the knowledge base relevant to PostgreSQL is also relevant to TimescaleDB. So if you are new to TimescaleDB (or SQL or PostgreSQL), there are many resources available to help get you started. Alternatively, if you are already a SQL or PostgreSQL expert, you will already know how to use the majority of TimescaleDB (save for a small learning curve of optimizations built specifically for time-series data, like SQL functions for complex analysis).
Both databases have cloud offerings. Managed Service for TimescaleDB, Timescale’s hosted and managed service, is available on AWS, GCP and Azure, in over 75 regions and over 2000 different region/storage/compute configurations. By comparison, Influx Cloud, InfluxData’s hosted service, is available on all 3 major clouds, but only in 4 regions.
No one wants to invest in a technology only to have it limit their growth or scale in the future, let alone invest in something that's the wrong fit today.
Before making a decision, take a step back and analyze your stack, your team's skills, and your needs (now and in the future). It could be the difference between infrastructure that evolves and grows with you and one that crumbles to the ground and forces you to start all over.
In this post, we performed a detailed comparison of TimescaleDB and InfluxDB. We don’t claim to be InfluxDB experts, so we’re open to any suggestions on how to improve this comparison. In general, we aim to be as transparent as possible about our data models, methodologies, and analysis, and we welcome feedback. We also encourage readers to raise any concerns about the information we’ve presented in order to help us with benchmarking in the future.
We recognize that TimescaleDB isn’t the only time-series solution and there are situations where it might not be the best answer. And we strive to be upfront in admitting where an alternate solution may be preferable. But we’re always interested in holistically evaluating our solution against that of others, and we’ll continue to share our insights with the greater community.
Want to learn more?
Sign up for Managed Service for TimescaleDB (free to get started, no credit card required).