I’m currently using TimescaleDB 2.19.0 integrated with Ignition, and I’m encountering a significant performance discrepancy when executing the same SQL query from different tools:
- When the query is run through DataGrip, it completes in approximately 10 milliseconds.
- When run from Ignition, it takes around 6 seconds to return.
The query in question runs against a hypertable named `sqlth_1_data`, which is structured and indexed for time-series performance. Specifically, the table includes an index on `(tagid ASC, t_stamp DESC)`.
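For context, the table and index look roughly like this (a trimmed sketch; the real Ignition historian schema has more value columns, and the one-day chunk interval is my assumption):

```sql
-- Approximate shape of the historian hypertable (columns trimmed to those queried).
CREATE TABLE sqlth_1_data (
    tagid         integer,
    floatvalue    double precision,
    dataintegrity integer,
    t_stamp       bigint            -- epoch milliseconds
);

-- t_stamp is in milliseconds, so 86400000 = one-day chunks (assumed interval).
SELECT create_hypertable('sqlth_1_data', 't_stamp', chunk_time_interval => 86400000);

CREATE INDEX ON sqlth_1_data (tagid ASC, t_stamp DESC);
```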
Here’s an example of the query:
```sql
SELECT tagid, floatvalue, t_stamp
FROM sqlth_1_data
WHERE t_stamp >= 1745812338871
  AND t_stamp <= 1745898738871
  AND tagid IN (19777, 19778, 19779, 19781, 19785, 22476, 19790)
  AND dataintegrity = 192
ORDER BY t_stamp ASC, tagid ASC;
```
This query runs quickly and efficiently from DataGrip, returning results in under 10 ms. However, when executed from Ignition using `system.db.runPrepQuery`, performance degrades drastically.
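For anyone wanting to reproduce the comparison, the fast path can be inspected with a plain EXPLAIN from any SQL client (this is exactly the query above, nothing Timescale-specific):

```sql
-- Capture the plan and actual runtime the way DataGrip sees it.
EXPLAIN (ANALYZE, BUFFERS)
SELECT tagid, floatvalue, t_stamp
FROM sqlth_1_data
WHERE t_stamp >= 1745812338871
  AND t_stamp <= 1745898738871
  AND tagid IN (19777, 19778, 19779, 19781, 19785, 22476, 19790)
  AND dataintegrity = 192
ORDER BY t_stamp ASC, tagid ASC;
```

If chunk exclusion is working, the plan should only touch the chunks covering the queried time range.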
In addition to slow performance, I’ve noticed that database resource usage spikes significantly. Memory usage increases from about 1 GB to over 5 GB of RAM, and in some cases, the PostgreSQL server is killed due to OOM (Out-Of-Memory) events.
Interestingly, increasing the maximum connection pool size in Ignition actually worsens performance, which was not the case with vanilla PostgreSQL before integrating TimescaleDB.
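The pool-size sensitivity would make sense if per-connection memory is the problem: each backend can use up to `work_mem` per sort or hash node, and each pooled connection keeps its own server-side prepared-statement plans, so worst-case memory scales with pool size. A quick sanity check on the relevant settings (standard PostgreSQL, nothing Ignition-specific):

```sql
-- Rough upper bound on query memory: connections x sorts per query x work_mem.
SHOW max_connections;
SHOW work_mem;
SHOW shared_buffers;
```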
This leads me to suspect a difference in how the JDBC driver or Ignition itself interacts with the TimescaleDB query planner, memory handling, or caching behavior, especially under high concurrency or prepared-statement usage.
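The prepared-statement theory can be tested directly in psql. The pgJDBC driver switches to server-side prepared statements after `prepareThreshold` executions (5 by default), and PostgreSQL may then swap in a cached generic plan after the fifth execution; with the timestamps as bind parameters, a generic plan can defeat TimescaleDB's plan-time chunk exclusion. A sketch (`hist_q` is just my name for the test statement):

```sql
-- Prepare the query with the same bind parameters Ignition would send.
PREPARE hist_q(bigint, bigint, integer) AS
SELECT tagid, floatvalue, t_stamp
FROM sqlth_1_data
WHERE t_stamp >= $1
  AND t_stamp <= $2
  AND tagid IN (19777, 19778, 19779, 19781, 19785, 22476, 19790)
  AND dataintegrity = $3
ORDER BY t_stamp ASC, tagid ASC;

-- Run it more than five times; if execution six is suddenly slow, the
-- planner has switched to a generic plan that scans every chunk.
EXECUTE hist_q(1745812338871, 1745898738871, 192);
EXPLAIN (ANALYZE) EXECUTE hist_q(1745812338871, 1745898738871, 192);
```

If setting `plan_cache_mode = force_custom_plan` on the session restores the fast timing, that would point squarely at the generic-plan interaction rather than at Ignition itself.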