Shared memory recommendations for Docker Timescaledb deployments

Hi,
I am running a TimescaleDB deployment in Docker Swarm (image: timescaledb:2.9.1-pg14) to collect performance data from devices. When running complex queries over the data, I encountered the following error:

ERROR: could not resize shared memory segment "/PostgreSQL.272694596" to 4194304 bytes: No space left on device

I have tried decreasing shared_buffers and the number of parallel workers, but it didn't help.
Most of the recommendations I have found for this issue say to increase the amount of shared memory available to the container - the Docker default is only 64MB (postgresql - pq: could not resize shared memory segment. No space left on device - Stack Overflow). This is what I eventually did (the relevant part of the docker-stack.yml file):

volumes:
  - db-data:/var/lib/postgresql/data
  - type: tmpfs
    target: /dev/shm
    tmpfs:
      size: 268435456 #256MB
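
For reference, outside of Swarm the same limit can apparently be raised more directly with docker run --shm-size or the service-level shm_size option in a compose file - something like this sketch (service name and size are just placeholders):

services:
  timescaledb:
    image: timescaledb:2.9.1-pg14
    shm_size: 268435456 # 256MB

but since that didn't seem to be honored by docker stack deploy, I went with the tmpfs mount above.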

The tmpfs mount worked for me, but it left me with some doubts:

  • is this actually the recommended way to deal with this issue, or should I rather configure TimescaleDB differently?
  • are there any risks when it is applied in an environment where many microservices are running?
  • how should I figure out the correct amount of shared memory?

Thanks in advance for recommendations / comments

Welcome @wlod! I don't have answers to all of your questions, but 64MB is probably very little for PostgreSQL and, as the Stack Overflow post suggests, you should increase it.

is this actually the recommended way to deal with this issue, or should I rather configure TimescaleDB differently?

Almost any advice that applies to PostgreSQL is also valid for Timescale, since it's just an extension on top of it.

Checking the work_mem docs, it says:

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes. The default value is four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will generally be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, result cache nodes and hash-based processing of IN subqueries.

Hash-based operations are generally more sensitive to memory availability than equivalent sort-based operations. The memory available for hash tables is computed by multiplying work_mem by hash_mem_multiplier. This makes it possible for hash-based operations to use an amount of memory that exceeds the usual work_mem base amount.

So, it will depend a lot on the type of operations that your db is doing.
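
If it turns out you need to adjust those settings, they can be passed straight to the server from the stack file without building a custom image - a rough sketch, assuming the image keeps the standard postgres entrypoint (service name and values are just placeholders, not recommendations):

services:
  timescaledb:
    image: timescaledb:2.9.1-pg14
    # extra -c flags are passed through to the postgres server process
    # and override the corresponding postgresql.conf settings
    command: postgres -c work_mem=16MB -c hash_mem_multiplier=2.0

Also keep in mind that parallel hash joins build their hash tables in dynamic shared memory, which inside the container lives on /dev/shm, so raising work_mem or hash_mem_multiplier usually means the tmpfs mount needs to grow too - very roughly, each such query can use up to work_mem × hash_mem_multiplier per parallel worker, times however many of those queries run at once.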