Connection pool

Hello
My team has a large TimescaleDB database that we query often, and some of the queries can take up to 5-7 seconds.
We use the Python databases package to connect to the DB, with the “postgresql+asyncpg” scheme.
Some of our endpoints use asyncio.gather to perform multiple queries at the same time.
We noticed that while each query runs quite fast on its own when run independently (up to 5s), the tasks take up to 40s to complete, which led us to suspect there might not be a connection pool in use, and that the tasks query the DB sequentially over the same connection rather than in parallel.
I queried pg_stat_activity and noticed there is indeed only a single connection there, although there are multiple tasks running.
Experimenting with that, I tried creating a new database object for every task (meaning, calling database.connect() and database.disconnect() in every task). I could then see numerous connections in pg_stat_activity, but that seems to miss the point of a connection pool, as I had to establish a connection for each of the tasks.
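For clarity, that per-task experiment looked roughly like this (a simplified sketch, not our real code; the URL and helper names are placeholders):

import asyncio
import databases

DB_URL = 'postgresql+asyncpg://user_name:user_password@host_name:port_number/tsdb_transaction'

async def fetch_with_own_pool(query):
    # One Database object (and therefore its own connection/pool) per task.
    db = databases.Database(DB_URL)
    await db.connect()
    try:
        return await db.fetch_one(query)
    finally:
        await db.disconnect()

async def run_queries(queries):
    # This makes several rows show up in pg_stat_activity, but it defeats the
    # purpose of a shared pool, since every task pays the connection setup cost.
    return await asyncio.gather(*[fetch_with_own_pool(q) for q in queries])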

My question is: could it be that a single connection in pg_stat_activity encapsulates an entire connection pool behind it? Or might there be some reason we are not aware of that prevents the connection pool from being created?

Notes:

  • We use a single Python database object (meaning, we only call database.connect() once)
  • This object is configured with min_size=20 and max_size=50
  • I verified we have connection pooling enabled in our console (transaction type), and we connect to the tsdb_transaction database.
  • We are nowhere near the max_connections limit, from what I could tell from pg_stat_activity

Thanks for your help,
Tal.

This is the initialization of the database object:

database = databases.Database(
    PostgresDsn('postgresql+asyncpg://user_name:user_password@host_name:port_number/tsdb_transaction'),
    min_size=20,
    max_size=50,
    command_timeout=600,
    statement_cache_size=0,
)

@app.on_event("startup")
async def startup():
    await database.connect()

And this is a high-level overview of the endpoint logic:

async def fetch_from_db(query):
    # the database object is the global one previously created, same for all tasks
    return await database.fetch_one(query)

@router.post(...)
async def my_endpoint(request) -> response:
    queries = get_queries_from_request(request)
    results = await asyncio.gather(
        *[fetch_from_db(query) for query in queries]
    )
    return results

This is very Python-specific.
How far away is the server? How much data is coming back? Could the problem also be related to serialization/deserialization?

I added extra information about our connection implementation above, but it doesn’t change the fact that we only see a single connection in pg_stat_activity while numerous tasks are supposed to execute queries simultaneously. This is surprising, and we can’t tell for sure whether the queries actually run in parallel or sequentially.
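For reference, this is roughly how I look at the backends in pg_stat_activity (a simplified sketch, not our exact tooling; the URL is a placeholder):

import asyncio
import databases

DB_URL = 'postgresql+asyncpg://user_name:user_password@host_name:port_number/tsdb_transaction'

async def show_backends():
    db = databases.Database(DB_URL)
    await db.connect()
    try:
        # While the endpoint is under load we would expect several rows here,
        # but we only ever see one backend for our application.
        rows = await db.fetch_all(
            "SELECT pid, state, query FROM pg_stat_activity WHERE datname = current_database()"
        )
        for row in rows:
            print(row['pid'], row['state'], row['query'])
    finally:
        await db.disconnect()

asyncio.run(show_backends())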

And so my question remains: could it be that a single connection in pg_stat_activity actually ‘hides’ a few other connections? If not, what might explain this?

I think it’s worth checking this with the creators of the tool.

You can also run a small test with pg_sleep(): instead of running your real query, just open a few concurrent queries like:

 select pg_sleep(100);

Each of these will hold a connection, and you can double-check pg_stat_activity while multiple sessions should be sleeping.

The idea of a pool is also to improve the idle time of the connections, but I think you’ll be able to check it with sleep, or at least collect clearer evidence.
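For example, a rough sketch of such a test (the URL and pool sizes are placeholders): fire several sleeps at once through one shared pool, then watch pg_stat_activity from another session while they run:

import asyncio
import databases

DB_URL = 'postgresql+asyncpg://user_name:user_password@host_name:port_number/tsdb_transaction'
database = databases.Database(DB_URL, min_size=5, max_size=10)

async def sleep_test(n=5):
    await database.connect()
    try:
        # If the pool really hands out separate connections, pg_stat_activity
        # should show n backends all executing pg_sleep at the same time.
        await asyncio.gather(
            *[database.fetch_one('select pg_sleep(100)') for _ in range(n)]
        )
    finally:
        await database.disconnect()

asyncio.run(sleep_test())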

This was a good idea, and it indeed helped me verify that the issue persists.
When using a single ‘connection pool’ object in my FastAPI Python project, I observed only a single connection, although there were supposed to be multiple connections from multiple asyncio tasks. The tasks shared the DB connection in a sequential manner.

When I created a new ‘connection pool’ object within each task, I could see multiple connections executing pg_sleep at the same time, but obviously this approach is not desirable, since I don’t want to create multiple pools but rather share a single one.

I also tried querying pg_backend_pid(), to see whether the connections have the same or different process ids. When I used a single pool (and tasks shared the same single connection…) I could see it was always the same process id for this connection, no matter which task was using it.
When I created a new pool for every task, each one also had a different process id.
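Roughly what that check looked like (a simplified sketch, reusing the shared database object from the initialization snippet above; the helper names are made up):

import asyncio

async def backend_pid(task_id):
    # pg_backend_pid() returns the PID of the server process handling this query;
    # with a working pool the tasks should report different PIDs, but in our case
    # they all reported the same one.
    row = await database.fetch_one('SELECT pg_backend_pid() AS pid')
    print(task_id, row['pid'])

async def check_pids():
    await asyncio.gather(*[backend_pid(i) for i in range(5)])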

So, to summarize: it seems like multiple requests from the same service (same backend pid) are sharing a single connection, while they should actually use multiple connections. I believe the issue might be in the TSDB built-in connection pool, which might be allocating a single connection to all my tasks since they come from the same process, rather than allocating multiple connections while multiple requests are coming in from this process at the same time.

Would appreciate any thoughts and ideas how to resolve this.
Thanks.

Have you tried bringing this question to the FastAPI project? My understanding is that the connection management was intentionally built this way.

Looks related to this one.

Hey,
The issue turned out to be a bug in the Python databases package. The bug was fixed 10 months ago in this commit, and the issue was resolved after bumping the package version in our project.
Thanks a lot for your help!