First insert upon pool.acquire() takes long ass time #794
Try turning JIT off.

This was exactly it! psql's EXPLAIN ANALYZE didn't even tell me that JIT optimisation was going on for this query. Disabling it still helped tremendously!
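For reference, a minimal sketch of how JIT could be disabled for every pooled connection, assuming asyncpg's `server_settings` argument (the pool factory and DSN below are placeholders; `jit` is a standard PostgreSQL >= 11 setting and can also be set in postgresql.conf):

```python
# Sketch: disable PostgreSQL's JIT compiler for every connection in the
# pool via session settings. All names below are placeholders.
SERVER_SETTINGS = {"jit": "off"}  # skip JIT compilation on first execution

async def make_pool(dsn: str):
    # Imported lazily so the sketch stays self-contained; actually
    # connecting requires a running PostgreSQL server.
    import asyncpg
    return await asyncpg.create_pool(dsn, server_settings=SERVER_SETTINGS)
```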
elprans added a commit that referenced this issue on Sep 23, 2023
elprans added a commit that referenced this issue on Oct 7, 2023
- trio 0.19.0
- trio-asyncio 0.12.0
- triopg 0.6.0
- asyncpg 0.23.0
- PostgreSQL 12.7 (local install, through a Unix socket)
- Python 3.8.10
- Running in containers on Alpine
- I use triopg.create_pool()
- All Python packages installed via pip
Can the issue be reproduced under both asyncio and uvloop?
Not sure; I use the trio_asyncio loop.
triopg appears to be a thin wrapper proxying asyncpg commands, which is why I'm writing here.
I can't use manually prepared statements or the 'manual' acquire/release style, only the `async with` style.
I need to do the following: wait for an event; when the event fires, insert some rows into the DB as fast as possible, then do a conditional thing depending on whether an entry already exists in the DB, then idle again.
I accomplished this by creating a pool and an async worker. The worker waits for the event and, after it has fired, performs the insert.
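The flow described above might look roughly like this; the table name `samples`, the row tuples, and the `pool`/`event` objects are all assumptions, not the reporter's actual code:

```python
# Hedged sketch of the described worker: wait for an event, insert rows
# inside one transaction, then branch on whether a matching entry exists.
# `pool`, `event`, `rows`, and the table name `samples` are assumptions.
async def worker(pool, event, rows):
    await event.wait()
    async with pool.acquire() as conn:
        async with conn.transaction():
            await conn.executemany(
                "INSERT INTO samples (rid, sid, a, b, c, d) "
                "VALUES ($1, $2, $3, $4, $5, $6)",
                rows,
            )
            # the "conditional thing if entry is existent in DB or not"
            exists = await conn.fetchval(
                "SELECT EXISTS (SELECT 1 FROM samples WHERE rid = $1)",
                rows[0][0],
            )
    return exists
```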
The insert style is the following: `INSERT INTO (rid,sid,a,b,c,d) VALUES ($1,$2,$3,$4,$5,$6)`
rid and sid are FOREIGN KEYS
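Since the placeholders are positional, the statement text can be built once and reused verbatim, which is what lets asyncpg's per-connection statement cache hit on every execution. A small helper to illustrate (the table name passed in and the column names are assumptions):

```python
# Build the parameterized INSERT once; reusing the exact same query text
# is what allows asyncpg's statement cache to reuse the prepared plan.
COLUMNS = ("rid", "sid", "a", "b", "c", "d")

def build_insert(table: str, columns=COLUMNS) -> str:
    placeholders = ", ".join(f"${i}" for i in range(1, len(columns) + 1))
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
```

For example, `build_insert("samples")` yields `INSERT INTO samples (rid, sid, a, b, c, d) VALUES ($1, $2, $3, $4, $5, $6)`.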
Now when I execute this insert, it takes an astonishing 300 ms, which is unacceptable. I tried to narrow it down; among other things, inserting into the same table multiple times reduces this latency to around 4 ms.
The transaction and pool handling eat up an additional ~13 ms, but executing without a transaction gives similar overall times, so I ignore this.
Furthermore, I noticed that on other tables such an insert does not eat up so much time (maybe 30 ms; on yet other tables, 7 ms), and the difference between the first and subsequent inserts is smaller.
These other tables have fewer columns and no FOREIGN KEYs, only a UNIQUE SERIAL PRIMARY KEY.
What is the reason for this? I suspect it has to do with the preparation of prepared statements on the connection object.
The documentation is contradictory about this: in one place it says that on release of a connection back to the pool all state is dropped (including prepared statements); in another place it says that everything EXCEPT prepared statements is released.
Could a solution be to lease a connection with acquire() beforehand? If I set all timeouts to 0 (infinite), will the prepared statements be retained?
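Leasing a connection beforehand, as asked above, might look like this; it is a sketch under the assumption that asyncpg caches prepared statements per connection, and nothing here is confirmed behavior of triopg:

```python
# Sketch: hold one connection for the worker's lifetime so the
# per-connection statement cache (and its server-side prepared plan)
# stays warm. All names below are assumptions.
INSERT = (
    "INSERT INTO samples (rid, sid, a, b, c, d) "
    "VALUES ($1, $2, $3, $4, $5, $6)"
)

async def run_worker(pool, event, rows):
    async with pool.acquire() as conn:  # lease once, before the event fires
        await event.wait()
        # Identical query text hits the statement cache after the first
        # execution on this connection, so only the first insert pays
        # the statement-preparation cost.
        await conn.executemany(INSERT, rows)
```

Whether this helps depends on whether the pool hands back the same underlying connection; holding the lease sidesteps that uncertainty.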
Are there internal events that can corrupt the connection state and require a new connection, which would induce this delay again?
Now, because some table inserts are faster than others, I suspect some setting in the postgres config might also be wrong and causing this. I am not sure which it could be, because psql's EXPLAIN ANALYZE does not show this in the planning and execution time (together around 1 ms).
Any help is much appreciated.
Regards, M.K.