Postgres memleak caused by prepared statements growing forever when prepareCache is overflowing
Our Postgres db keeps running out of RAM.
We have an infinite stream of events from Kafka, which we chunk with groupBy(number = 1000) and then fetch from the db in batches.
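A minimal fs2 sketch of that pipeline shape, assuming fs2's `groupWithin` as the chunking operator (the report's `groupBy(number = 1000)` suggests something similar); `events` and `fetchBatch` are hypothetical stand-ins, not names from the report:

```scala
import scala.concurrent.duration._
import cats.effect.IO
import fs2.Stream

// Chunk the incoming events into groups of up to 1000 and
// fetch each group from the db in a single batch.
def pipeline(events: Stream[IO, Long], fetchBatch: List[Long] => IO[Unit]): Stream[IO, Unit] =
  events
    .groupWithin(1000, 1.second)              // up to 1000 ids per batch
    .evalMap(chunk => fetchBatch(chunk.toList))
```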
We have a few queries which are parametrized with lists, like:
sql """ select ...
where id in (${SomeCodecs.id.list(ids.size)});
"""
We have a session pool with prepareCacheSize = 1024, and about 25 queries.
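Roughly how such a pool is built (my sketch; host, credentials, and pool size are placeholders, and the exact name of the statement-cache size setting the report calls prepareCacheSize varies across skunk versions, so it is omitted here):

```scala
import cats.effect.{IO, Resource}
import natchez.Trace.Implicits.noop   // no-op tracing, as in the skunk docs
import skunk.Session

// Outer Resource manages the pool; the inner one leases a session.
val pool: Resource[IO, Resource[IO, Session[IO]]] =
  Session.pooled[IO](
    host     = "localhost",
    user     = "app",
    database = "app",
    password = Some("secret"),
    max      = 10
  )
```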
I think what happens is this: because the in-list size varies, we end up with far more prepared statements (let's say about 25 x 1000) than the cache can hold. When the local cache runs out of space, skunk just drops the entry from the local cache but never deallocates it on the server. When the same query with the same number of list parameters comes up again, skunk prepares it afresh since it's no longer in the cache, so the number of prepared statements on the Postgres server just keeps growing until the server either dies or we restart the app.
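One way to observe this from the app side (my suggestion, not from the report) is to count the statements Postgres holds for the current connection; `pg_prepared_statements` is session-local, so it has to be run through the same pooled session that executes the queries:

```scala
import skunk._
import skunk.implicits._
import skunk.codec.all._

// Counts the prepared statements on *this* connection; the number should
// climb past the cache size if evicted statements are never deallocated.
val countPrepared: Query[Void, Long] =
  sql"SELECT count(*) FROM pg_prepared_statements".query(int8)
```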
There are a few things I could/will do on our side, but I thought it made sense to report this as a bug.
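For example, one common mitigation (my sketch, not necessarily what the reporter has in mind) is to pad the id list up to a fixed set of bucket sizes, so only a handful of distinct statement shapes are ever prepared per query:

```scala
// Round the list size up to the next power-of-two bucket (minimum 16).
def bucketSize(n: Int): Int =
  Iterator.iterate(16)(_ * 2).dropWhile(_ < n).next()

// Pad with a repeated existing id; `in (...)` tolerates duplicates.
// Assumes ids is non-empty.
def padIds(ids: List[Long]): List[Long] =
  ids ++ List.fill(bucketSize(ids.size) - ids.size)(ids.head)
```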