Memory leak detected for publish message method. #1069
Comments
@ibmappcon Can you please provide the code you are using that demonstrates the memory leak?
@kamalaboulhosn The code to publish a message is somewhat like this:
We also tried caching a single client per user instead of creating one at runtime; the problem with that approach is that we get this error for the publish operation after a few concurrent requests:
In both approaches the leak is evident. Problem statement:
Please note we have a specific requirement to publish only one message at a time. Thanks!
@kamalaboulhosn any updates on this would really help.
I'll put this on my agenda for today.
Hi @ibmappcon, I dug into this a bit today. I am indeed able to see the memory leak issue, and it appears to have something to do with the code we are generating for protos. I'd like to look at that a bit more, because it seems like we shouldn't be leaking things like that. But my take on it (and after consulting with my colleagues, they agree) is that it's not great to create a PubSub object per request. There is a lot of data that is loaded from disk when that happens, and it's also going to cause problems with the authentication service eventually, because that downloaded info is cached across PubSub object creations. Tomorrow I'm going to see if I can reproduce the multiple cached clients issue. Eventually that might still end up with the same issue, though, if you have thousands of users, or if you end up with thousands of Pub/Sub server sockets open.
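To illustrate that recommendation, here is a minimal sketch (the topic name and module layout are placeholders, not code from this issue) that creates the PubSub client once at module scope and reuses it for every request, instead of constructing a new client per request:
const { PubSub } = require('@google-cloud/pubsub');

// Created once when the module is loaded and reused for every request.
const pubsub = new PubSub();

async function handlePublish(payload) {
  // Reuses the same client (and its underlying gRPC channels) on every call,
  // instead of constructing a new PubSub object per request.
  await pubsub.topic('my-topic').publishMessage({ json: payload });
}

module.exports = { handlePublish };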
@feywind where does this one stand? Do you think our code is retaining objects in memory unnecessarily somewhere, or is this a matter of best practices?
I'm having the same issue:
Getting very high memory usage.
I can confirm that moving the new PubSub client declaration outside of the function fixed it :)
I haven't been able to dig into this more since the last post, but I'm still under the belief that there's something leaking from the generated gax API calls. At least when I looked at it in the profiler, most of the growing memory usage was strings with generated proto code. If keeping around the same PubSub object works around it for you, that's probably the way to go for now.
Came across this issue after trying to track down a memory leak in our Firebase Functions that leaked while doing little other than publishing events. Just moving the creation of the PubSub client outside the function, rather than creating a new one each time, fixed our problem too. Is this potentially worth re-opening @feywind? Surely it's reproducible pretty trivially?
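A sketch of that kind of fix (not the exact code from the project; the 'events' topic name and firebase-functions HTTPS handler are illustrative assumptions):
const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');

// Client created at module scope, so warm invocations reuse it.
const pubsub = new PubSub();

exports.forwardEvent = functions.https.onRequest(async (req, res) => {
  // Publish with the shared client instead of calling new PubSub() inside the handler.
  await pubsub.topic('events').publishMessage({ json: req.body });
  res.status(204).send();
});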
+1 @lox. I'm able to recreate it with the minimal example below:
const dotenv = require('dotenv');
const { randomSync } = require('ksuid');
dotenv.config();
const { PubSub } = require('@google-cloud/pubsub');

const c1 = new PubSub({
  projectId: process.env.PUBSUB_PROJECT_ID,
  emulatorMode: true,
});

async function main() {
  const topicId = randomSync().string.slice(1, 10);
  console.log('topic is', topicId);
  await c1.createTopic(topicId);
  const sub = await c1.createSubscription(topicId, 'sub-' + topicId);
  sub[0].on('message', (msg) => {
    console.log(msg.data.toString());
    msg.ack();
  });
  for (let i = 0; i < 10000; i++) {
    // A brand-new PubSub client is created for every publish; memory grows steadily.
    const client = new PubSub({
      projectId: process.env.PUBSUB_PROJECT_ID,
      emulatorMode: true,
    });
    const topic = client.topic(topicId);
    await topic.publishMessage({
      json: {
        val: i,
      },
    });
    console.log(i);
  }
}

main()
  .then(() => {
    console.log('done');
  })
  .catch((err) => {
    console.log(err);
  });
Using a global PubSub client created outside the loop definitely solves this.
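For comparison, a sketch of the fix: a drop-in replacement for the for-loop in the snippet above that reuses the single c1 client and one Topic object instead of creating a new PubSub client per iteration:
// Reuse the shared c1 client and a single Topic object for all publishes.
const topic = c1.topic(topicId);
for (let i = 0; i < 10000; i++) {
  await topic.publishMessage({ json: { val: i } });
  console.log(i);
}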
Thanks for adding the code. Yeah, the intended usage is to have the PubSub object created once and reused for the lifetime of the process.
@feywind understood. Would you have any thoughts on adding this note to the docs?
The particular issue is mostly Node-specific, because it's the only language (as far as I know) that has this kind of "big object" approach to the API. So I'm not sure it would make sense to put it into the shared docs. Other languages use terminology like "subscriber" and "publisher" for the objects, which makes it a bit more obvious that there is machinery in there. We want to make this change for Node, too, but it's kind of gotten deprioritized. In the meantime, here is the sort of change I've been making to samples:
Just to kind of pull the instantiation out and put an explicit comment.
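The exact sample diff isn't shown above, but a sketch of that kind of change (with hypothetical names, the instantiation pulled out of the function and an explicit comment added) could look like:
const { PubSub } = require('@google-cloud/pubsub');

// Creates a client; cache this for further use so a new client is not
// constructed for every publish call.
const pubSubClient = new PubSub();

async function publishMessage(topicNameOrId, data) {
  const messageId = await pubSubClient
    .topic(topicNameOrId)
    .publishMessage({ data: Buffer.from(data) });
  console.log(`Message ${messageId} published.`);
}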
Hey @feywind, maybe we can add the same comment to the README? My opinion is that developers using this package would generally go through the README example first, and only go through the samples if they need to look into something specific that isn't straightforward enough. Let me know if that makes sense.
Hi,
We encountered a memory leak with the PubSub client's publish method. We load tested the publish message operation and found the following results:
For 100 requests, an increase of 0.1-0.2 MB.
For 1000 requests, an increase of 0.6-0.7 MB.
For 5000 requests, an increase of around 3 MB.
Memory usage continues to grow as the number of requests increases.
Node version - 10.16.3
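For reference, a rough sketch of the kind of loop that shows this growth (not the exact test harness; the topic name is a placeholder), using process.memoryUsage() to compare heap usage before and after a batch of publishes:
const { PubSub } = require('@google-cloud/pubsub');

// One client for the whole run; 'my-topic' is a placeholder topic name.
const pubsub = new PubSub();

async function measure(topicName, requests) {
  const topic = pubsub.topic(topicName);
  const before = process.memoryUsage().heapUsed;
  for (let i = 0; i < requests; i++) {
    await topic.publishMessage({ json: { i } });
  }
  const after = process.memoryUsage().heapUsed;
  console.log(`${requests} publishes: heap grew ~${((after - before) / 1e6).toFixed(1)} MB`);
}

measure('my-topic', 1000).catch(console.error);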