
fix: Memory leak on client.js 'connected' event #348 #362

Merged
robertsLando merged 17 commits into moscajs:master from fix#348 on Jan 28, 2020

Conversation

@robertsLando (Member)

No description provided.

@mcollina (Collaborator) left a comment

This needs a unit test

robertsLando requested a review from mcollina on January 23, 2020 13:59
@mcollina (Collaborator) left a comment

This needs a test of the original "memory leak warning" that it claims to fix.

robertsLando requested a review from mcollina on January 23, 2020 14:22
robertsLando requested a review from mcollina on January 23, 2020 14:50
@mcollina (Collaborator)

The code is OK, but the test for the EventEmitter warning is missing.

@robertsLando (Member, Author)

@mcollina What do you mean?

This?

process.on('warning', t.fail);

@robertsLando (Member, Author)

Without the queue fix the test fails, so it now also tests for the EventEmitter warning.
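
As a hedged illustration of the test approach being discussed (tape-style, mirroring the process.on('warning', ...) snippet above), something along these lines would fail if Node raised the listener-leak warning; the test body and the simulated packet loop are illustrative, not the actual test added in this PR.

    const test = require('tape')
    const EventEmitter = require('events')

    test('queueing packets before connected emits no EventEmitter warning', function (t) {
      t.plan(1)

      // Fail if Node emits any process warning (e.g. MaxListenersExceededWarning)
      // while the simulated packets arrive before 'connected'.
      const failOnWarning = function (warning) {
        t.fail('unexpected warning: ' + warning.name)
      }
      process.on('warning', failOnWarning)

      const client = new EventEmitter()
      const queue = []

      // Simulate many packets arriving before the client is connected: they are
      // queued instead of each one registering its own 'connected' listener.
      for (let i = 0; i < 100; i++) {
        queue.push({ cmd: 'publish', payload: 'packet ' + i })
      }

      client.once('connected', function () {
        queue.length = 0 // a single listener drains the queue
        process.removeListener('warning', failOnWarning)
        t.pass('no MaxListenersExceededWarning emitted')
      })

      client.emit('connected')
    })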

@mcollina (Collaborator) left a comment

lgtm

@robertsLando (Member, Author)

@mcollina Can I merge this or do I need to wait for reviewers?

@mcollina (Collaborator)

please wait for a bit

@gnought (Collaborator) commented Jan 24, 2020

Why do we need to queue all packets between CONNECT and the 'connected' event?
We emit the 'connected' event at this.client.emit('connected'). After we receive CONNECT and trigger the connect handler we still have to do authentication; what will happen to the queued packets if authentication fails?

@robertsLando (Member, Author)

@gnought The connected event is triggered after connectActions.

We need to queue packets because, as you can see in issue #348, after the first CONNECT packet is received every other packet registers a listener on the connect event here, and this throws the EventEmitter warning.

robertsLando requested a review from mcollina on January 24, 2020 07:42
@gnought (Collaborator) commented Jan 24, 2020

The memory leak obviously occurs because we keep adding listeners to the same 'connected' event, going over the default limit of 10 in https://nodejs.org/docs/latest-v12.x/api/events.html#events_emitter_setmaxlisteners_n

Currently we hook the packet event at https://github.com/moscajs/aedes/blob/master/lib/client.js#L51 and parse all pending packets once 'connected' is triggered.

I think we should drop all packets until a clientReady or client event occurs.
That means in https://github.com/moscajs/aedes/blob/master/lib/client.js#L333-L335 we should change it to

    if (client.connected)
      handle(client, packet, client._nextBatch)
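
For reference, a minimal standalone sketch of the warning being discussed: adding more than the default limit of 10 listeners for one event name makes Node emit a MaxListenersExceededWarning, which is the "memory leak" warning reported in #348.

    const EventEmitter = require('events')

    // Print the warning Node emits when the default limit of 10 listeners
    // (see emitter.setMaxListeners(n)) is exceeded for a single event name.
    process.on('warning', function (warning) {
      console.log(warning.name + ': ' + warning.message)
    })

    const client = new EventEmitter()

    // Pretend 11 packets arrive before 'connected' and each one registers its
    // own listener instead of being queued: the 11th triggers the warning.
    for (let i = 0; i < 11; i++) {
      client.once('connected', function onConnected () {})
    }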

@robertsLando (Member, Author)

@gnought @mcollina Dropping packets could be an option, but I don't know if this is spec-compliant. Should we reject packets before the client has connected successfully? Should the client wait to receive the CONNACK before sending other commands? If so, we can drop packets.

@gnought (Collaborator) commented Jan 24, 2020

Rejecting packets before the "connected" event makes sense. We could get rid of unauthenticated packets. Clients should check whether the connection is ready, like the ioredis ready event.
Users may make use of preConnect to implement their protection logic; otherwise we need to think of a way to clear those queued packets if the client is unauthenticated.

@gnought (Collaborator) commented Jan 24, 2020

Yes, we can, but the fix does not convince me.
If the broker is under attack, with long per-client delays in preConnect and high-volume traffic, all the queues keep growing and consume a lot of memory, and Node will spend extra work cleaning up all those items; there may even be message lag if Node does not process fast enough.

@mcollina (Collaborator)

I think we can error the client after 42 messages have been queued, making it configurable by the user.

@GavinDmello (Collaborator)

I think we can error the client after 42 messages have been queued, making it configurable by the user.

@mcollina Life, the universe and everything?

I'm 👍 for a configurable number of messages.
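
A rough, standalone sketch of that configurable-limit idea; the option name (connectQueueLimit), the error text, and the bare EventEmitter standing in for an aedes client are assumptions for illustration, not what the merged code does.

    const EventEmitter = require('events')

    const connectQueueLimit = 42 // the default suggested above, meant to be user-overridable
    const queue = []

    function enqueueBeforeConnack (client, packet) {
      if (queue.length >= connectQueueLimit) {
        // Instead of letting a slow preConnect grow the queue without bound,
        // error the client so the connection can be dropped.
        client.emit('error', new Error('connect queue limit reached'))
        return false
      }
      queue.push(packet)
      return true
    }

    // Usage sketch: from the 43rd packet on, the client is erred instead of the queue growing.
    const client = new EventEmitter()
    client.on('error', function (err) {
      console.log('would close client:', err.message)
    })
    for (let i = 0; i < 50; i++) {
      enqueueBeforeConnack(client, { cmd: 'publish', payload: 'packet ' + i })
    }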

@robertsLando (Member, Author)

@mcollina Should be ok now?

@gnought (Collaborator) commented Jan 24, 2020

It sounds like we still queue messages after connected.
What happens when the incoming messages arrive faster than the processing rate?

@robertsLando (Member, Author)

It sounds like we still queue messages after connected.

After the connected event, all queued packets are processed and the queue is set to null.

What happens when the incoming messages arrive faster than the processing rate?

I think this problem would be the same with or without the initial queue: if the incoming messages are faster than the processing rate we simply get a stack overflow, but that would not be caused by the queue.
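
A minimal sketch of that flush-and-null behaviour, with a stand-in handle() in place of aedes' real packet handler; the _queue property and the stub _nextBatch callback are assumptions used only to make the example self-contained.

    const EventEmitter = require('events')

    // Stand-in for the packet handler referenced earlier as handle(client, packet, done).
    function handle (client, packet, done) {
      console.log('handling', packet.cmd)
      done()
    }

    const client = new EventEmitter()
    client._queue = [] // packets received before 'connected' only get pushed here
    client._nextBatch = function () {} // stub for the batching callback quoted above

    client._queue.push({ cmd: 'subscribe' }, { cmd: 'publish' })

    client.once('connected', function () {
      const queued = client._queue
      client._queue = null // from now on incoming packets are handled directly
      for (const packet of queued) {
        handle(client, packet, client._nextBatch)
      }
    })

    client.emit('connected')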

@robertsLando (Member, Author)

@mcollina So what about this?

@mcollina (Collaborator) left a comment

Code looks good, I've left a few nits to fix.

robertsLando requested a review from mcollina on January 28, 2020 08:37
@mcollina (Collaborator) left a comment

code is ok, can you revert the linting changes to the typescript file?

@robertsLando (Member, Author)

@mcollina done

@mcollina (Collaborator) left a comment

lgtm

If you’d like to change linting, add a PR with a linter for the ts files.

@robertsLando (Member, Author)

@mcollina I don't know which IDE you are using; if everyone is using VSCode we could add a .vscode folder with all the formatters used (maybe another PR for this).

robertsLando merged commit 12725c6 into moscajs:master on Jan 28, 2020
@mcollina (Collaborator)

I prefer integrations that are validated by CI, not by the editor.

@mcollina (Collaborator)

(I use vim)

@mcollina (Collaborator)

this conflicts with master now

@robertsLando (Member, Author) commented Jan 28, 2020

@mcollina It doesn't really conflict; it just fails the Coveralls check because lib/handlers/connect.js now has reduced coverage: https://coveralls.io/builds/28372782

What should I do?

robertsLando deleted the fix#348 branch on January 28, 2020 14:30
@robertsLando (Member, Author) commented Jan 29, 2020

The coverage decreased by 0.04%, causing that red cross on CI, but all other tests are good. Should we create a separate job for coverage?

@mcollina (Collaborator) commented Jan 29, 2020 via email
