
tls: shutdown all contexts when terminating server #8553

Closed

Conversation

@asraa (Contributor) commented Oct 9, 2019

When a server catches an exception during initialization or is destructed, it calls terminate() to shut down the listener manager, worker threads, and cluster manager, among others. The ContextManager is also destroyed, but its destructor only removes contexts that are expired. This change adds a shutdown() method that removes all contexts.
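
A minimal sketch of the mechanism (FakeContext/FakeContextManager are stand-in names, not Envoy's actual classes; Envoy's ContextManagerImpl similarly tracks contexts through weak_ptrs and ASSERTs on expiry in its destructor):

```cpp
#include <cassert>
#include <list>
#include <memory>

// Stand-in for a TLS context tracked by the manager.
struct FakeContext {};

// Sketch of the manager: it tracks contexts via non-owning weak_ptrs, and
// its destructor expects every tracked context to be expired already.
class FakeContextManager {
public:
  void trackContext(const std::shared_ptr<FakeContext>& ctx) {
    contexts_.push_back(ctx);
  }

  // Proposed shutdown(): unconditionally drop the tracking list so the
  // manager can be destroyed during terminate() even while contexts live.
  void shutdown() { contexts_.clear(); }

  ~FakeContextManager() {
    // Previously the only cleanup path: fires if anything tracked is
    // still owned by someone when the manager goes away.
    for (const auto& ctx : contexts_) {
      assert(ctx.expired());
    }
  }

private:
  std::list<std::weak_ptr<FakeContext>> contexts_;
};
```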

Fixes OSS-Fuzz Issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=15760
Testing: Added crashing corpus

Signed-off-by: Asra Ali [email protected]

@asraa requested a review from @lizan as a code owner October 9, 2019 19:13
@lizan (Member) commented Oct 9, 2019

Where is terminate() called? It might result from an exception thrown from a destructor, which in turn was triggered by another exception. If that's the case, we need to investigate why the destructor throws exceptions.

@asraa (Contributor, Author) commented Oct 10, 2019

Yes, terminate() here was called when server initialization caught an exception. I investigated what was causing it for this testcase: it was SDS not being supported yet, because the testcase's config uses session_ticket_keys_sds_secret_config.

Perhaps I should only call the shutdown() method when there's an expected/known exception in the config?

@lizan (Member) commented Oct 10, 2019

@asraa no, calling shutdown() isn't bad in itself, but since contexts_ is a list of weak_ptrs, clearing it in shutdown() doesn't prevent any leak; it just works around the destructor ASSERT. The owning shared_ptr is still alive somewhere.
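
A self-contained illustration of that point (generic names, not Envoy code): clearing a weak_ptr list only drops the manager's bookkeeping; the object stays alive until its owning shared_ptr releases it.

```cpp
#include <iostream>
#include <list>
#include <memory>

struct Context {
  ~Context() { std::cout << "context destroyed\n"; }
};

int main() {
  // The "owned shared_ptr live somewhere", e.g. a transport socket factory.
  auto owner = std::make_shared<Context>();

  // The manager's view: a list of non-owning weak_ptrs.
  std::list<std::weak_ptr<Context>> contexts{owner};

  // Analogous to shutdown(): the tracking list is emptied...
  contexts.clear();

  // ...but the context itself is untouched; this prints "use_count = 1".
  std::cout << "use_count = " << owner.use_count() << "\n";

  return 0; // only now does ~Context run, when the owner releases it
}
```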

@asraa (Contributor, Author) commented Oct 11, 2019

Oh, gotcha. I traced down where that context was coming from: the sslContextManager() was holding a ClientContext created by the cluster. It was made in the call to createTransportSocketFactory when initializing a ClusterImplBase.

It seems like I should handle this in ClusterManager shutdown. Is that right?
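
To make that ownership chain concrete, a hedged sketch with stand-in types (the real ClientContext, context manager, and socket factory are Envoy classes; the bodies here are illustrative only):

```cpp
#include <list>
#include <memory>

struct ClientContext {};

class ContextManager {
public:
  std::shared_ptr<ClientContext> createSslClientContext() {
    auto ctx = std::make_shared<ClientContext>();
    contexts_.push_back(ctx); // the manager tracks, but does not own
    return ctx;
  }

private:
  std::list<std::weak_ptr<ClientContext>> contexts_;
};

// Built during ClusterImplBase initialization via the call to
// createTransportSocketFactory; this is where the owning reference lives.
class ClientSslSocketFactory {
public:
  explicit ClientSslSocketFactory(ContextManager& manager)
      : ssl_ctx_(manager.createSslClientContext()) {}

private:
  std::shared_ptr<ClientContext> ssl_ctx_; // the owning shared_ptr
};

int main() {
  ContextManager manager;
  {
    ClientSslSocketFactory factory{manager}; // cluster init creates the context
  } // destroying the factory (i.e. the cluster) releases it; the entry expires
  // Only after this can the manager be destroyed with all entries expired,
  // which is why the teardown would belong in ClusterManager shutdown.
}
```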

stale bot commented Oct 18, 2019

This pull request has been automatically marked as stale because it has not had activity in the last 7 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

stale bot added the stale label Oct 18, 2019
stale bot commented Oct 25, 2019

This pull request has been automatically closed because it has not had activity in the last 14 days. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
