Recently I started observing a bug where an attempt to close Tribler does not lead to a proper shutdown. Instead, a "Force shutdown" button appears after some delay, and clicking it terminates the Core process without any finalization. As a result, the `triblerd.lock` file remains undeleted, and the internal libtorrent state remains unsaved. Some Tribler users have experienced this bug as well.
It was not clear to me what caused the force shutdown, and I suspected our latest refactorings. Because of this, I hesitated to publish a new release, as the reason for the bug could be something very nasty.
Now I was finally able to track down the root of the problem. I added logging of REST API request cancellations, and it turns out that some requests are canceled quite often, mainly requests to the `/downloads?get_pieces=1` endpoint. The shutdown request often gets canceled as well.
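For illustration, here is a minimal sketch of the kind of cancellation logging that makes such a pattern visible; the `Request` and `RequestManager` classes are hypothetical stand-ins, not Tribler's actual request manager API:

```python
import logging

logger = logging.getLogger("request_manager")


class Request:
    """Hypothetical stand-in for one queued REST API request."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint
        self.canceled = False

    def cancel(self):
        self.canceled = True
        # Logging every cancellation is what reveals the pattern:
        # requests to /downloads?get_pieces=1 are dropped often,
        # and sometimes the shutdown request is dropped too.
        logger.warning("Request canceled: %s", self.endpoint)


class RequestManager:
    """Hypothetical stand-in for the GUI's request manager."""

    def __init__(self):
        self.pending: list[Request] = []

    def clear(self):
        # Cancel every request that has not completed yet.
        for request in self.pending:
            request.cancel()
        self.pending.clear()
```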
I fixed the problem by re-sending the shutdown request if the previous one was canceled.
But why was the shutdown request canceled in the first place? Initially, I suspected that the REST API could become overloaded with queries from the GUI, which might lead to dropped requests. But the real reason turned out to be a race condition in the `TriblerWindow.close_tribler()` method:
```python
def close_tribler(self, checked=False):
    ...
    self.core_manager.stop()  # <-- The shutdown request is created here
    self.downloads_page.stop_loading_downloads()
    request_manager.clear()  # <-- all requests to the REST API are canceled here
```
In the `close_tribler` method, a shutdown request is created, and almost immediately afterwards, all requests to the Core are canceled. If the shutdown request is lucky and fast enough, it can reach the Core before it is canceled, and in that case we see a proper shutdown. But if Tribler is downloading several torrents and already has outgoing requests in flight, the shutdown request is canceled before the Tribler GUI can send it.
Adding re-sending logic for the canceled shutdown request fixed the problem.
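As a sketch of the re-sending idea (again with hypothetical names that do not match Tribler's actual code), the shutdown request can register a cancel callback that simply issues a fresh request:

```python
class RequestManager:
    """Hypothetical stand-in; names do not match Tribler's actual code."""

    def __init__(self, transport):
        self._transport = transport  # callable performing the real HTTP call
        self._pending = []

    def send(self, endpoint, on_cancel=None):
        self._pending.append((endpoint, on_cancel))
        self._transport(endpoint)

    def clear(self):
        # Cancel everything still pending, firing each cancel callback.
        pending, self._pending = self._pending, []
        for _endpoint, on_cancel in pending:
            if on_cancel is not None:
                on_cancel()


def send_shutdown(manager):
    # If the shutdown request is canceled (e.g. by the blanket clear()
    # in close_tribler), immediately send a fresh one so the Core can
    # still finalize, save the libtorrent state, and delete triblerd.lock.
    manager.send("/shutdown", on_cancel=lambda: send_shutdown(manager))
```

In this sketch, `clear()` swaps out the pending list before firing the callbacks, so the re-sent `/shutdown` request survives the blanket cancellation and eventually reaches the Core.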