
Distinguish different CLA instances during restart #41

Merged
merged 1 commit into dtn7:master from cla_instances on Jun 3, 2021

Conversation

CryptoCopter

There may be a situation where multiple PeerDisappeared messages for the same CLA are waiting to be processed.
This might cause the manager to restart the same CLA multiple times in quick succession.
To prevent this, we can check whether the pointer to the CLA from the PeerDisappeared message points to the same instance as the one currently stored in the manager's `convs` map.
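To illustrate the idea, here is a minimal Go sketch of that pointer comparison. The `Manager` and `Convergence` type names, their fields, and the locking are assumptions for illustration; only the `convs` map and the handling of PeerDisappeared messages come from this PR.

```go
// Minimal sketch of the pointer comparison described above. Type and field
// names other than the convs map are assumptions, not dtn7-go's actual API.
package cla

import "sync"

// Convergence stands in for a single CLA instance managed by the Manager.
type Convergence struct {
	address string
}

// Manager keeps the currently active CLA instance per address in convs.
type Manager struct {
	mu    sync.Mutex
	convs map[string]*Convergence
}

// handlePeerDisappeared restarts a CLA only if the message still refers to
// the instance currently stored in convs. Stale PeerDisappeared messages
// for an already replaced instance are dropped, so a CLA is restarted at
// most once per actual disappearance.
func (m *Manager) handlePeerDisappeared(sender *Convergence) {
	m.mu.Lock()
	defer m.mu.Unlock()

	current, ok := m.convs[sender.address]
	if !ok || current != sender {
		// The CLA was removed or already restarted; the pointer no longer
		// matches the stored instance, so do not restart it again.
		return
	}

	// The pointer matches the stored instance: restart exactly once by
	// replacing it with a fresh instance.
	m.convs[sender.address] = &Convergence{address: sender.address}
}
```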


@oxzi left a comment


LGTM.

I tried to trigger this behavior with dtn-tool's ping, but with no success. Thus, I am looking forward to the outcome of the larger experiments.

@oxzi oxzi merged commit ce8acbb into dtn7:master Jun 3, 2021
@CryptoCopter CryptoCopter deleted the cla_instances branch June 3, 2021 12:21

adur1990 commented Jun 7, 2021

So, I ran our large setup, without success. Or rather, with success: this case is never triggered, i.e., the pointers to the CLAs are always the same, meaning no CLAs are accidentally torn down. I don't know what to do with this now. I guess this PR doesn't make things worse?


oxzi commented Jun 7, 2021 via email

adur1990 pushed a commit that referenced this pull request Aug 16, 2021
As can be seen in recent issues and PRs (#39, #40, #41, #42, #43, #45), there
is an issue in dtn7-go that results in a deadlock after some time, making it
impossible to send bundles. To be able to test and debug this issue, some more
extensive tests are required. Thus, we developed a test infrastructure to
simulate multiple virtual nodes using the
[CORE emulator](https://github.com/coreemu/core).
This test environment is located in the
[dtn7-playground](https://github.com/dtn7/dtn7-playground) repository. To be
able to trigger these tests, this PR adds a workflow file that uses the
[repository_dispatch event](https://docs.github.com/en/actions/reference/events-that-trigger-workflows#repository_dispatch)
sent to the dtn7-playground repo.
oxzi pushed a commit that referenced this pull request Aug 16, 2021
* Workflow for extended tests

As can be seen in recent issues and PRs (#39, #40, #41, #42, #43, #45), there
is an issue in dtn7-go that results in a deadlock after some time, making it
impossible to send bundles. To be able to test and debug this issue, some more
extensive tests are required. Thus, we developed a test infrastructure to
simulate multiple virtual nodes using the
[CORE emulator](https://github.com/coreemu/core).
This test environment is located in the
[dtn7-playground](https://github.com/dtn7/dtn7-playground) repository. To be
able to trigger these tests, this PR adds a workflow file that uses the
[repository_dispatch event](https://docs.github.com/en/actions/reference/events-that-trigger-workflows#repository_dispatch)
sent to the dtn7-playground repo.

* Fix copyright header

Co-authored-by: Artur Sterz <[email protected]>
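For context on the repository_dispatch mechanism the commit message refers to: the referenced PR adds a GitHub Actions workflow file, but the underlying REST call it relies on can be sketched in Go. This is only a rough illustration; the event type name and the GITHUB_TOKEN environment variable below are assumptions, not the actual workflow's configuration.

```go
// Sketch of triggering workflows in another repository via GitHub's
// repository_dispatch REST endpoint. The event type and token handling are
// assumptions for illustration.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// POST /repos/{owner}/{repo}/dispatches fires a repository_dispatch
	// event in the target repository, which its workflows can listen for.
	url := "https://api.github.com/repos/dtn7/dtn7-playground/dispatches"
	body := bytes.NewBufferString(`{"event_type": "dtn7-go-extended-tests"}`)

	req, err := http.NewRequest(http.MethodPost, url, body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// GitHub answers with 204 No Content when the dispatch is accepted.
	fmt.Println("dispatch status:", resp.Status)
}
```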