
Remove or expand buffer limits on RequestManager and ResponseManager queues #249

Open
hannahhoward opened this issue Oct 21, 2021 · 2 comments
Labels: effort/hours (Estimated to take one or several hours), exp/intermediate (Prior experience is likely helpful), need/analysis (Needs further analysis before proceeding), P3 (Low: Not priority right now)

Comments

hannahhoward (Collaborator) commented Oct 21, 2021

What

The basic architecture of the request and response manager is as follows: all public methods put messages into a buffered channel that is processed sequentially by an internal thread. (see https://github.com/ipfs/go-graphsync/blob/main/docs/architecture.md#actor-pattern-in-requestmanager-and-responsemanager for an explanation)
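
Roughly, the pattern looks like this (a minimal sketch with illustrative names, not the actual go-graphsync types):

```go
package actor

// message is anything the internal thread knows how to process.
type message interface {
	handle(m *manager)
}

// manager illustrates the actor pattern: public methods enqueue onto a
// buffered channel, and a single internal goroutine drains it in order.
type manager struct {
	messages chan message
}

func newManager() *manager {
	m := &manager{messages: make(chan message, 16)} // fixed buffer size
	go m.run()
	return m
}

// send is what every public method boils down to; once the buffer is
// full, the caller blocks -- which is where the deadlocks come from.
func (m *manager) send(msg message) {
	m.messages <- msg
}

// run is the single internal thread processing messages sequentially.
func (m *manager) run() {
	for msg := range m.messages {
		msg.handle(m)
	}
}
```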

We've now seen multiple deadlocks from these channels reaching their buffer limits. Moreover, the problem is usually that the internal thread's own message processing indirectly gets stuck trying to send more messages onto the same queue.

I think we should remove the limits on these queues for now. We can use well-trodden architectures for single-consumer thread-safe queues built on sync.Mutex and sync.Cond.Wait -- see https://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html, https://dev.to/narasimha1997/crafting-a-concurrent-queue-in-golang-2n, and the notifications system in this repo: https://github.com/ipfs/go-graphsync/blob/main/notifications/publisher.go#L214-L235

If we need to set an upper limit, it's not hard to add.
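
Something along these lines would do it (a sketch of the condition-variable approach from the links above, not the existing publisher.go code):

```go
package queue

import "sync"

// Queue is an unbounded FIFO with a single consumer, built on
// sync.Mutex and sync.Cond as described above.
type Queue struct {
	mu     sync.Mutex
	cond   *sync.Cond
	items  []interface{}
	closed bool
}

func New() *Queue {
	q := &Queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

// Push never blocks the producer; the backing slice simply grows.
// An upper limit could be enforced here later if we ever need one.
func (q *Queue) Push(item interface{}) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
	q.cond.Signal()
}

// Pop blocks until an item is available or the queue is closed.
// The second return value is false once the queue is closed and drained.
func (q *Queue) Pop() (interface{}, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 && !q.closed {
		q.cond.Wait()
	}
	if len(q.items) == 0 {
		return nil, false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

// Close wakes any waiting consumer so it can exit cleanly.
func (q *Queue) Close() {
	q.mu.Lock()
	q.closed = true
	q.mu.Unlock()
	q.cond.Broadcast()
}
```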

hannahhoward added the need/triage (Needs initial labeling and prioritization), effort/hours (Estimated to take one or several hours), exp/intermediate (Prior experience is likely helpful), P1 (High: Likely tackled by core team if no one steps up), and need/analysis (Needs further analysis before proceeding) labels and removed the need/triage (Needs initial labeling and prioritization) label on Oct 21, 2021
rvagg (Member) commented Oct 22, 2021

> we move from a global queue to per request message queues, with a single go-routine per request processing messages

To be clear on this - you're suggesting a single go routine for a request (same as it is now), but a queue per message type (the structs in requestmanager/messages.go & responsemanager/messages.go)? Is this going to be a problem for prioritisation of messages or are we going to need to resort to timestamps or some kind of monotonic counter to determine what to do next? This is normally where the multiple-producer + single-consumer pattern becomes a problem and you need to resort to various prioritisation strategies to pick what should be done next.
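
For reference, the monotonic-counter idea would look roughly like this (a hypothetical sketch, not tied to any types in this repo):

```go
package order

import "sync/atomic"

// seq is a process-wide monotonic counter used to stamp messages at
// enqueue time, so a consumer draining several per-type queues can
// still pick the oldest message next.
var seq uint64

type stamped struct {
	seq uint64
	msg interface{}
}

func stamp(msg interface{}) stamped {
	return stamped{seq: atomic.AddUint64(&seq, 1), msg: msg}
}

// The single consumer would peek at the head of each per-type queue and
// handle the stamped message with the smallest seq value first.
```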

hannahhoward (Collaborator, Author) commented Oct 26, 2021

@rvagg wow, that got added to this issue accidentally -- it's more clearly explained as a separate, longer-term approach in #251

hannahhoward added the P2 (Medium: Good to have, but can wait until someone steps up) and P3 (Low: Not priority right now) labels and removed the P1 (High: Likely tackled by core team if no one steps up) and P2 (Medium: Good to have, but can wait until someone steps up) labels on Oct 26, 2021
marten-seemann pushed a commit that referenced this issue Mar 2, 2023