
VSCPackets should have timeout on provider #283

Closed · mpoke opened this issue Aug 23, 2022 · 8 comments · Fixed by #422
@mpoke (Contributor) commented Aug 23, 2022

Currently, VSCPackets (like any other IBC packets) have a timeout that is evaluated from the perspective of the consumer chain. As a consequence, if a consumer chain is not live, the VSCPackets sent to it are "frozen in time": they can never time out, so the unbonding operations associated with these VSCPackets can never complete.

A solution would be to add a timeout period on the provider. Basically, for every VSCPacket sent to a consumer chain, unless the corresponding MaturedVSCPacket is received back within the timeout period, the consumer chain is removed. This would also provide a bound on the maximum time an unbonding operation takes on the provider chain.
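A minimal sketch of this mechanism in plain Go (the tracker type, method names, and the removeConsumer callback are hypothetical, not the interchain-security API): record the send time of each VSCPacket, clear it when the matching MaturedVSCPacket arrives, and in EndBlock remove any consumer that still has a pending VSCPacket older than the timeout period.

```go
// Hypothetical sketch of a provider-side VSC timeout; all names are illustrative.
package providersketch

import "time"

type vscTimeoutTracker struct {
	vscTimeoutPeriod time.Duration
	// per consumer chain: vscID -> time the VSCPacket was sent
	pending map[string]map[uint64]time.Time
}

// onSendVSCPacket records the send time of a VSCPacket to chainID.
func (t *vscTimeoutTracker) onSendVSCPacket(chainID string, vscID uint64, now time.Time) {
	if t.pending[chainID] == nil {
		t.pending[chainID] = map[uint64]time.Time{}
	}
	t.pending[chainID][vscID] = now
}

// onRecvMaturedVSCPacket clears the pending entry once the consumer matures the VSC.
func (t *vscTimeoutTracker) onRecvMaturedVSCPacket(chainID string, vscID uint64) {
	delete(t.pending[chainID], vscID)
}

// endBlock removes every consumer that has had a VSCPacket pending for longer
// than the timeout period, which unblocks the associated unbonding operations.
func (t *vscTimeoutTracker) endBlock(now time.Time, removeConsumer func(chainID string)) {
	for chainID, pending := range t.pending {
		for _, sentAt := range pending {
			if now.Sub(sentAt) > t.vscTimeoutPeriod {
				removeConsumer(chainID)
				delete(t.pending, chainID)
				break
			}
		}
	}
}
```

In the actual module the pending send times would live in the provider's store and the removal would go through the existing consumer-removal logic; this sketch only shows the lifecycle of the timeout.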

This issue is related to #278

@mpoke (Contributor, Author) commented Aug 23, 2022

This would also fix the following issue:

@shaspitz (Contributor) commented

Good stuff. As discussed, the semantics of a timeout initiated on a consumer chain should be clearly distinguished from this new timeout initiated on the provider chain. The provider would (eventually) process either type of timeout.

@jtremback (Contributor) commented

I think this is a great idea and it mitigates a lot of the worries I have around eternally frozen unbondings.

@mpoke (Contributor, Author) commented Aug 31, 2022

This could also solve #278. However, having two timeouts, one for VSCPackets and one for the channel initialization, would make it easier to choose timeout values.
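A small sketch of how two separate provider-side timeout parameters might look (the parameter names and default values below are assumptions for illustration, not taken from the implementation):

```go
// Illustrative provider parameters; names and defaults are assumptions.
package providersketch

import "time"

type Params struct {
	// Max time to get the MaturedVSCPacket back after sending a VSCPacket.
	// Should comfortably exceed the consumer unbonding period plus relaying delays.
	VscTimeoutPeriod time.Duration

	// Max time for a newly added consumer to complete the CCV channel handshake.
	InitTimeoutPeriod time.Duration
}

var defaultParams = Params{
	VscTimeoutPeriod:  5 * 7 * 24 * time.Hour, // e.g. 5 weeks
	InitTimeoutPeriod: 7 * 24 * time.Hour,     // e.g. 1 week
}
```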

@danwt (Contributor) commented Oct 19, 2022

Is there a rough idea of what the timeout value will be?

@danwt (Contributor) commented Oct 19, 2022

If the timeout value is determined by the provider rather than on a chain-by-chain basis, it also strikes me as more efficient to store a single list of timeouts rather than one per chain, together with a map <chainId, greatestVSCIDMaturityReceived>. The values of that map give you the index into the list (or let you find it via binary search), and the smallest value in the map corresponds to the first entry in the list (after pruning).

I'm not sure that we need a list of all timeouts anyway; we could accept some latency in the timeliness of timing out and store less. I'd have to think more about that.

EDIT: Also, if the timeout value (duration) is provider-determined through governance, might it make more sense to store the vscid send times and add the timeout duration when needed to get the actual timeout timestamp?
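A tiny sketch of the idea in the EDIT (names are illustrative): persist only the send time per vscid and derive the deadline from the current, governance-set duration at check time, so a later parameter change also applies to packets that are already pending.

```go
// Illustrative sketch: store send times, compute deadlines on demand.
package providersketch

import "time"

type pendingVSC struct {
	vscID  uint64
	sentAt time.Time // persisted when the VSCPacket is sent; no deadline is stored
}

// timedOut derives the deadline from the current parameter value, so changing
// vscTimeoutPeriod via governance immediately affects all pending packets.
func timedOut(p pendingVSC, vscTimeoutPeriod time.Duration, now time.Time) bool {
	return now.After(p.sentAt.Add(vscTimeoutPeriod))
}
```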

@mpoke (Contributor, Author) commented Oct 20, 2022

> more efficient to store a list of timeouts once rather than per-chain

The point of this timeout is to enable unbonding operations to complete when a chain is halted, in which case the IBC timeout mechanism will not work. For security reasons, it's important to remove the halted chain from the list of consumer chains.

@danwt (Contributor) commented Oct 25, 2022

Yeah, but you can still implement that with one list of timeouts:

timeouts : [t0, t1, t2, t3, t4, ... ]

For each consumer you can store, for example, the last vscid matured; then you have a map from vscid into timeouts, or equivalent. For a given consumer you get its vscid and thus the index into timeouts, call the index i. There is some index j such that the prefix timeouts[0:j] has entirely timed out. If i < j, you stop the consumer.
In every EndBlock you prune the timed-out prefix.
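A sketch of that scheme in plain Go (again illustrative, not the implemented design): one global ascending list of timeout timestamps indexed by vscid, a per-consumer record of the greatest vscid matured so far, and an EndBlock pass that binary-searches the timed-out prefix, stops lagging consumers, and prunes it.

```go
// Illustrative sketch of the single-list timeout scheme described above.
package providersketch

import (
	"sort"
	"time"
)

type globalTimeouts struct {
	firstVscID uint64      // vscID corresponding to timeouts[0] (after pruning)
	timeouts   []time.Time // timeouts[k] is the deadline for vscID firstVscID+k, ascending
	// greatest vscID for which a MaturedVSCPacket was received, per consumer chain
	lastMatured map[string]uint64
}

// endBlock finds the timed-out prefix [0:j], stops every consumer whose first
// unmatured vscID falls inside that prefix (i < j), and prunes the prefix.
func (g *globalTimeouts) endBlock(now time.Time, stopConsumer func(chainID string)) {
	// binary search for j: the first index whose deadline is still in the future
	j := sort.Search(len(g.timeouts), func(k int) bool {
		return g.timeouts[k].After(now)
	})
	for chainID, matured := range g.lastMatured {
		// index of the first vscID this consumer has NOT matured yet
		i := int(matured+1) - int(g.firstVscID)
		if i < j {
			stopConsumer(chainID) // a packet it never matured has timed out
			delete(g.lastMatured, chainID)
		}
	}
	// prune the timed-out prefix
	g.timeouts = g.timeouts[j:]
	g.firstVscID += uint64(j)
}
```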

@mpoke mpoke moved this from In Progress to Waiting for review in Replicated Security Nov 1, 2022
Repository owner moved this from Waiting for review to Done in Replicated Security Nov 4, 2022