
Unreachable MissingLocalConsensusState error in verify_consensus_proof #33

Closed · vitorenesduarte opened this issue Feb 22, 2021 · 4 comments · Fixed by #333

Labels: A: bug (Admin: something isn't working), A: good-first-issue (Admin: good for newcomers)

Comments

vitorenesduarte commented Feb 22, 2021

Crate

Summary of Bug

Summary by @adizere in informalsystems/hermes#686:

In the conn open try & ack handlers there are two verifications on the field msg.proofs.consensus_proof().height, as follows:

  1. First, we verify for staleness with check_client_consensus_height, i.e., verify that this height is not older than the oldest height of the host (local) chain, nor newer than the current height of the host chain.
  2. Second, we verify that the consensus state for this height exists on the local chain, as part of verify_consensus_proof.

I think the first verification is more of a fail-fast, while the second is a pre-check that is part of the whole ensemble of proof verification.
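A minimal sketch of the two checks described above, using simplified stand-in types; the names check_client_consensus_height, verify_consensus_proof, host_oldest_height, host_current_height, and MissingLocalConsensusState follow the issue's wording, but the actual ibc-rs signatures differ:

```rust
// Sketch only: simplified stand-ins for the ibc-rs types, meant to show why
// the MissingLocalConsensusState branch looks unreachable once the range
// check has already passed.
type Height = u64;

#[derive(Debug)]
enum Error {
    StaleConsensusHeight(Height),
    InvalidConsensusHeight(Height),
    MissingLocalConsensusState(Height),
}

// Hypothetical view of the host (local) chain used by the handlers.
trait HostView {
    fn host_oldest_height(&self) -> Height;
    fn host_current_height(&self) -> Height;
    fn host_consensus_state(&self, height: Height) -> Option<Vec<u8>>;
}

// Check 1: fail fast if the claimed proof height falls outside the host's
// retained range of heights.
fn check_client_consensus_height(ctx: &dyn HostView, claimed: Height) -> Result<(), Error> {
    if claimed < ctx.host_oldest_height() {
        return Err(Error::StaleConsensusHeight(claimed));
    }
    if claimed > ctx.host_current_height() {
        return Err(Error::InvalidConsensusHeight(claimed));
    }
    Ok(())
}

// Check 2: the consensus state at the claimed height must exist locally
// before the membership proof can be verified against it. If check 1
// succeeded and nothing was pruned, this lookup "cannot" fail.
fn verify_consensus_proof(ctx: &dyn HostView, claimed: Height) -> Result<(), Error> {
    let _consensus_state = ctx
        .host_consensus_state(claimed)
        .ok_or(Error::MissingLocalConsensusState(claimed))?;
    // ...actual proof verification against the retrieved consensus state...
    Ok(())
}
```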

Version

02aef9c

Steps to Reproduce


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate milestone (priority) applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned
adizere (Contributor) commented Feb 25, 2022

The test case for exercising this error is still disabled. I think it's a difficult one to reproduce, so we'll keep this issue open until we investigate further how to exercise the error.

AlianBenabdallah commented

I am not sure what exactly ctx.host_oldest_height() returns. However, ABCI has a retain_height parameter (see the specs and documentation).

Relying on this parameter should guarantee that every block with a height greater than retain_height is stored by every validator. But there are still some edge cases:

  • State Sync bootstraps a new node by syncing state machine snapshots at a given height, but not historical blocks and associated data.
  • Validators could decide to prune this data. From the documentation:

Use RetainHeight with caution! If all nodes in the network remove historical blocks then this data is permanently lost, and no new nodes will be able to join the network and bootstrap. Historical blocks may also be required for other purposes, e.g. auditing, replay of non-persisted heights, light client verification, and so on.

It might be possible for any validator to arbitrarily remove a historical block with a height greater than retain_height without being punished. This could happen in case of failure, misbehavior, or possibly many other reasons.

If it's possible for validators, then it should also be the case for full nodes.
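A toy illustration of the State Sync edge case above (the in-memory store and the heights are made up for illustration; this is not the CometBFT or ibc-rs API): a node bootstrapped from a snapshot only holds data from the snapshot height onward, so a consensus-state lookup below that height can fail even though the height is well above retain_height.

```rust
use std::collections::BTreeMap;

type Height = u64;

// Hypothetical host store: consensus states keyed by height.
struct HostStore {
    consensus_states: BTreeMap<Height, String>,
}

impl HostStore {
    // Model a node bootstrapped via State Sync: it only has data from the
    // snapshot height up to the current height.
    fn state_synced(snapshot_height: Height, current_height: Height) -> Self {
        let consensus_states = (snapshot_height..=current_height)
            .map(|h| (h, format!("consensus-state-{h}")))
            .collect();
        Self { consensus_states }
    }

    fn host_consensus_state(&self, height: Height) -> Option<&String> {
        self.consensus_states.get(&height)
    }
}

fn main() {
    // Network-wide retain_height is (hypothetically) 500, but this node was
    // state-synced at height 900.
    let node = HostStore::state_synced(900, 1_000);

    // Height 700 is above retain_height, yet this node cannot serve it:
    // exactly the kind of gap that could make MissingLocalConsensusState
    // reachable in practice.
    assert!(node.host_consensus_state(700).is_none());
    assert!(node.host_consensus_state(950).is_some());
}
```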

hu55a1n1 transferred this issue from informalsystems/hermes on Sep 29, 2022
hu55a1n1 added the A: bug (Admin: something isn't working) label on Oct 6, 2022
plafer (Contributor) commented Nov 14, 2022

I think this is now resolved since we no longer have the host_oldest_height check.

We should also remove the outdated commented-out test.

Farhad-Shabani added this to the "Fix known bugs and issues" milestone on Feb 3, 2023
shuoer86 pushed a commit to shuoer86/ibc-rs that referenced this issue on Nov 4, 2023: "Use cosmos plugin to generate proof proto files for compatibility with SDK"
7 participants: @adizere, @vitorenesduarte, @plafer, @hu55a1n1, @Farhad-Shabani, @AlianBenabdallah, and others