Issues with Light Client verification for relayer v0 MVP #410
Labels
- I: CLI (Internal: related to the relayer's CLI)
- I: documentation (Internal: improvements or additions to documentation)
- I: logic (Internal: related to the relaying logic)
- O: code-hygiene (Objective: cause to improve code hygiene)
Crate
relayer
Summary of Bug
There are a few issues to solve after #363:

1. (fixed in tendermint-rs#709: Minimal implementation of backward verification for IBC relayer) Multiple on-chain clients may be at different heights, say `h1` and `h2` with `h1 < h2`. Assume the relayer's local client is at `h2`. Currently, if we want to update the on-chain client that is at `h1`, we call `verify_to_target(h1)`, and this returns an error because `h1` is smaller than the latest trusted height (`h2`). A sketch of this failure mode follows the list.
2. (#501: Cannot run relayer commands concurrently) If the light clients are configured to use the same store path, running two relayer processes concurrently to create clients on different chains fails with:

   ```
   error: client create failed: tx error: Light client error: IO error: could not acquire lock on "data/DEEB0AB3F2D7BEC0B8C9FF1532715635314F54D5/db": Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" }
   ```

   See the store-path sketch after this list.
3. (fixed in tendermint-rs#775: Fix formatting of tendermint::Time values) Once in a while, `update client` and other CLIs fail with an error; most of the time the command succeeds on the second attempt. An illustration of the underlying formatting issue follows the list.
4. (fixed in tendermint-rs#706: Specify proposer when constructing validator set in light client) A LightBlock's `ValidatorSet` must have the proposer and voting power set. This is temporarily fixed in `cosmos.rs:fix_validator_set()`; see the sketch after this list.
5. (#428: Allow overriding peer_id, height and hash in `light add` command; #431: Add option to `light rm` command to remove all peers) It is difficult to manage the light client section of the configuration file. This is especially true during development: we restart the chains often, and the config file keeps the old configuration around (unless we clean it up manually), which may conflict with the new one. In addition, we need to add witnesses manually.
6. There seems to be leakage between the global `rpc_addr` and the `address` in the client sections, with some queries using the client address.
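For (1), a minimal sketch of the failure mode, using simplified stand-in types rather than the tendermint-rs API: forward-only verification rejects any target below the highest trusted height, which is exactly the situation when a counterparty's on-chain client lags behind the relayer's local client.

```rust
// Simplified stand-ins, not the tendermint-rs types: forward-only
// verification (pre tendermint-rs#709) cannot serve a target height
// below the highest trusted height.
#[derive(Debug)]
enum LightClientError {
    TargetLowerThanTrusted { target: u64, trusted: u64 },
}

struct State {
    highest_trusted: u64,
}

impl State {
    fn verify_to_target(&mut self, target: u64) -> Result<u64, LightClientError> {
        if target < self.highest_trusted {
            return Err(LightClientError::TargetLowerThanTrusted {
                target,
                trusted: self.highest_trusted,
            });
        }
        self.highest_trusted = target; // bisection up to `target` elided
        Ok(target)
    }
}

fn main() {
    let mut state = State { highest_trusted: 120 }; // local client at h2 = 120
    // Updating the on-chain client stuck at h1 = 80 < h2 fails:
    assert!(state.verify_to_target(80).is_err());
    // tendermint-rs#709 adds backward verification (hash-chaining from a
    // trusted header down to the target) to handle exactly this case.
}
```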
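For (2), the contention comes from two processes opening the same on-disk light store; a sketch of the obvious mitigation is to derive a distinct store path per chain. The helper and directory layout below are illustrative assumptions, not the relayer's actual config handling.

```rust
use std::path::PathBuf;

// Hypothetical helper: one store directory per chain, so concurrent
// relayer processes never try to lock the same database.
fn light_store_path(base: &str, chain_id: &str) -> PathBuf {
    PathBuf::from(base).join(chain_id).join("db")
}

fn main() {
    let a = light_store_path("data", "chain-a");
    let b = light_store_path("data", "chain-b");
    assert_ne!(a, b); // distinct paths, no cross-process lock contention
    println!("{} vs {}", a.display(), b.display());
}
```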
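For (3), a hedged illustration of the class of bug addressed by tendermint-rs#775: the same instant can have several RFC 3339 encodings, so code that compares or re-parses formatted `Time` values can fail intermittently depending on the fractional-second digits. The snippet uses `chrono` for brevity; the actual fix lives in tendermint-rs itself.

```rust
use chrono::{DateTime, SecondsFormat, Utc};

fn main() {
    let t: DateTime<Utc> = "2020-11-03T12:34:56.123000Z".parse().unwrap();
    // Two valid RFC 3339 encodings of the same instant:
    let trimmed = t.to_rfc3339_opts(SecondsFormat::AutoSi, true); // ...56.123Z
    let padded = t.to_rfc3339_opts(SecondsFormat::Nanos, true);   // ...56.123000000Z
    assert_ne!(trimmed, padded); // byte-level mismatch despite equal instants
    println!("{trimmed} vs {padded}");
}
```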
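For (4), a sketch of what a `fix_validator_set()`-style shim must ensure before verification: the total voting power is computed and the proposer is identified. The types and the lookup-by-address below are simplified assumptions, not the actual `cosmos.rs` implementation.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Validator {
    address: [u8; 20],
    voting_power: u64,
}

#[derive(Debug)]
struct ValidatorSet {
    validators: Vec<Validator>,
    proposer: Option<Validator>,
    total_voting_power: u64,
}

// Hypothetical shim: sort validators canonically, compute the total
// voting power, and mark the proposer (looked up here by the address
// taken from the header).
fn fix_validator_set(mut vals: Vec<Validator>, proposer_addr: [u8; 20]) -> ValidatorSet {
    // Tendermint's canonical order: descending power, then ascending address.
    vals.sort_by(|a, b| {
        b.voting_power
            .cmp(&a.voting_power)
            .then(a.address.cmp(&b.address))
    });
    let proposer = vals.iter().find(|v| v.address == proposer_addr).cloned();
    let total_voting_power = vals.iter().map(|v| v.voting_power).sum();
    ValidatorSet { validators: vals, proposer, total_voting_power }
}

fn main() {
    let v1 = Validator { address: [1; 20], voting_power: 10 };
    let v2 = Validator { address: [2; 20], voting_power: 30 };
    let set = fix_validator_set(vec![v1, v2.clone()], [2; 20]);
    assert_eq!(set.total_voting_power, 40);
    assert_eq!(set.proposer, Some(v2));
}
```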