
reply_channel_range is now too strict. #804

Closed · rustyrussell opened this issue Oct 14, 2020 · 2 comments · Fixed by #826

Comments

@rustyrussell (Collaborator) commented:
We insist that block numbers increase, e.g. replies should be (first_block=100, num_blocks=10), (first_block=110, num_blocks=10).

This requires implementations to split on block boundaries, and it also has the (previously raised but currently purely theoretical) problem that a single block with more than ~8k channels would be unsendable without the compressed encoding: a message payload is capped at 65535 bytes and each uncompressed short_channel_id takes 8 bytes.

Moreover, lnd simply packs channel ids in up to the limit, so it can repeat a block in the edge case, and it allows this explicitly:

syncer.go:736

		// If we've previously received a reply for this query, look at
		// its last block to ensure the current reply properly follows
		// it.
		if g.prevReplyChannelRange != nil {
			prevReply := g.prevReplyChannelRange
			prevReplyLastHeight := prevReply.LastBlockHeight()

			// The current reply can either start from the previous
			// reply's last block, if there are still more channels
			// for the same block, or the block after.
			if msg.FirstBlockHeight != prevReplyLastHeight &&
				msg.FirstBlockHeight != prevReplyLastHeight+1 {

				return fmt.Errorf("first block of reply %v "+
					"does not continue from last block of "+
					"previous %v", msg.FirstBlockHeight,
					prevReplyLastHeight)
			}
		}

So I think we should allow this: in particular, c-lightning will probably start doing the same thing, packing up to 8k descriptors into a reply chunk, then compressing. (We currently do a binary split when a block doesn't fit, which is pretty dumb).
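
A minimal sketch of that packing strategy (the names and the exact 8000 limit here are illustrative assumptions, not c-lightning's actual implementation):

	// Sketch: pack short_channel_ids into reply chunks of at most
	// maxSCIDsPerReply, letting one block's channels spill across chunk
	// boundaries instead of splitting on block boundaries.
	const maxSCIDsPerReply = 8000 // roughly 65535 bytes / 8 bytes per uncompressed SCID

	func packReplies(scids []uint64) [][]uint64 {
		var chunks [][]uint64
		for len(scids) > 0 {
			n := len(scids)
			if n > maxSCIDsPerReply {
				n = maxSCIDsPerReply
			}
			chunks = append(chunks, scids[:n])
			scids = scids[n:]
		}
		return chunks
	}

Each full chunk sits at the uncompressed size limit, which is exactly when zlib-compressing the encoded ids pays off.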

@rustyrussell (Collaborator, Author) commented:

... except this introduces an ambiguity :(

If you split the final queried block, the receiver will think that the reply is complete, then you send another reply... For example, if the query covers blocks 100-109 and block 109's channels are split across two replies, the first of those replies already reaches the end of the queried range, so the receiver concludes the sync is done before the second one arrives.

@t-bast (Collaborator) commented on Oct 14, 2020:

> If you split the final queried block, the receiver will think that the reply is complete. Then you send another reply...

Yes, IIRC that's why we made it strict: to include the whole block in one message...
The other option was to be less strict and allow block results to be scattered across multiple messages, and add a new bool field that explicitly says "sync is complete", but we didn't go down that route (though I think we could revisit that decision).
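
A rough sketch of that alternative on the receiver side (all type and field names here are hypothetical, not taken from the BOLTs or any implementation):

	import "fmt"

	// Hypothetical message shape: ranges may overlap on block boundaries,
	// and an explicit flag, rather than range arithmetic, signals the end.
	type replyChannelRange struct {
		firstBlock   uint32
		numBlocks    uint32
		syncComplete bool // "no more replies follow for this query"
		scids        []uint64
	}

	type gossipSyncer struct {
		prevReply *replyChannelRange
	}

	// processReply validates continuity between replies and reports
	// completion only when the sender explicitly says so.
	func (g *gossipSyncer) processReply(msg *replyChannelRange) (bool, error) {
		if prev := g.prevReply; prev != nil {
			last := prev.firstBlock + prev.numBlocks - 1
			// The new reply may start on the previous reply's last block
			// (that block was split across messages) or the block after it.
			if msg.firstBlock != last && msg.firstBlock != last+1 {
				return false, fmt.Errorf("reply at block %d does not follow block %d",
					msg.firstBlock, last)
			}
		}
		g.prevReply = msg
		return msg.syncComplete, nil
	}

With an explicit flag, a split final block is no longer ambiguous: the receiver simply keeps reading until syncComplete is set.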

t-bast added a commit that referenced this issue on Dec 14, 2020:

We previously insisted that `reply_channel_range` messages were not
overlapping: a block's content could not be split across multiple messages.

This made it possible to implicitly figure out when sync was complete, so we
re-purposed the previous `complete` field to a `full_information` field.

We now revert that change to allow blocks to be split across multiple
messages. An explicit flag is thus needed to signal that sync is complete.

Fixes #804
t-bast added a commit that referenced this issue on Feb 15, 2021, with the same commit message.