
tapchannelmsg: increase channel data reader max record size #959

Merged
1 commit merged into main from channel-cr-read-overflow on Jun 20, 2024

Conversation

@ffranr ffranr (Contributor) commented Jun 19, 2024

Testing indicated that, when using large but valid asset amounts, the channel custom records buffer size was greater than tlv.MaxRecordSize.

This commit introduces a new, larger constant that will be used as the channel data max record size.

Addresses bug reported here: #942 (comment)

What is the correct value for this new constant?

NOTE: I haven't finished investigating why the record takes up more memory.

@ffranr ffranr added this to the v0.4 milestone Jun 19, 2024
@ffranr ffranr self-assigned this Jun 19, 2024
@@ -14,10 +14,14 @@ import (
 	"google.golang.org/protobuf/proto"
 )
 
+// ChannelDataMaxRecordSize is the maximum size of the custom channel data that
+// can be read from a reader.
+const ChannelDataMaxRecordSize = tlv.MaxRecordSize * 2
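For illustration, here is a minimal sketch of how a cap like ChannelDataMaxRecordSize could be enforced when decoding from a reader. The helper name is hypothetical and this is not the function changed in this PR:

```go
package main

import (
	"fmt"
	"io"
)

// readCappedRecord is an illustrative, hypothetical helper showing how a
// size cap such as ChannelDataMaxRecordSize could be enforced when reading
// channel data from a reader. It is not the repository's actual code.
func readCappedRecord(r io.Reader, maxSize uint64) ([]byte, error) {
	// Read at most maxSize+1 bytes so an oversized record is detectable.
	limited := io.LimitReader(r, int64(maxSize)+1)

	data, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}

	if uint64(len(data)) > maxSize {
		return nil, fmt.Errorf("record size %d exceeds max %d",
			len(data), maxSize)
	}

	return data, nil
}
```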
A Member commented:

Have we been able to validate that this resolves the issue?

The other thing to consider here is that without adding any proof-level deduplication, this'll expand along with the number of active HTLCs. So perhaps it should be a multiplier based on the current max proof suffix size. We know the max number of HTLCs in either direction, so we can base it off that.

Also, unlike other contexts, we're the ones creating this value, so it's a bit more trusted.
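A rough sketch of the kind of bound suggested here; the proof-suffix size below is a placeholder assumption, and only the 483-per-direction HTLC limit is a protocol constant:

```go
// Illustrative only: a bound derived from the worst case described above.
const (
	// assumedMaxProofSuffixSize is a hypothetical upper bound on a single
	// encoded proof suffix carried in the channel's custom data.
	assumedMaxProofSuffixSize = 100_000

	// maxHTLCsPerDirection is the BOLT #2 limit on HTLCs that can be
	// offered in one direction of a channel.
	maxHTLCsPerDirection = 483

	// maxChannelDataSize assumes, in the worst case, one proof suffix per
	// active HTLC in both directions.
	maxChannelDataSize = assumedMaxProofSuffixSize * maxHTLCsPerDirection * 2
)
```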

ffranr (Contributor, Author) replied:

The data is already in memory. I've pushed up a new commit which adds a "max read size" argument which we can set based on the data that's already in memory.

I've tested this with the litd itest and it solves the max read size issue.

I've also created a unit test on this branch so that we can play around with the custom channel data which exceeded the previous max read limit:
2a90bb3

I didn't see anything particularly interesting in that custom channel data example. Its commitment portion is just bigger than the default TLV record size.

This commit introduces a `max read size` argument to the
`ReadOpenChannel` and `ReadCommitment` functions.

Testing showed that with large but valid asset amounts, the channel
custom records buffer size exceeded `tlv.MaxRecordSize`. To address
this, we have two options: increase the max record size for channel data
or dynamically set the max read size. Given that the data is already in
memory, the latter approach is more robust.
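As a rough sketch of that approach (the ReadOpenChannel signature, return type, and import path are assumed here, not copied verbatim from the repository):

```go
package main

import (
	"bytes"

	"github.com/lightninglabs/taproot-assets/tapchannelmsg"
)

// decodeOpenChannel sketches the approach described above: the custom
// channel data is already held in memory, so its own length is passed as
// the max read size. The exact ReadOpenChannel signature is an assumption.
func decodeOpenChannel(customData []byte) (*tapchannelmsg.OpenChannel, error) {
	return tapchannelmsg.ReadOpenChannel(
		bytes.NewReader(customData), uint64(len(customData)),
	)
}
```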
@ffranr ffranr force-pushed the channel-cr-read-overflow branch from abcb080 to 9b97e4e Compare June 20, 2024 13:55
@ffranr ffranr requested review from Roasbeef and guggero June 20, 2024 14:00
@guggero guggero (Member) left a comment

LGTM 🎉

Def makes sense to just use the length here, as we know exactly how many bytes there are.
Thanks for validating with the linked unit test. Looking at the test data, I see there are only 14 HTLCs causing the big size, which is probably mostly the proofs.

@ffranr ffranr mentioned this pull request Jun 20, 2024
@Roasbeef Roasbeef (Member) left a comment

LGTM 🎢

@Roasbeef Roasbeef merged commit 4d44446 into main Jun 20, 2024
14 checks passed
@guggero guggero deleted the channel-cr-read-overflow branch June 21, 2024 08:49
Projects
Status: ✅ Done
3 participants