tapchannelmsg: increase channel data reader max record size #959
Conversation
tapchannelmsg/custom_channel_data.go
Outdated
@@ -14,10 +14,14 @@ import (
 	"google.golang.org/protobuf/proto"
 )
+
+// ChannelDataMaxRecordSize is the maximum size of the custom channel data that
+// can be read from a reader.
+const ChannelDataMaxRecordSize = tlv.MaxRecordSize * 2
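As an illustration of what this fixed-bound approach implies, here is a minimal Go sketch of a reader capped by such a constant. The `readBoundedRecord` helper is hypothetical, and the 65535 value assumed for `tlv.MaxRecordSize` is an assumption, not code from this PR.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// ChannelDataMaxRecordSize mirrors the constant from the diff above; the
// underlying 65535 value assumed for tlv.MaxRecordSize is a placeholder.
const ChannelDataMaxRecordSize = 65535 * 2

// readBoundedRecord reads at most maxSize bytes from r and fails if the
// input would exceed that bound, instead of silently truncating it.
func readBoundedRecord(r io.Reader, maxSize uint64) ([]byte, error) {
	// Allow one extra byte so an over-sized payload can be detected.
	limited := io.LimitReader(r, int64(maxSize)+1)

	data, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}
	if uint64(len(data)) > maxSize {
		return nil, fmt.Errorf("record exceeds max size of %d bytes", maxSize)
	}

	return data, nil
}

func main() {
	// A payload larger than the fixed cap is rejected outright.
	payload := bytes.Repeat([]byte{0x01}, 140_000)

	if _, err := readBoundedRecord(bytes.NewReader(payload), ChannelDataMaxRecordSize); err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("read ok")
}
```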
Have we been able to validate that this resolves the issue?
The other thing to consider here is that w/o adding any proof level deduplication, this'll expand along with the number of active HTLCs. So perhaps it should be a multiplier based on the current max proof suffix size. We know the max number of HTLCs in either direction, so we can base it off that.
Also unlike other contexts, we're the one creating this value, so it's a bit more trusted.
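For comparison, here is a minimal sketch of that multiplier idea, assuming the bound is derived from the channel's HTLC limits. The names and the proof suffix size below are hypothetical placeholders; only the 483-HTLCs-per-direction limit comes from the Lightning protocol rather than this repository.

```go
package main

import "fmt"

const (
	// maxProofSuffixSize is an assumed upper bound on a single encoded
	// proof suffix carried in the custom channel data (hypothetical value).
	maxProofSuffixSize = 1 << 16

	// maxAcceptedHTLCs is the Lightning protocol limit of 483 HTLCs per
	// direction.
	maxAcceptedHTLCs = 483

	// channelDataMaxReadSize scales with the HTLCs that can be active in
	// both directions, plus one slot for the commitment itself.
	channelDataMaxReadSize = maxProofSuffixSize * (2*maxAcceptedHTLCs + 1)
)

func main() {
	// Roughly 60 MiB with the placeholder numbers above.
	fmt.Printf("max read size: %d bytes\n", channelDataMaxReadSize)
}
```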
The data is already in memory. I've pushed up a new commit which adds a "max read size" argument which we can set based on the data that's already in memory.
I've tested this with the litd itest and it solves the max read size issue.
I've also created a unit test on this branch so that we can play around with the custom channel data which exceeded the previous max read limit:
2a90bb3
I didn't see anything particularly interesting in that custom channel data example. Its commitment portion is just bigger than the default TLV record size.
This commit introduces a `max read size` argument to the `ReadOpenChannel` and `ReadCommitment` functions. Testing showed that with large but valid asset amounts, the channel custom records buffer size exceeded `tlv.MaxRecordSize`. To address this, we have two options: increase the max record size for channel data or dynamically set the max read size. Given that the data is already in memory, the latter approach is more robust.
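A rough sketch of the shape of that change, assuming the caller already holds the serialized channel data in memory: `decodeOpenChannel` below is a simplified stand-in and does not reproduce the actual signatures of `ReadOpenChannel` or `ReadCommitment`.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// decodeOpenChannel reads exactly maxReadSize bytes from r before decoding.
// Because the caller derives maxReadSize from data it already holds in
// memory, the bound is exact rather than a guessed constant.
func decodeOpenChannel(r io.Reader, maxReadSize uint32) ([]byte, error) {
	data := make([]byte, maxReadSize)
	if _, err := io.ReadFull(r, data); err != nil {
		return nil, fmt.Errorf("unable to read channel data: %w", err)
	}

	// Real code would now decode the TLV stream held in data.
	return data, nil
}

func main() {
	// The custom channel data is already in memory, so its length is the
	// natural read bound.
	customData := bytes.Repeat([]byte{0x02}, 80_000)

	decoded, err := decodeOpenChannel(
		bytes.NewReader(customData), uint32(len(customData)),
	)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("decoded %d bytes\n", len(decoded))
}
```

Because the bound is the exact in-memory length, no guessed constant is needed regardless of how many HTLCs or proofs the data contains.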
Force-pushed from abcb080 to 9b97e4e
LGTM 🎉
Def makes sense to just use the length here, as we know exactly how many bytes there are.
Thanks for validating with the linked unit test. Looking at the test data, I see there are only 14 HTLCs causing the large size, probably mostly because of the proofs.
LGTM 🎢
Testing indicated that when using large but valid asset amounts, the channel custom records buffer size was greater than `tlv.MaxRecordSize`. This commit introduces a new, larger constant which will be used as the channel data max record size.
Addresses bug reported here: #942 (comment)
What is the correct value for this new constant?
NOTE: I haven't finished investigating why the record takes up more memory.