Set increased traversal limit in enumeration deserialization #5365
[sc-58991]
So far we only applied an increased traversal limit to the deserialization of certain objects, such as Query, groups, and metadata of all sorts. For the rest, such as array schema evolution, we used the default Cap'n Proto limit, which is 64MB.
We hit a real-life scenario where evolving an array schema exceeded the traversal limit and failed. This was most likely caused by the addition of many enumerations in the new schema, and large enumerations appear to be common in certain scientific use cases. To cope with such cases, this PR sets an increased traversal limit on every deserialization path that includes enumerations: schema evolution, array schema, and the load enumerations response.
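For context, a minimal sketch of the technique in Cap'n Proto's C++ API: the traversal limit is raised by passing a `capnp::ReaderOptions` with a larger `traversalLimitInWords` when constructing the message reader. The function name, the 512MB figure, and `MySchemaType` below are illustrative assumptions, not the exact code or constant this PR uses.

```cpp
#include <capnp/message.h>
#include <capnp/serialize.h>

// Sketch: deserialize a flat Cap'n Proto buffer with a raised traversal
// limit so messages carrying large enumerations don't trip the default.
void deserialize_with_increased_limit(kj::ArrayPtr<const capnp::word> words) {
  capnp::ReaderOptions options;
  // The default traversalLimitInWords is 8 Mi words (one word is 8 bytes,
  // so 64MB). Raise it here; 512MB is an illustrative value only.
  options.traversalLimitInWords = (uint64_t{512} << 20) / sizeof(capnp::word);
  capnp::FlatArrayMessageReader reader(words, options);
  // ... then reader.getRoot<MySchemaType>() as in the existing
  // deserialization paths (schema evolution, array schema,
  // load enumerations response). MySchemaType is a placeholder.
}
```

The same `ReaderOptions` mechanism applies to the other reader types (e.g. `capnp::PackedMessageReader`), so each affected deserialization path can opt in by passing the widened options at reader construction.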
TYPE: BUG
DESC: Set increased traversal limit in enumeration deserialization