The different ledger implementations do not handle transactions with deeply nested values consistently, and they crash more or less gracefully when dealing with them.
We currently do not have a clear specification for dealing with such transactions. In particular, the LF transaction specification does not clearly specify the maximal depth of values. However, the engine ensures that values coming from the ledger API are not nested deeper than 100 levels.
I propose to use this limit of 100 as a hard limit:
A ledger should successfully handle the creation of a transaction that contains values with a nesting depth lower than or equal to 100.
A ledger should fail gracefully when asked to create a transaction that contains values with a nesting depth strictly greater than 100; a sketch of such a check follows.
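To make the intended behavior concrete, here is a minimal sketch of such a depth check. The value ADT and all names below are invented for illustration and do not match the actual LF value types or engine API; only the limit of 100 comes from this proposal.

```scala
// Illustrative sketch only: a toy value ADT standing in for the LF value
// representation. Names and structure are assumptions, not the real
// com.daml.lf.value API.
sealed trait Value
final case class Leaf(text: String) extends Value
final case class Record(fields: List[Value]) extends Value
final case class ValueList(elements: List[Value]) extends Value

object NestingCheck {
  val MaxNesting = 100 // the proposed hard limit

  // Nesting depth of a value: a leaf has depth 1, and a container is
  // one level deeper than its deepest child.
  def depth(v: Value): Int = v match {
    case Leaf(_)          => 1
    case Record(fields)   => 1 + fields.map(depth).foldLeft(0)(_ max _)
    case ValueList(elems) => 1 + elems.map(depth).foldLeft(0)(_ max _)
  }

  // A ledger would accept values nested up to MaxNesting and reject
  // anything deeper with an error instead of crashing.
  def check(v: Value): Either[String, Unit] = {
    val d = depth(v)
    if (d <= MaxNesting) Right(())
    else Left(s"value nesting of $d exceeds the limit of $MaxNesting")
  }
}
```

In this sketch the rejection is a plain `Left` error value, so a ledger could surface it as an ordinary command rejection rather than a crash.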
This limit seems to be bigger than all the current (practical) limits of our ledgers (sandbox classic, KV, Canton), so we can hope to reach consistent behavior across all our ledgers in a backward-compatible way.
We disregard the case of a mixed system where some older nodes (that do not enforce this limit) communicate with newer nodes (that do enforce it), as Canton is not production ready.
The solution we implemented produces a small slowdown during (de)serialization and (de/en)coding.
For the record, here are the results of the `//ledger/participant-state/kvutils/tools:benchmark-codec` benchmark.
Reference (without changes from #10443 and #10393):
After (with changes from #10443 and #10393):