When constructing `TokenResolver`, we consume all tokens lexed by the inline lexer into a single `Vec`. This causes a very large memory allocation when parsing inlines in large content.

This should not happen very often (we're talking about paragraphs with hundreds of lines), but it should be fixed regardless.

One possible solution is to use a tape-like data structure: instead of collecting every token into a vector up front, use something like a `VecDeque` and consume only the tokens needed to resolve the next token that the `TokenIterator` should return.
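A minimal sketch of the tape idea, assuming a simplified `Token` type and a hypothetical `LazyResolver` wrapper (neither matches the actual unimarkup-rs API): the resolver pulls tokens from the lexer on demand and buffers only the small lookahead needed for the next resolution, instead of collecting everything into one `Vec`.

```rust
use std::collections::VecDeque;

// Illustrative token type; the real lexer's tokens are richer.
#[derive(Debug, Clone, PartialEq)]
enum Token {
    Plain(String),
    Bold,
}

/// Hypothetical tape-based resolver: pulls tokens from the lexer
/// lazily instead of consuming the whole input up front.
struct LazyResolver<I: Iterator<Item = Token>> {
    lexer: I,
    // The "tape": only the tokens needed to resolve the next
    // output token are ever buffered here.
    tape: VecDeque<Token>,
}

impl<I: Iterator<Item = Token>> LazyResolver<I> {
    fn new(lexer: I) -> Self {
        Self {
            lexer,
            tape: VecDeque::new(),
        }
    }
}

impl<I: Iterator<Item = Token>> Iterator for LazyResolver<I> {
    type Item = Token;

    fn next(&mut self) -> Option<Token> {
        // Refill the tape only as far as needed. One token of
        // lookahead stands in for the real resolution logic, which
        // might buffer a few more tokens to pair up delimiters.
        if self.tape.is_empty() {
            self.tape.push_back(self.lexer.next()?);
        }
        self.tape.pop_front()
    }
}

fn main() {
    let tokens = vec![Token::Bold, Token::Plain("hi".into()), Token::Bold];
    let resolved: Vec<Token> = LazyResolver::new(tokens.into_iter()).collect();
    println!("{:?}", resolved);
}
```

Peak memory then scales with the resolver's lookahead window rather than with the total number of tokens in the paragraph.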
The current eager collection happens in `unimarkup-rs/inline/src/lexer/resolver/mod.rs`, lines 75 to 84 at commit `bcdd1ef`.