The current Roadroller algorithm takes time linear in the input length, so a length-reducing preprocessor is desirable as long as it doesn't interfere with context modelling. In fact the current JS preprocessor is one such preprocessor (it essentially replaces "text" models, where the context unit is a word or token, with more general bytewise models). But it doesn't work for non-JS inputs, so we need more general solutions.
In particular, LZP is a promising candidate because of its simplicity. I had even hoped that an LZP-only mode could be possible for shorter inputs, but that doesn't seem to be the case at the moment (the PNG bootstrap works better).
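For reference, a minimal sketch of a byte-oriented LZP pass in JS is below. This is purely illustrative, not Roadroller's actual code; `ORDER`, `HASH_BITS`, the hash function, and the length-byte framing are all arbitrary choices. The idea is just to keep a hash table mapping the last `ORDER` bytes of context to the most recent position where that context occurred, and whenever the table has a prediction, emit only the match length instead of the matched bytes.

```js
const ORDER = 3;        // context length in bytes (arbitrary choice)
const HASH_BITS = 16;   // hash table has 2^HASH_BITS entries (arbitrary choice)

// Hash the ORDER bytes preceding `pos` into HASH_BITS bits.
function hashContext(buf, pos) {
    let h = 0;
    for (let i = 1; i <= ORDER; ++i) {
        h = (Math.imul(h, 0x9e3779b1) + buf[pos - i]) >>> 0;
    }
    return h >>> (32 - HASH_BITS);
}

// Encode `input` (Uint8Array) with a simple byte-oriented LZP scheme.
function lzpEncode(input) {
    const table = new Int32Array(1 << HASH_BITS).fill(-1);
    const out = [];
    let pos = 0;

    // The first ORDER bytes have no full context; copy them verbatim.
    while (pos < Math.min(ORDER, input.length)) out.push(input[pos++]);

    while (pos < input.length) {
        const h = hashContext(input, pos);
        const pred = table[h];   // where this context was last seen, or -1
        table[h] = pos;
        if (pred >= 0) {
            // Count how many bytes the predicted position gets right.
            let len = 0;
            while (pos + len < input.length && len < 255 &&
                   input[pred + len] === input[pos + len]) ++len;
            out.push(len);       // match length (0 means the prediction failed)
            if (len > 0) { pos += len; continue; }
        }
        out.push(input[pos++]);  // literal byte
    }
    return Uint8Array.from(out);
}
```

A decoder would mirror the same loop. Note that every missed prediction costs an extra length byte, so whether this actually shortens the input depends on how repetitive it is, which matches the observation above that an LZP-only mode doesn't pay off for shorter inputs.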