Possibility of using 16-bit integers for current LMS state. #56

Open
ccawley2011 opened this issue Sep 27, 2024 · 1 comment

Comments

@ccawley2011
Contributor

The LMS state is stored within QOA files as 16-bit integers; however, qoa_lms_t uses 32-bit integers on most platforms. Some third-party implementations use 16-bit integers for the currently loaded LMS state as well, and it would be nice to do so in general to reduce memory usage and possibly improve performance. However, the weights can overflow the range of a 16-bit integer, and the documentation is somewhat unclear about whether staying within that range is required behaviour or not.
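For reference, the state struct in the reference qoa.h is laid out roughly as in the first typedef below (with QOA_LMS_LEN being 4); the second typedef is a sketch of the hypothetical 16-bit variant being asked about, with qoa_lms16_t as an assumed name, not something that exists in the repo:

```c
#include <stdint.h>

#define QOA_LMS_LEN 4

/* Layout as in the reference implementation: int (32-bit on most
   platforms) per history sample and per weight. */
typedef struct {
	int history[QOA_LMS_LEN];
	int weights[QOA_LMS_LEN];
} qoa_lms_t;

/* Hypothetical 16-bit variant: halves the in-memory state, but is
   only safe if the weights are guaranteed to fit in int16_t. */
typedef struct {
	int16_t history[QOA_LMS_LEN];
	int16_t weights[QOA_LMS_LEN];
} qoa_lms16_t;
```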

@phoboslab
Owner

lms->history[n] is guaranteed to stay within the 16-bit range, as the samples are clamped before being handed over to qoa_lms_update(). Sadly, this is not the case for lms->weights[n], which in certain problem cases will exceed the 16-bit range with the current encoder.
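For context, the update step looks roughly like this in the reference qoa.h (treat the exact code as a paraphrase); the comments mark where the two guarantees diverge:

```c
static void qoa_lms_update(qoa_lms_t *lms, int sample, int residual) {
	int delta = residual >> 4;
	for (int i = 0; i < QOA_LMS_LEN; i++) {
		/* Each weight moves by +/-delta on every sample and nothing
		   bounds the accumulated sum, so weights can drift outside
		   [-32768, 32767] in pathological inputs. */
		lms->weights[i] += lms->history[i] < 0 ? -delta : delta;
	}

	for (int i = 0; i < QOA_LMS_LEN - 1; i++) {
		lms->history[i] = lms->history[i + 1];
	}
	/* `sample` was already clamped to the 16-bit range by the caller,
	   so history stays within that range by construction. */
	lms->history[QOA_LMS_LEN - 1] = sample;
}
```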

For what it's worth, all 150 test cases on https://qoaformat.org/samples/ stay within the 16-bit range for the weights.

You could modify the encoder so that the weights never exceed the 16-bit range, even in problem cases. This would still produce QOA files that adhere to the spec, but it would likely involve some backtracking and have performance implications during encoding. It should not have any impact on decoding performance. A minimal sketch of such a guard follows below.
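The helper name here is an assumption, not part of qoa.h, and the backtracking itself (re-trying the slice, e.g. with a different scale factor) is left out:

```c
#include <stdint.h>

/* Hypothetical encoder-side check: after a trial qoa_lms_update() on
   a scratch copy of the state, verify that every weight still fits in
   int16_t. If it does not, the encoder would backtrack and re-encode
   the slice instead of committing this state. */
static int qoa_weights_fit_16bit(const qoa_lms_t *lms) {
	for (int i = 0; i < QOA_LMS_LEN; i++) {
		if (lms->weights[i] < INT16_MIN || lms->weights[i] > INT16_MAX) {
			return 0;
		}
	}
	return 1;
}
```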

This repo also uses int instead of short because int should be faster on 32/64-bit systems. Whether that's actually the case is another question...
