Fix incorrect target length when parsing multibyte UTF-8 characters #8
This pull request is based on the 0.5.1 release because I was unable to run the current master version.
The character array buffer used for decoding UTF-8 characters is grown dynamically to fit the strings being parsed. However, the UTF-8 decoding always assumed that the length of the allocated array was equal to the expected length of the string. Consequently, if a very long string was parsed before a string containing a multi-byte character, the length was taken from the previous long string instead of the string currently being parsed, causing the code to read past the end of the string and into garbage data.
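As a minimal sketch of the failure mode (the class and method names and the decoding details here are hypothetical simplifications, not the project's actual code), the buggy shape looks roughly like this:

```java
import java.io.DataInput;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class BuggyUtfReader {
    // Reusable buffer: grown to fit the longest string seen so far,
    // and never shrunk back.
    private byte[] buffer = new byte[0];

    String readString(DataInput in) throws IOException {
        int utfLength = in.readUnsignedShort(); // length prefix of this string
        if (buffer.length < utfLength) {
            buffer = new byte[utfLength];
        }
        return readUTF(in, buffer);
    }

    // BUG: readUTF never learns the real length and assumes the buffer was
    // allocated to fit exactly. Once a long string has grown the buffer, a
    // shorter string is read and decoded past its real end, pulling in bytes
    // that belong to whatever follows it in the stream.
    private static String readUTF(DataInput in, byte[] buffer) throws IOException {
        in.readFully(buffer, 0, buffer.length);
        return new String(buffer, 0, buffer.length, StandardCharsets.UTF_8);
    }
}
```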
This bug caused chunks that contained items with lore text using multi-byte characters to be completely ignored in statistics.
The fix is to pass the expected length into the readUTF function and use it there instead of relying on the size of the buffer.
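Using the same hypothetical names as the sketch above, the corrected shape would be:

```java
import java.io.DataInput;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class FixedUtfReader {
    private byte[] buffer = new byte[0];

    String readString(DataInput in) throws IOException {
        int utfLength = in.readUnsignedShort();
        if (buffer.length < utfLength) {
            buffer = new byte[utfLength];
        }
        // FIX: hand the parsed length to readUTF instead of letting it
        // infer the length from the buffer's size.
        return readUTF(in, buffer, utfLength);
    }

    private static String readUTF(DataInput in, byte[] buffer, int utfLength)
            throws IOException {
        in.readFully(buffer, 0, utfLength); // buffer.length may exceed utfLength
        return new String(buffer, 0, utfLength, StandardCharsets.UTF_8);
    }
}
```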