[ML] Fix language identification bug when multi-languages are present #80675
Conversation
Pinging @elastic/ml-core (Team:ML)
Language identification works fairly well when only one language and script type is present, but when multiple are present it can return some unexpected results.

Example: "행 레이블 this is english text obviously and 생성 tom said to test it"

To a human this is clearly English text (Latin unicode) mixed with Korean (Hangul unicode), yet it is erroneously categorized as Japanese. It should be categorized as English, since English is the dominant language and script type.

This commit fixes the bug by doing the following:
- Input text is partitioned into common, contiguous unicode script sections
- Each section's individual language scores are gathered
- Each score is then weighted according to the number of UTF-8 bytes in its section
- The resulting weighted scores are transformed into probabilities
- The final probabilities are the ones returned to the user
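A minimal sketch of that weighting scheme, assuming hypothetical names (SectionWeightingSketch, combineSectionScores); it is illustrative only and not the actual Elasticsearch implementation:

import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: combine per-section language scores into final
// probabilities, weighting each section by its UTF-8 byte length.
final class SectionWeightingSketch {

    // scoresBySection maps each unicode-script section of the input text
    // to that section's per-language scores.
    static Map<String, Double> combineSectionScores(Map<String, Map<String, Double>> scoresBySection) {
        Map<String, Double> weighted = new LinkedHashMap<>();
        for (Map.Entry<String, Map<String, Double>> section : scoresBySection.entrySet()) {
            // Weight each section by the number of UTF-8 bytes it contains
            int weight = section.getKey().getBytes(StandardCharsets.UTF_8).length;
            for (Map.Entry<String, Double> score : section.getValue().entrySet()) {
                weighted.merge(score.getKey(), score.getValue() * weight, Double::sum);
            }
        }
        // Normalize the weighted scores into probabilities
        double total = weighted.values().stream().mapToDouble(Double::doubleValue).sum();
        weighted.replaceAll((language, value) -> value / total);
        return weighted;
    }
}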
- Each score is then weighted according to the number of utf-8 bytes in each section
I'd like to know why it's the number of UTF-8 bytes and not the number of characters. The number of characters seems more natural to me. If I have 60 characters of English and 40 characters of Russian why would I want to give the Russian weight 80 and the English weight 60?
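For illustration (a hypothetical snippet, not code from this PR), the gap the reviewer describes comes from Cyrillic characters taking two UTF-8 bytes each, while ASCII Latin characters take one:

import java.nio.charset.StandardCharsets;

public class ByteVsCharCount {
    public static void main(String[] args) {
        String english = "weather";  // 7 characters, 7 UTF-8 bytes
        String russian = "погода";   // 6 characters, 12 UTF-8 bytes
        System.out.println(english.length() + " chars, "
            + english.getBytes(StandardCharsets.UTF_8).length + " bytes");
        System.out.println(russian.length() + " chars, "
            + russian.getBytes(StandardCharsets.UTF_8).length + " bytes");
    }
}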
@@ -214,10 +234,75 @@ public void process(Map<String, Object> fields) {
    text = FeatureUtils.cleanAndLowerText(text);
    text = FeatureUtils.truncateToNumValidBytes(text, MAX_STRING_SIZE_IN_BYTES);
    String finalText = text;
Might be clearer if it's explicitly final
String finalText = text;
final String finalText = text;
        .map((featureExtractor) -> featureExtractor.extractFeatures(finalText))
        .collect(Collectors.toList());
    fields.put(destField, concatEmbeddings(processedFeatures));
    if (text.isEmpty() || text.isBlank()) {
It seems potentially confusing to mix text and finalText in the main algorithm. Since finalText needs to be used in lambdas I'd just use it everywhere to avoid making the reader double-check if there's a difference.
if (text.isEmpty() || text.isBlank()) {
if (finalText.isEmpty() || finalText.isBlank()) {
    if (text.isEmpty() || text.isBlank()) {
        fields.put(
            destField,
            Arrays.asList(
Arrays.asList(
Collections.singletonList(
(because Arrays.asList with 1 item causes an IntelliJ warning)
            Arrays.asList(
                new ByteSizeAndEmbedding(
                    // Don't count white spaces as bytes for the prediction
                    finalText.trim().getBytes(StandardCharsets.UTF_8).length,
finalText.trim().getBytes(StandardCharsets.UTF_8).length,
0,
If this is wrong please add a comment saying how the trimmed length of a blank or empty string can be > 0
                continue;
            }
            CustomWordEmbedding.ByteSizeAndEmbedding byteSizeAndEmbedding = (CustomWordEmbedding.ByteSizeAndEmbedding) vec;
            int square = (int) Math.pow(byteSizeAndEmbedding.getUtf8ByteSize(), 2);
I strongly suspect multiplying two integers is much faster than using some generic x^y algorithm that works on arbitrary floating point numbers.
int square = (int) Math.pow(byteSizeAndEmbedding.getUtf8ByteSize(), 2);
int square = byteSizeAndEmbedding.getUtf8ByteSize() * byteSizeAndEmbedding.getUtf8ByteSize();
@@ -43,6 +45,24 @@
 */
public class CustomWordEmbedding implements LenientlyParsedPreProcessor, StrictlyParsedPreProcessor {

    public static class ByteSizeAndEmbedding {
        final int utf8ByteSize;
I find it very strange that the weighting is the number of UTF-8 bytes, not the number of characters.
That means that if I have some text that's 100 characters of Roman alphabet and 100 Chinese characters then the Chinese could get a weighting of 300 while the western language gets a weighting of 100. Is the byte count a sneaky heuristic for saying each Chinese character embeds more information than a Roman alphabet character? It would be good to add a comment with the justification.
Mainly because that is how prior art handles this. I could switch it to character count pretty simply and make sure all the examples continue to pass.
OK fair enough. We can copy that then since we copied the rest of the algorithm. Are there any comments in the code we ported that say why it's bytes not characters?
@droberts195 there are zero comments. I am guessing because the rest of the code works according to UTF-8 bytes. In Java, we have more robust text manipulation tools. I switched it to string length and the tests continued to pass. It makes sense to use string length as languages with their own special unicode class usually have higher confidence than those without. Artificially increasing that confidence by weighing them according to byte length is unintuitive.
LGTM
@elasticmachine update branch
Some conceptual comments. They don't target this PR, but are something to think about longer term.
This depends on the use case; it's not that easy:
This assumption does not work well for CJK, e.g. '棪' means 'tree'. CJK tends to be shorter in characters (but has a larger alphabet).
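A quick illustration of that point (a hypothetical snippet, not from the PR): the single character '棪' is one UTF-16 char, one code point, and three UTF-8 bytes, while the English word it translates to spans four of each, which is why character-based weighting undercounts CJK text:

import java.nio.charset.StandardCharsets;

public class CjkDensity {
    public static void main(String[] args) {
        String cjk = "棪";      // one character meaning "tree"
        String english = "tree";
        // 1 UTF-16 char / 1 code point / 3 UTF-8 bytes
        System.out.println(cjk.length() + " chars, "
            + cjk.codePointCount(0, cjk.length()) + " code points, "
            + cjk.getBytes(StandardCharsets.UTF_8).length + " bytes");
        // 4 UTF-16 chars / 4 code points / 4 UTF-8 bytes
        System.out.println(english.length() + " chars, "
            + english.codePointCount(0, english.length()) + " code points, "
            + english.getBytes(StandardCharsets.UTF_8).length + " bytes");
    }
}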
This is what I was getting at in: But if that's what we end up doing then we should have a comment saying why we're doing it, because it would be very much a crude heuristic rather than a scientific algorithm.
Shorter contiguous sequences of the same unicode script are weighed lower than longer ones.
Right now character count seems OK. But we can switch it to byte length again in the future after more testing.
@elasticmachine update branch
…elastic#80675) Language identification works fairly well when only one language and script type is present, but when multiple are present it can return some unexpected results. Example: "행 레이블 this is english text obviously and 생성 tom said to test it" appears to a human to be English text (Latin unicode) with Korean via Hangul unicode, yet is erroneously categorized as Japanese. It should be categorized as English, since English is the dominant language and script type. This commit fixes this bug by doing the following: input text is partitioned into common, contiguous unicode script sections; each section's individual language scores are gathered; each score is then weighted according to the number of characters in its section; the resulting weighted scores are transformed into probabilities; and the final probabilities are the ones returned to the user.
…81876) LangIdent was recently updated to handle multiple unicode scripts (#80675), but this introduced some bugs, fixed with this commit:
1. Sections with the same script were weighted by Java string length (UTF-16 encoding). This is not accurate, as certain languages (like Chinese and Korean) convey much more information with fewer UTF-16 characters. FIX: weight by UTF-8 length.
2. The weighting of the different language scores was done via the raw score from the neural network. This caused languages with a high score (but low compared to the most likely language) to be inaccurately weighted. FIX: we now instead weight the probabilities of the sections of the text.
3. To split the input across the multiple scripts, we split on the "pared down" CLD3 script types. Java has superior support for unicode script blocks. FIX: split by Java unicode script blocks, not by the pared down CLD3 scripts.
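A rough sketch of splitting input into contiguous runs of the same Java unicode script, in the spirit of the third fix above (illustrative only; splitByScript and the rule that folds COMMON characters into the current run are assumptions, not the actual implementation):

import java.util.ArrayList;
import java.util.List;

final class ScriptSplitSketch {

    static List<String> splitByScript(String text) {
        List<String> sections = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        Character.UnicodeScript currentScript = null;
        for (int offset = 0; offset < text.length(); ) {
            int codePoint = text.codePointAt(offset);
            Character.UnicodeScript script = Character.UnicodeScript.of(codePoint);
            // COMMON covers spaces, digits, and punctuation; fold it into the current run
            boolean sameRun = currentScript == null
                || script == currentScript
                || script == Character.UnicodeScript.COMMON;
            if (sameRun) {
                current.appendCodePoint(codePoint);
                if (script != Character.UnicodeScript.COMMON) {
                    currentScript = script;
                }
            } else {
                // Script changed: close the current section and start a new one
                sections.add(current.toString());
                current = new StringBuilder().appendCodePoint(codePoint);
                currentScript = script;
            }
            offset += Character.charCount(codePoint);
        }
        if (current.length() > 0) {
            sections.add(current.toString());
        }
        return sections;
    }
}

Under these assumptions, splitByScript("행 레이블 this is english text") would yield a Hangul run followed by a Latin run, with the COMMON spaces absorbed into the run that precedes them.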