Inconsistencies in dataset annotations #9
Hi Jan, Yes, you're right. This first issue stems from the dataset itself, where negators ('no', 'not', etc.) and intensifiers ('very', 'extremely') are not explicitly included in the polar expression, but instead attached as properties. In the conversion script, we decided to leave them separate, but it is true that this choice is arbitrary. Regarding the missing indices, I'll have to have a deeper look into the code to see why that is happening. Thanks for bringing it up!
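For illustration, a minimal sketch of the other choice, folding the modifier properties into the polar expression during conversion; the tuple layout, helper name, and offsets here are hypothetical, not the real Darmstadt schema:

```python
# Hypothetical sketch: fold negator/intensifier properties into the polar
# expression during conversion instead of leaving them separate. The tuple
# layout and the example offsets are illustrative only.

def merge_modifiers(polar_expression, modifiers):
    """Extend a polar expression with its negator/intensifier spans.

    Both arguments are lists of (text, "begin:end") tuples.
    """
    merged = polar_expression + modifiers
    # Keep spans in document order so "no complaints" reads left to right.
    merged.sort(key=lambda item: int(item[1].split(":")[0]))
    return merged

# Hypothetical offsets: attach the negator "no" to "complaints".
print(merge_modifiers([("complaints", "12:22")], [("no", "9:11")]))
# -> [('no', '9:11'), ('complaints', '12:22')]
```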
Thanks for your quick reply!
That information actually helps a lot!
{
"sent_id": "Colorado_Technical_University_Online_69_10-14-2005-1",
"text": "They have used one of the books that was used by a professor of mine from a SUNY school that would only teach with graduate level books for undergraduate courses .",
"opinions": [
{
"Source": [
[],
[]
],
"Target": [
[
"They"
],
[
"0:4"
]
],
"Polar_expression": [
[
"no",
"complaints"
],
[]
],
"Polarity": "Positive",
"Intensity": "Average"
}
]
},
Thanks for pointing that out. I'll have a look and try to get back to you soon.
Ok, I've confirmed that this is a problem only in the Darmstadt dataset; it affects only polar expressions, but occurs in all splits. The problem comes from the fact that the original annotations often span several sentences. That means you can have a document with a target in the first sentence and a polar expression much later. When we divide the annotations into sentences, the polar expression is no longer in the same sentence, which gives null offsets. I will refactor the code a bit to remove these sentences and push later today.
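As a rough illustration of that filtering (the schema follows the JSON shown above; the exact conditions are an assumption about how the null offsets manifest, not the actual patch):

```python
# Sketch of the filtering described above: drop opinions whose polar
# expression lost its offsets when document-level annotations were split
# into sentences. Structure follows the JSON examples in this thread.

def filter_sentence(sentence):
    kept = []
    for opinion in sentence["opinions"]:
        texts, offsets = opinion["Polar_expression"]
        # An expression with text but no offsets fell outside this sentence.
        if texts and not offsets:
            continue
        # Offsets running past the end of the sentence text are equally bad.
        if any(int(span.split(":")[1]) > len(sentence["text"])
               for span in offsets):
            continue
        kept.append(opinion)
    sentence["opinions"] = kept
    return sentence
```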
Ok, I've updated the preprocessing script to remove the annotations that were problematic. Let me know if it works on your end and I'll close the issue.
Thanks! Looks like that removed the issues. If I find something else I'll just reopen ;)
I believe that there are some other cases of wrong annotations. Example (from the Catalan data):
{
"sent_id": "corpora/ca/quintessence-Miriam-1",
"text": "La porteria i l ' escala .",
"opinions": []
},
{
"sent_id": "corpora/ca/quintessence-Miriam-2",
"text": "Son poc accesibles quan vas amb nens petits i no posaba res a l ' anunci",
"opinions": [
{
"Source": [
[],
[]
],
"Target": [
[
"l ' escala"
],
[
"14:24"
]
],
"Polar_expression": [
[
"poc accesibles"
],
[
"4:18"
]
],
"Polarity": "Negative",
"Intensity": "Standard"
},
The target doesn't exist in the sentence's text, but in the sentence right before :O
{
"sent_id": "../opener/en/kaf/hotel/english00200_e8f707795fc0c7f605a1f7115c3da711-2",
"text": "Hotel Premiere Classe Orly Rungis is near the airport and close to Orly",
"opinions": [
{
"Source": [
[],
[]
],
"Target": [
[
"Hotel Premiere Classe Orly Rungis"
],
[
"0:33"
]
],
"Polar_expression": [
[
"near the airport"
],
[
"37:53"
]
],
"Polarity": "Negative",
"Intensity": "Standard"
},
{
"Source": [
[],
[]
],
"Target": [
[
"Hotel Premiere Classe Orly Rungis"
],
[
"0:33"
]
],
"Polar_expression": [
[
"close to Orly major highways"
],
[
"0:71"
]
],
"Polarity": "Negative",
"Intensity": "Standard"
}
]
},
{
"sent_id": "../opener/en/kaf/hotel/english00200_e8f707795fc0c7f605a1f7115c3da711-3",
"text": "major highways ( all night heard the noise of passing large vehicles ) .",
"opinions": [
{
"Source": [
[],
[]
],
"Target": [
[
"Hotel Premiere Classe Orly Rungis"
],
[
"0:33"
]
],
"Polar_expression": [
[
"noise of passing large vehicles"
],
[
"37:68"
]
],
"Polarity": "Negative",
"Intensity": "Standard"
}
]
},
My guess is that some sentences have accidentally been split into two separate sentences?
Yes, you are correct. I will have to have a deeper look at the other datasets and will come back with corrections soon.
It seems like the problem stems from the original sentence segmentation. The annotation was performed at document level and, although we told annotators to make sure that all sources/targets/expressions were annotated within sentences, at the time it wasn't completely clear that some annotations spanned across sentence boundaries. This will require quite a bit of work to fix and I'm afraid I'll have to leave it for now. What I will do is filter the dev/eval data to make sure they do not influence the evaluation.
I compared the index and text representations for each segment of each element.
Example sentence
For cases like this, where words are omitted from the text, like "how grand looks", we could write a script to break the element up into segments. Or just go with the index representations. Or just throw them out.
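A rough sketch of such a script, under the assumption that the index span is a single "begin:end" range covering first to last word (the example sentence below is invented):

```python
# Rough sketch of the proposed repair: when the index span covers more words
# than the text span (as with "how grand looks"), locate each annotated word
# inside the sentence and emit one segment per word.

def split_into_segments(sentence_text, span_text, span_offsets):
    """span_offsets is a single "begin:end" string covering the full range."""
    begin = int(span_offsets.split(":")[0])
    segments, search_from = [], begin
    for word in span_text.split():
        start = sentence_text.index(word, search_from)
        segments.append((word, f"{start}:{start + len(word)}"))
        search_from = start + len(word)
    return segments

# Invented example; the span "16:41" covers "how grand the hotel looks".
text = "I was amazed at how grand the hotel looks at night"
print(split_into_segments(text, "how grand looks", "16:41"))
# -> [('how', '16:19'), ('grand', '20:25'), ('looks', '36:41')]
```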
@egilron nice work!
Thank you! Virtually all the dissimilarities between text span and index span that I catch come from the text span omitting words while the index span covers from first to last word, like the "how grand looks" example. I found only one sentence where a text span is larger than the index span. I have 393 segments where the index span is larger than the text representation. For 389 of these, I find each word of the text representation inside the span representation. For the four spans where text words are not found in the index span, these words are from outside the sentence. The table is getting cluttered now; use at your own risk.
Hey! In the end, I was able to fix the easy ones, where the target/polar expression was split but the offsets did not reflect this. That took care of most of them. For the ones that were split across sentences, I either filtered them if they were incorrect annotations (the original annotation spanned a sentence boundary), or combined the text and fixed them otherwise (incorrect sentence segmentation). I think that should fix most of the issues, but let me know if you happen to find anything else.
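For the combined-and-fixed case, a sketch of what re-basing the offsets could look like (field names follow the JSON in this thread; the single joining space is an assumption, not the actual fix):

```python
# Sketch of the "combined the text and fixed them" repair: merge a sentence
# that was split off by mistake, shifting every offset in the second part.

def merge_sentences(first, second):
    shift = len(first["text"]) + 1  # +1 for the joining space
    merged_opinions = list(first["opinions"])
    for opinion in second["opinions"]:
        fixed = dict(opinion)
        for field in ("Source", "Target", "Polar_expression"):
            texts, offsets = opinion[field]
            fixed[field] = [texts,
                            [f"{int(b) + shift}:{int(e) + shift}"
                             for b, e in (s.split(":") for s in offsets)]]
        merged_opinions.append(fixed)
    return {"sent_id": first["sent_id"],
            "text": first["text"] + " " + second["text"],
            "opinions": merged_opinions}
```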
Hi @jerbarnes, after this change do I have to retrain the models?
It depends a bit. There were relatively few examples that were affected, so I doubt that retraining anything based on pre-trained language models will see large benefits. On the other hand, if you have smaller models that you can train quickly, it might be worth it.
Hi Jeremy, from MPQA:
{
"sent_id": "xbank/wsj_0583-27",
"text": "Sansui , he said , is a perfect fit for Polly Peck 's electronics operations , which make televisions , videocassette recorders , microwaves and other products on an \" original equipment maker \" basis for sale under other companies ' brand names .",
"opinions": [
{
"Source": [
[
"sa"
],
[
"12:14"
]
],
"Target": [
[
","
],
[
"7:8"
]
],
"Polar_expression": [
[
","
],
[
"17:18"
]
],
"Polarity": "Positive",
"Intensity": "Average"
}
]
},
I didn't create a script to find issues like these, though :/
It looks like it's a problem in the original annotation file in MPQA. In that particular file, lots of the indices seem like they're off. Not sure what would have happened. I can remove this one in the preprocessing script, but I don't currently have a way to search for similar kinds of errors.
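One hypothetical way to scan for similar errors would be to flag spans that are punctuation-only or suspiciously short, like the "," and "sa" spans above (the file path below is a placeholder):

```python
import json
import string

# Hypothetical scan for annotations like the MPQA one above: flag any span
# that is punctuation-only or a lone one- or two-character token, since those
# usually mean the indices are off.

def suspicious_opinions(path):
    with open(path) as f:
        sentences = json.load(f)  # the dataset files are JSON lists
    for sentence in sentences:
        for opinion in sentence["opinions"]:
            for field in ("Source", "Target", "Polar_expression"):
                for span_text in opinion[field][0]:
                    if (not span_text.strip(string.punctuation + " ")
                            or len(span_text) <= 2):
                        yield sentence["sent_id"], field, span_text

# Usage (the path is a placeholder):
# for sent_id, field, text in suspicious_opinions("mpqa/train.json"):
#     print(sent_id, field, repr(text))
```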
Today I redownloaded the repo and re-extracted the data. Now my import script only catches a handful of darmstadt-unis sentences with some expression text/span issues. Looking good!
Great to hear!
Hi!
I've just found an inconsistency in the Darmstadt dataset dev split. I haven't checked whether this also occurs in different datasets or in different splits.
Two back-to-back examples in the dev split look like this:
Usually the datapoints are handled like in the second sentence: polar expressions (as well as source and target fields, for that matter) are whitespace-separated, even if the words are directly back-to-back. In the first sentence, though, the whole polar expression is listed as a single unit, and the span ("2:33") even includes the target word ("that" | "22:26") while it is not present in the string ("can't overemphasize enough"). I'm actually unsure whether this issue stems from the provided preprocessing function or from the underlying dataset.
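A small check along these lines could flag opinions whose polar-expression span swallows the target span, as with "2:33" containing "22:26" (a sketch against the [texts, offsets] layout of the JSON in this thread, not part of the repo):

```python
# Sketch: flag opinions whose polar-expression span contains the target span.

def parse(span):
    begin, end = span.split(":")
    return int(begin), int(end)

def expression_contains_target(opinion):
    targets = [parse(s) for s in opinion["Target"][1]]
    expressions = [parse(s) for s in opinion["Polar_expression"][1]]
    return any(eb <= tb and te <= ee
               for tb, te in targets
               for eb, ee in expressions)

# Example with the offsets reported above:
print(expression_contains_target({
    "Target": [["that"], ["22:26"]],
    "Polar_expression": [["can't overemphasize enough"], ["2:33"]],
}))  # -> True
```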
I also noticed that for this example sentence both splitting methods are applied for polar_expression. Sometimes the indices for the polar_expression strings are also missing: