AllenNLP biased towards BERT #5711
Comments
Hey @pvcastro, a couple questions:
Hi @epwalsh, thanks for the feedback!
Gotcha! So I think the most likely source for a bug would be in allennlp/data/tokenizers/pretrained_transformer_tokenizer.py, lines 295 to 311 (at commit 8571d93).
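For reference, here is a minimal sketch (not from the issue; the sentence and model names are just examples) of how the wordpieces produced per word by intra_word_tokenize can be inspected side by side for RoBERTa and BERT:

```python
# Hypothetical spot-check: print the wordpieces AllenNLP's intra_word_tokenize
# produces for each word, for a RoBERTa and a BERT checkpoint.
from allennlp.data.tokenizers import PretrainedTransformerTokenizer

words = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]

for model_name in ["roberta-base", "bert-base-cased"]:
    tokenizer = PretrainedTransformerTokenizer(model_name)
    wordpieces, offsets = tokenizer.intra_word_tokenize(words)
    print(model_name)
    print("  full sequence:", [t.text for t in wordpieces])
    for word, span in zip(words, offsets):
        if span is None:
            print(f"  {word!r:12} -> <no wordpieces>")
        else:
            start, end = span  # inclusive indices into `wordpieces`
            print(f"  {word!r:12} -> {[t.text for t in wordpieces[start:end + 1]]}")
```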
I was assuming that running some of the unit tests from the AllenNLP repository, to confirm that these embedders/tokenizers produce the special tokens the RoBERTa architecture expects, would be enough to rule these out. I ran some tests using RoBERTa and confirmed that it's not relying on CLS. Was this too superficial to reach any conclusions?
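For concreteness, a check along those lines could look roughly like the following sketch (an assumption of what was tested, not the actual unit tests; the sentence is arbitrary):

```python
# Confirm that RoBERTa gets <s>/</s> as special tokens rather than BERT's
# [CLS]/[SEP], using the public tokenize() API.
from allennlp.data.tokenizers import PretrainedTransformerTokenizer

for model_name in ["roberta-base", "bert-base-cased"]:
    tokens = PretrainedTransformerTokenizer(model_name).tokenize("a test sentence")
    texts = [t.text for t in tokens]
    print(model_name, "->", texts[0], "...", texts[-1])
    # expected: roberta-base -> <s> ... </s>, bert-base-cased -> [CLS] ... [SEP]
```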
I'm not sure. I mean, I thought we did have pretty good test coverage there, but I know for a fact that's one of the most brittle pieces of code in the whole library. It would break all the time with new releases of transformers.
Do you think it makes sense for me to run additional tests for the embedder, comparing embeddings produced by a raw RobertaModel with those from the actual PretrainedTransformerMismatchedEmbedder, to see if they are somehow getting "corrupted" in the framework?
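A comparison along those lines could look roughly like the sketch below (my own illustration, not code from the issue; it assumes averaging the wordpiece vectors per word on the huggingface side, which is what the mismatched embedder does internally, and exact equality isn't guaranteed because tokenization details can differ):

```python
import torch
from allennlp.data import Batch, Instance, Token, Vocabulary
from allennlp.data.fields import TextField
from allennlp.data.token_indexers import PretrainedTransformerMismatchedIndexer
from allennlp.modules.token_embedders import PretrainedTransformerMismatchedEmbedder
from transformers import AutoModel, AutoTokenizer

model_name = "roberta-base"  # example; swap in the Portuguese RoBERTa checkpoint
words = ["EU", "rejects", "German", "call", "."]

# AllenNLP side: per-word vectors from the mismatched indexer + embedder.
indexer = PretrainedTransformerMismatchedIndexer(model_name)
field = TextField([Token(w) for w in words], {"tokens": indexer})
batch = Batch([Instance({"tokens": field})])
batch.index_instances(Vocabulary())
tensors = batch.as_tensor_dict()["tokens"]["tokens"]
embedder = PretrainedTransformerMismatchedEmbedder(model_name).eval()
with torch.no_grad():
    allennlp_vecs = embedder(**tensors)[0]  # (num_words, hidden_size)

# Raw huggingface side: run the same pre-split words through the model and
# average the wordpiece vectors belonging to each word.
hf_tok = AutoTokenizer.from_pretrained(model_name, add_prefix_space=True)
encoding = hf_tok(words, is_split_into_words=True, return_tensors="pt")
hf_model = AutoModel.from_pretrained(model_name).eval()
with torch.no_grad():
    hidden = hf_model(**encoding).last_hidden_state[0]
word_ids = encoding.word_ids(0)
hf_vecs = torch.stack([
    hidden[[i for i, w in enumerate(word_ids) if w == j]].mean(dim=0)
    for j in range(len(words))
])

# Large discrepancies here would point at the indexer/embedder rather than training.
print(torch.dist(allennlp_vecs, hf_vecs))
```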
I guess I would start by looking very closely at the exact tokens that are being used for each word by the mismatched tokenizer/indexer.
Ok, thanks!
This issue is being closed due to lack of activity. If you think it still needs to be addressed, please comment on this thread 👇
Sorry, I'll try to get back to this next week, haven't had the time yet 😞
No rush, I thought adding the "question" label would stop @github-actions bot from closing this, but I guess not.
Checklist
- I have verified that the issue exists against the main branch of AllenNLP.
- I have included in the "Environment" section below the output of pip freeze.

Description
I've been using AllenNLP since 2018 and have already run thousands of NER benchmarks with it. Since ELMo, and later with transformers, its CrfTagger model has always yielded superior results in every benchmark for this task. However, since my research group trained several RoBERTa models for Portuguese, we have been running benchmarks comparing them with an existing BERT model, and we have been getting inconsistent results compared to other frameworks, such as huggingface's transformers.
Sorted results for the AllenNLP grid search on CoNLL2003 using optuna (all of the BERT results are better than all of the RoBERTa results):
Sorted results for the huggingface transformers grid search on CoNLL2003 (all of the RoBERTa results are better than all of the BERT results):
I originally opened this as a question on stackoverflow, as suggested in the issues guidelines (additional details are already provided there), but I haven't been able to track down the problem myself. I have run several unit tests from AllenNLP concerning the tokenizers and embedders and didn't notice anything wrong, but I'm betting something is definitely wrong in the training process, since the results are so much worse for non-BERT models.
Although I'm reporting details with the current release version, I'd like to point out that I had already run this CoNLL 2003 benchmark with RoBERTa/AllenNLP a long time ago, so it's not something new. Back then the results for RoBERTa were also well below bert-base, but I just assumed RoBERTa wasn't competitive for NER (which is not true at all).
It is expected that the results using AllenNLP are at least as good as those obtained with huggingface's framework.
Related issues or possible duplicates
Environment
OS: Linux
Python version: 3.8.13
Output of pip freeze:

Steps to reproduce
I'm attaching some parameters I used for running the CoNLL 2003 grid search.
Example source:
optuna_conll2003.jsonnet
optuna-grid-search-conll2003.json
optuna-grid-search-conll2003-hparams.json