
mode="Strict" for seqeval? #61

Open
tan-js opened this issue Jun 11, 2024 · 0 comments


tan-js commented Jun 11, 2024

Hi again @tomaarsen,

Based on your thesis, the strict evaluation metric is used:

"Strict evaluation metrics are applied, relying on both the correctness of the entity boundary and the entity class"

However, when I inspected evaluation.py, I didn't see the mode="strict" parameter being set.

I admit that I might be missing something simple. I tried passing my own compute_metrics function to the trainer, but I can't get it to work, even though the same function works with a standard transformers.Trainer.

I even tried copying the entire compute_f1_via_seqeval function and passing it as the compute_metrics argument of the trainer, after editing it to set results = seqeval.compute(mode='strict'), but I still got errors related to structuring the data correctly or passing the required variables.

For now, my quick-and-dirty solution would be to edit the evaluation.py script directly.

Is there an easier way to do this? Or am I missing something that shows that the 'strict' evaluation metric is already being used?

Thank you for your time.

@tan-js tan-js changed the title Allow for us to set mode="Strict" for seqeval mode="Strict" for seqeval? Jun 11, 2024