As discussed previously, our current idea is to train several different models and then build an ensemble model that will be used to pre-annotate new texts (to be proofread by an editor later), expanding the corpus even further.
To implement that we need a Python CLI script that converts the results of the different models back to BRAT format. The next stage is merging :)
For model evaluation the results were already converted to IOB format, so it might be easier to use IOB as the universal intermediate format. @dchaplinsky, what do you think?
Apart from merging model results, what other intended uses do you foresee?
The only downside of IOB is that it is not a sparse format, so it will take more space, but that does not seem to be a big deal.
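For the conversion script discussed above, the core step would be turning IOB-tagged tokens back into BRAT standoff entity lines (`T1<TAB>LABEL start end<TAB>text`). A minimal sketch, assuming the tokens and tags come in as parallel lists and that joining tokens with single spaces approximates the source text (a real implementation must preserve the original document's whitespace to keep offsets valid):

```python
def iob_to_brat(tokens, tags):
    """Convert parallel token/IOB-tag lists to BRAT standoff annotation lines.

    Returns (text, annotation_lines). NOTE: offsets are computed against
    tokens joined with single spaces -- a simplifying assumption; the real
    script must use offsets into the original document text.
    """
    text = " ".join(tokens)

    # Start offset of each token in the reconstructed text.
    offsets, pos = [], 0
    for tok in tokens:
        offsets.append(pos)
        pos += len(tok) + 1  # +1 for the joining space

    entities = []  # (label, char_start, char_end)
    label = start = None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # A new entity begins; close any open one first.
            if label is not None:
                entities.append((label, start, offsets[i - 1] + len(tokens[i - 1])))
            label, start = tag[2:], offsets[i]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # current entity extends over this token
        else:
            # "O" tag (or inconsistent I- tag) closes any open entity.
            if label is not None:
                entities.append((label, start, offsets[i - 1] + len(tokens[i - 1])))
            label = start = None
    if label is not None:  # entity running to the end of the sentence
        entities.append((label, start, offsets[-1] + len(tokens[-1])))

    ann = [
        f"T{n}\t{lab} {s} {e}\t{text[s:e]}"
        for n, (lab, s, e) in enumerate(entities, 1)
    ]
    return text, ann
```

For example, `iob_to_brat(["John", "Smith", "lives", "in", "Kyiv"], ["B-PER", "I-PER", "O", "O", "B-LOC"])` yields the annotation lines `T1	PER 0 10	John Smith` and `T2	LOC 20 24	Kyiv`.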