In the current Paddle Python API, we should be able to get strongly-typed metrics during training, such as the error rate as a float.
We should first add interfaces to the C++ class paddle::Evaluator to retrieve the metric results, and then expose them to SWIG/Python.
Each current evaluator may contain zero or more metrics. So the metrics in an Evaluator should basically be a map or dictionary, where the key is the metric name and the value is the strongly-typed result.
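A minimal sketch of what such an interface could look like is below. The method name getMetrics, the use of std::unordered_map, and the float value type are assumptions for illustration, not the actual paddle::Evaluator API.

```cpp
// Hypothetical sketch only: method and type names are assumptions,
// not the existing paddle::Evaluator interface.
#include <string>
#include <unordered_map>

namespace paddle {

class Evaluator {
public:
  virtual ~Evaluator() = default;

  // ... existing evaluation methods ...

  // Return every metric this evaluator tracks, keyed by metric name.
  // Values are strongly typed (here: float), so a Python caller gets a
  // real number instead of parsing a printed log line.
  virtual std::unordered_map<std::string, float> getMetrics() const {
    return {};  // an evaluator may expose zero metrics
  }
};

}  // namespace paddle
```

Once such a method exists, the map-returning interface could be wrapped by SWIG so that Python code receives an ordinary dict, e.g. something like metrics["error_rate"] yielding a float during training (the key name here is illustrative).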