Get Strong Typed Metric From paddle::Evaluator #1389

Closed
reyoung opened this issue Feb 20, 2017 · 0 comments
reyoung commented Feb 20, 2017

In the current Paddle Python API, we should be able to get strongly typed metrics during training, such as the error rate as a float.

We should first add interfaces to the C++ class paddle::Evaluator to retrieve metric results, and then expose them through SWIG to Python.

Each current evaluator can contain anywhere from zero to many metrics:

  • Most evaluators expose a single metric result; for example, the classification evaluator contains ErrorRate.
  • Some evaluators are used only for debugging; they are usually named XXXPrinter and expose no metric result.
  • Some evaluators contain many metric results, like the precision-recall evaluator, because those metrics are computed together.

So basically, the metrics in Evaluator should be a map (dictionary) whose keys are metric names and whose values are the strongly typed results.

reyoung self-assigned this Feb 21, 2017
wangxicoding pushed a commit to wangxicoding/Paddle that referenced this issue Dec 9, 2021