
[Chatllama] Evaluation Function and Loop with metrics #319

Open
1 of 5 tasks
PierpaoloSorbellini opened this issue Mar 31, 2023 · 0 comments
Labels: chatllama (Issue related to the ChatLLaMA module), good first issue (Good for newcomers)

Comments

@PierpaoloSorbellini (Collaborator) commented Mar 31, 2023

Description

Each training loop currently includes an evaluation loop, but it has not yet been debugged or used.

It needs to be generalised so that it can also be launched outside of training, and it should support language-modelling-specific metrics.
It would also be useful to generate a report highlighting the performance achieved, including a comparison with other models.

TODO

  • Investigate whether libraries such as openai/evals or FastChat can be adapted for use as the evaluation tool.
  • Debug the evaluation of the model.
  • Collect and compute relevant metrics.
  • Allow the evaluation loop to be launched outside of training.
  • Produce a meaningful report that compares the performance of one or more models (a rough sketch follows below).
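As a rough illustration of what a standalone evaluation loop and comparison report could look like, here is a minimal sketch that computes corpus-level perplexity with Hugging Face transformers and prints a small comparison table. The names (evaluate_perplexity, compare_models) and the gpt2/distilgpt2 models are purely illustrative assumptions, not part of the existing ChatLLaMA code; the real loop would reuse the trainer's model and tokenizer and could delegate metric computation to openai/evals or FastChat.

```python
# Hypothetical sketch of a standalone evaluation loop and comparison report.
# Not ChatLLaMA's actual API: function names and models are illustrative only.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


@torch.no_grad()
def evaluate_perplexity(model_name: str, texts: list[str], device: str = "cpu") -> float:
    """Compute corpus-level perplexity of a causal LM over a list of texts."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()

    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt").to(device)
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy over the predicted tokens.
        out = model(**enc, labels=enc["input_ids"])
        n_tokens = enc["input_ids"].numel()
        total_nll += out.loss.item() * n_tokens
        total_tokens += n_tokens
    return math.exp(total_nll / total_tokens)


def compare_models(model_names: list[str], texts: list[str]) -> None:
    """Print a small report comparing perplexity across models, best first."""
    results = {name: evaluate_perplexity(name, texts) for name in model_names}
    for name, ppl in sorted(results.items(), key=lambda kv: kv[1]):
        print(f"{name:40s} perplexity = {ppl:8.2f}")


if __name__ == "__main__":
    sample_texts = ["The quick brown fox jumps over the lazy dog."]
    compare_models(["gpt2", "distilgpt2"], sample_texts)
```

The same structure would extend to other metrics (e.g. accuracy on benchmark prompts) by swapping the per-text computation, while keeping the model-vs-model report format.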
PierpaoloSorbellini added the good first issue and chatllama labels on Mar 31, 2023
Projects: None yet
Development: No branches or pull requests
Participants: 1