# Evaluation

An example script performing evaluation from a pretrained model is provided in this file. To run it, navigate to the folder and execute the following command in the virtualenv created upon installation:

```
python test_job_1.py
```

As with `BaseTrainingJob`, a helper class is provided for test jobs: `BaseTestJob`. Evaluation can be performed by:

* Initializing an instance of `BaseTestJob` with:
  * the name of the training job to use for evaluation (argument `training_job_name`);
  * the path to the root folder containing the folder `<training_job_name>` as a subfolder (argument `log_folder`);
  * the type of task for which the model was trained, used to compute the test accuracy (argument `task_type`, cf. the factory method `compute_num_correct_predictions`);
  * optionally, the parameters of a dataset and/or data loader different from the ones used for training (same type of parameters as in `BaseTrainingJob`).
* Running the test job:

  ```
  test_job.test()
  ```

  (where `test_job` is the instance of `BaseTestJob` used to perform evaluation).
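The steps above can be sketched as follows. This is a minimal, self-contained mock of the pattern, not the repository's actual implementation: the real `BaseTestJob` lives in this project, and only the documented argument names (`training_job_name`, `log_folder`, `task_type`) are taken from the text; the class body and the example values below are illustrative stubs.

```python
import os


class BaseTestJob:
    """Illustrative stub mirroring the documented BaseTestJob interface."""

    def __init__(self, training_job_name, log_folder, task_type,
                 dataset_params=None, dataloader_params=None):
        # Folder <log_folder>/<training_job_name> holding the pretrained model.
        self.job_folder = os.path.join(log_folder, training_job_name)
        self.task_type = task_type
        # Optional overrides of the dataset/data loader used during training.
        self.dataset_params = dataset_params
        self.dataloader_params = dataloader_params

    def test(self):
        # The real implementation would load the checkpoint, build the data
        # loader, and compute the test accuracy for the given task type
        # (cf. compute_num_correct_predictions). The stub returns its config.
        return {"job_folder": self.job_folder, "task_type": self.task_type}


# Hypothetical example values for the constructor arguments:
test_job = BaseTestJob(
    training_job_name="example_job",
    log_folder="logs",
    task_type="classification",
)
result = test_job.test()
```

In actual use, `test()` would be called exactly as shown, with `training_job_name` and `log_folder` pointing at the output of a previously completed training job.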