- Install the `jp-stable` branch:

git clone -b jp-stable https://github.com/Stability-AI/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e ".[ja]"
- Choose your prompt template based on docs/prompt_templates.md.
- Replace `TEMPLATE` with the chosen prompt version, change `MODEL_PATH`, and save the script as `harness.sh`:

MODEL_ARGS="pretrained=MODEL_PATH"
TASK="jsquad-1.1-TEMPLATE,jcommonsenseqa-1.1-TEMPLATE,jnli-1.1-TEMPLATE,marc_ja-1.1-TEMPLATE"
python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks $TASK \
    --num_fewshot "2,3,3,3" \
    --device "cuda" \
    --output_path "result.json"
- Run!

sh harness.sh
We evaluated several open-source Japanese LMs; please refer to the `harness.sh` scripts inside the `models` folder.
For more details, please see docs/jptasks.md.
| Tasks | Supported Prompt Templates |
|---|---|
| JSQuAD | 0.1 / 0.2 / 0.3 / 0.4 |
| JCommonsenseQA | 0.1 / 0.2 / 0.3 / 0.4 |
| JNLI | 0.2 / 0.3 / 0.4 |
| MARC-ja | 0.2 / 0.3 / 0.4 |
| JaQuAD | 0.1 / 0.2 / 0.3 / 0.4 |
| JBLiMP | - |
| XLSum-ja | 0.0 / 0.3 / 0.4 |
| JAQKET | 0.1 / 0.2 / 0.3 / 0.4 |
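As a concrete illustration of how a template version from the table plugs into a task name (a sketch only, following the naming scheme in harness.sh above; `MODEL_PATH` is a placeholder and 0.2 is just one of the supported versions):

# JSQuAD (2-shot) and JNLI (3-shot), both with prompt template 0.2
python main.py \
    --model hf-causal \
    --model_args pretrained=MODEL_PATH \
    --tasks "jsquad-1.1-0.2,jnli-1.1-0.2" \
    --num_fewshot "2,3" \
    --device "cuda" \
    --output_path "result.json"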
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
Features:
- 200+ tasks implemented. See the task-table for a complete list.
- Support for the Hugging Face `transformers` library, GPT-NeoX, Megatron-DeepSpeed, and the OpenAI API, with a flexible tokenization-agnostic interface.
- Support for evaluation on adapters (e.g. LoRA) supported in Hugging Face's PEFT library.
- Task versioning to ensure reproducibility.
To install `lm-eval` from the github repository main branch, run:
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
To install additional multilingual tokenization and text segmentation packages, you must install the package with the `multilingual` extra:
pip install -e ".[multilingual]"
Note: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the Task Versioning section for more info.
To evaluate a model hosted on the Hugging Face Hub (e.g. GPT-J-6B) on tasks with names matching the pattern `lambada_*` and `hellaswag`, you can use the following command:
python main.py \
--model hf-causal \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks lambada_*,hellaswag \
--device cuda:0
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints:
python main.py \
--model hf-causal \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000 \
--tasks lambada_openai,hellaswag \
--device cuda:0
To evaluate models that are loaded via `AutoSeq2SeqLM` in Hugging Face, you instead use `hf-seq2seq`. To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`.

Warning: Choosing the wrong model type may produce erroneous outputs without raising an error.
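For instance, a seq2seq evaluation looks like the causal example above with the model type swapped (a sketch; `google/flan-t5-small` is just an arbitrary example checkpoint):

# Same CLI as before, but the checkpoint is loaded via AutoSeq2SeqLM
python main.py \
    --model hf-seq2seq \
    --model_args pretrained=google/flan-t5-small \
    --tasks boolq \
    --device cuda:0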
To use with PEFT, take the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument as shown below:
python main.py \
--model hf-causal-experimental \
--model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
--tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
--device cuda:0
Our library also supports the OpenAI API:
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
--model gpt3 \
--model_args engine=davinci \
--tasks lambada_openai,hellaswag
While this functionality is only officially maintained for the official OpenAI API, it tends to also work for other hosting services that use the same API, such as goose.ai, with minor modification. We also have an implementation for the TextSynth API, using `--model textsynth`.
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:
python main.py \
--model gpt3 \
--model_args engine=davinci \
--tasks lambada_openai,hellaswag \
--check_integrity
To evaluate mesh-transformer-jax models that are not available on HF, please invoke eval harness through this script.
💡 Tip: You can inspect what the LM inputs look like by running the following command:
python write_out.py \
--tasks all_tasks \
--num_fewshot 5 \
--num_examples 10 \
--output_base_path /path/to/output/folder
This will write out one text file for each task.
To implement a new task in the eval harness, see this guide.
To help improve reproducibility, all tasks have a `VERSION` field. When run from the command line, this is reported in a column in the table, or in the "version" field in the evaluator return dict. The purpose of the version is so that if the task definition changes (i.e. to fix a bug), then we can know exactly which metrics were computed using the old buggy implementation to avoid unfair comparisons. To enforce this, there are unit tests that make sure the behavior of all tasks remains the same as when they were first implemented. Task versions start at 0, and each time a breaking change is made, the version is incremented by one.
When reporting eval harness results, please also report the version of each task. This can be done either with a separate column in the table, or by reporting the task name with the version appended, e.g. `taskname-v0`.
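For example, if you saved results with `--output_path result.json` (as in the JP example above), one quick way to print the task versions is the one-liner below; it assumes the saved JSON mirrors the evaluator's return dict, with a top-level "versions" key:

# Print the task-name -> version mapping from a saved results file
python -c "import json; print(json.load(open('result.json'))['versions'])"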
To address concerns about train/test contamination, we provide utilities for comparing results on a benchmark using only the data points not found in the model training set. Unfortunately, outside of models trained on the Pile and C4, it's very rare that people who train models disclose the contents of the training data. However, this utility can be useful to evaluate models you have trained on private data, provided you are willing to pre-compute the necessary indices. We provide computed indices for 13-gram exact match deduplication against the Pile, and plan to add additional precomputed dataset indices in the future (including C4 and min-hash LSH deduplication).
For details on text decontamination, see the decontamination guide.
Note that the directory provided to the `--decontamination_ngrams_path` argument should contain the ngram files and info.json. See the above guide for ngram generation for the Pile; this could be adapted for other training sets.
python main.py \
--model gpt2 \
--tasks sciq \
--decontamination_ngrams_path path/containing/training/set/ngrams \
--device cuda:0
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}