This is a repo I use to run HumanEval on code models; adjust as needed. Some scripts were adapted from the WizardCoder repo (process_eval.py). The evaluation code is duplicated across several files, mostly to handle edge cases around model tokenization and loading (this will be cleaned up).
Table is sorted by pass@1 score.
model | size | pass@1 | pass@10 | screenshot |
---|---|---|---|---|
sahil2801/replit-code-instruct-glaive | 3B | 63.5% | 67% | |
WizardCoder-15B-V1.0 | 15B | 57% | 68.9% | |
bigcode/starcoder | 15B | 34.6% | 48.7% | |
openchat/opencoderplus | 15B | 27.3% | 43.9% | |
teknium/Replit-v1-CodeInstruct-3B | 3B | 25.8% | 42.6% | |
teknium/Replit-v2-CodeInstruct-3B | 3B | 21.5% | 31% | |
replit-code-v1-3b | 3B | 17.1% | 29.8% | |
mpt-7b | 7B | 15.9% | 23.7% | |
xgen-7b-8k-base | 7B | 14.9% | 22.5% | |
openllama-7b-v2 | 7B | 14% | 23.1% | |
llama-2-7b | 7B | 13.1% | 21.9% | |
llama-7b | 7B | 12.1% | 18.9% | |
mpt-30b | 30B | pending | pending | pending |
Why do some scores here differ from the officially published numbers?
Because the exact prompt and post-processing the official evaluations used on this benchmark are not published or obvious. The goal here is to reproduce those numbers as closely as possible, and in many cases it is possible to get very close to the published figures.
All of the scores here were produced independently of any published numbers and are reproducible by cloning the repo and following the setup below.
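For example, instruct-tuned models are usually wrapped in an instruction template around the raw HumanEval prompt, and that wrapping alone can move scores. The Alpaca-style wrapper below is a sketch of the kind of formatting involved, not necessarily the exact template any official evaluation used.

```python
# Illustrative Alpaca-style instruction wrapper for instruct models.
# The exact wording each official evaluation used is an assumption here.
def build_instruct_prompt(humaneval_prompt: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nComplete the following Python function:\n"
        f"{humaneval_prompt}\n\n### Response:"
    )
```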
Why do some models have a filter_code post-generation step?
Base models can in many cases repeat their output, which breaks the benchmark scores. Instruct models don't have this problem, since they tend to emit an end-of-sequence token, so you won't see this step for them.
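As a rough illustration of what such a step does (a sketch, not the exact filter_code used in this repo), the completion can be truncated at the first stop sequence so repeated trailing text doesn't poison the sample:

```python
# Sketch of a filter_code-style truncation step; the stop sequences are assumptions.
STOP_SEQUENCES = ["\nclass ", "\ndef ", "\n#", "\nif __name__", "\nprint("]

def filter_code(completion: str) -> str:
    """Cut a base model's completion at the first stop sequence so that
    repeated or run-on output does not break the functional-correctness check."""
    cut = len(completion)
    for stop in STOP_SEQUENCES:
        idx = completion.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]
```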
Create a Python environment
python -m venv env && source env/bin/activate
Install dependencies
pip install -r requirements.txt
Run the eval script
# replace script file name for various models:
# eval_wizard.py
# eval_opencode.py
# eval_mpt.py
# eval_starcoder.py
# eval_replit.py
# eval_replit_glaive.py
# eval_replit_instruct.py
python eval_wizard.py
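The eval scripts all follow roughly the same sampling loop. The sketch below shows the general shape; the model name, generation settings, and output path are illustrative rather than the repo's exact code, and only one sample per task is drawn for brevity (pass@10 needs several per task).

```python
# Sketch of the common eval loop; settings and paths are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_name = "WizardLM/WizardCoder-15B-V1.0"  # swap per model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

samples = []
for task_id, problem in read_problems().items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    samples.append({"task_id": task_id, "completion": completion})  # one sample per task for brevity

write_jsonl("results/wizard/eval.jsonl", samples)
```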
Process the jsonl file to extract code samples from model completions.
Note: Only wizard & opencode require this, as they return markdown output containing code.
# replace args for various models:
# --path results/wizard --out_path results/wizard/processed.jsonl
# --path results/opencode --out_path results/opencode/processed.jsonl
python process_eval.py --path results/wizard --out_path results/wizard/processed.jsonl --add_prompt
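Concretely, this step pulls the code out of the markdown those models return. A simplified sketch of that extraction (the regex and file paths here are illustrative, not the exact process_eval.py logic) looks like this:

```python
# Simplified sketch of extracting fenced code from model completions.
import json
import re

FENCE = "`" * 3  # markdown code fence
CODE_BLOCK = re.compile(FENCE + r"(?:python)?\s*\n(.*?)" + FENCE, re.DOTALL)

def extract_code(completion: str) -> str:
    """Return the first fenced code block if one exists, else the raw completion."""
    match = CODE_BLOCK.search(completion)
    return match.group(1) if match else completion

with open("results/wizard/eval.jsonl") as f_in, open("results/wizard/processed.jsonl", "w") as f_out:
    for line in f_in:
        sample = json.loads(line)
        sample["completion"] = extract_code(sample["completion"])
        f_out.write(json.dumps(sample) + "\n")
```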
Then get the results
# replace args for various models:
# results/wizard/processed.jsonl
# results/starcoder/eval.jsonl
# results/mpt/eval.jsonl
# results/opencode/processed.jsonl
# results/replit_instruct/eval.jsonl
# results/replit_glaive/eval.jsonl
# results/replit/eval.jsonl
evaluate_functional_correctness results/wizard/processed.jsonl
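evaluate_functional_correctness comes from OpenAI's human-eval package; it runs the extracted completions against the benchmark's unit tests and reports pass@k using the unbiased estimator from the HumanEval paper, shown below for reference.

```python
# Unbiased pass@k estimator: pass@k = E[1 - C(n - c, k) / C(n, k)],
# where n is the number of samples per task and c the number that pass the tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=5, k=10))  # ~0.98 when 5 of 20 samples pass
```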