Leaderboard | RewardBench Dataset | Existing Test Sets | Results | Paper
RewardBench is a benchmark designed to evaluate the capabilities and safety of reward models (including those trained with Direct Preference Optimization, DPO). The repository includes the following:
- Common inference code for a variety of reward models (Starling, PairRM, OpenAssistant, DPO, and more).
- Common dataset formatting and tests for fair reward model inference.
- Analysis and visualization tools.
The primary scripts to generate results (more in `scripts/`):

- `scripts/run_rm.py`: Run evaluations for reward models.
- `scripts/run_dpo.py`: Run evaluations for direct preference optimization (DPO) models (and other models using implicit rewards, such as KTO).
- `scripts/train_rm.py`: A basic RM training script built on TRL.
RewardBench lets you quickly evaluate any reward model on any preference set. To install for quick usage, install with pip:
pip install rewardbench
Then, run the following:
rewardbench --model={yourmodel} --dataset={yourdataset} --batch_size=8
For a DPO model, pass `--ref_model={}` and the script will automatically route to the DPO evaluation path. The CLI automatically uses the tokenizer's chat template, but can also use fastchat conv templates.
To run the core RewardBench evaluation set, run:
rewardbench --model={yourmodel}
Examples:
- Normal operation
rewardbench --model=OpenAssistant/reward-model-deberta-v3-large-v2 --dataset=allenai/ultrafeedback_binarized_cleaned --split=test_gen --chat_template=raw
- DPO model from a local dataset (note `--load_json`)
rewardbench --model=Qwen/Qwen1.5-0.5B-Chat --ref_model=Qwen/Qwen1.5-0.5B --dataset=/net/nfs.cirrascale/allennlp/jacobm/herm/data/berkeley-nectar-binarized-preferences-random-rejected.jsonl --load_json
Experimental: Generative RMs can be run from the pip install by running:
pip install rewardbench[generative]
And then:
rewardbench-gen --model={}
For more information, see `scripts/run_generative.py`.
Local models additionally require VLLM; API models require access to the corresponding API (OpenAI, Anthropic, and Together are supported).
To install from source, first install `torch` on your system, then install the remaining requirements with:
pip install -e .
Add the following to your `.bashrc`:
export HF_TOKEN="{your_token}"
For now, in order to contribute your model to the leaderboard, open an issue with the model name on HuggingFace (you can still evaluate local models with RewardBench, see below).
If custom code is needed, please open a PR that enables it in our inference stack (see `rewardbench/models` for more information).
For reference configs, see `scripts/configs/eval_configs.yaml`.
For reference on Chat Templates, many models follow the base / sft model terminology here.
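As a small illustration of the two formatting paths mentioned earlier (tokenizer chat templates vs. fastchat conv templates), here is a sketch; the template name, model, and messages are examples only, and it assumes `fastchat` and `transformers` are installed:

```python
# Sketch: format the same conversation with a fastchat conv template
# and with a tokenizer's built-in chat template.
from fastchat.conversation import get_conv_template
from transformers import AutoTokenizer

messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# FastChat conv template (e.g. the oasst_pythia template used in the examples below)
conv = get_conv_template("oasst_pythia")
conv.append_message(conv.roles[0], messages[0]["content"])
conv.append_message(conv.roles[1], messages[1]["content"])
print(conv.get_prompt())

# Hugging Face tokenizer chat template (the default path described above)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")
print(tokenizer.apply_chat_template(messages, tokenize=False))
```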
A small model for debugging is available at `natolambert/gpt2-dummy-rm`.
The core scripts automatically evaluate our core evaluation set. To run these on existing preference sets, add the argument `--pref_sets`.
To run individual models with `scripts/run_rm.py`, use any of the following examples:
python scripts/run_rm.py --model=openbmb/UltraRM-13b --chat_template=openbmb --batch_size=8
python scripts/run_rm.py --model=OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5 --chat_template=oasst_pythia
python scripts/run_rm.py --model=PKU-Alignment/beaver-7b-v1.0-cost --chat_template=pku-align --batch_size=16
python scripts/run_rm.py --model=IDEA-CCNL/Ziya-LLaMA-7B-Reward --batch_size=32 --trust_remote_code --chat_template=Ziya
To run these models with AI2 infrastructure, run:
python scripts/submit_eval_jobs.py
Or, for example, the best-of-N sweep on the non-default image:
python scripts/submit_eval_jobs.py --eval_on_bon --image=nathanl/herm_bon
Note: for AI2 users, you must run `beaker secret write HF_TOKEN <your_write_token_here>` to make the scripts work.
Models using the default abstraction `AutoModelForSequenceClassification.from_pretrained` can also be loaded locally. Expanding this functionality is a TODO. For example:
python scripts/run_rm.py --model=/net/nfs.cirrascale/allennlp/hamishi/EasyLM/rm_13b_3ep --chat_template=tulu --batch_size=8
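For intuition, here is a minimal sketch of what that abstraction looks like in plain `transformers` code: load a sequence-classification reward model and score a chosen/rejected pair. The model ID and example texts are illustrative, and many RMs expect chat-template formatting first (which `run_rm.py` handles via `--chat_template`):

```python
# Sketch: score a preference pair with a classifier-style reward model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # or a local path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

prompt = "Explain the difference between a list and a tuple in Python."
chosen = "Lists are mutable; tuples are immutable and hashable."
rejected = "They are exactly the same thing."

def reward(question: str, answer: str) -> float:
    # This particular RM accepts a (question, answer) text pair and emits a
    # single logit as the scalar reward; other RMs may expect one templated string.
    inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0].item()

# The chosen response should receive the higher scalar reward.
print(reward(prompt, chosen), reward(prompt, rejected))
```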
And for DPO:
python scripts/run_dpo.py --model=stabilityai/stablelm-zephyr-3b --ref_model=stabilityai/stablelm-3b-4e1t --batch_size=8
python scripts/run_dpo.py --model=stabilityai/stablelm-2-zephyr-1_6b --ref_model=stabilityai/stablelm-2-1_6b --batch_size=16
For reward models already in RewardBench, you can run an offline ensemble test to approximate using multiple reward models in your system. To try this, you can run:
python analysis/run_ensemble_offline.py --models sfairXC/FsfairX-LLaMA3-RM-v0.1 openbmb/Eurus-RM-7b Nexusflow/Starling-RM-34B
Local and API models are supported. For example, run OpenAI's models like:
python scripts/run_generative.py --model=gpt-3.5-turbo-0125
Local models are loaded from HuggingFace, though some are also available via Together's API. Run Llama 3 locally with:
python scripts/run_generative.py --model=meta-llama/Llama-3-70b-chat-hf --force_local
Or, via Together's API:
python scripts/run_generative.py --model=meta-llama/Llama-3-70b-chat-hf
We are adding support for generative ensembles (only via API for now); run with:
python scripts/run_generative.py --model gpt-3.5-turbo-0125 claude-3-sonnet-20240229 meta-llama/Llama-3-70b-chat-hf
Note: you must pass an odd number of models (greater than 1).
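The odd-number requirement is there so a simple majority vote over the judges' A/B picks can never tie. A toy sketch of that aggregation step (the votes below are made up, not output from the script):

```python
from collections import Counter

# One A/B verdict per judge for a single prompt, e.g. from three API judges.
votes = ["A", "B", "A"]

def majority(votes: list[str]) -> str:
    # With an odd number (>1) of binary votes, a strict majority always exists.
    assert len(votes) > 1 and len(votes) % 2 == 1
    return Counter(votes).most_common(1)[0][0]

print(majority(votes))  # -> "A"
```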
To create the best-of-N ranking across the dataset, run the following (`--best_of=8` is a placeholder; 16 is also fine, as the eval logic handles lower best-of-N values):
python scripts/run_bon.py --model=OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5 --chat_template=oasst_pythia --best_of=8 --debug
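Conceptually, best-of-N evaluation scores each prompt's N candidate completions with the reward model and keeps the highest-scoring one. A toy sketch of that selection step (the completions and scores are placeholders):

```python
# Toy best-of-N selection: keep the completion with the highest reward score.
completions = ["draft A", "draft B", "draft C", "draft D"]
scores = [0.12, 0.87, 0.45, 0.33]  # placeholder reward-model scores

best_score, best_completion = max(zip(scores, completions))  # compares by score
print(best_completion)  # -> "draft B"
```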
Important: We use prompt-weighted scores for the sections Chat, Chat Hard, Safety, and Reasoning (with math equalized to code) to avoid assigning too much credit to small subsets (e.g., the MT Bench ones). Use the following code to compute the scores for each category, assuming `rewardbench` is installed:
from rewardbench.constants import EXAMPLE_COUNTS, SUBSET_MAPPING
from rewardbench.utils import calculate_scores_per_section
metrics = {
    "alpacaeval-easy": 0.5,
    "alpacaeval-hard": 0.7052631578947368,
    "alpacaeval-length": 0.5894736842105263,
    "chat_template": "tokenizer",
    "donotanswer": 0.8235294117647058,
    "hep-cpp": 0.6280487804878049,
    "hep-go": 0.6341463414634146,
    "hep-java": 0.7073170731707317,
    "hep-js": 0.6646341463414634,
    "hep-python": 0.5487804878048781,
    "hep-rust": 0.6463414634146342,
    "llmbar-adver-GPTInst": 0.391304347826087,
    "llmbar-adver-GPTOut": 0.46808510638297873,
    "llmbar-adver-manual": 0.3695652173913043,
    "llmbar-adver-neighbor": 0.43283582089552236,
    "llmbar-natural": 0.52,
    "math-prm": 0.2953020134228188,
    "model": "PKU-Alignment/beaver-7b-v1.0-cost",
    "model_type": "Seq. Classifier",
    "mt-bench-easy": 0.5714285714285714,
    "mt-bench-hard": 0.5405405405405406,
    "mt-bench-med": 0.725,
    "refusals-dangerous": 0.97,
    "refusals-offensive": 1,
    "xstest-should-refuse": 1,
    "xstest-should-respond": 0.284,
}
# Calculate and print the scores per section
scores_per_section = calculate_scores_per_section(EXAMPLE_COUNTS, SUBSET_MAPPING, metrics)
print(scores_per_section)
├── README.md              <- The top-level README for researchers using this project
├── analysis/              <- Directory of tools to analyze RewardBench results or other reward model properties
├── rewardbench/           <- Core utils and modeling files
|   ├── models/            <- Standalone files for running existing reward models
|   └── *.py               <- RewardBench tools and utilities
├── scripts/               <- Scripts and configs to train and evaluate reward models
├── tests                  <- Unit tests
├── Dockerfile             <- Build file for reproducible and scalable research at AI2
├── LICENSE
├── Makefile               <- Makefile with commands like `make style`
└── setup.py               <- Makes project pip installable (pip install -e .) so `alignment` can be imported
This section is designed for AI2 usage, but may help others evaluating models with Docker.
When updating this repo, the docker image should be rebuilt to include those changes.
For AI2 members, please update the list below with any images you use regularly.
For example, if you update `scripts/run_rm.py` and include a new package (or change a package version), you should rebuild the image and verify it still works on known models.
To update the image, run these commands in the root directory of this repo:
docker build -t <local_image_name> . --platform linux/amd64
beaker image create <local_image_name> -n <beaker_image_name>
Notes:
- Do not use the character `-` in image names for beaker.
- When updating the `Dockerfile`, make sure to see the instructions at the top to update the base CUDA version.
In development, we have the following Docker images (most recent first, as it's likely what you need). TODO: update this so one image has VLLM (for generative RMs only) and one does not; the one without will load much faster.
- `nathanl/rb_v23`, Jul. 2024: include support for bfloat16 models from command line
- `nathanl/rb_v22`, Jul. 2024: include new Generalizable Reward Model
- `nathanl/rb_v20`: fixes to DPO handling (minor) + Llama 3 not quantized for DPO
- `nathanl/rb_v18`: improvements to RewardBench CLI
- `nathanl/rb_v17` (with VLLM): add support for vllm + LLM as a judge; `rb_v16` is similar without prometheus and some OpenAI models
- `nathanl/rb_v12`: add support for Llama 3
- `nathanl/rewardbench_v10`: add support for `mightbe/Better-PairRM` via jinja2
- `nathanl/rewardbench_v8`: add support for `openbmb/Eurus-RM-7b` and starcoder2
- `nathanl/rewardbench_v5`: improve saving with DPO script
- `nathanl/rewardbench_v4`: fix EOS token bug on FastChat models (GH #90)
- `nathanl/rewardbench_v2`: fix beaver cost model
- `nathanl/rewardbench_v1`: release version
Please cite our work with the following:
@misc{lambert2024rewardbench,
title={RewardBench: Evaluating Reward Models for Language Modeling},
author={Nathan Lambert and Valentina Pyatkin and Jacob Morrison and LJ Miranda and Bill Yuchen Lin and Khyathi Chandu and Nouha Dziri and Sachin Kumar and Tom Zick and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
year={2024},
eprint={2403.13787},
archivePrefix={arXiv},
primaryClass={cs.LG}
}