Fix pr #5223: Fix issue #5222: [Refactor]: Refactor the evaluation directory
openhands-agent committed Nov 23, 2024
1 parent c6627f8 commit 4136c53
Showing 54 changed files with 180 additions and 147 deletions.
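
Most of the changes below are a mechanical rewrite of path references in the benchmark READMEs and `run_infer.sh` scripts so that they point under `evaluation/benchmarks/`. A minimal sketch of how one such rewrite could be applied tree-wide, assuming GNU `grep` and `sed` — the command itself is illustrative and is not taken from the commit:

```bash
# Illustrative sketch only -- an assumption about how such a rewrite could be applied;
# it is not part of this commit. Rewrites one path prefix across Markdown and shell
# files; the result would then be reviewed with `git diff` before committing.
grep -rl --include='*.md' --include='*.sh' 'evaluation/utils/version_control.sh' . \
  | xargs -r sed -i 's|evaluation/utils/version_control.sh|evaluation/benchmarks/utils/version_control.sh|g'
```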
55 changes: 44 additions & 11 deletions CREDITS.md
@@ -24,33 +24,66 @@ OpenHands includes and adapts the following open source projects. We are gratefu
### Reference Implementations for Evaluation Benchmarks
OpenHands integrates code of the reference implementations for the following agent evaluation benchmarks:

#### [HumanEval](https://github.com/openai/human-eval)
- License: MIT License

#### [DSP](https://github.com/microsoft/DataScienceProblems)
- License: MIT License
#### [EDA](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/EDA)
- Description: Exploratory Data Analysis benchmark

#### [HumanEvalPack](https://github.com/bigcode-project/bigcode-evaluation-harness)
#### [AgentBench](https://github.com/THUDM/AgentBench)
- License: Apache License 2.0

#### [AgentBench](https://github.com/THUDM/AgentBench)
#### [Aider Bench](https://github.com/paul-gauthier/aider)
- License: Apache License 2.0

#### [SWE-Bench](https://github.com/princeton-nlp/SWE-bench)
- License: MIT License
#### [BioCoder](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/biocoder)
- Description: Benchmark for biological code generation tasks

#### [BIRD](https://bird-bench.github.io/)
- License: MIT License
- Dataset: CC-BY-SA 4.0

#### [Browsing Delegation](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/browsing_delegation)
- Description: Web browsing delegation benchmark

#### [Commit0 Bench](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/commit0_bench)
- Description: Git commit analysis benchmark

#### [DiscoveryBench](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/discoverybench)
- Description: Benchmark for discovery tasks

#### [GAIA](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/gaia)
- Description: General AI Assistant benchmark

#### [Gorilla APIBench](https://github.com/ShishirPatil/gorilla)
- License: Apache License 2.0

#### [GPQA](https://github.com/idavidrein/gpqa)
- License: MIT License

#### [ProntoQA](https://github.com/asaparov/prontoqa)
- License: Apache License 2.0
#### [HumanEvalFix](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/humanevalfix)
- Description: Code fixing benchmark based on HumanEval

#### [Logic Reasoning](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/logic_reasoning)
- Description: Benchmark for logical reasoning tasks

#### [MiniWoB](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/miniwob)
- Description: Mini World of Bits benchmark

#### [MINT](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/mint)
- Description: Machine learning INTerpretation benchmark

#### [ML Bench](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/ml_bench)
- Description: Machine Learning benchmark

#### [ScienceAgentBench](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/scienceagentbench)
- Description: Benchmark for scientific tasks

#### [SWE-Bench](https://github.com/princeton-nlp/SWE-bench)
- License: MIT License

#### [ToolQA](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/toolqa)
- Description: Tool-based Question Answering benchmark

#### [WebArena](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/webarena)
- Description: Web interaction benchmark


## Open Source licenses
4 changes: 2 additions & 2 deletions evaluation/benchmarks/EDA/README.md
@@ -12,7 +12,7 @@ Please follow instruction [here](../README.md#setup) to setup your local develop

```bash
export OPENAI_API_KEY="sk-XXX"; # This is required for evaluation (to simulate another party of conversation)
./evaluation/benchmarks/EDA/scripts/run_infer.sh [model_config] [git-version] [agent] [dataset] [eval_limit]
./evaluation/benchmarks/benchmarks/EDA/scripts/run_infer.sh [model_config] [git-version] [agent] [dataset] [eval_limit]
```

where `model_config` is mandatory, while `git-version`, `agent`, `dataset` and `eval_limit` are optional.
@@ -33,7 +33,7 @@ to `CodeActAgent`.
For example,

```bash
./evaluation/benchmarks/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 0.6.2 CodeActAgent things
./evaluation/benchmarks/benchmarks/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 0.6.2 CodeActAgent things
```

## Reference
4 changes: 2 additions & 2 deletions evaluation/benchmarks/EDA/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -43,7 +43,7 @@ echo "AGENT_VERSION: $AGENT_VERSION"
echo "MODEL_CONFIG: $MODEL_CONFIG"
echo "DATASET: $DATASET"

COMMAND="poetry run python evaluation/benchmarks/EDA/run_infer.py \
COMMAND="poetry run python evaluation/benchmarks/benchmarks/EDA/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--dataset $DATASET \
6 changes: 3 additions & 3 deletions evaluation/benchmarks/agent_bench/README.md
@@ -9,7 +9,7 @@ Please follow instruction [here](../README.md#setup) to setup your local develop
## Start the evaluation

```bash
./evaluation/benchmarks/agent_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
./evaluation/benchmarks/benchmarks/agent_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
```

- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your
@@ -25,7 +25,7 @@ in order to use `eval_limit`, you must also set `agent`.

Following is the basic command to start the evaluation.

You can update the arguments in the script `evaluation/benchmarks/agent_bench/scripts/run_infer.sh`, such as `--max-iterations`, `--eval-num-workers` and so on.
You can update the arguments in the script `evaluation/benchmarks/benchmarks/agent_bench/scripts/run_infer.sh`, such as `--max-iterations`, `--eval-num-workers` and so on.

- `--agent-cls`, the agent to use. For example, `CodeActAgent`.
- `--llm-config`: the LLM configuration to use. For example, `eval_gpt4_1106_preview`.
@@ -34,5 +34,5 @@ You can update the arguments in the script `evaluation/benchmarks/agent_bench/sc
- `--eval-n-limit`: the number of examples to evaluate. For example, `100`.

```bash
./evaluation/benchmarks/agent_bench/scripts/run_infer.sh eval_gpt35_turbo HEAD CodeActAgent 1
./evaluation/benchmarks/benchmarks/agent_bench/scripts/run_infer.sh eval_gpt35_turbo HEAD CodeActAgent 1
```
4 changes: 2 additions & 2 deletions evaluation/benchmarks/agent_bench/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -26,7 +26,7 @@ echo "AGENT: $AGENT"
echo "AGENT_VERSION: $AGENT_VERSION"
echo "MODEL_CONFIG: $MODEL_CONFIG"

COMMAND="export PYTHONPATH=evaluation/benchmarks/agent_bench:\$PYTHONPATH && poetry run python evaluation/benchmarks/agent_bench/run_infer.py \
COMMAND="export PYTHONPATH=evaluation/benchmarks/benchmarks/agent_bench:\$PYTHONPATH && poetry run python evaluation/benchmarks/benchmarks/agent_bench/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--max-iterations 30 \
14 changes: 7 additions & 7 deletions evaluation/benchmarks/aider_bench/README.md
@@ -16,7 +16,7 @@ development environment and LLM.
## Start the evaluation

```bash
./evaluation/benchmarks/aider_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [eval-num-workers] [eval_ids]
./evaluation/benchmarks/benchmarks/aider_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [eval-num-workers] [eval_ids]
```

- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for
@@ -42,7 +42,7 @@ export SKIP_NUM=12 # skip the first 12 instances from the dataset
Following is the basic command to start the evaluation.

You can update the arguments in the script
`evaluation/benchmarks/aider_bench/scripts/run_infer.sh`, such as `--max-iterations`,
`evaluation/benchmarks/benchmarks/aider_bench/scripts/run_infer.sh`, such as `--max-iterations`,
`--eval-num-workers` and so on:

- `--agent-cls`, the agent to use. For example, `CodeActAgent`.
@@ -53,33 +53,33 @@ You can update the arguments in the script
- `--eval-ids`: the IDs of the examples to evaluate (comma separated). For example, `"1,3,10"`.

```bash
./evaluation/benchmarks/aider_bench/scripts/run_infer.sh eval_gpt35_turbo HEAD CodeActAgent 100 1 "1,3,10"
./evaluation/benchmarks/benchmarks/aider_bench/scripts/run_infer.sh eval_gpt35_turbo HEAD CodeActAgent 100 1 "1,3,10"
```

### Run Inference on `RemoteRuntime` (experimental)

This is in limited beta. Contact Xingyao over slack if you want to try this out!

```bash
./evaluation/benchmarks/aider_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [eval-num-workers] [eval_ids]
./evaluation/benchmarks/benchmarks/aider_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [eval-num-workers] [eval_ids]

# Example - This runs evaluation on CodeActAgent for 133 instances on aider_bench test set, with 2 workers running in parallel
export ALLHANDS_API_KEY="YOUR-API-KEY"
export RUNTIME=remote
export SANDBOX_REMOTE_RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
./evaluation/benchmarks/aider_bench/scripts/run_infer.sh llm.eval HEAD CodeActAgent 133 2
./evaluation/benchmarks/benchmarks/aider_bench/scripts/run_infer.sh llm.eval HEAD CodeActAgent 133 2
```

## Summarize Results

```bash
poetry run python ./evaluation/benchmarks/aider_bench/scripts/summarize_results.py [path_to_output_jsonl_file]
poetry run python ./evaluation/benchmarks/benchmarks/aider_bench/scripts/summarize_results.py [path_to_output_jsonl_file]
```

Full example:

```bash
poetry run python ./evaluation/benchmarks/aider_bench/scripts/summarize_results.py evaluation/evaluation_outputs/outputs/AiderBench/CodeActAgent/claude-3-5-sonnet@20240620_maxiter_30_N_v1.9/output.jsonl
poetry run python ./evaluation/benchmarks/benchmarks/aider_bench/scripts/summarize_results.py evaluation/benchmarks/evaluation_outputs/outputs/AiderBench/CodeActAgent/claude-3-5-sonnet@20240620_maxiter_30_N_v1.9/output.jsonl
```

This will list the instances that passed and the instances that failed. For each
4 changes: 2 additions & 2 deletions evaluation/benchmarks/aider_bench/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -39,7 +39,7 @@ if [ "$USE_UNIT_TESTS" = true ]; then
EVAL_NOTE=$EVAL_NOTE-w-test
fi

COMMAND="export PYTHONPATH=evaluation/benchmarks/aider_bench:\$PYTHONPATH && poetry run python evaluation/benchmarks/aider_bench/run_infer.py \
COMMAND="export PYTHONPATH=evaluation/benchmarks/benchmarks/aider_bench:\$PYTHONPATH && poetry run python evaluation/benchmarks/benchmarks/aider_bench/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--max-iterations 30 \
4 changes: 2 additions & 2 deletions evaluation/benchmarks/biocoder/README.md
@@ -21,7 +21,7 @@ To reproduce this image, please see the Dockerfile_Openopenhands in the `biocode


```bash
./evaluation/benchmarks/biocoder/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
./evaluation/benchmarks/benchmarks/biocoder/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
```

where `model_config` is mandatory, while `git-version`, `agent`, `dataset` and `eval_limit` are optional.
@@ -43,7 +43,7 @@ with current OpenHands version, then your command would be:
## Examples

```bash
./evaluation/benchmarks/biocoder/scripts/run_infer.sh eval_gpt4o_2024_05_13 HEAD CodeActAgent 1
./evaluation/benchmarks/benchmarks/biocoder/scripts/run_infer.sh eval_gpt4o_2024_05_13 HEAD CodeActAgent 1
```

## Reference
4 changes: 2 additions & 2 deletions evaluation/benchmarks/biocoder/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -28,7 +28,7 @@ echo "AGENT_VERSION: $AGENT_VERSION"
echo "MODEL_CONFIG: $MODEL_CONFIG"
echo "DATASET: $DATASET"

COMMAND="poetry run python evaluation/benchmarks/biocoder/run_infer.py \
COMMAND="poetry run python evaluation/benchmarks/benchmarks/biocoder/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--max-iterations 10 \
4 changes: 2 additions & 2 deletions evaluation/benchmarks/bird/README.md
@@ -9,7 +9,7 @@ Please follow instruction [here](../README.md#setup) to setup your local develop
## Run Inference on Bird

```bash
./evaluation/benchmarks/bird/scripts/run_infer.sh [model_config] [git-version]
./evaluation/benchmarks/benchmarks/bird/scripts/run_infer.sh [model_config] [git-version]
```

- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your
@@ -31,7 +31,7 @@ For each problem, OpenHands is given a set number of iterations to fix the faili
"agent_class": "CodeActAgent",
"model_name": "gpt-4-1106-preview",
"max_iterations": 5,
"eval_output_dir": "evaluation/evaluation_outputs/outputs/bird/CodeActAgent/gpt-4-1106-preview_maxiter_5_N_v1.5",
"eval_output_dir": "evaluation/benchmarks/evaluation_outputs/outputs/bird/CodeActAgent/gpt-4-1106-preview_maxiter_5_N_v1.5",
"start_time": "2024-05-29 02:00:22",
"git_commit": "ae105c2fafc64ad3eeb7a8bea09119fcb5865bc4"
},
4 changes: 2 additions & 2 deletions evaluation/benchmarks/bird/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -26,7 +26,7 @@ echo "AGENT: $AGENT"
echo "AGENT_VERSION: $AGENT_VERSION"
echo "MODEL_CONFIG: $MODEL_CONFIG"

COMMAND="poetry run python evaluation/benchmarks/bird/run_infer.py \
COMMAND="poetry run python evaluation/benchmarks/benchmarks/bird/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--max-iterations 5 \
4 changes: 2 additions & 2 deletions evaluation/benchmarks/browsing_delegation/README.md
@@ -12,8 +12,8 @@ Please follow instruction [here](../README.md#setup) to setup your local develop
## Run Inference

```bash
./evaluation/benchmarks/browsing_delegation/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
# e.g., ./evaluation/swe_bench/scripts/run_infer.sh llm.eval_gpt4_1106_preview_llm HEAD CodeActAgent 300
./evaluation/benchmarks/benchmarks/browsing_delegation/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
# e.g., ./evaluation/benchmarks/swe_bench/scripts/run_infer.sh llm.eval_gpt4_1106_preview_llm HEAD CodeActAgent 300
```

where `model_config` is mandatory, while `agent` and `eval_limit` are optional.
4 changes: 2 additions & 2 deletions evaluation/benchmarks/browsing_delegation/scripts/run_infer.sh
@@ -1,7 +1,7 @@
#!/bin/bash
set -eo pipefail

source "evaluation/utils/version_control.sh"
source "evaluation/benchmarks/utils/version_control.sh"

MODEL_CONFIG=$1
COMMIT_HASH=$2
@@ -28,7 +28,7 @@ echo "MODEL_CONFIG: $MODEL_CONFIG"

EVAL_NOTE="$AGENT_VERSION"

COMMAND="poetry run python evaluation/benchmarks/browsing_delegation/run_infer.py \
COMMAND="poetry run python evaluation/benchmarks/benchmarks/browsing_delegation/run_infer.py \
--agent-cls $AGENT \
--llm-config $MODEL_CONFIG \
--max-iterations 1 \
12 changes: 6 additions & 6 deletions evaluation/benchmarks/commit0_bench/README.md
@@ -24,10 +24,10 @@ Make sure your Docker daemon is running, and you have ample disk space (at least
When the `run_infer.sh` script is started, it will automatically pull the `lite` split in Commit0. For example, for instance ID `commit-0/minitorch`, it will try to pull our pre-build docker image `wentingzhao/minitorch` from DockerHub. This image will be used create an OpenHands runtime image where the agent will operate on.

```bash
./evaluation/benchmarks/commit0_bench/scripts/run_infer.sh [repo_split] [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split]
./evaluation/benchmarks/benchmarks/commit0_bench/scripts/run_infer.sh [repo_split] [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split]

# Example
./evaluation/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 16 100 8 wentingzhao/commit0_combined test
./evaluation/benchmarks/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 16 100 8 wentingzhao/commit0_combined test
```

where `model_config` is mandatory, and the rest are optional.
@@ -56,25 +56,25 @@ Let's say you'd like to run 10 instances using `llm.eval_sonnet` and CodeActAgen
then your command would be:

```bash
./evaluation/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 10 30 1 wentingzhao/commit0_combined test
./evaluation/benchmarks/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 10 30 1 wentingzhao/commit0_combined test
```

### Run Inference on `RemoteRuntime` (experimental)

This is in limited beta. Contact Xingyao over slack if you want to try this out!

```bash
./evaluation/benchmarks/commit0_bench/scripts/run_infer.sh [repo_split] [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split]
./evaluation/benchmarks/benchmarks/commit0_bench/scripts/run_infer.sh [repo_split] [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split]

# Example - This runs evaluation on CodeActAgent for 10 instances on "wentingzhao/commit0_combined"'s test set, with max 30 iteration per instances, with 1 number of workers running in parallel
ALLHANDS_API_KEY="YOUR-API-KEY" RUNTIME=remote SANDBOX_REMOTE_RUNTIME_API_URL="https://runtime.eval.all-hands.dev" EVAL_DOCKER_IMAGE_PREFIX="docker.io/wentingzhao" \
./evaluation/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 10 30 1 wentingzhao/commit0_combined test
./evaluation/benchmarks/benchmarks/commit0_bench/scripts/run_infer.sh lite llm.eval_sonnet HEAD CodeActAgent 10 30 1 wentingzhao/commit0_combined test
```

To clean-up all existing runtime you've already started, run:

```bash
ALLHANDS_API_KEY="YOUR-API-KEY" ./evaluation/benchmarks/commit0_bench/scripts/cleanup_remote_runtime.sh
ALLHANDS_API_KEY="YOUR-API-KEY" ./evaluation/benchmarks/benchmarks/commit0_bench/scripts/cleanup_remote_runtime.sh
```

### Specify a subset of tasks to run infer
2 changes: 1 addition & 1 deletion evaluation/benchmarks/commit0_bench/run_infer.py
@@ -58,7 +58,7 @@ def get_instruction(instance: pd.Series, metadata: EvalMetadata):
test_cmd = instance['test']['test_cmd']
test_dir = instance['test']['test_dir']
# Instruction based on Anthropic's official trajectory
# https://github.com/eschluntz/swe-bench-experiments/tree/main/evaluation/verified/20241022_tools_claude-3-5-sonnet-updated/trajs
# https://github.com/eschluntz/swe-bench-experiments/tree/main/evaluation/benchmarks/verified/20241022_tools_claude-3-5-sonnet-updated/trajs
instruction = (
'<uploaded_files>\n'
f'/workspace/{workspace_dir_name}\n'
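
After a path refactor of this size, it can help to audit the rewritten references — for example, to list files that still use the old layout or that picked up a repeated `benchmarks/` segment. A hypothetical check, assuming GNU `grep`; it is not part of this commit:

```bash
# Hypothetical audit, not part of this commit: print Markdown/shell lines that still
# reference the old evaluation/<benchmark>/ layout or contain a doubled "benchmarks" segment.
grep -rnE --include='*.md' --include='*.sh' \
  'evaluation/(benchmarks/benchmarks/|(EDA|agent_bench|aider_bench|biocoder|bird|swe_bench)/)' .
```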