Commit 3abb8d3: "copy"

paul-gauthier committed Dec 22, 2024 · 1 parent 7f0860d
Showing 1 changed file: benchmark/README.md (4 additions, 8 deletions)
@@ -50,10 +50,7 @@ cd aider
 mkdir tmp.benchmarks
 # Clone the repo with the exercises
-# Copy the practice exercises into the benchmark scratch dir
-cp -rp python/exercises/practice tmp.benchmarks/exercism-python
+git clone https://github.com/Aider-AI/polyglot-benchmark tmp.benchmarks/polyglot-benchmark
 # Build the docker container
 ./benchmark/docker_build.sh
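The updated setup in the hunk above can be sketched as one small script. This is a dry-run sketch: the `run` helper and the `RUN` flag are illustrative additions, not part of the aider repo; set `RUN=1` to actually execute the commands from a checkout of aider.

```shell
#!/bin/sh
# Dry-run sketch of the updated benchmark setup.
# run() prints each command unless RUN=1; it is illustrative, not part of aider.
run() {
  if [ "${RUN:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run mkdir -p tmp.benchmarks
# Clone the repo with the exercises (replaces the old exercism-python copy step)
run git clone https://github.com/Aider-AI/polyglot-benchmark tmp.benchmarks/polyglot-benchmark
# Build the docker container
run ./benchmark/docker_build.sh
```

Without `RUN=1` this only prints the plan, which makes it safe to review before running for real.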
@@ -72,11 +69,11 @@ Launch the docker container and run the benchmark inside it:
 pip install -e .
 # Run the benchmark:
-./benchmark/benchmark.py a-helpful-name-for-this-run --model gpt-3.5-turbo --edit-format whole --threads 10
+./benchmark/benchmark.py a-helpful-name-for-this-run --model gpt-3.5-turbo --edit-format whole --threads 10 --exercises-dir polyglot-benchmark
 ```
 
 The above will create a folder `tmp.benchmarks/YYYY-MM-DD-HH-MM-SS--a-helpful-name-for-this-run` with benchmarking results.
-Run like this, the script will run all 133 exercises in a random order.
+Run like this, the script will run all 225 exercises in a random order.
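The results folder naming described above can be reproduced like this (a sketch, assuming the `YYYY-MM-DD-HH-MM-SS` timestamp format the docs show):

```shell
# Build the results directory name in the documented format:
# tmp.benchmarks/YYYY-MM-DD-HH-MM-SS--<run-name>
name="a-helpful-name-for-this-run"
stamp=$(date +%Y-%m-%d-%H-%M-%S)
dir="tmp.benchmarks/${stamp}--${name}"
echo "$dir"
```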

You can run `./benchmark/benchmark.py --help` for a list of all the arguments, but here are the most useful to keep in mind:

@@ -101,7 +98,7 @@ The benchmark report is a yaml record with statistics about the run:
 
 ```yaml
 - dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
-  test_cases: 133
+  test_cases: 225
   model: claude-3.5-sonnet
   edit_format: diff
   commit_hash: 35f21b5
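Since the report is plain YAML, summary fields can be pulled out with standard tools. A sketch, where `report.yaml` is a stand-in file written inline for illustration, not a real benchmark result:

```shell
# Write a stand-in report shaped like the record above, then extract fields.
cat > report.yaml <<'EOF'
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
  test_cases: 225
  model: claude-3.5-sonnet
  edit_format: diff
EOF

# Grab the two-space-indented scalar fields with awk.
test_cases=$(awk '/^  test_cases:/ {print $2}' report.yaml)
model=$(awk '/^  model:/ {print $2}' report.yaml)
echo "model=$model test_cases=$test_cases"
```

For anything more nested than this flat record, a real YAML parser is the safer choice.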
@@ -142,7 +139,6 @@ You can see examples of the benchmark report yaml in the
 
 ## Limitations, notes
 
-- Benchmarking all 133 exercises against Claude 3.5 Sonnet will cost about $4.
 - Contributions of benchmark results are welcome! Submit results by opening a PR with edits to the
   [aider leaderboard data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).
 - These scripts are not intended for use by typical aider end users.