Update README.md
README.md
CHANGED
@@ -57,6 +57,15 @@ Each task includes a `docker_image` field pointing to a pre-built Docker Hub ima
|
We follow the standard SWE-Agent scaffold and provide a sample config (with the prompts) in [default_qa_config.yaml](default_qa_config.yaml).

To run a task, pull its Docker image, start a container, and reset the environment to the base commit:

```bash
cd /app
git config --global --add safe.directory /app
git restore .
git reset --hard <repository_base_commit>
git clean -fdq
```
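Inside the actual task container you would first `docker pull` the image named by the task's `docker_image` field, `docker run` it, and then execute the commands above at `/app`. As a self-contained illustration of what the reset sequence does (restore tracked files, move HEAD to the base commit, delete untracked files), here is the same sequence run in a throwaway local repository; all file names and the temp repo below are made up for the demo:

```shell
set -e

# Create a throwaway repo standing in for /app (all names here are illustrative).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

echo "base" > file.txt
git add file.txt
git commit -qm "base"
BASE_COMMIT=$(git rev-parse HEAD)   # plays the role of <repository_base_commit>

# Dirty the worktree the way an agent run might.
echo "modified" > file.txt
echo "scratch" > untracked.txt

# The reset sequence from the README:
git restore .                    # discard unstaged edits to tracked files
git reset --hard "$BASE_COMMIT"  # move HEAD and worktree to the base commit
git clean -fdq                   # remove untracked files and directories

cat file.txt                     # tracked file is back to its base contents
```

After the three commands, the worktree exactly matches the base commit: the edit to `file.txt` is gone and `untracked.txt` no longer exists.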
Evaluation is performed by an LLM judge (Claude Opus 4.5) that scores the agent's answer against each rubric criterion independently. Each criterion receives a binary score (met or not met), and the per-criterion scores are then aggregated.
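One simple way to aggregate binary rubric scores is the fraction of criteria met. The sketch below assumes that rule and an invented four-criterion rubric; the benchmark's actual aggregation may differ:

```shell
# Hypothetical per-criterion binary scores from the judge (1 = met, 0 = not met).
scores=(1 0 1 1)

# Count how many criteria were met.
met=0
for s in "${scores[@]}"; do
  met=$((met + s))
done

# Aggregate as the fraction of criteria met (3 of 4 here).
total=${#scores[@]}
awk -v m="$met" -v t="$total" 'BEGIN { printf "%.2f\n", m / t }'
```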