13 changes: 7 additions & 6 deletions README.md
@@ -23,7 +23,7 @@ If you're planning on using any API-based models, make sure you define your rele
The images and text are stored on the HuggingFace Hub as a .zip file. You may download it directly from there using `huggingface-cli` (recommended):

```bash
-huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset
+huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset --local-dir .
```

Alternatively, if you are unable to use `huggingface-cli`, you may use the direct download URL, as provided by HuggingFace:
@@ -42,26 +42,27 @@ unzip readbench.zip

The authors of GPQA have requested that the dataset not be reshared as-is, to minimise model contamination. We follow their wishes, which means you need to generate the GPQA images yourself, based on the original GPQA dataset. You can do so by running the following commands:
```bash
-python data_prep.py --datasets gpqa
+python datagen.py --datasets gpqa
```
You might get an error saying that the dataset is gated and that you need to accept its terms on the HF Hub. To resolve this, follow the link, accept the terms, and try again.

5. **Prepare the benchmark**

You may now run the following command to prepare the metadata file which will be used to run the benchmark:

```bash
-python downsampler.py --root rendered_images_ft12 --split standard
+python data_prep.py --root rendered_images_ft12 --split standard
```

#### tl;dr

Running the commands below will download and prepare the full ReadBench benchmark, as used in the paper:

```bash
-huggingface-cli download answerdotai/ReadBench readbench.zip --type dataset
+huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset --local-dir .
unzip readbench.zip
-python data_prep.py --datasets gpqa
-python downsampler.py --root rendered_images_ft12 --split standard
+python datagen.py --datasets gpqa
+python data_prep.py --root rendered_images_ft12 --split standard
```


2 changes: 1 addition & 1 deletion eval_config/dataset2prompt.json
@@ -2,5 +2,5 @@
"narrativeqa": "You are given a story, which can be either a novel or a movie script, and a question. Answer the question as concisely as you can, using a single phrase if possible. Do not provide any explanation.\n\nStory: {context}\n\nNow, answer the question based on the story as concisely as you can, using a single phrase if possible. Do not provide any explanation.\n\nQuestion: {input}\n\nAnswer:",
"hotpotqa": "Answer the question based on the given passages. Only give me the answer and do not output any other words.\n\nThe following are given passages.\n{context}\n\nAnswer the question based on the given passages. Only give me the answer and do not output any other words.\n\nQuestion: {input}\nAnswer:",
"2wikimqa": "Answer the question based on the given passages. Only give me the answer and do not output any other words.\n\nThe following are given passages.\n{context}\n\nAnswer the question based on the given passages. Only give me the answer and do not output any other words.\n\nQuestion: {input}\nAnswer:",
-    "triviaqa": "Answer the question based on the given passage. Only give me the answer and do not output any other words. The following are some examples.\n\n{context}\n\n{input}",
+    "triviaqa": "Answer the question based on the given passage. Only give me the answer and do not output any other words. The following are some examples.\n\n{context}\n\n{input}"
Contributor Author
The extra comma made it an invalid JSON file.

}
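The review comment above notes that a trailing comma invalidates the file. A minimal standard-library sketch of why (the `"triviaqa"` key here is just a shortened stand-in for the real config entry):

```python
import json

# JSON forbids trailing commas: the first string fails to parse,
# while the identical object without the comma is valid.
try:
    json.loads('{"triviaqa": "prompt text",}')
except json.JSONDecodeError as err:
    print("invalid JSON:", err.msg)

print(json.loads('{"triviaqa": "prompt text"}'))
```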
22 changes: 16 additions & 6 deletions run_eval.py
@@ -33,11 +33,14 @@
from cosette import Client as CosetteClient
from vertexauth import get_claudette_client
from openai import AzureOpenAI
-azure_endpoint = AzureOpenAI(
-    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
-    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2024-12-01-preview",
-)
+try:
+    # Azure secrets should be set if using Azure OpenAI models
+    azure_endpoint = AzureOpenAI(
+        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
+        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
+        api_version="2024-12-01-preview",
+    )
+except Exception:
+    pass
Comment on lines +36 to +43
Similar to the Claudette client, the Azure client/endpoint is needed only when running OpenAI models, so it's optional.
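The pattern this comment describes, building a client only when its secrets are present and otherwise leaving it unset, can be sketched generically. `FakeClient` and `optional_client` below are hypothetical names for illustration, not part of the repo:

```python
import os

class FakeClient:
    """Hypothetical stand-in for an SDK client that needs credentials."""
    def __init__(self, api_key):
        if not api_key:
            raise ValueError("missing API key")
        self.api_key = api_key

def optional_client(env_var):
    # Return a client if the secret is set, else None. Callers that
    # never use this backend are unaffected by the missing secret.
    try:
        return FakeClient(os.getenv(env_var))
    except Exception:
        return None

client = optional_client("SOME_UNSET_SECRET")
print(client is None)  # True when the env var is not set
```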


import babilong_prompts as bp
import longbench_prompts as lbp
@@ -117,6 +120,10 @@ def question_tag(ds: str, rec: dict) -> str | None:
"o3-mini-2025-01-31": {"in": 1.10 / 1_000_000, "out": 4.40 / 1_000_000},
# o1-mini
"o1-mini-2024-09-12": {"in": 1.10 / 1_000_000, "out": 4.40 / 1_000_000},
"gemini-2.5-pro-preview-06-05": {"in": 0.15 / 1_000_000, "out": 3.50 / 1_000_000},
"gemini-2.5-pro-preview": {"in": 0.15 / 1_000_000, "out": 3.50 / 1_000_000},
"gemini-2.5-flash": {"in": 0.30 / 1_000_000, "out": 2.50 / 1_000_000},
"gemini-2.5-flash-lite-preview-06-17": {"in": 0.10 / 1_000_000, "out": 0.40 / 1_000_000},
"gemini-2.0-flash": {"in": 0.10 / 1_000_000, "out": 0.40 / 1_000_000},
"gemini-2.5-flash-preview-04-17": {"in": 0.15 / 1_000_000, "out": 0.60 / 1_000_000},
"gemini-2.0-flash-lite": {"in": 0.0075 / 1_000_000, "out": 0.30 / 1_000_000},
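The pricing entries above map model names to per-token dollar rates. A small sketch of how such a table turns token counts into a run's cost; the rates are copied from two of the entries above, and the helper name `estimate_cost` is illustrative:

```python
# Per-token pricing, as in the table above: dollars per input/output token.
PRICES = {
    "gemini-2.0-flash": {"in": 0.10 / 1_000_000, "out": 0.40 / 1_000_000},
    "gemini-2.5-flash": {"in": 0.30 / 1_000_000, "out": 2.50 / 1_000_000},
}

def estimate_cost(model, tokens_in, tokens_out):
    # Total dollars = input tokens * input rate + output tokens * output rate.
    p = PRICES[model]
    return tokens_in * p["in"] + tokens_out * p["out"]

# 1M input + 100k output tokens on gemini-2.0-flash: about $0.14.
print(estimate_cost("gemini-2.0-flash", 1_000_000, 100_000))
```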
@@ -232,7 +239,10 @@ def gemini_model_call(
*,
debug: bool = False,
) -> Tuple[str, int | None, int | None]:
client = genai.Client(api_key=os.getenv("GEMINI_API_KEY"))
client = genai.Client(
api_key=os.getenv("GEMINI_API_KEY"),
# 5 minutes timeout per call
http_options={"timeout": 5 * 60 * 1000})
Comment on lines +242 to +245
By default, the timeout was set to None. This caused some bad requests to never fail, blocking the eval loop.

I've set a long timeout limit to fix those cases without timing out valid requests.
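The failure mode described here, a call with no deadline blocking the loop forever, can be illustrated with a generic stdlib timeout wrapper (not the genai SDK; `call_with_timeout` and `slow_request` are names invented for this sketch):

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_s, *args):
    # Run fn in a worker thread and give up after timeout_s seconds,
    # instead of blocking the eval loop indefinitely.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return None  # caller can retry or skip this request

def slow_request():
    time.sleep(0.5)  # stands in for a request that never completes in time
    return "response"

print(call_with_timeout(slow_request, 0.1))  # None: the call timed out
```

With no timeout, `future.result()` would wait forever on a hung request, which is exactly what the SDK-level `http_options` timeout in the diff prevents.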


    parts = []
    for seg in segments: