# DCVLR - Getting Under the Hood

[NeurIPS 2025](https://neurips.cc/Conferences/2025)
[dcvlr.org](https://dcvlr.org)

---

## What is this directory?

This directory is intended to accompany the [2025 DCVLR (Data Curation for Vision-Language Reasoning) NeurIPS competition](https://dcvlr-neurips.github.io/). If you don't know what that is, you should go read the competition website and then come back here!

## DCVLR: Digging Deeper

The DCVLR competition was explicitly designed to have a *low barrier to entry*, allowing a diverse collection of teams to compete. However, we know that many teams may be interested in digging deeper into the data and the tasks in order to optimize the performance of their allowed submissions. If that's you, you've come to the right place. This directory will give you all the building blocks necessary to reproduce the train and eval pipeline used in the DCVLR competition on your own cluster.

## What You Will Need

To reproduce our experimental pipeline with the model architectures we consider for this competition (which range from 7B to 10B parameters), you will need access to a cluster with at least 8 A100 GPUs and 1 TB of disk space. If you don't have access, you can rent a cluster, e.g. on [Lambda](https://lambdalabs.com/service/gpu-cloud). All DCVLR participants are eligible for a Lambda credit which they can use to run experiments for the competition.
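
Not sure whether a machine you already have access to meets these requirements? A quick check along the lines of the sketch below can save a failed run later. This is an illustrative script, not part of the competition tooling, and it assumes PyTorch is installed.

```python
# Rough environment check: GPU count, GPU memory, and free disk space.
import shutil

import torch

num_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {num_gpus} (want 8 for the 7B-10B configs)")
for i in range(num_gpus):
    props = torch.cuda.get_device_properties(i)
    print(f"  cuda:{i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

# Check free space on the filesystem you plan to train on (path is a placeholder).
total, used, free = shutil.disk_usage("/")
print(f"Free disk: {free / 1e12:.2f} TB (want roughly 1 TB)")
```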

We plan to add examples of how to experiment with smaller architectures (e.g. 1B parameters) to this directory at a later date, so stay tuned. You can also refer to the [Oumi documentation](https://oumi.ai/docs/en/latest/index.html) for more information on how to run experiments on smaller clusters.

### Data Sourcing

Where can you source data that might be suitable for training for this competition? If you want to draw on existing datasets, here are a few we recommend looking into (a short loading sketch in Python follows the list):

- [LLaVA-CoT (LLaVA-o1)](https://huggingface.co/datasets/Xkev/LLaVA-CoT-100k)
- [MathV360K (Math-LLaVA)](https://huggingface.co/datasets/Zhiqiang007/MathV360K)
- [Geo170K](https://huggingface.co/datasets/Luckyjhg/Geo170K)
- [Multimodal Open-R1 (8k, verified)](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified)
- [Ovis dataset (AIDC-AI)](https://huggingface.co/datasets/AIDC-AI/Ovis-dataset)
- [LLaVA-OneVision Data](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data)

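
Before committing to a source, it is worth pulling a candidate dataset down and inspecting its schema, since column names and types vary across these repos. Below is a minimal sketch using the Hugging Face `datasets` library; the dataset and split are illustrative picks, and some repos may require `trust_remote_code=True` or extra download steps.

```python
# Quick schema inspection for one candidate source dataset (illustrative pick).
from datasets import load_dataset

ds = load_dataset("lmms-lab/multimodal-open-r1-8k-verified", split="train")

print(ds)           # row count and column names
print(ds.features)  # column types (e.g. Image vs. string)

# Peek at one example, skipping any raw image payload.
example = {k: v for k, v in ds[0].items() if k != "image"}
print(example)
```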

### Data Curation

We will add documentation on how to use Oumi for synthetic data curation and data transformation here soon. Stay tuned!

For now, you will have to BYOD (bring your own dataset) in an Oumi-supported dataset format. For this competition, we highly recommend the flexible `hf_vision` format, which allows you to load a wide range of vision-language datasets from the Hugging Face Hub. Here's an example we used for training on a filtered version of the Multimodal Open-R1 dataset:

```yaml
datasets:
  - dataset_name: "hf_vision"
    split: "train"
    shuffle: True
    seed: 42
    trust_remote_code: True
    transform_num_workers: "auto"
    dataset_kwargs:
      hf_dataset_path: "penfever/multimodal-open-r1-8192-filtered-tighter"
      image_column: "image"
      question_column: "problem"
      answer_column: "solution"
      return_tensors: True
```
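
If you are assembling your own dataset, one low-effort route is to reshape an existing Hub dataset into the three columns referenced above (`image`, `problem`, `solution`) and push the result to your own Hub repo, then point `hf_dataset_path` at it. The sketch below is a rough, hypothetical example using the `datasets` library: the filtering rule, the assumption that the source already exposes those three columns, and the target repo name are all placeholders to replace with your own curation logic.

```python
# Hypothetical curation sketch: filter, downselect, and publish a training subset.
from datasets import load_dataset

# Source dataset (placeholder choice); assumes "image", "problem", "solution" columns.
ds = load_dataset("lmms-lab/multimodal-open-r1-8k-verified", split="train")

# Example curation rule: drop examples with missing or very long solutions.
ds = ds.filter(lambda ex: ex["solution"] is not None and len(ex["solution"]) < 8192)

# Keep only the columns the hf_vision config above expects.
keep = {"image", "problem", "solution"}
ds = ds.remove_columns([c for c in ds.column_names if c not in keep])

# Downselect to a competition-sized subset (1K or 10K examples).
ds = ds.shuffle(seed=42).select(range(min(10_000, len(ds))))

# Publish to your own repo (placeholder name); log in with `huggingface-cli login` first.
ds.push_to_hub("<YOUR_USERNAME>/dcvlr-curated-10k")
```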

### Model Training

#### Setup and Environment

DCVLR experiments can be run using the main branch of the Oumi repository. We provide a [Dockerfile](https://github.com/oumi-ai/oumi/blob/main/Dockerfile) for building Oumi, or you can follow the instructions in the [Quickstart](https://oumi.ai/docs/en/latest/get_started/quickstart.html).

#### Commands

Model training is extremely straightforward, requiring only a single command:

```bash
export MY_CONFIG=<PATH/TO/qwenvl-openr1.yaml>
torchrun --nproc-per-node 8 --standalone -m oumi train -c $MY_CONFIG
```

We provide configurations for three models: Molmo-D, Molmo-O, and Qwen2.5-VL. Other models, such as InternVL3, may also be used in the competition.

Model checkpoints will be saved at the base of the directory specified by `output_dir` under the `training:` section of the config file.

We then recommend syncing the trained model to the Hugging Face Hub using the `huggingface-cli` tool, which gives you version control and easy access later. The repository need not exist in advance; it will be created automatically when you run this command.

```bash
huggingface-cli upload-large-folder <YOUR_HF_REPO> <YOUR_OUTPUT_DIRECTORY> --repo-type=model
```

### Model Evaluation

#### Setup and Environment

We use a modified version of [VLMEvalKit](https://github.com/oumi-ai/VLMEvalKit) as our evaluation harness. You can clone and install it following the directions in the repo, or use the provided [Dockerfile](https://github.com/oumi-ai/VLMEvalKit/blob/main/docker/Dockerfile.cuda12.9-oumi-molmo-qwen).

#### Commands

Model evaluation can also be run with a single command. We give an example with four datasets below; these datasets are not guaranteed to be the ones we use in the competition, but they are a good starting point for the types of tasks we are considering.

```bash
export MODEL_NAME=<YOUR/HF/MODEL/PATH>
export WORK_DIR=<YOUR/OUTPUT/DIRECTORY>
mkdir -p "$WORK_DIR"
export DATASETS="VMCBench_DEV WeMath MathVista_MINI LiveXivVQA"
python scripts/wandb_logger.py --run-and-log \
    --data $DATASETS \
    --work-dir $WORK_DIR \
    --use-vllm \
    --save-detailed-eval \
    --save-judge-responses \
    --max-output-tokens 4096 \
    --pass-custom-model $MODEL_NAME
```

## How to Cite DCVLR

If you wish to refer to DCVLR in your work, please cite the following:

```bibtex
@misc{dcvlr2025,
  author = {Feuer, Benjamin and Tripathi, Rohun and Elachqar, Oussama and Zhang, Yuhui and Hulkund, Neha and Nguyen, Thao and Shabtay, Nimrod and Udandarao, Vishaal and Wang, Xiaohan and Webb, Stefan and Koukoumidis, Emmanouil and Schmidt, Ludwig and Xie, Saining and Yeung-Levy, Serena and Liang, Paul and Beery, Sara and Gkioxari, Georgia},
  title = {{DCVLR}: Data Curation for Vision-Language Reasoning},
  month = jun,
  year = {2025}
}
```