Merged
6 changes: 2 additions & 4 deletions .buildkite/test-pipeline.yaml
@@ -733,16 +733,14 @@ steps:
   commands:
   - pytest -v -s models/language/pooling_mteb_test
 
-- label: Multi-Modal Processor and Models Test (CPU) # 44min
+- label: Multi-Modal Processor Test # 44min
   timeout_in_minutes: 60
-  no_gpu: true
   source_file_dependencies:
   - vllm/
   - tests/models/multimodal
   commands:
   - pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
   - pytest -v -s models/multimodal/processing
Comment on lines +736 to 743

Severity: high

While this revert correctly moves test_mapping.py back to a GPU-enabled test step, it also moves the models/multimodal/processing tests to a GPU runner. The title of the original PR being reverted ('[CI/Build] Use CPU for mm processing test on CI') suggests that these processing tests can run on CPU.

To optimize CI resource usage and cost, it would be better to keep these tests on a CPU-only runner. You can achieve this by re-introducing no_gpu: true to this step and updating the label to reflect that it runs on CPU:

  - label: Multi-Modal Processor Test (CPU) # 44min
    timeout_in_minutes: 60
    no_gpu: true
    source_file_dependencies:
    - vllm/
    - tests/models/multimodal
    commands:
      - pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
      - pytest -v -s models/multimodal/processing
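
As a quick follow-up check (a hypothetical helper, not part of this PR; it assumes PyYAML is installed and that the pipeline file keeps its top-level steps list), you could list every step that would still be scheduled on a GPU agent, i.e. every step without no_gpu: true:

  # audit_gpu_steps.py — hypothetical helper for spotting steps that run on GPU agents.
  # Assumes PyYAML and a top-level `steps:` list in the pipeline file.
  import yaml

  with open(".buildkite/test-pipeline.yaml") as f:
      pipeline = yaml.safe_load(f)

  # A step opts out of GPU agents with `no_gpu: true`; everything else gets a GPU.
  for step in pipeline.get("steps", []):
      if isinstance(step, dict) and not step.get("no_gpu", False):
          print(step.get("label", "<unlabeled>"))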

-  - pytest -v -s models/multimodal/test_mapping.py

 - label: Multi-Modal Models Test (Standard) # 60min
   timeout_in_minutes: 80
@@ -754,7 +752,7 @@ steps:
   commands:
   - pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
   - pip freeze | grep -E 'torch'
-  - pytest -v -s models/multimodal -m core_model --ignore models/multimodal/test_mapping.py --ignore models/multimodal/generation/test_whisper.py --ignore models/multimodal/processing
+  - pytest -v -s models/multimodal -m core_model --ignore models/multimodal/generation/test_whisper.py --ignore models/multimodal/processing
   - cd .. && VLLM_WORKER_MULTIPROC_METHOD=spawn pytest -v -s tests/models/multimodal/generation/test_whisper.py -m core_model # Otherwise, mp_method="spawn" doesn't work

 - label: Multi-Modal Accuracy Eval (Small Models) # 50min