Revert "[CI/Build] Use CPU for mm processing test on CI (#27522)" #27531
Conversation
This reverts commit d63cd9f. Signed-off-by: DarkLight1337 <[email protected]>
Code Review
This pull request reverts a previous change that moved some multi-modal tests to a CPU runner. The revert seems to be motivated by an issue with test_mapping.py on CPU. While moving test_mapping.py back to a GPU runner is correct, the full revert also moves processing tests back to GPU, which is likely unnecessary and inefficient. My review includes a suggestion to perform a more targeted fix to keep the processing tests on a CPU runner to save resources.
```yaml
- label: Multi-Modal Processor Test # 44min
  timeout_in_minutes: 60
  no_gpu: true
  source_file_dependencies:
  - vllm/
  - tests/models/multimodal
  commands:
  - pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
  - pytest -v -s models/multimodal/processing
```
While this revert correctly moves test_mapping.py back to a GPU-enabled test step, it also moves the models/multimodal/processing tests back to a GPU runner. The title of the original PR being reverted ('[CI/Build] Use CPU for mm processing test on CI') suggests that these processing tests can run on CPU.
To optimize CI resource usage and costs, it would be better to keep these tests on a CPU-only runner. You can achieve this by re-introducing no_gpu: true to this step and updating the label to reflect that it runs on CPU.
```yaml
- label: Multi-Modal Processor Test (CPU) # 44min
  timeout_in_minutes: 60
  no_gpu: true
  source_file_dependencies:
  - vllm/
  - tests/models/multimodal
  commands:
  - pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
  - pytest -v -s models/multimodal/processing
```
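As a side note, if one wants to confirm locally that these processing tests really do not need a GPU, a minimal sketch is below. It assumes the CI commands are executed from the `tests/` directory (so the path from the repository root is `tests/models/multimodal/processing`) and that hiding CUDA devices via `CUDA_VISIBLE_DEVICES` is an acceptable way to force CPU execution; neither detail comes from this PR.

```bash
# Hypothetical local check (not part of this PR): run the multi-modal
# processing tests with all GPUs hidden to verify they pass on CPU.
pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
CUDA_VISIBLE_DEVICES="" pytest -v -s tests/models/multimodal/processing
```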
This reverts commit d63cd9f.
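For reference, a revert commit like this is normally produced with `git revert`; the sketch below assumes the original commit `d63cd9f` is reachable from the current branch and that a `Signed-off-by` trailer is wanted (matching the commit message above), which is otherwise an assumption rather than something stated in this PR.

```bash
# Hypothetical recreation of this revert (assumes d63cd9f is on the branch):
# --signoff adds the "Signed-off-by:" trailer seen in the commit message.
git revert --signoff d63cd9f
```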
Purpose
See #27522 (comment)
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.