Conversation

@psychedelicious (Contributor)

Summary

Let the LLaVA OneVision model report its own size.

I still cannot use the 7B model on my 4090.
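
The diff isn't shown here, but "report its own size" presumably means computing the model's memory footprint from its actual weights rather than relying on a fixed estimate. A minimal sketch of that idea, assuming the LLaVA OneVision wrapper holds a `torch.nn.Module` (the function name below is illustrative, not InvokeAI's actual API):

```python
import torch

def calc_model_size(module: torch.nn.Module) -> int:
    """Total size of the module's parameters and buffers, in bytes."""
    size = sum(p.numel() * p.element_size() for p in module.parameters())
    size += sum(b.numel() * b.element_size() for b in module.buffers())
    return size
```

Summing over both parameters and buffers (and using each tensor's element size) keeps the reported figure accurate regardless of the dtype the weights were loaded in, which is what a model cache needs for correct VRAM budgeting.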

Related Issues / Discussions

n/a

QA Instructions

n/a

Merge Plan

n/a

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

@github-actions bot added the python and backend labels on Mar 26, 2025
@psychedelicious enabled auto-merge (rebase) on Mar 26, 2025 at 22:26
@psychedelicious force-pushed the psyche/fix/vllm-model-size branch from 82c1a5d to debafe6 on Mar 26, 2025 at 22:26
@psychedelicious merged commit 7004fde into main on Mar 26, 2025 (15 checks passed)
@psychedelicious deleted the psyche/fix/vllm-model-size branch on Mar 26, 2025 at 22:36