
Conversation

@heheda12345 (Collaborator) commented Nov 3, 2025

Purpose

flashinfer-ai/flashinfer#1993 reports that TRTLLM attention produces incorrect results with head_size=256 and block_size=16, so this PR disables that combination.

Thanks to @vadiklyutiy for exploring this problem in #27704.

Test Plan

vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct -tp 4
lm_eval --model local-completions --model_args model=Qwen/Qwen3-Next-80B-A3B-Instruct,base_url=http://localhost:8000/v1/completions -t gsm8k --num_fewshot 5 --batch_size 250

Test Result

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.8522|±  |0.0098|
|     |       |strict-match    |     5|exact_match|↑  |0.8150|±  |0.0107|

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Chen Zhang <[email protected]>
mergify bot added the v1 label Nov 3, 2025
gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly disables the TRTLLM attention backend for the problematic combination of head_size=256 and block_size=16, as reported in the linked issue. The changes are well-contained, modifying the use_trtllm_attention utility and updating its call sites.

My main feedback is to add a warning when this override occurs while a user has explicitly forced TRTLLM attention via an environment variable. This ensures consistency with how other configuration overrides are handled and improves the user's debugging experience.

Comment on lines +274 to +277
    if head_size == 256 and block_size == 16:
        ## https://github.com/flashinfer-ai/flashinfer/issues/1993 reports that`
        # head size 256 and block size 16 is incorrect on blackwell.
        return False

Severity: high

When VLLM_USE_TRTLLM_ATTENTION is set to 1, the user is explicitly forcing the use of TRTLLM attention. If we override this setting due to a known bug, we should inform the user with a warning, similar to how other incompatible configurations are handled in this function. This improves transparency and helps with debugging. I've also cleaned up the comment for clarity.

Suggested change
    if head_size == 256 and block_size == 16:
        ## https://github.com/flashinfer-ai/flashinfer/issues/1993 reports that`
        # head size 256 and block size 16 is incorrect on blackwell.
        return False
    if head_size == 256 and block_size == 16:
        # Per https://github.com/flashinfer-ai/flashinfer/issues/1993, this
        # combination is incorrect on Blackwell GPUs.
        if force_use_trtllm:
            logger.warning_once(
                "Disabling TRTLLM attention for head_size=256 and "
                "block_size=16 due to a known bug, even though "
                "VLLM_USE_TRTLLM_ATTENTION is set to 1."
            )
        return False

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +274 to +276
    if head_size == 256 and block_size == 16:
        ## https://github.com/flashinfer-ai/flashinfer/issues/1993 reports that`
        # head size 256 and block size 16 is incorrect on blackwell.


P1: Disable TRTLLM without updating quantization support

The new early return when head_size == 256 and block_size == 16 prevents TRTLLM from being used, but the rest of the FlashInfer backend still treats this configuration as TRTLLM-capable: support_trtllm_attn is computed via can_use_trtllm_attention (which ignores head/block sizes) and supports_quant_query_input() continues to return True. During initialization the attention layer therefore enables FP8 query quantization, yet at runtime metadata building falls q_data_type back to the model dtype. When the quantized query tensor reaches FlashInferAttentionImpl.forward, the assertion attn_metadata.q_data_type == query.dtype trips and inference aborts for the 256/16 combination this change was meant to fix. can_use_trtllm_attention or supports_quant_query_input should incorporate the new restriction (or clear query_quant) so that query quantization is not attempted when TRTLLM is disabled by this guard.
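One possible shape for the fix Codex suggests, sketched below: route both the capability check and the query-quantization check through the same guard that use_trtllm_attention applies, so the two paths cannot disagree at runtime. This is a minimal illustration only; the helper name is_blocked_trtllm_config and the simplified signatures are assumptions, not vLLM's actual API.

def is_blocked_trtllm_config(head_size: int, block_size: int) -> bool:
    # https://github.com/flashinfer-ai/flashinfer/issues/1993: head_size=256
    # with block_size=16 produces incorrect results on Blackwell, so TRTLLM
    # attention must not be used for this combination.
    return head_size == 256 and block_size == 16


def supports_quant_query_input(head_size: int, block_size: int) -> bool:
    # If TRTLLM is ruled out for this configuration, FP8 query quantization
    # must be disabled as well; otherwise a quantized query reaches
    # FlashInferAttentionImpl.forward while q_data_type has fallen back to
    # the model dtype, and the assertion there fails.
    if is_blocked_trtllm_config(head_size, block_size):
        return False
    return True  # placeholder for the existing capability logic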


heheda12345 closed this Nov 3, 2025