Commit b6192ba

Remove redundant max_num_tokens assignment

Remove redundant assignment of max_num_tokens.

1 parent 086e3c0 · commit b6192ba

1 file changed: 0 additions, 1 deletion

vllm/v1/worker/gpu_model_runner.py

@@ -287,7 +287,6 @@ def __init__(
             scheduler_config.prefill_max_num_batched_tokens,
         )
         self.dcp_rank = 0 if self.dcp_world_size <= 1 else get_dcp_group().rank_in_group
-        self.max_num_tokens = scheduler_config.max_num_batched_tokens
         self.max_num_reqs = scheduler_config.max_num_seqs
 
         # Broadcast PP output for external_launcher (torchrun)
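For context, the diff's surrounding lines ("scheduler_config.prefill_max_num_batched_tokens," followed by a closing ")") suggest that self.max_num_tokens is already computed a few lines earlier, plausibly as a min(...) over the two scheduler limits, so the removed line would immediately overwrite that value. The following is a minimal, self-contained sketch of that pattern; the SchedulerConfig stand-in, the sample values, and the min(...) form of the earlier assignment are assumptions, with only the attribute names and the removed line taken from the diff:

    from dataclasses import dataclass


    # Stand-in for vLLM's scheduler config; only the two attribute names
    # come from the diff, the class itself is a simplified assumption.
    @dataclass
    class SchedulerConfig:
        max_num_batched_tokens: int
        prefill_max_num_batched_tokens: int


    scheduler_config = SchedulerConfig(
        max_num_batched_tokens=8192,
        prefill_max_num_batched_tokens=4096,
    )

    # Assumed earlier assignment: the diff's context lines look like the
    # tail of a call such as this min(...).
    max_num_tokens = min(
        scheduler_config.max_num_batched_tokens,
        scheduler_config.prefill_max_num_batched_tokens,
    )
    assert max_num_tokens == 4096

    # The line removed by this commit re-assigned the value immediately
    # afterwards, overwriting whatever the earlier statement computed.
    max_num_tokens = scheduler_config.max_num_batched_tokens
    assert max_num_tokens == 8192

If the earlier statement does cap the value as sketched, dropping the re-assignment keeps that cap in effect; in any case, the duplicate write of max_num_tokens served no purpose.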
