Weekly release: 0.19.0.dev2025040100 #3204
kaiyux announced in Announcements
Hi,
The TensorRT-LLM team is pleased to announce that we have published a new weekly release, 0.19.0.dev2025040100, and pushed an update to the Triton backend on April 1, 2025.

The 0.19.0.dev2025040100 release includes:

- Added the EXAONE-Deep model, with documentation in examples/exaone/README.md (feat: Add EXAONE-Deep #3054)
- Added bandwidth measurement to disaggServerBenchmark (feat: Add BW measurement #3070)
- Enabled AutoDeploy as a backend in the trtllm-bench command (perf: [AutoDeploy] Enable AutoDeploy as a backend in trtllm-bench #3041)
- Re-added iteration logging for trtllm-bench (perf: Readd iteration logging for trtllm-bench. #3039)
- Moved BuildConfig arguments to LlmArgs (chore: [TRTLLM-3694] Move functional args to llmargs #3036)
- Fixed CMake to exit early when find_library() did not find any library (fix: Early exit cmake if find_library() does not find any lib #3113)
- Fixed a hang in multi-GPU multi-node runs with the trtllm-llmapi-launch command (fix: fix hang in mgmn with trtllm-llmapi-launch command #3119)
- Fixed a gpus_per_node issue in trtllm-bench when world_size is less than device_count (fix: gpus_per_node in trtllm-bench when world_size < device_count #3007)
- Fixed an issue when cp_size is greater than kvHeadNum (fix: fix for cp > kvHeadNum #3002)
- Set the correct draft_token_nums for dummy requests during torch compilation with MTP (fix: Set correct draft_token_nums to dummy requests for torch compilation with MTP #3053)

The cut-off commit for this release is 7549573. The code changes can be seen here: c2ffce7...7549573.
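As a minimal sketch of how a pinned weekly dev wheel like this one is typically installed (the extra index URL follows NVIDIA's published installation docs and is an assumption here, not part of this announcement):

```shell
# Install the specific weekly dev build announced above.
# Note: pulling from NVIDIA's PyPI index (https://pypi.nvidia.com) is an
# assumption based on the project's installation docs; check those docs
# for the current, authoritative instructions.
pip3 install "tensorrt_llm==0.19.0.dev2025040100" --extra-index-url https://pypi.nvidia.com

# Verify that the installed version matches the announcement.
python3 -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
```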
Thanks,
The TensorRT-LLM Engineering Team