TensorRT-LLM 0.14.0 Release #2403
kaiyux announced in Announcements
Hi,
We are very pleased to announce the 0.14.0 version of TensorRT-LLM. This update includes:
Key Features and Enhancements
- Enhanced the `LLM` class in the LLM API.
- Added support for `finish_reason` and `stop_reason`.
- Added `__repr__` methods for class `Module`, thanks to the contribution from @1ytic in Add module __repr__ methods #2191.
- Improved `customAllReduce` performance.
- The draft model can now copy logits directly to the target model's process in `orchestrator` mode. This fast logits copy reduces the delay between draft token generation and the beginning of target model inference.
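The `Module.__repr__` addition makes nested model definitions easier to inspect at a glance. Below is a minimal, self-contained sketch of the idea — a hypothetical `Module` base class, not TensorRT-LLM's actual implementation — printing a module tree in the style familiar from deep learning frameworks:

```python
class Module:
    """Hypothetical base class showing one way to implement a recursive __repr__."""

    def __init__(self):
        self._modules = {}  # insertion-ordered: name -> child Module

    def add_module(self, name, module):
        self._modules[name] = module

    def extra_repr(self):
        """Subclasses override this to describe their own configuration."""
        return ""

    def __repr__(self):
        head = self.__class__.__name__
        body = []
        if self.extra_repr():
            body.append(self.extra_repr())
        for name, child in self._modules.items():
            # Indent the child's repr by two spaces so nesting stays visible.
            child_repr = repr(child).replace("\n", "\n  ")
            body.append(f"({name}): {child_repr}")
        if not body:
            return f"{head}()"
        inner = "\n".join("  " + line for line in body)
        return f"{head}(\n{inner}\n)"


class Linear(Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features

    def extra_repr(self):
        return f"in_features={self.in_features}, out_features={self.out_features}"


class MLP(Module):
    def __init__(self):
        super().__init__()
        self.add_module("fc", Linear(16, 64))
        self.add_module("proj", Linear(64, 16))


print(MLP())
```

Printing an `MLP()` instance shows each submodule on its own indented line, which is the kind of at-a-glance readability the release-note item refers to.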
API Changes
- The default `max_batch_size` of the `trtllm-build` command is set to `2048`.
- Removed `builder_opt` from the `BuildConfig` class and the `trtllm-build` command.
- Added logits post-processor support to the `ModelRunnerCpp` class.
- Added the `isParticipant` method to the C++ `Executor` API to check if the current process is a participant in the executor instance.
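A logits post-processor lets user code adjust the model's raw next-token scores before sampling, for example to ban specific tokens or enforce a format. The snippet below is a generic, framework-free sketch of the concept — the function names and callback signature are illustrative, not the `ModelRunnerCpp` interface:

```python
import math


def ban_token_post_processor(banned_ids):
    """Build a post-processor that masks out banned token ids.

    A logits post-processor receives the raw scores for the next token and
    returns adjusted scores; setting a logit to -inf gives that token zero
    probability after softmax.
    """
    def post_process(logits):
        return [
            -math.inf if i in banned_ids else score
            for i, score in enumerate(logits)
        ]
    return post_process


# Ban token id 1, then greedily pick the best remaining token.
pp = ban_token_post_processor({1})
logits = [0.1, 9.0, 2.5, -1.0]
adjusted = pp(logits)
best = max(range(len(adjusted)), key=adjusted.__getitem__)
print(best)  # token 1 is masked, so token 2 wins
```

The same masking pattern generalizes to constrained decoding: any callback that rewrites the logits vector per step can steer generation without touching the model itself.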
Model Updates
- Added support for Nemotron-NAS; see `examples/nemotron_nas/README.md`.
- Added support for Deepseek-v1; see `examples/deepseek_v1/README.md`.
- Added support for Phi-3.5 models; see `examples/phi/README.md`.
Fixed Issues
- Fixed an issue in `tensorrt_llm/models/model_weights_loader.py`, thanks to the contribution from @wangkuiyi in Update model_weights_loader.py #2152.
- Fixed a duplicated module import in `tensorrt_llm/runtime/generation.py`, thanks to the contribution from @lkm2835 in Fix duplicated import module #2182.
- Fixed `share_embedding` for the models that have no `lm_head` in the legacy checkpoint conversion path, thanks to the contribution from @lkm2835 in Fix check_share_embedding #2232.
- Fixed a `kv_cache_type` issue in the Python benchmark, thanks to the contribution from @qingquansong in Fix kv_cache_type issue #2219.
- Fixed an issue with `trtllm-build --fast-build` ignoring transformer layers when used with fake or random weights. Thanks to @ZJLi2013 for flagging it in trtllm-build with --fast-build ignore transformer layers #2135.
- Fixed missing `use_fused_mlp` when constructing `BuildConfig` from a dict, thanks for the fix from @ethnzhng in Include use_fused_mlp when constructing BuildConfig from dict #2081.
- Fixed the lookahead batch layout for `numNewTokensCumSum`. ([Bug] Lookahead decoding is nondeterministic and wrong after the first call to runner.generate #2263)

Infrastructure Changes
Documentation
Known Issues
We are updating the `main` branch regularly with new features, bug fixes, and performance optimizations. The `rel` branch will be updated less frequently, and the exact frequencies depend on your feedback.

Thanks,
The TensorRT-LLM Engineering Team
This discussion was created from the release TensorRT-LLM 0.14.0 Release.