Update on the development branch #2756
DanBlanaru announced in Announcements
Replies: 1 comment
Hi! Could you please show the difference or explain the problem?
Hello,
The TensorRT-LLM team is pleased to announce that we have restarted the updates to the development branch (and the Triton backend) starting today.
Today's update includes the changes made with release 0.17:

- Multimodal support; refer to `examples/multimodal/README.md`.
- LLM API and `trtllm-bench` command support.
- PyTorch workflow in `tensorrt_llm._torch`, with a list of the supported infrastructure, models, and features that can be used with the PyTorch workflow and the LLM API.
- `min_p` sampling. Refer to https://arxiv.org/pdf/2407.01082.
- Encoder-decoder support; refer to `examples/enc_dec/README.md`.
- DoRA support; refer to `examples/dora/README.md`.
- Handling of `numDraftTokens == 0` in Target-Draft model speculative decoding.
- `paged_context_fmha` and `fp8_context_fmha` are enabled by default.
- … when `paged_context_fmha` is enabled.
- `tokens_per_block` is set to 32 by default.
- `--concurrency` support for the `throughput` subcommand of `trtllm-bench`.
- `cluster_key` for the auto parallelism feature ([feature request] Can we add H200 in infer_cluster_key() method? #2552).
- `__post_init__` function of the `LLmArgs` class. Thanks for the contribution from @topenkoff in Fix kwarg name #2691.
- Base Docker image updated to `nvcr.io/nvidia/pytorch:25.01-py3`.
- Triton Docker image updated to `nvcr.io/nvidia/tritonserver:25.01-py3`.
- Add `--extra-index-url https://pypi.nvidia.com` when running `pip install tensorrt-llm`, due to new third-party dependencies.
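For readers unfamiliar with the newly supported `min_p` sampling: the linked paper proposes keeping only tokens whose probability is at least a fraction `min_p` of the top token's probability, then renormalizing before sampling. Below is a minimal NumPy sketch of that rule; the function name and shapes are illustrative only and are not part of the TensorRT-LLM API.

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    """Zero out tokens whose probability falls below min_p * max(probs),
    then renormalize the survivors to sum to 1 (illustrative sketch)."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# With min_p=0.2 the threshold is 0.2 * 0.5 = 0.1, so the 0.05 token is dropped.
probs = np.array([0.5, 0.3, 0.15, 0.05])
print(min_p_filter(probs, min_p=0.2))
```

The dynamic threshold is the point of the method: when the model is confident (a sharp distribution), more of the tail is pruned; when it is uncertain (a flat distribution), more candidates survive than under a fixed top-p cutoff.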