
Conversation

b8zhong (Collaborator) commented Oct 6, 2025

Motivation

Note: ROCm only change, I tested this on MI355X.

AITER provides a sampling operator that is more performant than the torch-native implementation. Note this does not affect the greedy case: if we have top_p = 1 and top_k = 1, we still use torch.argsort. There is another AITER operator for this, but I found that it has some correctness issues (or possibly a mistake in my usage, TBD), so it was not integrated in this PR.
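For context, the torch-native top-k/top-p path that the AITER operator replaces looks roughly like the sketch below. This is a simplified illustration under my own assumptions, not the actual sglang code, and the function name is made up; the point is the per-row full-vocab sort that a fused kernel can avoid.

```python
import torch

def top_k_top_p_sample_torch(probs: torch.Tensor,
                             top_ks: torch.Tensor,
                             top_ps: torch.Tensor) -> torch.Tensor:
    """Simplified torch-native top-k/top-p sampling over [batch, vocab] probs."""
    # Sort every row of the vocab distribution (the expensive step).
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cum_probs = sorted_probs.cumsum(dim=-1)
    # Drop tokens outside the nucleus (top-p) ...
    mask_p = (cum_probs - sorted_probs) > top_ps.unsqueeze(-1)
    # ... and tokens beyond each request's top-k cutoff.
    ranks = torch.arange(probs.shape[-1], device=probs.device)
    mask_k = ranks.unsqueeze(0) >= top_ks.unsqueeze(-1)
    sorted_probs = sorted_probs.masked_fill(mask_p | mask_k, 0.0)
    sorted_probs = sorted_probs / sorted_probs.sum(dim=-1, keepdim=True)
    # Sample within the filtered distribution, then map back to token ids.
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_idx.gather(-1, choice).squeeze(-1)
```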

LM-Eval result

Before

Command

lm_eval \
  --model local-completions \
  --tasks gsm8k_platinum \
  --model_args model=amd/Llama-3.1-8B-Instruct-FP8-KV,base_url=http://localhost:30000/v1/completions \
  --trust_remote_code \
  --num_fewshot 8 \
  --batch_size 256 \
  --gen_kwargs "do_sample=True,temperature=0.7,top_p=0.9,top_k=50,max_new_tokens=256"

Results

local-completions (model=amd/Llama-3.1-8B-Instruct-FP8-KV,base_url=http://localhost:30000/v1/completions,trust_remote_code=True), gen_kwargs: (do_sample=True,temperature=0.7,top_p=0.9,top_k=50,max_new_tokens=256), limit: None, num_fewshot: 8, batch_size: 256
|    Tasks     |Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|--------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k_platinum|      3|flexible-extract|     8|exact_match|↑  |0.7535|±  |0.0124|
|              |       |strict-match    |     8|exact_match|↑  |0.7304|±  |0.0128|

After

Command

lm_eval \
  --model local-completions \
  --tasks gsm8k_platinum \
  --model_args model=amd/Llama-3.1-8B-Instruct-FP8-KV,base_url=http://localhost:30000/v1/completions \
  --trust_remote_code \
  --num_fewshot 8 \
  --batch_size 256 \
  --gen_kwargs "do_sample=True,temperature=0.7,top_p=0.9,top_k=50,max_new_tokens=256"

Results

local-completions (model=amd/Llama-3.1-8B-Instruct-FP8-KV,base_url=http://localhost:30000/v1/completions,trust_remote_code=True), gen_kwargs: (do_sample=True,temperature=0.7,top_p=0.9,top_k=50,max_new_tokens=256), limit: None, num_fewshot: 8, batch_size: 256
|    Tasks     |Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|--------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k_platinum|      3|flexible-extract|     8|exact_match|↑  |0.7593|±  |0.0123|
|              |       |strict-match    |     8|exact_match|↑  |0.7146|±  |0.0130|

Generally, the results are in line with the existing sampling behaviour.


Benchmarking and Profiling

Before

Command

python3 -m sglang.bench_serving \
  --backend sglang \
  --host localhost \
  --port 30000 \
  --num-prompts 4096 \
  --max-concurrency 64 \
  --flush-cache \
  --extra-request-body '{"sampling_params": {"top_p": 0.95, "top_k": 50}}'

Results

============ Serving Benchmark Result ============
Backend:                                 sglang    
Traffic request rate:                    inf       
Max request concurrency:                 64        
Successful requests:                     4096      
Benchmark duration (s):                  91.53     
Total input tokens:                      1273566   
Total generated tokens:                  515768    
Total generated tokens (retokenized):    512974    
Request throughput (req/s):              44.75     
Input token throughput (tok/s):          13913.89  
Output token throughput (tok/s):         5634.84   
Total token throughput (tok/s):          19548.72  
Concurrency:                             63.23     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   1412.88   
Median E2E Latency (ms):                 1391.99   
---------------Time to First Token----------------
Mean TTFT (ms):                          161.93    
Median TTFT (ms):                        151.67    
P99 TTFT (ms):                           497.77    
---------------Inter-Token Latency----------------
Mean ITL (ms):                           10.06     
Median ITL (ms):                         8.07      
P95 ITL (ms):                            20.33     
P99 ITL (ms):                            39.54     
Max ITL (ms):                            262.73    
==================================================

After

Command

python3 -m sglang.bench_serving \
  --backend sglang \
  --host localhost \
  --port 30000 \
  --num-prompts 4096 \
  --max-concurrency 64 \
  --flush-cache \
  --extra-request-body '{"sampling_params": {"top_p": 0.95, "top_k": 50}}'

Results

============ Serving Benchmark Result ============
Backend:                                 sglang    
Traffic request rate:                    inf       
Max request concurrency:                 64        
Successful requests:                     4096      
Benchmark duration (s):                  85.91     
Total input tokens:                      1273566   
Total generated tokens:                  508990    
Total generated tokens (retokenized):    505977    
Request throughput (req/s):              47.68     
Input token throughput (tok/s):          14824.53  
Output token throughput (tok/s):         5924.73   
Total token throughput (tok/s):          20749.26  
Concurrency:                             63.18     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   1325.21   
Median E2E Latency (ms):                 1327.94   
---------------Time to First Token----------------
Mean TTFT (ms):                          140.70    
Median TTFT (ms):                        135.48    
P99 TTFT (ms):                           468.12    
---------------Inter-Token Latency----------------
Mean ITL (ms):                           9.65      
Median ITL (ms):                         7.29      
P95 ITL (ms):                            21.44     
P99 ITL (ms):                            39.73     
Max ITL (ms):                            240.66    
==================================================

The change improves throughput and latency by roughly 6%. Further testing that varied top_p and top_k individually shows the AITER path outperforming the torch-native path by a decent margin in nearly all combinations.


Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the AITER sampling backend for ROCm devices, aiming to significantly boost performance over the existing PyTorch implementation. The changes include dynamic backend selection, robust error handling, and a fallback for specific sampling parameters, leading to measurable improvements in processing speed and responsiveness.

Highlights

  • AITER Sampling Integration: Implemented the AITER sampling backend specifically for ROCm devices, replacing the native PyTorch implementation to leverage performance optimizations.
  • Performance Improvement: Benchmarking results demonstrate an approximate 6% improvement in throughput and latency, particularly for non-greedy sampling scenarios, across various top_p and top_k combinations.
  • Conditional Backend Selection: Introduced logic to dynamically select the AITER sampling backend based on ROCm device detection and AITER availability, with appropriate fallbacks to the PyTorch backend if AITER is not available or supported.
  • Min_p Sampling Fallback: Added a specific fallback mechanism for min_p sampling, as the AITER backend does not currently support it, ensuring continued functionality by reverting to the PyTorch implementation when min_p is required (see the sketch after this list).
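A minimal sketch of that selection and fallback logic, assuming a ROCm build of PyTorch; the function name, backend strings, and min_p handling shown here are illustrative, not necessarily what the PR actually uses:

```python
import torch

def choose_sampling_backend(requested_backend: str, min_ps: torch.Tensor) -> str:
    """Illustrative backend selection: use AITER only on ROCm, only when it is
    importable, and only when no request in the batch needs min_p."""
    on_rocm = torch.version.hip is not None
    try:
        import aiter  # noqa: F401
        aiter_available = True
    except ImportError:
        aiter_available = False

    if requested_backend == "aiter" and on_rocm and aiter_available:
        # AITER sampling does not support min_p, so fall back for such batches.
        if torch.any(min_ps > 0):
            return "pytorch"
        return "aiter"
    return "pytorch"
```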

gemini-code-assist bot left a comment

Code Review

This pull request integrates the AITER sampling implementation for ROCm devices, which provides a noticeable performance improvement as demonstrated by the benchmarks. The changes are well-structured, adding the necessary environment variables, server arguments, and backend logic with appropriate fallbacks for unsupported configurations. My review includes a few minor suggestions to improve code clarity and maintainability in the new sampling logic, such as simplifying redundant checks and making control flow more explicit.

b8zhong and others added 4 commits October 7, 2025 20:53