
Conversation


@faradawn faradawn commented Sep 30, 2025

Summary by CodeRabbit

  • Documentation
    • Expanded Qwen3-Next + TensorRT-LLM setup with a full YAML performance config (attention DP, CUDA graphs, MoE, KV cache, etc.).
    • Updated run commands, server launch example, curl usage, and Quick Start.
    • Added benchmarking guidance and a new bench.sh usage example.
    • Included containerized workflow with kv_cache_reuse recommendations.
    • Provided advanced usage via quickstart_advanced.py.
    • Text-only updates; no changes to executable code or logic.

Creating Qwen3 Guide for this Jira ticket: https://jirasw.nvidia.com/browse/AIPSWTME-2

Code support for Qwen3 Next: https://github.com/NVIDIA/TensorRT-LLM/pull/7892/files

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
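
For illustration, a couple of typical `run` invocations combining the flags documented above might look like the following (the stage and GPU names are the same placeholders used in the flag descriptions):

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --stage-list "A10-PyTorch-1" --detailed-log
```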

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
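
For reference, the remaining subcommands are issued as plain PR comments, for example (the skip reason shown here is illustrative):

```
/bot kill
/bot skip --comment "Docs-only change; CI not required"
/bot reuse-pipeline
```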

Signed-off-by: Faradawn Yang <[email protected]>

coderabbitai bot commented Sep 30, 2025

📝 Walkthrough

Walkthrough

Expanded documentation for Qwen3-Next in TensorRT-LLM: replaces a brief note with a full YAML configuration example, updated run/server/benchmark commands, curl usage, containerized run example (including kv_cache_reuse guidance), a new bench.sh reference, and quickstart_advanced.py instructions. No code or logic changes.
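
For orientation, below is a minimal sketch of what such an extra LLM API options file might contain, written as a shell heredoc. The field names and values are assumptions inferred from the review comments in this thread (the `cuda_graph_config.max_batch_size: 720` reference and the kv_cache_reuse guidance) and from the `trtllm-serve` example quoted later, not a copy of the config added by this PR.

```shell
# Hypothetical sketch only: field names/values below are assumptions inferred
# from the review comments, not the actual config added in this PR.
cat <<'EOF' > extra_llm_api_options.yaml
enable_attention_dp: true
cuda_graph_config:
  enable_padding: true
  max_batch_size: 720        # value referenced in the review comment below
kv_cache_config:
  enable_block_reuse: false  # the guide recommends disabling kv_cache_reuse
EOF
export EXTRA_LLM_API_FILE="$(pwd)/extra_llm_api_options.yaml"
```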

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Docs: Qwen3-Next TensorRT-LLM usage**<br>`examples/models/core/qwen/README.md` | Rewrote and expanded usage docs: added detailed YAML config (attention DP, CUDA graphs, MoE, KV cache), updated run/server launch commands, curl and benchmarking examples, container workflow with kv_cache_reuse notes, quick start and quickstart_advanced.py instructions. Text-only changes. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description Check | ⚠️ Warning | The pull request description does not follow the required template: it lacks a properly formatted title with ticket and type, the Description section is empty with only placeholder comments, and the Test Coverage section has no content outlining relevant tests. | Please add a title line following the template (for example, "[AIPSWTME-2][doc] Add Qwen3 Next Guide"), fill in the Description section with a concise explanation of the issue and the solution, and provide details in the Test Coverage section listing the specific tests that cover these changes. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title Check | ✅ Passed | The pull request title "[doc] Add Qwen3 Next Guide to Core README" is concise and clearly summarizes the main change, which is updating the core README with the Qwen3 Next guide. |
| Docstring Coverage | ✅ Passed | No functions found in the changes. Docstring coverage check skipped. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1560cca and f70e559.

📒 Files selected for processing (1)
  • examples/models/core/qwen/README.md (1 hunks)
🔇 Additional comments (2)
examples/models/core/qwen/README.md (2)

932-949: LGTM! Clear configuration guidance provided.

The YAML configuration example is well-structured and includes an important note about disabling kv_cache_reuse. The settings are appropriate for the Qwen3-Next model.


968-982: LGTM! Well-formed curl example.

The curl command demonstrates proper usage of the chat completions API endpoint with appropriate parameters.
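
For context, a request along these lines would look roughly like the sketch below. It is illustrative only: the prompt and `max_tokens` value are made up, while the host, port, and model name match the `trtllm-serve` launch command quoted in the next comment, and the endpoint is the OpenAI-compatible chat completions route exposed by `trtllm-serve`.

```shell
# Illustrative request; the actual example lives in the README hunk under review.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-Next-80B-A3B-Thinking",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 64
      }'
```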

Comment on lines +951 to +965
Below is an example command to launch the TRT-LLM server with the Qwen3-Next model from within the container. Note that we currently only support pytorch backend.

```shell
trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
--host 0.0.0.0 \
--port 8000 \
--backend pytorch \
--max_batch_size 1 \
--max_num_tokens 4096 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--tp_size 4 \
--ep_size 4 \
--trust_remote_code \
--extra_llm_api_options ${EXTRA_LLM_API_FILE}
```

⚠️ Potential issue | 🟡 Minor

Clarify the max_batch_size inconsistency.

The trtllm-serve command sets --max_batch_size 1 (line 958), but the YAML configuration file sets max_batch_size: 720 in the cuda_graph_config (line 941). This could cause confusion about which value takes precedence or whether they serve different purposes.

Consider adding a comment explaining this discrepancy, or aligning the values if they should be consistent. For example:

+# Note: --max_batch_size controls the runtime batch limit, while cuda_graph_config.max_batch_size
+# defines the CUDA graph batch size configuration for performance optimization.
 trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Below is an example command to launch the TRT-LLM server with the Qwen3-Next model from within the container. Note that we currently only support pytorch backend.
```shell
trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
--host 0.0.0.0 \
--port 8000 \
--backend pytorch \
--max_batch_size 1 \
--max_num_tokens 4096 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--tp_size 4 \
--ep_size 4 \
--trust_remote_code \
--extra_llm_api_options ${EXTRA_LLM_API_FILE}
```
# Note: --max_batch_size controls the runtime batch limit, while cuda_graph_config.max_batch_size
# defines the CUDA graph batch size configuration for performance optimization.
trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
--host 0.0.0.0 \
--port 8000 \
--backend pytorch \
--max_batch_size 1 \
--max_num_tokens 4096 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--tp_size 4 \
--ep_size 4 \
--trust_remote_code \
--extra_llm_api_options ${EXTRA_LLM_API_FILE}
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 951 to 965, the example
command uses --max_batch_size 1 while the YAML cuda_graph_config earlier sets
max_batch_size: 720, causing potential confusion about precedence or intent;
update the README to either align the CLI example and YAML value or add a short
clarifying comment explaining that the CLI --max_batch_size overrides YAML at
runtime (or that the YAML setting is for cuda_graph internal sizing and can be
larger), and show a consistent example or explicit note which value is
authoritative.

Comment on lines +985 to +1016
To benchmark the performance of your TensorRT-LLM server, you can leverage the built-in `benchmark_serving.py` script. To do this, first create a wrapper `bench.sh` script.

```shell
cat <<'EOF' > bench.sh
#!/usr/bin/env bash
set -euo pipefail
concurrency_list="1 2 4 8 16 32 64 128 256"
multi_round=5
isl=1024
osl=1024
result_dir=/tmp/qwen3_output
for concurrency in ${concurrency_list}; do
num_prompts=$((concurrency * multi_round))
python -m tensorrt_llm.serve.scripts.benchmark_serving \
--model Qwen/Qwen3-Next-80B-A3B-Thinking \
--backend openai \
--dataset-name "random" \
--random-input-len ${isl} \
--random-output-len ${osl} \
--random-prefix-len 0 \
--random-ids \
--num-prompts ${num_prompts} \
--max-concurrency ${concurrency} \
--ignore-eos \
--tokenize-on-client \
--percentile-metrics "ttft,tpot,itl,e2el"
done
EOF
chmod +x bench.sh
```

⚠️ Potential issue | 🟡 Minor

Remove or utilize the unused result_dir variable.

Line 996 defines result_dir=/tmp/qwen3_output, but this variable is never used in the benchmark_serving.py invocation. Either remove this variable or add output redirection to use it.

Option 1 - Remove the unused variable:

 concurrency_list="1 2 4 8 16 32 64 128 256"
 multi_round=5
 isl=1024
 osl=1024
-result_dir=/tmp/qwen3_output

Option 2 - Use the variable for output:

     python -m tensorrt_llm.serve.scripts.benchmark_serving \
         --model Qwen/Qwen3-Next-80B-A3B-Thinking \
         --backend openai \
+        --save-result \
+        --result-dir ${result_dir} \
         --dataset-name "random" \
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 985 to 1016, the variable
result_dir=/tmp/qwen3_output is declared but never used; either remove that line
to clean up the script, or wire it into the benchmark run by directing script
output into files under that directory (create the directory if needed) or by
passing it as an output/result path argument to benchmark_serving if the script
supports one (e.g., --output-dir or redirect stdout/stderr into files inside
$result_dir).
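
Assembled, Option 2 could look roughly like the sketch below inside the loop. The `--save-result`/`--result-dir` flags come from the suggested diff above; creating the directory up front is an added assumption to keep the sketch self-contained.

```shell
# Sketch of Option 2: persist benchmark results under ${result_dir}.
mkdir -p "${result_dir}"
python -m tensorrt_llm.serve.scripts.benchmark_serving \
    --model Qwen/Qwen3-Next-80B-A3B-Thinking \
    --backend openai \
    --save-result \
    --result-dir "${result_dir}" \
    --dataset-name "random" \
    --random-input-len ${isl} \
    --random-output-len ${osl} \
    --num-prompts ${num_prompts} \
    --max-concurrency ${concurrency} \
    --ignore-eos
```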

Comment on lines +1019 to 1023
In addition, below is the command to run the Qwen3-Next model using the `quickstart_advanced.py` file.

```bash
mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir /Qwen3-Next-80B-A3B-Thinking --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4

```

⚠️ Potential issue | 🟡 Minor

Use a placeholder for the model directory path.

Line 1022 uses an absolute path /Qwen3-Next-80B-A3B-Thinking which is inconsistent with the rest of the documentation that uses placeholders like <YOUR_MODEL_DIR> (line 614, 622) or relative paths like Qwen/Qwen3-Next-80B-A3B-Thinking (line 954).

Apply this diff to maintain consistency:

-mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir /Qwen3-Next-80B-A3B-Thinking --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4
+mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir <YOUR_MODEL_DIR> --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4

Additionally, consider documenting why mpirun -n 1 is used for a single-process execution, or remove it if not necessary.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
In addition, below is the command to run the Qwen3-Next model using the `quickstart_advanced.py` file.
```bash
mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir /Qwen3-Next-80B-A3B-Thinking --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4
```
In addition, below is the command to run the Qwen3-Next model using the `quickstart_advanced.py` file.
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 1019 to 1023, replace the
hardcoded absolute model path (/Qwen3-Next-80B-A3B-Thinking) with the
documentation placeholder (e.g. <YOUR_MODEL_DIR>) or a consistent relative path
(e.g. Qwen/Qwen3-Next-80B-A3B-Thinking) so it matches other examples; update the
example command accordingly. Also either remove the unnecessary mpirun -n 1
wrapper for single-process execution or add one short inline note immediately
before or after the command explaining why mpirun -n 1 is retained
(single-process invocation on MPI setups), so readers understand the choice.

@svc-trtllm-gh-bot svc-trtllm-gh-bot added the Community want to contribute PRs initiated from Community label Sep 30, 2025