[doc] Add Qwen3 Next Guide to Core README #8101
base: main
Conversation
Signed-off-by: Faradawn Yang <[email protected]>
📝 Walkthrough
Expanded documentation for Qwen3-Next in TensorRT-LLM: replaces a brief note with a full YAML configuration example, updated run/server/benchmark commands, curl usage, a containerized run example (including kv_cache_reuse guidance), a new bench.sh reference, and quickstart_advanced.py instructions. No code or logic changes.
Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
examples/models/core/qwen/README.md (1 hunks)
🔇 Additional comments (2)
examples/models/core/qwen/README.md (2)
Lines 932-949: LGTM! Clear configuration guidance provided. The YAML configuration example is well-structured and includes an important note about disabling `kv_cache_reuse`. The settings are appropriate for the Qwen3-Next model.
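For context, here is a minimal sketch of what the referenced `${EXTRA_LLM_API_FILE}` might contain, based only on the settings this review mentions (`cuda_graph_config.max_batch_size: 720` and disabling KV cache reuse). The exact key names and values are assumptions for illustration, not a copy of the README's file:

```shell
# Hypothetical sketch of the extra LLM API options file passed below as ${EXTRA_LLM_API_FILE}.
# Key names follow the TensorRT-LLM LLM API YAML options; values are illustrative assumptions.
cat <<'EOF' > /tmp/extra_llm_api_options.yaml
cuda_graph_config:
  max_batch_size: 720        # CUDA graph sizing the review refers to (README line 941)
kv_cache_config:
  enable_block_reuse: false  # disables kv_cache_reuse, as the README note advises
EOF
export EXTRA_LLM_API_FILE=/tmp/extra_llm_api_options.yaml
```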
Lines 968-982: LGTM! Well-formed curl example. The curl command demonstrates proper usage of the chat completions API endpoint with appropriate parameters.
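For readers without the README at hand, a minimal sketch of such a request, assuming the server launched below is listening on localhost:8000 and exposes the OpenAI-compatible /v1/chat/completions route; the payload values are illustrative, not the README's exact example:

```shell
# Hedged sketch of a chat completions request; adjust host, port, and model name to your deployment.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-Next-80B-A3B-Thinking",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 64,
        "temperature": 0.6
      }'
```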
Below is an example command to launch the TRT-LLM server with the Qwen3-Next model from within the container. Note that we currently only support the PyTorch backend.

```shell
trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
--host 0.0.0.0 \
--port 8000 \
--backend pytorch \
--max_batch_size 1 \
--max_num_tokens 4096 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--tp_size 4 \
--ep_size 4 \
--trust_remote_code \
--extra_llm_api_options ${EXTRA_LLM_API_FILE}
```
Clarify the `max_batch_size` inconsistency.
The `trtllm-serve` command sets `--max_batch_size 1` (line 958), but the YAML configuration file sets `max_batch_size: 720` in the `cuda_graph_config` (line 941). This could cause confusion about which value takes precedence or whether they serve different purposes.
Consider adding a comment explaining this discrepancy, or aligning the values if they should be consistent. For example:

```diff
+# Note: --max_batch_size controls the runtime batch limit, while cuda_graph_config.max_batch_size
+# defines the CUDA graph batch size configuration for performance optimization.
 trtllm-serve Qwen/Qwen3-Next-80B-A3B-Thinking \
```
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 951 to 965, the example
command uses --max_batch_size 1 while the YAML cuda_graph_config earlier sets
max_batch_size: 720, causing potential confusion about precedence or intent;
update the README to either align the CLI example and YAML value or add a short
clarifying comment explaining that the CLI --max_batch_size overrides YAML at
runtime (or that the YAML setting is for cuda_graph internal sizing and can be
larger), and show a consistent example or explicit note which value is
authoritative.
To benchmark the performance of your TensorRT-LLM server, you can leverage the built-in `benchmark_serving.py` script. To do this, first create a wrapper `bench.sh` script.

```shell
cat <<'EOF' > bench.sh
#!/usr/bin/env bash
set -euo pipefail
concurrency_list="1 2 4 8 16 32 64 128 256"
multi_round=5
isl=1024
osl=1024
result_dir=/tmp/qwen3_output
for concurrency in ${concurrency_list}; do
    num_prompts=$((concurrency * multi_round))
    python -m tensorrt_llm.serve.scripts.benchmark_serving \
        --model Qwen/Qwen3-Next-80B-A3B-Thinking \
        --backend openai \
        --dataset-name "random" \
        --random-input-len ${isl} \
        --random-output-len ${osl} \
        --random-prefix-len 0 \
        --random-ids \
        --num-prompts ${num_prompts} \
        --max-concurrency ${concurrency} \
        --ignore-eos \
        --tokenize-on-client \
        --percentile-metrics "ttft,tpot,itl,e2el"
done
EOF
chmod +x bench.sh
```
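As a usage note (not part of the README snippet): with the trtllm-serve instance from the earlier example still running, the wrapper can then be executed directly; the log path below is an illustrative choice, not a script requirement.

```shell
# Run the benchmark sweep against the already-running server and keep a copy of the console output.
./bench.sh 2>&1 | tee /tmp/qwen3_bench.log
```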
Remove or utilize the unused `result_dir` variable.
Line 996 defines `result_dir=/tmp/qwen3_output`, but this variable is never used in the `benchmark_serving.py` invocation. Either remove this variable or add output redirection to use it.
Option 1 - Remove the unused variable:

```diff
 concurrency_list="1 2 4 8 16 32 64 128 256"
 multi_round=5
 isl=1024
 osl=1024
-result_dir=/tmp/qwen3_output
```

Option 2 - Use the variable for output:

```diff
 python -m tensorrt_llm.serve.scripts.benchmark_serving \
   --model Qwen/Qwen3-Next-80B-A3B-Thinking \
   --backend openai \
+  --save-result \
+  --result-dir ${result_dir} \
   --dataset-name "random" \
```
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 985 to 1016, the variable
result_dir=/tmp/qwen3_output is declared but never used; either remove that line
to clean up the script, or wire it into the benchmark run by directing script
output into files under that directory (create the directory if needed) or by
passing it as an output/result path argument to benchmark_serving if the script
supports one (e.g., --output-dir or redirect stdout/stderr into files inside
$result_dir).
In addition, below is the command to run the Qwen3-Next model using the `quickstart_advanced.py` file.

```bash
mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir /Qwen3-Next-80B-A3B-Thinking --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4
```
Use a placeholder for the model directory path.
Line 1022 uses an absolute path `/Qwen3-Next-80B-A3B-Thinking`, which is inconsistent with the rest of the documentation that uses placeholders like `<YOUR_MODEL_DIR>` (lines 614, 622) or relative paths like `Qwen/Qwen3-Next-80B-A3B-Thinking` (line 954).
Apply this diff to maintain consistency:

```diff
-mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir /Qwen3-Next-80B-A3B-Thinking --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4
+mpirun -n 1 --allow-run-as-root --oversubscribe python3 examples/llm-api/quickstart_advanced.py --model_dir <YOUR_MODEL_DIR> --kv_cache_fraction 0.6 --disable_kv_cache_reuse --max_batch_size 1 --tp_size 4
```

Additionally, consider documenting why `mpirun -n 1` is used for a single-process execution, or remove it if not necessary.
🤖 Prompt for AI Agents
In examples/models/core/qwen/README.md around lines 1019 to 1023, replace the
hardcoded absolute model path (/Qwen3-Next-80B-A3B-Thinking) with the
documentation placeholder (e.g. <YOUR_MODEL_DIR>) or a consistent relative path
(e.g. Qwen/Qwen3-Next-80B-A3B-Thinking) so it matches other examples; update the
example command accordingly. Also either remove the unnecessary mpirun -n 1
wrapper for single-process execution or add one short inline note immediately
before or after the command explaining why mpirun -n 1 is retained
(single-process invocation on MPI setups), so readers understand the choice.
Summary by CodeRabbit
Creating Qwen3 Guide for this Jira ticket: https://jirasw.nvidia.com/browse/AIPSWTME-2
Code support for Qwen3 Next: https://github.com/NVIDIA/TensorRT-LLM/pull/7892/files
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
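For illustration only (this example is not part of the help text above), a typical PR comment combining a few of these flags might look like the following; the stage and GPU values are the samples quoted in the flag descriptions, not recommendations:
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe"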
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.