
Conversation


@tjtanaa tjtanaa commented Dec 1, 2025


Purpose

This PR documents how to set up and run vllm-omni on ROCm.

It also adds example instructions that work on ROCm.

Test Plan

Validated all the scripts locally.
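
A hedged sketch of how that local validation might be reproduced (the glob pattern and directory layout are assumptions; the PR does not list the exact commands):

```bash
# Run every offline example script and report any failures.
for s in examples/offline_inference/*/*.sh; do
    bash "$s" || echo "FAILED: $s"
done
```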

Test Result

All the offline and online examples run successfully.



Signed-off-by: root <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
@tjtanaa tjtanaa marked this pull request as ready for review December 1, 2025 11:42

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 57 to 60
??? abstract "meta.lst"
``````lst
--8<-- "examples/offline_inference/qwen2_5_omni/meta.lst"
``````


P1: Remove includes to nonexistent Qwen2.5 outputs

The Qwen2.5 offline example page now pulls in meta.lst and a series of output_audio/top*.txt files, but none of those artifacts exist under examples/offline_inference/qwen2_5_omni (find examples/offline_inference/qwen2_5_omni -maxdepth 2 -type f returns only the scripts). The --8<-- snippet plugin fails hard when a target file is missing, so MkDocs builds (and this page in the rendered docs) will error until the references are removed or the files are added.
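
One way to confirm which of the referenced artifacts are actually missing before editing the page (paths taken from the review above; the `top*.txt` glob mirrors its description):

```bash
# Print any snippet target that does not exist on disk.
# An unmatched glob stays literal in bash, so an entirely absent
# top*.txt set is reported as "missing: ...top*.txt".
for f in examples/offline_inference/qwen2_5_omni/meta.lst \
         examples/offline_inference/qwen2_5_omni/output_audio/top*.txt; do
    [ -e "$f" ] || echo "missing: $f"
done
```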


Comment on lines 49 to 51
??? abstract "output_audio/00000.txt"
``````txt
--8<-- "examples/offline_inference/qwen3_omni/output_audio/00000.txt"
``````

P1: Missing Qwen3 sample audio breaks snippet include

This page tries to include examples/offline_inference/qwen3_omni/output_audio/00000.txt, but that file (and the output_audio directory) is not present anywhere in the repository (find examples/offline_inference/qwen3_omni -maxdepth 2 -type f lists only the scripts). The --8<-- snippet directive will throw a file-not-found error during documentation build/rendering, so the page cannot be generated as written.
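
Assuming the docs are built with MkDocs (as this review states), a strict build should surface every broken `--8<--` include at once rather than page by page:

```bash
# Fail the docs build on any warning or error, including missing snippet files.
mkdocs build --strict
```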


Collaborator

@Gaohan123 Gaohan123 left a comment


I think we should move hardware-specific commands out of the examples README. Maybe we can create a section in the docs for hardware-specific commands for online and offline inference; otherwise the README for the examples will keep growing longer.

Author

tjtanaa commented Dec 1, 2025

@Gaohan123

What if I add those flags as a note in the README.md, e.g. "on ROCm, please export FLAG"? Would that be better?

What do you think about the if-else condition in the script, like examples/offline_inference/qwen2_5_omni/run_single_prompt.sh?
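
For reference, a minimal sketch of the if-else approach under discussion (the detection method and the flag name are assumptions, not the actual contents of run_single_prompt.sh):

```bash
#!/bin/bash
# Branch on the GPU stack so a single script serves both backends.
if command -v rocm-smi >/dev/null 2>&1; then
    # ROCm path; SOME_ROCM_FLAG is a placeholder, not a real vllm-omni variable.
    export SOME_ROCM_FLAG=1
fi
# ...common launch command continues here...
```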

Author

tjtanaa commented Dec 1, 2025

@Gaohan123 I have removed most of the AMD commands from the documentation and added the instructions as notes.

@tjtanaa tjtanaa marked this pull request as draft December 2, 2025 07:45