diff --git a/docs/community/volunteers.md b/docs/community/volunteers.md
new file mode 100644
index 00000000..2c25485e
--- /dev/null
+++ b/docs/community/volunteers.md
@@ -0,0 +1,12 @@
+# Volunteers for Bug Fixes and CI
+
+We encourage you to check the current docs and [issues](https://github.com/vllm-project/vllm-omni/issues) first for possible solutions to your questions. If none of these resolve your problem, please open an issue describing the bug or CI problem you ran into during development.
+
+If you urgently need help locating and fixing a bug or resolving a CI problem, please reach out to the community volunteers on duty below.
+
+| Dec 4 - Dec 12 | Dec 15 - Dec 19 | Dec 22 - Dec 26 | Dec 29 - Jan 2, 2026 | Jan 5 - Jan 9 | Jan 12 - Jan 16 |
+|----------|----------|----------|----------|----------|----------|
+| Conw729 | yinpeiqi | tzhouam | SamitHuang | gcanlin | natureofnature |
+| david6666666 | R2-Y | hsliuustc0106 | Gaohan123 | ZJY0516 | qibaoyuan |
+
+We warmly welcome more contributors to fix bugs and contribute new features!
diff --git a/docs/mkdocs/stylesheets/extra.css b/docs/mkdocs/stylesheets/extra.css
index 5f6ec03d..a29352f5 100644
--- a/docs/mkdocs/stylesheets/extra.css
+++ b/docs/mkdocs/stylesheets/extra.css
@@ -25,8 +25,11 @@ a:not(:has(svg)):not(.md-icon):not(.autorefs-external) {
a[href*="localhost"]::after,
a[href*="127.0.0.1"]::after,
-a[href*="org.readthedocs.build"]::after,
-a[href*="docs.vllm.ai"]::after {
+
+/* Hide external link icons for all links */
+a[href^="//"]::after,
+a[href^="http://"]::after,
+a[href^="https://"]::after {
display: none !important;
}
diff --git a/docs/usage/faq.md b/docs/usage/faq.md
index 88a62bda..bff520b7 100644
--- a/docs/usage/faq.md
+++ b/docs/usage/faq.md
@@ -11,3 +11,15 @@ A: If you encounter error about backend of librosa, try to install ffmpeg with c
sudo apt update
sudo apt install ffmpeg
```
+
+> Q: I ran into an urgent bug or CI problem. How can I get it resolved?
+
+A: First, check the current [issues](https://github.com/vllm-project/vllm-omni/issues) for possible solutions. If none of them meet your needs and the problem is urgent, please reach out to the [volunteers](https://docs.vllm.ai/projects/vllm-omni/en/latest/community/volunteers/) on duty for help.
+
+> Q: Does vLLM-Omni support AWQ or other quantization methods?
+
+A: vLLM-Omni partitions a model into several stages. The autoregressive (AR) stages reuse the main LLMEngine logic from vLLM, so the quantization methods currently supported in vLLM should also work for those stages in vLLM-Omni, although systematic verification is still ongoing. Quantization for the DiffusionEngine is under development. Please stay tuned, and contributions are welcome!
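+
+As a rough illustration, the snippet below loads an AWQ-quantized checkpoint through vLLM's `LLM` entrypoint. The model name is only a placeholder, and whether vLLM-Omni's AR stages accept the same `quantization` argument end to end is an assumption pending the verification mentioned above.
+
+```python
+# Sketch only: load an AWQ-quantized checkpoint via vLLM's LLM entrypoint.
+# The model name is a placeholder; reusing the same `quantization` argument
+# through vLLM-Omni's AR stages is an assumption, not verified behavior.
+from vllm import LLM, SamplingParams
+
+llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")
+outputs = llm.generate(["Hello, how are you?"], SamplingParams(max_tokens=32))
+print(outputs[0].outputs[0].text)
+```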
+
+> Q: Does vLLM-Omni support multimodal streaming input and output?
+
+A: Not yet. It is already on the [Roadmap](https://github.com/vllm-project/vllm-omni/issues/165). Please stay tuned!
diff --git a/docs/user_guide/examples/offline_inference/qwen2_5_omni.md b/docs/user_guide/examples/offline_inference/qwen2_5_omni.md
index e26a8fbe..01d46e74 100644
--- a/docs/user_guide/examples/offline_inference/qwen2_5_omni.md
+++ b/docs/user_guide/examples/offline_inference/qwen2_5_omni.md
@@ -8,6 +8,7 @@ Source