A chat interface for LLMs. It is a SvelteKit app and powers the HuggingChat app on hf.co/chat.
> [!NOTE]
> Chat UI only supports OpenAI-compatible APIs via `OPENAI_BASE_URL` and the `/models` endpoint. Provider-specific integrations (the legacy `MODELS` env var, GGUF discovery, embeddings, web-search helpers, etc.) have been removed, but any service that speaks the OpenAI protocol (llama.cpp server, Ollama, OpenRouter, etc.) will work by default.
> [!NOTE]
> The old version is still available on the `legacy` branch.
Chat UI speaks to OpenAI-compatible APIs only. The fastest way to get running is with the Hugging Face Inference Providers router plus your personal Hugging Face access token.
Step 1 – Create `.env.local`:
```env
OPENAI_BASE_URL=https://router.huggingface.co/v1
OPENAI_API_KEY=hf_************************

# Fill in once you pick a database option below
MONGODB_URL=
```

`OPENAI_API_KEY` can come from any OpenAI-compatible endpoint you plan to call. Pick the combo that matches your setup and drop the values into `.env.local`:
| Provider | Example `OPENAI_BASE_URL` | Example key env |
|---|---|---|
| Hugging Face Inference Providers router | `https://router.huggingface.co/v1` | `OPENAI_API_KEY=hf_xxx` (or `HF_TOKEN` legacy alias) |
| llama.cpp server (`llama.cpp --server --api`) | `http://127.0.0.1:8080/v1` | `OPENAI_API_KEY=sk-local-demo` (any string works; llama.cpp ignores it) |
| Ollama (with OpenAI-compatible bridge) | `http://127.0.0.1:11434/v1` | `OPENAI_API_KEY=ollama` |
| OpenRouter | `https://openrouter.ai/api/v1` | `OPENAI_API_KEY=sk-or-v1-...` |
| Poe | `https://api.poe.com/v1` | `OPENAI_API_KEY=pk_...` |
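For example, pointing Chat UI at a local llama.cpp server from the table above might look like this in `.env.local` (the key value is an arbitrary placeholder, since llama.cpp ignores it):

```env
# Sketch: local llama.cpp setup, values taken from the table above
OPENAI_BASE_URL=http://127.0.0.1:8080/v1
# Any placeholder string works here; llama.cpp does not check the key
OPENAI_API_KEY=sk-local-demo
```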
Check the root `.env` template for the full list of optional variables you can override.
Step 2 – Choose where MongoDB lives: Either provision a managed cluster (for example MongoDB Atlas) or run a local container. Both approaches are described in Database Options. After you have the URI, drop it into `MONGODB_URL` (and, if desired, set `MONGODB_DB_NAME`).
Step 3 – Install and launch the dev server:
```bash
git clone https://github.com/huggingface/chat-ui
cd chat-ui
npm install
npm run dev -- --open
```

You now have Chat UI running against the Hugging Face router without needing to host MongoDB yourself.
## Database Options

Chat history, users, settings, files, and stats all live in MongoDB. You can point Chat UI at any MongoDB 6/7 deployment.
- Create a free cluster at mongodb.com.
- Add your IP (or `0.0.0.0/0` for development) to the network access list.
- Create a database user and copy the connection string.
- Paste that string into `MONGODB_URL` in `.env.local` (see the example shape below). Keep the default `MONGODB_DB_NAME=chat-ui` or change it per environment.
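For reference, an Atlas connection string generally has this shape (user, password, and cluster host are placeholders, not real values):

```env
MONGODB_URL=mongodb+srv://<user>:<password>@<cluster-host>.mongodb.net/?retryWrites=true&w=majority
MONGODB_DB_NAME=chat-ui
```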
Atlas keeps MongoDB off your laptop, which is ideal for teams or cloud deployments.
If you prefer to run MongoDB locally:
```bash
docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
```

Then set `MONGODB_URL=mongodb://localhost:27017` in `.env.local`. You can also supply `MONGO_STORAGE_PATH` if you want Chat UI's fallback in-memory server to persist under a specific folder.
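If you want to sanity-check the container before starting Chat UI, a quick ping with `mongosh` (if you have it installed) should report `ok: 1`:

```bash
mongosh "mongodb://localhost:27017" --eval "db.runCommand({ ping: 1 })"
```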
After configuring your environment variables, start Chat UI with:
```bash
npm install
npm run dev
```

The dev server listens on http://localhost:5173 by default. Use `npm run build` / `npm run preview` for production builds.
Prefer a containerized setup? You can run everything in one container as long as you supply a MongoDB URI (local or hosted):

```bash
docker run \
  -p 3000:3000 \
  -e MONGODB_URL=mongodb://host.docker.internal:27017 \
  -e OPENAI_BASE_URL=https://router.huggingface.co/v1 \
  -e OPENAI_API_KEY=hf_*** \
  -v db:/data \
  ghcr.io/huggingface/chat-ui-db:latest
```

`host.docker.internal` lets the container reach a MongoDB instance on your host machine; swap it for your Atlas URI if you use the hosted option. All environment variables accepted in `.env.local` can be provided as `-e` flags.
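If you would rather manage MongoDB and Chat UI together, a minimal Docker Compose sketch along these lines should also work; the service names and volume layout here are assumptions, not something this repo ships:

```bash
# Sketch: MongoDB + Chat UI under Docker Compose (service names are illustrative)
cat > compose.yaml <<'EOF'
services:
  mongo:
    image: mongo:latest
    volumes:
      - mongo-data:/data/db
  chat-ui:
    image: ghcr.io/huggingface/chat-ui-db:latest
    ports:
      - "3000:3000"
    environment:
      MONGODB_URL: mongodb://mongo:27017
      OPENAI_BASE_URL: https://router.huggingface.co/v1
      OPENAI_API_KEY: hf_***
    depends_on:
      - mongo
volumes:
  mongo-data:
EOF
docker compose up -d
```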
You can use a few environment variables to customize the look and feel of chat-ui. Their defaults are:

```env
PUBLIC_APP_NAME=ChatUI
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_DESCRIPTION="Making the community's best AI chat models available to everyone."
PUBLIC_APP_DATA_SHARING=
```

- `PUBLIC_APP_NAME` – the name used as a title throughout the app.
- `PUBLIC_APP_ASSETS` – used to find logos & favicons in `static/$PUBLIC_APP_ASSETS`; current options are `chatui` and `huggingchat`.
- `PUBLIC_APP_DATA_SHARING` – can be set to `1` to add a toggle in the user settings that lets your users opt in to data sharing with model creators.
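For instance, a white-label deployment might override the defaults like this (the name and description are illustrative):

```env
PUBLIC_APP_NAME=AcmeChat
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_DESCRIPTION="Acme's internal AI assistant."
# Adds the opt-in data-sharing toggle described above
PUBLIC_APP_DATA_SHARING=1
```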
This build does not use the `MODELS` env var or GGUF discovery. Configure models via `OPENAI_BASE_URL` only; Chat UI will fetch `${OPENAI_BASE_URL}/models` and populate the model list automatically. Authorization uses `OPENAI_API_KEY` (preferred); `HF_TOKEN` remains a legacy alias.
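As a quick sanity check, you can query the same endpoint Chat UI uses. With the variables from `.env.local` exported in your shell, a standard OpenAI-style listing call looks like:

```bash
curl -s "$OPENAI_BASE_URL/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

If this returns a JSON list of models, Chat UI should be able to populate its model picker from the same endpoint.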
Chat UI can perform client-side routing, using `katanemo/Arch-Router-1.5B` as the routing model, without running a separate router service. The UI exposes a virtual model alias called "Omni" (configurable) that, when selected, chooses the best route/model for each message.
- Provide a routes policy JSON via `LLM_ROUTER_ROUTES_PATH`. No sample file ships with this branch, so you must point the variable to a JSON array you create yourself (for example, commit one in your project like `config/routes.chat.json`; a sketch follows this list). Each route entry needs `name`, `description`, `primary_model`, and optional `fallback_models`.
- Configure the Arch router selection endpoint with `LLM_ROUTER_ARCH_BASE_URL` (OpenAI-compatible `/chat/completions`) and `LLM_ROUTER_ARCH_MODEL` (e.g. `router/omni`). The Arch call reuses `OPENAI_API_KEY` for auth.
- Map `other` to a concrete route via `LLM_ROUTER_OTHER_ROUTE` (default: `casual_conversation`). If Arch selection fails, calls fall back to `LLM_ROUTER_FALLBACK_MODEL`.
- Selection timeout can be tuned via `LLM_ROUTER_ARCH_TIMEOUT_MS` (default 10000).
- Omni alias configuration: `PUBLIC_LLM_ROUTER_ALIAS_ID` (default `omni`), `PUBLIC_LLM_ROUTER_DISPLAY_NAME` (default `Omni`), and optional `PUBLIC_LLM_ROUTER_LOGO_URL`.
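As a starting point, a minimal policy with two routes might look like the sketch below. The field names match the list above; the route names, descriptions, and model IDs are illustrative, not shipped defaults:

```bash
# Sketch: create a routes policy file for LLM_ROUTER_ROUTES_PATH
mkdir -p config
cat > config/routes.chat.json <<'EOF'
[
  {
    "name": "casual_conversation",
    "description": "Greetings, small talk, and everyday questions",
    "primary_model": "meta-llama/Llama-3.1-8B-Instruct"
  },
  {
    "name": "code_generation",
    "description": "Writing, reviewing, or debugging code",
    "primary_model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "fallback_models": ["meta-llama/Llama-3.1-8B-Instruct"]
  }
]
EOF
```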
When you select Omni in the UI, Chat UI will:
- Call the Arch endpoint once (non-streaming) to pick the best route for the last turns.
- Emit `RouterMetadata` immediately (route and actual model used) so the UI can display it.
- Stream from the selected model via your configured `OPENAI_BASE_URL`. On errors, it tries route fallbacks.
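Putting it together, the router-related section of `.env.local` might look like this; the base URL and fallback model are assumptions that should match your own deployment and routes file:

```env
# Sketch: routing config (values are placeholders matching the sketches above)
LLM_ROUTER_ROUTES_PATH=config/routes.chat.json
LLM_ROUTER_ARCH_BASE_URL=https://router.huggingface.co/v1
LLM_ROUTER_ARCH_MODEL=router/omni
LLM_ROUTER_OTHER_ROUTE=casual_conversation
LLM_ROUTER_FALLBACK_MODEL=meta-llama/Llama-3.1-8B-Instruct
LLM_ROUTER_ARCH_TIMEOUT_MS=10000
PUBLIC_LLM_ROUTER_ALIAS_ID=omni
PUBLIC_LLM_ROUTER_DISPLAY_NAME=Omni
```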
To create a production version of your app:
```bash
npm run build
```

You can preview the production build with `npm run preview`.
To deploy your app, you may need to install an adapter for your target environment.
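For example, a standalone Node deployment typically uses SvelteKit's `@sveltejs/adapter-node`. This is a sketch, so check `svelte.config.js` to see what the repo already configures before swapping adapters:

```bash
npm install -D @sveltejs/adapter-node
# after pointing svelte.config.js at the adapter:
npm run build
node build
```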
