
feat: add configurable API timeout for slow local LLMs#81

Open
JasonOA888 wants to merge 1 commit into 666ghj:main from JasonOA888:fix/issue-58-configurable-timeout

Conversation

@JasonOA888

Summary

Fixes #58 - Allows users to configure the API timeout to support slow local LLMs (e.g., Ollama).

Changes

  • Add support for a VITE_API_TIMEOUT environment variable
  • Default remains 300000ms (5 minutes)
  • Users can increase the timeout as needed

Usage

Add to your .env file:

# Increase the timeout when a local LLM responds slowly
VITE_API_TIMEOUT=600000  # 10 minutes
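The fallback behavior described above can be sketched as a small helper. This is an illustrative sketch, not the PR's actual code: `resolveTimeout` is a hypothetical function name, and in a Vite app the raw value would come from `import.meta.env.VITE_API_TIMEOUT`.

```typescript
// Default timeout matching the PR: 300000 ms (5 minutes).
const DEFAULT_TIMEOUT_MS = 300000;

// Hypothetical helper: parse the raw env string, falling back to the
// default when the variable is unset or not a positive number.
function resolveTimeout(raw: string | undefined): number {
  const parsed = Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_TIMEOUT_MS;
}

// With VITE_API_TIMEOUT=600000 the configured value wins;
// with it unset, the 5-minute default applies.
console.log(resolveTimeout("600000"));
console.log(resolveTimeout(undefined));
```

Validating the value before use avoids a silent `NaN` timeout when the variable contains a typo.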

Testing

  • The default of 300000ms works as expected
  • When configured, the configured value is used

Fixes #58

- Added VITE_API_TIMEOUT environment variable support
- Default remains 300000ms (5 minutes)
- Users can increase timeout for slow local models like Ollama
- Example: VITE_API_TIMEOUT=600000 for 10 minutes

Fixes 666ghj#58
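A minimal sketch of how the configured timeout might be applied to an API call, assuming a fetch-based client; `callLocalModel` and the request shape are hypothetical, not the project's actual code.

```typescript
// Assumption: read the configured timeout, falling back to 300000 ms.
// In the Vite frontend this would be import.meta.env.VITE_API_TIMEOUT.
const timeoutMs = Number(process.env.VITE_API_TIMEOUT) || 300000;

// Hypothetical helper: POST to a local model endpoint, aborting the
// request only after the configured timeout elapses, so slow Ollama
// responses are not cut off prematurely.
async function callLocalModel(url: string, body: unknown): Promise<unknown> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(timeoutMs), // aborts after timeoutMs
  });
  return res.json();
}
```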
@dosubot added labels on Mar 8, 2026: size:XS (This PR changes 0-9 lines, ignoring generated files), LLM API (Any questions regarding the LLM API)


Development

Successfully merging this pull request may close these issues.

Frequent timeouts when starting the engine with a local LLM loaded via Ollama

1 participant
