Commit f8c70c2

fix: Introduce LLM wait time (#577)
## 📝 Pull Request Template

### 1. Related Issue

Closes # (issue number)

### 2. Type of Change (select one)

Type of Change: Bug Fix

### 3. Description

Previously, the LLM invocation within the compose method was not subject to a timeout, which could lead to indefinite blocking if the underlying LLM service failed to respond or was extremely slow. This change adds `max_llm_wait_time_sec`, which defaults to 10 minutes (600 seconds), so that the LLM composition step times out after the configured duration, preventing the agent from being stuck indefinitely and allowing for graceful error handling.

### 4. Testing

- [x] I have tested this locally.
- [ ] I have updated or added relevant tests.

### 5. Checklist

- [x] I have read the [Code of Conduct](./CODE_OF_CONDUCT.md)
- [x] I have followed the [Contributing Guidelines](./CONTRIBUTING.md)
- [x] My changes follow the project's coding style
1 parent 67348b9 commit f8c70c2

1 file changed

Lines changed: 6 additions & 1 deletion

File tree

  • python/valuecell/agents/common/trading/decision/prompt_based

python/valuecell/agents/common/trading/decision/prompt_based/composer.py

```diff
@@ -1,5 +1,6 @@
 from __future__ import annotations

+import asyncio
 import json
 from typing import Dict

@@ -48,10 +49,12 @@ def __init__(
         *,
         default_slippage_bps: int = 25,
         quantity_precision: float = 1e-9,
+        max_llm_wait_time_sec: float = 600.0,
     ) -> None:
         self._request = request
         self._default_slippage_bps = default_slippage_bps
         self._quantity_precision = quantity_precision
+        self._max_llm_wait_time_sec = max_llm_wait_time_sec
         cfg = self._request.llm_model_config
         self._model = model_utils.create_model_with_provider(
             provider=cfg.provider,
@@ -200,7 +203,9 @@ async def _call_llm(self, prompt: str) -> TradePlanProposal:
         agent's `response.content` is returned (or validated) as a
         `LlmPlanProposal`.
         """
-        response = await self.agent.arun(prompt)
+        response = await asyncio.wait_for(
+            self.agent.arun(prompt), timeout=self._max_llm_wait_time_sec
+        )
         # Agent may return a raw object or a wrapper with `.content`.
         content = getattr(response, "content", None) or response
         logger.debug("Received LLM response {}", content)
```
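The fix hinges on `asyncio.wait_for` raising `asyncio.TimeoutError` (and cancelling the pending task) when the awaited coroutine exceeds the deadline. A minimal, self-contained sketch of that behavior, with hypothetical stand-in names (`slow_llm_call`, `call_with_timeout`) rather than the project's actual classes:

```python
import asyncio


async def slow_llm_call() -> str:
    # Stand-in for an LLM request that hangs or responds slowly.
    await asyncio.sleep(0.2)
    return "proposal"


async def call_with_timeout(max_wait_sec: float) -> str:
    try:
        # Same pattern as the patched _call_llm: bound the await with
        # asyncio.wait_for so a stuck call cannot block indefinitely.
        return await asyncio.wait_for(slow_llm_call(), timeout=max_wait_sec)
    except asyncio.TimeoutError:
        # The pending task is cancelled; handle the timeout gracefully.
        return "timed out"


print(asyncio.run(call_with_timeout(0.05)))  # → timed out
print(asyncio.run(call_with_timeout(1.0)))   # → proposal
```

Note that callers of the patched `_call_llm` must be prepared to catch `asyncio.TimeoutError`, since `wait_for` propagates it rather than returning a sentinel value.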
