When I configure a custom model that runs on my local machine (using LM Studio or Ollama) at http://127.0.0.1/v1, the LLM does not work.
If I configure it with a fallback, the fallback is called (which means the main model did not work).
If I don't configure a fallback, the request just fails.
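One thing worth checking (an assumption on my part, not confirmed above): the URL http://127.0.0.1/v1 has no explicit port, so requests go to HTTP's default port 80. LM Studio typically serves on port 1234 and Ollama on 11434, so the main model's endpoint may simply be unreachable. A minimal sketch showing the difference:

```python
from urllib.parse import urlparse

# The base URL as configured in the report: no explicit port,
# so an HTTP client falls back to port 80.
configured = urlparse("http://127.0.0.1/v1")
print(configured.port)  # -> None (client defaults to 80)

# Typical local-server defaults (assumptions; verify in your setup):
#   LM Studio: http://127.0.0.1:1234/v1
#   Ollama:    http://127.0.0.1:11434/v1
lm_studio = urlparse("http://127.0.0.1:1234/v1")
print(lm_studio.port)  # -> 1234
```

If nothing is listening on port 80, the main model's requests fail immediately, which would explain why the fallback is always used when configured and the call fails outright when it is not.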