Issue: LLM Integration Layer & Dashboard
Summary
CARE currently has no built-in way to talk to LLMs. Every LLM-based feature requires custom components, and there is no cost tracking or prompt management. This issue covers adding a generic LLM integration layer so users can plug in their own API keys, write prompts, and track usage, all without touching code.
Problem
Users can't connect to LLMs (OpenAI, Anthropic, Google) without developer intervention.
There's no way to track what LLM calls cost or what was sent/received.
The existing NLP broker adds unnecessary latency for simple LLM API calls.
Prompts are hardcoded per feature instead of being user-configurable.
Proposed Solution
Backend
A new LLMService that calls provider APIs directly over HTTP, skipping the NLP broker.
Four new database tables: api_key, llm_provider, llm_log, and prompt_template.
AES-256-GCM encryption for stored API keys.
Full I/O logging of every LLM request for cost tracking and research.
Seeded provider entries for OpenAI, Anthropic, and Google out of the box.
Frontend
A unified LLM Dashboard page with API key management, prompt template editor, cost breakdown by provider, and a filterable request log.
An LLM Providers admin page for enabling/disabling providers and restricting models system-wide.
Vuex store integration following the same pattern as the existing NLP service.
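A Vuex module for the LLM state could mirror the plain state/mutations shape of the existing NLP service module. The module and field names below are assumptions for illustration, not the actual frontend/src/store/modules/service.js contents.

```javascript
// Sketch of a namespaced Vuex-style module holding LLM dashboard state.
const llmModule = {
  namespaced: true,
  state: () => ({
    apiKeys: [], // keys as returned by the backend (already masked)
    logs: [],    // recent llm_log entries for the request log view
  }),
  mutations: {
    setApiKeys(state, keys) {
      state.apiKeys = keys;
    },
    appendLog(state, entry) {
      state.logs.push(entry);
    },
  },
};
```

Because a Vuex module is a plain object, the same shape drops into the existing store registration without new dependencies.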
Prompt Templates
Users write prompts with {{placeholders}} and can preview or test-run them from the dashboard.
Templates can be shared system-wide, per study, or per project.
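The placeholder substitution described above can be sketched in a few lines; the function name and the choice to leave unknown placeholders intact are assumptions, not the issue's specification.

```javascript
// Replace {{name}} placeholders in a prompt template with supplied parameters.
// Unknown placeholders are left as-is so a preview makes missing inputs visible.
function renderTemplate(template, params) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(params, name) ? String(params[name]) : match
  );
}
```

For example, renderTemplate('Summarize {{text}} in {{lang}}', { text: '...', lang: 'German' }) fills both slots, while a missing lang parameter would leave {{lang}} visible in the preview.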
Acceptance Criteria
Users can add, edit, enable/disable, and delete API keys from the dashboard.
Users can create, edit, duplicate, and delete prompt templates with parameter placeholders.
Every LLM call is logged with provider, model, tokens, cost, latency, input, and output.
The dashboard shows usage stats (total requests, tokens, estimated cost).
The request log supports filtering by provider, status, and time range, plus CSV export.
Admins can manage providers and control which models are available.
API keys are encrypted at rest and masked in the UI.
LLM calls bypass the NLP broker and go directly to provider APIs.
Out of Scope (for now)
Model browser / comparison page.
Automatic input mapping from UI components to prompt template parameters.
Backend enforcement of sharing scopes (study/project-level access control).
Related Files
backend/webserver/services/llm.js
backend/utils/encryption.js
backend/db/models/api_key.js, llm_provider.js, llm_log.js, prompt_template.js
backend/db/migrations/20260331100000 through 20260331100006
frontend/src/components/dashboard/LlmDashboard.vue
frontend/src/components/dashboard/LlmProviders.vue
frontend/src/store/modules/service.js
backend/webserver/sockets/service.js