refactor(providers): migrate to Vercel AI SDK for unified LLM provider interface#62
oshorefueled merged 12 commits into main from
Conversation
- Replace individual AI provider SDKs (@anthropic-ai/sdk, @google/generative-ai, @perplexity-ai/perplexity_ai, openai) with unified @ai-sdk packages
- Add @ai-sdk/anthropic, @ai-sdk/google, @ai-sdk/openai, @ai-sdk/perplexity, and @ai-sdk/azure as dependencies
- Add ai package (^4.0.0) as core dependency for unified AI SDK
- Remove direct openai dependency in favor of @ai-sdk/openai
- Update package-lock.json with new dependency tree and resolved versions
- Bump version to 2.3.0
…face
- Replace individual provider implementations (OpenAI, Azure, Anthropic, Gemini) with unified VercelAIProvider using @ai-sdk packages
- Migrate Perplexity search provider to use Vercel AI SDK's generateText with boundary validation for source data
- Update provider factory to instantiate models through Vercel AI SDK factory functions instead of provider-specific classes
- Add VercelAIConfig interface and remove provider-specific config types (AzureOpenAIConfig, AnthropicConfig, OpenAIConfig, GeminiConfig)
- Export LLMResult, SearchProvider, PerplexitySearchProvider, TokenUsage utilities, and ProviderType from providers index
- Remove api-client export from boundaries index as it is no longer needed
- Simplify provider instantiation by consolidating configuration handling into VercelAIProvider
- Add Zod schema for Perplexity source boundary validation to safely extract provider-specific fields
…l AI SDK
- Remove custom provider implementations (Anthropic, OpenAI, Azure OpenAI, Gemini)
- Delete API client boundary layer and response validation schemas
- Remove provider-specific test files (anthropic-e2e, anthropic-provider, openai-provider)
- Consolidate to unified Vercel AI SDK interface for all LLM providers
- Simplify codebase by eliminating duplicate validation and schema logic
- Update PerplexitySearchProvider tests to use Vercel AI SDK's generateText instead of native Perplexity SDK
- Replace @perplexity-ai/perplexity_ai mocks with @ai-sdk/perplexity and ai package mocks
- Add API key validation tests and environment variable handling
- Expand test coverage for edge cases including empty queries, missing fields, and result limiting
- Update provider-factory tests to reflect new SDK integration
- Add comprehensive Vercel AI provider tests for unified interface compatibility
- Improve test assertions and error handling validation
- Add resolvePresetsDir() function to handle dual path resolution
- Check built mode path first (dist/ → ../presets)
- Fall back to dev mode path if meta.json not found (src/cli/ → ../../presets)
- Update registerMainCommand to use new resolver function
- Fixes preset loading failures when running from different build contexts
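The dual-path resolution described in this commit could be sketched roughly as follows. This is an illustrative sketch, not the project's actual implementation: the function name comes from the commit message, but the signature (in particular the injected `exists` predicate, used here so the logic is testable without touching disk) is an assumption.

```typescript
import * as path from 'path';

// Hypothetical sketch of the dual-path preset resolver. `exists` is injected
// so the resolution logic can be exercised without filesystem access.
function resolvePresetsDir(
  cliDir: string,
  exists: (p: string) => boolean
): string {
  // Built mode: dist/cli -> ../presets
  const builtPath = path.join(cliDir, '..', 'presets');
  if (exists(path.join(builtPath, 'meta.json'))) return builtPath;

  // Dev mode: src/cli -> ../../presets
  const devPath = path.join(cliDir, '..', '..', 'presets');
  if (exists(path.join(devPath, 'meta.json'))) return devPath;

  // Descriptive error showing both candidate paths, per the later commit
  throw new Error(`Presets directory not found. Checked: ${builtPath} and ${devPath}`);
}
```

The key design choice is probing for `meta.json` rather than just the directory, so an empty or misplaced `presets/` folder fails loudly instead of loading nothing.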
📝 Walkthrough
Consolidates provider integrations into a single VercelAIProvider (using the Vercel AI SDK) and replaces legacy provider SDK dependencies with unified @ai-sdk packages.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant VercelAIProvider as VercelAIProvider
participant RequestBuilder as RequestBuilder
participant SchemaConverter as SchemaConverter
participant VercelAI as ai
participant Validator as ZodValidator
Client->>VercelAIProvider: runPromptStructured(content, promptText, schema)
VercelAIProvider->>RequestBuilder: build(promptText)
RequestBuilder-->>VercelAIProvider: systemPrompt
VercelAIProvider->>SchemaConverter: jsonSchemaToZod(schema)
SchemaConverter-->>VercelAIProvider: zodSchema
VercelAIProvider->>VercelAI: generateText({ model, system, messages, temperature, experimental_output })
VercelAI-->>VercelAIProvider: { experimental_output, text, usage }
VercelAIProvider->>Validator: validate(experimental_output, zodSchema)
Validator-->>VercelAIProvider: validatedData
VercelAIProvider->>Client: LLMResult<T> (data + usage)
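The flow in the diagram above can be modeled as a small, dependency-free sketch. The real provider delegates to the Vercel AI SDK's generateText and a Zod schema; here the model call and the validator are injected stubs (and the function is synchronous for brevity), so only the control flow shown in the diagram is illustrated.

```typescript
type TokenUsage = { inputTokens: number; outputTokens: number };
type LLMResult<T> = { data: T; usage: TokenUsage };

// Sketch of the diagram's pipeline with the SDK call and Zod validation
// replaced by injected functions (names are illustrative).
function runPromptStructuredSketch<T>(
  content: string,
  promptText: string,
  validate: (raw: unknown) => T, // stands in for the ZodValidator step
  callModel: (system: string, user: string) => { output: unknown; usage: TokenUsage } // stands in for generateText
): LLMResult<T> {
  // RequestBuilder step: derive the system prompt from the prompt text
  const systemPrompt = `${promptText}\nRespond with JSON only.`;

  // generateText step (stubbed): returns experimental_output-like data plus usage
  const { output, usage } = callModel(systemPrompt, content);
  if (output === undefined || output === null) {
    throw new Error('LLM returned no structured output');
  }

  // Validation step: parse the raw output before exposing typed data
  return { data: validate(output), usage };
}
```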
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
package.json (1)
72-72: ⚠️ Potential issue | 🟡 Minor
Don't use zod@3.25.76; it has known breaking TypeScript build issues.
While the version does exist on npm, multiple users reported that 3.25.76 introduces breaking TypeScript compatibility issues because it includes Zod v4 files under zod/v4, potentially forcing a TS version upgrade. Consider using a more stable version from the 3.x line (e.g., 3.23.x) or upgrading to the latest stable 4.x version instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 72, The package.json pins zod to "3.25.76", which has known TypeScript-breaking files; update the dependency entry for "zod" to a safe version: either downgrade to a stable 3.x release (e.g., "3.23.x") or upgrade to the latest stable 4.x release depending on your TypeScript compatibility, then run install and rebuild; specifically change the value for the "zod" dependency (currently "3.25.76") in package.json and verify the project compiles and tests pass after npm/yarn install.
tests/perplexity-provider.test.ts (1)
15-28: ⚠️ Potential issue | 🟠 Major
Align mocked sources with the provider's input shape (text/publishedDate).
The provider maps source.text → snippet and source.publishedDate → date, but the mocks currently only include snippet/date, so assertions can fail and the normalization path isn't exercised.
✅ Proposed fix (add text/publishedDate to mocks)
const MOCK_RESULTS = [
  {
    title: 'AI Overview',
    snippet: 'AI tools in 2025 are evolving fast.',
+   text: 'AI tools in 2025 are evolving fast.',
    url: 'https://example.com/ai-overview',
    date: '',
+   publishedDate: '',
  },
  {
    title: 'Developer Productivity',
    snippet: 'AI improves developer efficiency by 40%.',
+   text: 'AI improves developer efficiency by 40%.',
    url: 'https://example.com/dev-productivity',
    date: '',
+   publishedDate: '',
  },
];

const incompleteResults = [
  {
    // Missing all fields
  },
  {
    title: 'Has Title',
    // Missing other fields
  },
  {
-   snippet: 'Has snippet',
+   text: 'Has snippet',
    url: 'https://example.com',
-   date: '2025-01-01',
+   publishedDate: '2025-01-01',
  },
];

Also applies to: 60-70, 108-121
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/perplexity-provider.test.ts` around lines 15 - 28, The test mocks in tests/perplexity-provider.test.ts (e.g., the MOCK_RESULTS constant) don’t include the provider’s input fields `text` and `publishedDate`, so the provider’s mapping (source.text → snippet and source.publishedDate → date) isn’t exercised; update MOCK_RESULTS and the other mock arrays used later in the file to include `text` (copy of `snippet`) and `publishedDate` (copy of `date` or appropriate ISO string) for each item so the normalization code paths in the provider (look for the provider normalization/mapping functions) are triggered and assertions align with expected output.
🧹 Nitpick comments (3)
src/providers/vercel-ai-provider.ts (2)
103-113: Error re-wrapping loses the original error's stack trace and cause.
Lines 107-109 and 112 wrap errors in new Error objects, discarding the original stack trace. Consider using the cause option for proper error chaining:
♻️ Proposed fix
  if (NoObjectGeneratedError.isInstance(e)) {
    const rawText = e instanceof Error && 'text' in e ? String(e.text) : 'unknown';
    throw new Error(
-     `LLM failed to generate valid structured output. Raw text: ${rawText}`
+     `LLM failed to generate valid structured output. Raw text: ${rawText}`,
+     { cause: e }
    );
  }
  const err = e instanceof Error ? e : new Error(String(e));
- throw new Error(`Vercel AI SDK call failed: ${err.message}`);
+ throw new Error(`Vercel AI SDK call failed: ${err.message}`, { cause: err });

As per coding guidelines: "Use custom error types with proper inheritance; catch blocks use unknown type."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/vercel-ai-provider.ts` around lines 103 - 113, The catch block that handles NoObjectGeneratedError and other errors (involving NoObjectGeneratedError.isInstance(e), the local e variable, and the creation of err) is re-wrapping errors into new Error objects and losing original stack/cause; update the throws to chain the original error via the Error constructor's cause option (e.g., throw new Error(message, { cause: e })) or rethrow the original Error instance when appropriate (use the err variable you already build for the generic branch), ensuring the original stack and error type are preserved while keeping the descriptive messages.
128-171: additionalProperties is used in schemas but not enforced in convertSchemaNode conversion.
The schemas in the codebase (buildJudgeLLMSchema, buildCheckLLMSchema, claimSchema) use additionalProperties: false, but the conversion function returns z.object(zodFields) without explicitly calling .strict(). While OpenAI's strict mode enforces this at runtime, the Zod schema itself would allow additional properties. The other missing features listed (oneOf, anyOf, allOf, $ref, null type, const, format constraints, numeric constraints) are not used in practice in the actual schemas.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/vercel-ai-provider.ts` around lines 128 - 171, The convertSchemaNode function currently ignores node.additionalProperties so object schemas like those produced by buildJudgeLLMSchema, buildCheckLLMSchema, and claimSchema (which set additionalProperties: false) still allow extra keys; update convertSchemaNode (object case) to inspect node.additionalProperties and, when it's exactly false, return the Zod object as strict (e.g., call .strict() on the z.object created from zodFields or on an empty object if no properties), and if additionalProperties is true or undefined keep the existing behavior (z.record(z.unknown()) or non-strict z.object) so the Zod shape enforces the same additionalProperties semantics as the JSON schema.
src/schemas/env-schemas.ts (1)
4-20: Defaults are duplicated between this file and provider-factory.ts.
The default model strings (e.g., 'gpt-4o', 'claude-3-sonnet-20240229', 'gemini-2.5-flash', '2024-02-15-preview') are defined here in the Zod schemas and repeated as ?? fallbacks in provider-factory.ts (lines 44, 55, 64, 73). Since Zod .default() already applies these when parsing env vars, the factory fallbacks are redundant. Consider extracting these into shared constants to keep them in sync and avoid silent drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/schemas/env-schemas.ts` around lines 4 - 20, The default model/version strings are duplicated between the Zod schema defaults (AZURE_OPENAI_DEFAULT_CONFIG, ANTHROPIC_DEFAULT_CONFIG, OPENAI_DEFAULT_CONFIG, GEMINI_DEFAULT_CONFIG) and the fallback values in the provider factory; extract those literal defaults into a single shared set of constants (e.g., DEFAULT_MODELS or DEFAULT_PROVIDER_CONFIGS) and import/use them from both the env-schemas definitions and the provider-factory fallbacks so the Zod .default() values and the `??` fallbacks reference the same constants, then remove the hard-coded literals from either place to avoid divergence.
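The extraction the comment above suggests could look roughly like this. The constant and function names here (DEFAULT_MODELS, pickModel) are illustrative, not the project's actual exports; the point is that both the Zod `.default(...)` calls and any remaining `??` fallbacks in the factory reference one shared source of truth.

```typescript
// Hypothetical shared-constants module for provider defaults.
const DEFAULT_MODELS = {
  openai: 'gpt-4o',
  anthropic: 'claude-3-sonnet-20240229',
  gemini: 'gemini-2.5-flash',
} as const;

// Factory-side helper: if a fallback is kept at all, it points at the same
// constant the env schema uses, so the two cannot silently drift apart.
function pickModel(
  envModel: string | undefined,
  provider: keyof typeof DEFAULT_MODELS
): string {
  return envModel ?? DEFAULT_MODELS[provider];
}
```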
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/providers/vercel-ai-provider.ts`:
- Around line 89-96: The assignment to result.experimental_output is typed as
any and triggers ESLint no-unsafe-assignment; update the code that reads
experimental_output (the variable result and the local variable output) to first
cast result.experimental_output to unknown and then, after your existing Zod
validation, cast/assign it to the typed output variable (e.g., const raw =
result.experimental_output as unknown; validate raw with your Output schema and
then set const output = validatedValue) so TypeScript sees the external data
passed through unknown+validation rather than any; keep the same null/undefined
check and existing error throw for missing structured output.
In `@tests/perplexity-provider.test.ts`:
- Around line 31-35: Save the original environment before mutating it in the
test suite, set process.env.PERPLEXITY_API_KEY in beforeEach (alongside
vi.clearAllMocks()), and restore the original process.env in an afterEach to
prevent cross-file leakage; for example capture const ORIGINAL_ENV = {
...process.env } at module top or in beforeEach, set
process.env.PERPLEXITY_API_KEY = 'test-api-key' in beforeEach, and restore
process.env = ORIGINAL_ENV in afterEach so other tests aren’t affected.
In `@tests/vercel-ai-provider.test.ts`:
- Around line 334-349: The test is weakening types by passing mockBuilder as any
to VercelAIProvider; replace the cast with a properly typed mock that implements
the RequestBuilder interface (or a Partial<RequestBuilder> typed as
RequestBuilder) so ESLint/type checks pass. Create a minimal mock object with
the buildPromptBodyForStructured method (e.g., const mockBuilder:
Partial<RequestBuilder> = { buildPromptBodyForStructured:
vi.fn().mockReturnValue('Built system prompt') } and then pass mockBuilder as
RequestBuilder when constructing new VercelAIProvider(config, mockBuilder as
RequestBuilder)), keeping the same vi.fn assertions against
buildPromptBodyForStructured and runPromptStructured.
---
Outside diff comments:
In `@package.json`:
- Line 72: The package.json pins zod to "3.25.76", which has known
TypeScript-breaking files; update the dependency entry for "zod" to a safe
version: either downgrade to a stable 3.x release (e.g., "3.23.x") or upgrade to
the latest stable 4.x release depending on your TypeScript compatibility, then
run install and rebuild; specifically change the value for the "zod" dependency
(currently "3.25.76") in package.json and verify the project compiles and tests
pass after npm/yarn install.
In `@tests/perplexity-provider.test.ts`:
- Around line 15-28: The test mocks in tests/perplexity-provider.test.ts (e.g.,
the MOCK_RESULTS constant) don’t include the provider’s input fields `text` and
`publishedDate`, so the provider’s mapping (source.text → snippet and
source.publishedDate → date) isn’t exercised; update MOCK_RESULTS and the other
mock arrays used later in the file to include `text` (copy of `snippet`) and
`publishedDate` (copy of `date` or appropriate ISO string) for each item so the
normalization code paths in the provider (look for the provider
normalization/mapping functions) are triggered and assertions align with
expected output.
---
Nitpick comments:
In `@src/providers/vercel-ai-provider.ts`:
- Around line 103-113: The catch block that handles NoObjectGeneratedError and
other errors (involving NoObjectGeneratedError.isInstance(e), the local e
variable, and the creation of err) is re-wrapping errors into new Error objects
and losing original stack/cause; update the throws to chain the original error
via the Error constructor's cause option (e.g., throw new Error(message, {
cause: e })) or rethrow the original Error instance when appropriate (use the
err variable you already build for the generic branch), ensuring the original
stack and error type are preserved while keeping the descriptive messages.
- Around line 128-171: The convertSchemaNode function currently ignores
node.additionalProperties so object schemas like those produced by
buildJudgeLLMSchema, buildCheckLLMSchema, and claimSchema (which set
additionalProperties: false) still allow extra keys; update convertSchemaNode
(object case) to inspect node.additionalProperties and, when it's exactly false,
return the Zod object as strict (e.g., call .strict() on the z.object created
from zodFields or on an empty object if no properties), and if
additionalProperties is true or undefined keep the existing behavior
(z.record(z.unknown()) or non-strict z.object) so the Zod shape enforces the
same additionalProperties semantics as the JSON schema.
In `@src/schemas/env-schemas.ts`:
- Around line 4-20: The default model/version strings are duplicated between the
Zod schema defaults (AZURE_OPENAI_DEFAULT_CONFIG, ANTHROPIC_DEFAULT_CONFIG,
OPENAI_DEFAULT_CONFIG, GEMINI_DEFAULT_CONFIG) and the fallback values in the
provider factory; extract those literal defaults into a single shared set of
constants (e.g., DEFAULT_MODELS or DEFAULT_PROVIDER_CONFIGS) and import/use them
from both the env-schemas definitions and the provider-factory fallbacks so the
Zod .default() values and the `??` fallbacks reference the same constants, then
remove the hard-coded literals from either place to avoid divergence.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (23)
- package.json
- src/boundaries/api-client.ts
- src/boundaries/index.ts
- src/cli/commands.ts
- src/providers/anthropic-provider.ts
- src/providers/azure-openai-provider.ts
- src/providers/gemini-provider.ts
- src/providers/index.ts
- src/providers/openai-provider.ts
- src/providers/perplexity-provider.ts
- src/providers/provider-factory.ts
- src/providers/vercel-ai-provider.ts
- src/schemas/anthropic-responses.ts
- src/schemas/api-schemas.ts
- src/schemas/env-schemas.ts
- src/schemas/index.ts
- src/schemas/openai-responses.ts
- tests/anthropic-e2e.test.ts
- tests/anthropic-provider.test.ts
- tests/openai-provider.test.ts
- tests/perplexity-provider.test.ts
- tests/provider-factory.test.ts
- tests/vercel-ai-provider.test.ts
💤 Files with no reviewable changes (12)
- src/providers/gemini-provider.ts
- src/boundaries/index.ts
- tests/anthropic-provider.test.ts
- src/providers/azure-openai-provider.ts
- src/providers/anthropic-provider.ts
- tests/openai-provider.test.ts
- src/schemas/openai-responses.ts
- src/boundaries/api-client.ts
- src/schemas/anthropic-responses.ts
- src/schemas/api-schemas.ts
- src/providers/openai-provider.ts
- tests/anthropic-e2e.test.ts
- Add existence check for meta.json in dev mode presets directory
- Throw descriptive error if presets directory cannot be located
- Improve error messaging to show both build and dev paths checked
- Prevent silent failures when presets directory is misconfigured
- Export AZURE_OPENAI_DEFAULT_CONFIG as public constant
- Export ANTHROPIC_DEFAULT_CONFIG as public constant
- Export OPENAI_DEFAULT_CONFIG as public constant
- Export GEMINI_DEFAULT_CONFIG as public constant
- Fix indentation in GLOBAL_CONFIG_SCHEMA object definition
- Enable reuse of default configurations across modules
- Add maxTokens configuration option to VercelAIConfig interface
- Support maxTokens parameter for Anthropic provider when configured
- Improve JSON Schema to Zod schema conversion with nullable type handling
- Normalize type arrays and filter out null values for proper type detection
- Replace nullable() with optional() for object properties to match Vercel AI SDK expectations
- Add strict mode support for objects with additionalProperties: false
- Add type casting for Azure OpenAI model and experimental_output to resolve type issues
- Conditionally include temperature and maxTokens in generateText call only when defined
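The type-array normalization mentioned in this commit could be sketched like so. This is an assumption-laden illustration (the helper name normalizeType is invented): JSON Schema allows `type: ["string", "null"]`, which must be split into a base type plus a nullable flag before mapping to a Zod type.

```typescript
// Sketch: normalize a JSON Schema `type` field into a base type + nullable flag.
function normalizeType(
  t: string | string[] | undefined
): { baseType: string | undefined; nullable: boolean } {
  if (Array.isArray(t)) {
    // Filter out "null" so the remaining entry drives type detection
    const nonNull = t.filter((x) => x !== 'null');
    return { baseType: nonNull[0], nullable: nonNull.length < t.length };
  }
  return { baseType: t, nullable: false };
}
```

The converter would then build the Zod type from `baseType` and apply nullability (or, per the commit, optionality for object properties) when the flag is set.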
- Migrate mock declarations to vi.hoisted() for proper factory scope in Perplexity and Vercel AI provider tests
- Consolidate default config imports from env-schemas in env-parser test
- Rename MOCK_RESULTS to MOCK_SOURCES and EXPECTED_RESULTS for clarity on data transformation
- Add non-null assertions (!) for array element access in edge case tests
- Add type assertions for complex mock objects to satisfy TypeScript strict mode
- Improve mock implementation formatting and add afterEach cleanup in debug tests
- Update comments to clarify mock setup patterns and data shape expectations
Actionable comments posted: 3
♻️ Duplicate comments (2)
tests/perplexity-provider.test.ts (1)
46-51: ⚠️ Potential issue | 🟡 Minor
Restore process.env after tests to avoid cross-file leakage.
process.env.PERPLEXITY_API_KEY is mutated in beforeEach but never restored. Add an afterEach to clean up.
🧹 Proposed fix
+ const ORIGINAL_PERPLEXITY_KEY = process.env.PERPLEXITY_API_KEY;
+
  beforeEach(() => {
    vi.clearAllMocks();
    // Mock process.env.PERPLEXITY_API_KEY for tests
    process.env.PERPLEXITY_API_KEY = 'test-api-key';
  });
+
+ afterEach(() => {
+   if (ORIGINAL_PERPLEXITY_KEY === undefined) {
+     delete process.env.PERPLEXITY_API_KEY;
+   } else {
+     process.env.PERPLEXITY_API_KEY = ORIGINAL_PERPLEXITY_KEY;
+   }
+ });
Verify each finding against the current code and only fix it if needed. In `@tests/perplexity-provider.test.ts` around lines 46 - 51, The test mutates process.env.PERPLEXITY_API_KEY in the beforeEach inside the describe('PerplexitySearchProvider') block but never restores it; add an afterEach that restores the original environment (capture the original value before mutation or clone process.env at start of each test) and either delete or reset process.env.PERPLEXITY_API_KEY in the afterEach so the global env is not leaked to other tests.
tests/vercel-ai-provider.test.ts (1)
335-340: Replace the as any mock builder with a properly typed RequestBuilder.
The eslint-disable comment suppresses the type error but the mock object should implement the RequestBuilder interface to maintain type safety.
♻️ Proposed fix
+ import type { RequestBuilder } from '../src/providers/request-builder';
...
- const mockBuilder = {
+ const mockBuilder: RequestBuilder = {
    buildPromptBodyForStructured: vi.fn().mockReturnValue('Built system prompt'),
  };
- // eslint-disable-next-line @typescript-eslint/no-unsafe-argument
- const provider = new VercelAIProvider(config, mockBuilder as any);
+ const provider = new VercelAIProvider(config, mockBuilder);
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 335 - 340, Replace the any-typed mock with a properly typed RequestBuilder implementation: create a mock object typed as RequestBuilder that implements buildPromptBodyForStructured (and any other RequestBuilder methods used by VercelAIProvider) using vi.fn() to return 'Built system prompt', then pass that typed mock into the VercelAIProvider constructor instead of using "as any"; ensure the mock variable name mockBuilder remains and that buildPromptBodyForStructured signature matches the RequestBuilder interface so TypeScript and ESLint no longer require the disable comment.
🧹 Nitpick comments (2)
src/providers/provider-factory.ts (1)
55-55: Model fallback defaults duplicate the env schema defaults.
envConfig.ANTHROPIC_MODEL ?? 'claude-3-sonnet-20240229' (and similar for OpenAI/Gemini) duplicates the defaults already applied by the Zod schema in env-schemas.ts. Since ENV_SCHEMA applies .default(...) during parsing, these ?? fallbacks are unreachable for schema-validated input. This introduces a maintenance risk if defaults are updated in only one location.
♻️ Proposed fix: import centralized defaults
+ import { ANTHROPIC_DEFAULT_CONFIG, OPENAI_DEFAULT_CONFIG, GEMINI_DEFAULT_CONFIG, AZURE_OPENAI_DEFAULT_CONFIG } from '../schemas/env-schemas';
...
- model = anthropic(envConfig.ANTHROPIC_MODEL ?? 'claude-3-sonnet-20240229');
+ model = anthropic(envConfig.ANTHROPIC_MODEL ?? ANTHROPIC_DEFAULT_CONFIG.model);
...
- model = openai(envConfig.OPENAI_MODEL ?? 'gpt-4o');
+ model = openai(envConfig.OPENAI_MODEL ?? OPENAI_DEFAULT_CONFIG.model);
...
- model = google(envConfig.GEMINI_MODEL ?? 'gemini-2.5-flash');
+ model = google(envConfig.GEMINI_MODEL ?? GEMINI_DEFAULT_CONFIG.model);

Also applies to: 64-64, 73-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/provider-factory.ts` at line 55, The model selection code in provider-factory.ts currently uses inline fallbacks like envConfig.ANTHROPIC_MODEL ?? 'claude-3-sonnet-20240229' which duplicates defaults already set by ENV_SCHEMA in env-schemas.ts; remove these unreachable ?? fallbacks and instead import and use the centralized default constants (or the parsed envConfig value directly) so defaults live in one place—update the lines referencing envConfig.ANTHROPIC_MODEL, envConfig.OPENAI_MODEL, and envConfig.GOOGLE_MODEL to rely on the ENV_SCHEMA-provided values or imported default constants (matching the names used in env-schemas.ts) and delete the literal string fallbacks.
src/providers/vercel-ai-provider.ts (1)
168-187: z.object(zodFields) is constructed twice when additionalProperties === false.
Line 179 creates a z.object(zodFields) that is immediately discarded on Line 181 when additionalProperties is false, since a new z.object(zodFields).strict() is built. Minor allocation waste and slightly confusing.
♻️ Proposed fix
  if (properties) {
    const zodFields: Record<string, z.ZodTypeAny> = {};
    for (const [key, value] of Object.entries(properties)) {
      const fieldSchema = this.convertSchemaNode(value);
      zodFields[key] = required.includes(key) ? fieldSchema : fieldSchema.optional();
    }
-   let objSchema: z.ZodType = z.object(zodFields);
-   if (additionalProperties === false) {
-     objSchema = z.object(zodFields).strict();
-   }
-   schema = objSchema;
+   const baseObj = z.object(zodFields);
+   schema = additionalProperties === false ? baseObj.strict() : baseObj;
Verify each finding against the current code and only fix it if needed. In `@src/providers/vercel-ai-provider.ts` around lines 168 - 187, In the object-case of convertSchemaNode in VercelAIProvider (the switch branch building zodFields/objSchema), avoid constructing z.object(zodFields) twice: create objSchema once via z.object(zodFields) and then, if additionalProperties === false, call objSchema = objSchema.strict() instead of rebuilding z.object(zodFields). This removes the redundant allocation and clarifies intent while preserving existing behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/providers/provider-factory.ts`:
- Line 46: The azure(...) assignment in provider-factory.ts currently uses an
unsafe double cast to LanguageModel (model =
azure(envConfig.AZURE_OPENAI_DEPLOYMENT_NAME) as unknown as LanguageModel); add
an inline comment next to this line explaining why the cast is required (e.g., a
typing gap or version incompatibility in `@ai-sdk/azure` that prevents direct
assignment) and note the SDK version tested; alternatively, if an updated
`@ai-sdk/azure` type or adapter exists, replace the cast by using the correct
typed factory/adapter so azure(...) returns a LanguageModel-compatible type
instead of forcing as unknown as LanguageModel.
In `@src/providers/vercel-ai-provider.ts`:
- Around line 131-144: convertSchemaNode currently assumes node.enum is an array
of strings and calls z.enum(enumValues), which fails for numeric or mixed enums;
change the handling in convertSchemaNode so you first detect the actual types in
node.enum and, if all values are strings, keep z.enum(enumValues as
[string,...string[]]), otherwise build a Zod union of z.literal(...) for each
enum value (preserving numbers/booleans) and return that (apply .nullable() when
isNullable is true); reference convertSchemaNode, node.enum, enumSchema, and
z.enum in your change.
In `@tests/perplexity-provider.test.ts`:
- Around line 106-122: Mocks in the tests use output field names instead of the
provider's expected input names; update the mock sources passed to
MOCK_GENERATE_TEXT so each item uses text (not snippet) and publishedDate (not
date) to match how PerplexitySearchProvider maps source.text→snippet and
source.publishedDate→date; specifically fix the "limits results to maxResults"
test (replace snippet/date with text/publishedDate in the manyResults array),
and apply the same change to the mock data in the "handles missing fields" and
"Configuration" tests so all mocks align with the provider's expected source
shape.
---
Duplicate comments:
In `@tests/perplexity-provider.test.ts`:
- Around line 46-51: The test mutates process.env.PERPLEXITY_API_KEY in the
beforeEach inside the describe('PerplexitySearchProvider') block but never
restores it; add an afterEach that restores the original environment (capture
the original value before mutation or clone process.env at start of each test)
and either delete or reset process.env.PERPLEXITY_API_KEY in the afterEach so
the global env is not leaked to other tests.
In `@tests/vercel-ai-provider.test.ts`:
- Around line 335-340: Replace the any-typed mock with a properly typed
RequestBuilder implementation: create a mock object typed as RequestBuilder that
implements buildPromptBodyForStructured (and any other RequestBuilder methods
used by VercelAIProvider) using vi.fn() to return 'Built system prompt', then
pass that typed mock into the VercelAIProvider constructor instead of using "as
any"; ensure the mock variable name mockBuilder remains and that
buildPromptBodyForStructured signature matches the RequestBuilder interface so
TypeScript and ESLint no longer require the disable comment.
---
Nitpick comments:
In `@src/providers/provider-factory.ts`:
- Line 55: The model selection code in provider-factory.ts currently uses inline
fallbacks like envConfig.ANTHROPIC_MODEL ?? 'claude-3-sonnet-20240229' which
duplicates defaults already set by ENV_SCHEMA in env-schemas.ts; remove these
unreachable ?? fallbacks and instead import and use the centralized default
constants (or the parsed envConfig value directly) so defaults live in one
place—update the lines referencing envConfig.ANTHROPIC_MODEL,
envConfig.OPENAI_MODEL, and envConfig.GOOGLE_MODEL to rely on the
ENV_SCHEMA-provided values or imported default constants (matching the names
used in env-schemas.ts) and delete the literal string fallbacks.
In `@src/providers/vercel-ai-provider.ts`:
- Around line 168-187: In the object-case of convertSchemaNode in
VercelAIProvider (the switch branch building zodFields/objSchema), avoid
constructing z.object(zodFields) twice: create objSchema once via
z.object(zodFields) and then, if additionalProperties === false, call objSchema
= objSchema.strict() instead of rebuilding z.object(zodFields). This removes the
redundant allocation and clarifies intent while preserving existing behavior.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- src/cli/commands.ts
- src/providers/perplexity-provider.ts
- src/providers/provider-factory.ts
- src/providers/vercel-ai-provider.ts
- src/schemas/env-schemas.ts
- tests/env-parser.test.ts
- tests/perplexity-provider.test.ts
- tests/vercel-ai-provider.test.ts
✅ Files skipped from review due to trivial changes (1)
- tests/env-parser.test.ts
- Remove hardcoded model defaults from provider factory (Azure, Anthropic, OpenAI, Google) to require explicit configuration
- Add type-safe enum schema handling for mixed and non-string enum values using Zod union types
- Fix Perplexity provider tests to use correct field names (text/publishedDate instead of snippet/date)
- Improve test isolation by preserving and restoring process.env in Perplexity provider tests
- Add RequestBuilder type import and remove unsafe type assertions in VercelAIProvider tests
- Add explanatory comment for Azure provider type casting workaround
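The type-safe enum handling mentioned in this commit (string-only enums via z.enum, mixed or numeric enums via a union of z.literal) can be modeled without pulling in Zod itself. In this dependency-free sketch the two branches are represented as plain validator functions; the mapping back to the real Zod calls is noted in comments, and the function name is illustrative.

```typescript
type Validator = (v: unknown) => boolean;

// Sketch of the suggested enum handling: pick the branch by inspecting the
// enum's value types, mirroring z.enum vs. a union of z.literal.
function enumValidator(values: readonly unknown[]): Validator {
  const allStrings = values.every((v) => typeof v === 'string');
  if (allStrings) {
    // corresponds to z.enum([...values] as [string, ...string[]])
    const set = new Set(values as string[]);
    return (v) => typeof v === 'string' && set.has(v);
  }
  // corresponds to z.union(values.map((v) => z.literal(v))),
  // which preserves numeric and boolean enum members
  return (v) => values.some((lit) => lit === v);
}
```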
Actionable comments posted: 2
🧹 Nitpick comments (4)
tests/perplexity-provider.test.ts (1)
224-239: `respects maxResults configuration` duplicates coverage already provided by `limits results to maxResults`. Both tests create 10-item arrays and assert slicing — only the limit value differs (3 vs 5). The `Configuration` suite's test adds no new behavioral coverage beyond what lines 112–128 already prove. Consider removing one, or replacing this test with a scenario that is not already covered (e.g., verifying `maxResults: 0` or `maxResults` exceeding the source count).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/perplexity-provider.test.ts` around lines 224 - 239: The test "respects maxResults configuration" in the Configuration suite duplicates the existing "limits results to maxResults" coverage; update it instead to exercise a different edge case for PerplexitySearchProvider: either assert behavior when maxResults is 0 (expect empty results) or when maxResults exceeds the available source count (e.g., the provider returns all sources without error). Modify the test that currently sets PerplexitySearchProvider({ maxResults: 3 }) and MOCK_GENERATE_TEXT to return 10 items to use one of these new scenarios and update the assertion accordingly so the Configuration suite adds unique coverage.

tests/vercel-ai-provider.test.ts (2)
63-70: Make the default-temperature test assert behavior, not just construction. This test currently duplicates the “constructor works” check and won’t catch a regression where the default temperature stops being applied.
✅ Suggested test change
- it('applies default temperature when not provided', () => {
+ it('applies default temperature when not provided', async () => {
    const config: VercelAIConfig = {
      model: MOCK_MODEL,
    };
    const provider = new VercelAIProvider(config);
-   expect(provider).toBeInstanceOf(VercelAIProvider);
+   MOCK_GENERATE_TEXT.mockResolvedValue({ experimental_output: { ok: true } });
+   await provider.runPromptStructured('Test content', 'Test prompt', {
+     name: 'test_schema',
+     schema: {
+       type: 'object',
+       properties: { ok: { type: 'boolean' } },
+       required: ['ok'],
+     },
+   });
+   expect(MOCK_GENERATE_TEXT).toHaveBeenCalledWith(
+     expect.objectContaining({ temperature: 0.2 })
+   );
  });

Based on learnings: "Applies to tests/**/*.test.ts : Focus tests on config parsing, file discovery, schema/structured output, and locator functionality".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 63 - 70, The test only checks construction but must assert default temperature behavior: update the "applies default temperature when not provided" test to create VercelAIProvider with VercelAIConfig { model: MOCK_MODEL } and then assert the provider applies the default temperature (e.g., expect(provider.config.temperature).toEqual(<DEFAULT_TEMPERATURE>) or call the method that builds requests and assert the resulting request.temperature equals the default). Reference VercelAIProvider, VercelAIConfig, MOCK_MODEL and the default temperature constant or value used in the provider when making the assertion.
359-432: Add regression tests for union/null schema conversion paths. Given the recursive converter complexity, add cases for `type: ['string', 'number']` and null-only/nullable combinations to prevent silent schema-conversion regressions.

Based on learnings: "Applies to tests/**/*.test.ts : Focus tests on config parsing, file discovery, schema/structured output, and locator functionality".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 359 - 432: The tests lack coverage for JSON Schema union and nullable conversions — add new unit cases in the same test suite that call VercelAIProvider.runPromptStructured (using MOCK_GENERATE_TEXT) to assert correct conversion for schema types like { type: ['string','number'] } and nullable patterns (e.g., { type: ['null','string'] } and { type: 'null' } combined with required/optional), verifying provider.runPromptStructured returns the expected experimental_output parsed to the correct JS values; implement one test that supplies a union value (string and number variants) and one that supplies null/nullable variants to catch regressions in the recursive converter logic.

src/providers/vercel-ai-provider.ts (1)
119-125: Move schema-conversion logic out of the provider to keep transport thin.
`VercelAIProvider` now owns recursive schema translation and boundary handling, which increases coupling in the transport layer. Consider moving `jsonSchemaToZod`/`convertSchemaNode` into a schema/boundary module and passing validated Zod schemas into the provider.

As per coding guidelines: "`src/providers/**/*.ts`: Depend on `LLMProvider` and `SearchProvider` interfaces; keep providers thin (transport only)".

Also applies to: 127-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/vercel-ai-provider.ts` around lines 119 - 125, The provider currently contains heavy schema-conversion logic (methods jsonSchemaToZod and convertSchemaNode inside VercelAIProvider) which couples transport to schema translation; extract that recursive JSON Schema → Zod logic into a new schema/boundary module (e.g., a SchemaConverter with functions jsonSchemaToZod and convertSchemaNode) and update VercelAIProvider to accept already-validated z.ZodType instances (or a small adapter that calls the converter outside the provider) so the provider only handles transport/LLMProvider concerns and depends only on interfaces; ensure the provider's constructor or method signatures now take zod schemas (or a SchemaConverter interface) and remove the internal conversion code from VercelAIProvider.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/providers/vercel-ai-provider.ts`:
- Around line 137-139: When normalizing the JSON Schema "type" value, don't
collapse multi-type arrays to a single entry; instead filter out only the 'null'
entry and preserve remaining variants (as an array if more than one) so schemas
like ['string','number'] are kept. Replace the current block that sets type =
type.filter(...)[0] with logic that: when Array.isArray(type) remove 'null'
entries into a new array (e.g., types), then if types.length === 0 set type =
'null', else if types.length === 1 set type = types[0], else set type = types
(preserving the array). This change should be applied to the code that currently
manipulates the variable named "type" in vercel-ai-provider.ts.
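The normalization the comment above describes can be sketched as a small pure function. This is a sketch of the described logic only, not the actual code in vercel-ai-provider.ts:

```typescript
type JsonSchemaType = string | string[];

// Normalize a JSON Schema "type" that may be an array (e.g. ['string', 'null']):
// strip 'null' entries, track nullability separately, and preserve multi-type
// unions instead of collapsing them to the first entry.
function normalizeType(type: JsonSchemaType): { type: JsonSchemaType; nullable: boolean } {
  if (!Array.isArray(type)) {
    return { type, nullable: type === "null" };
  }
  const types = type.filter((t) => t !== "null");
  const nullable = types.length !== type.length;
  if (types.length === 0) return { type: "null", nullable: true };
  if (types.length === 1) return { type: types[0], nullable };
  return { type: types, nullable }; // keep ['string', 'number'] intact
}
```

The multi-type branch is the key difference from the buggy version, which collapsed `['string', 'number']` to just `'string'`.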
In `@tests/perplexity-provider.test.ts`:
- Around line 130-158: The test assumes PERPLEXITY_SOURCE_SCHEMA accepts objects
with all-optional fields, but if the zod schema requires fields the provider
(PerplexitySearchProvider) will zero-out sources after safeParse and the test
will fail; fix by either (A) making PERPLEXITY_SOURCE_SCHEMA mark all fields
optional so the existing incompleteResults pass validation, or (B) change the
test to provide minimal valid source objects (e.g., include required
title/snippet/url fields) or (C) decouple validation by mocking
z.array(PERPLEXITY_SOURCE_SCHEMA).safeParse to return a successful parse for
incompleteResults; update the test file (tests/perplexity-provider.test.ts)
accordingly and/or add a short comment noting the test’s reliance on
PERPLEXITY_SOURCE_SCHEMA being all-optional.
---
Nitpick comments:
In `@src/providers/vercel-ai-provider.ts`:
- Around line 119-125: The provider currently contains heavy schema-conversion
logic (methods jsonSchemaToZod and convertSchemaNode inside VercelAIProvider)
which couples transport to schema translation; extract that recursive JSON
Schema → Zod logic into a new schema/boundary module (e.g., a SchemaConverter
with functions jsonSchemaToZod and convertSchemaNode) and update
VercelAIProvider to accept already-validated z.ZodType instances (or a small
adapter that calls the converter outside the provider) so the provider only
handles transport/LLMProvider concerns and depends only on interfaces; ensure
the provider's constructor or method signatures now take zod schemas (or a
SchemaConverter interface) and remove the internal conversion code from
VercelAIProvider.
In `@tests/perplexity-provider.test.ts`:
- Around line 224-239: The test "respects maxResults configuration" in the
Configuration suite duplicates the existing "limits results to maxResults"
coverage; update it instead to exercise a different edge-case for
PerplexitySearchProvider: either assert behavior when maxResults is 0 (expect
empty results) or when maxResults exceeds the available source count (e.g.,
provider returns all sources without error). Modify the test that currently sets
PerplexitySearchProvider({ maxResults: 3 }) and MOCK_GENERATE_TEXT to return 10
items to use one of these new scenarios and update the assertion accordingly so
the Configuration suite adds unique coverage.
In `@tests/vercel-ai-provider.test.ts`:
- Around line 63-70: The test only checks construction but must assert default
temperature behavior: update the "applies default temperature when not provided"
test to create VercelAIProvider with VercelAIConfig { model: MOCK_MODEL } and
then assert the provider applies the default temperature (e.g.,
expect(provider.config.temperature).toEqual(<DEFAULT_TEMPERATURE>) or call the
method that builds requests and assert the resulting request.temperature equals
the default). Reference VercelAIProvider, VercelAIConfig, MOCK_MODEL and the
default temperature constant or value used in the provider when making the
assertion.
- Around line 359-432: The tests lack coverage for JSON Schema union and
nullable conversions—add new unit cases in the same test suite that call
VercelAIProvider.runPromptStructured (using MOCK_GENERATE_TEXT) to assert
correct conversion for schema types like { type: ['string','number'] } and
nullable patterns (e.g., { type: ['null','string'] } and { type: 'null' }
combined with required/optional), verifying provider.runPromptStructured returns
the expected experimental_output parsed to the correct JS values; implement one
test that supplies a union value (string and number variants) and one that
supplies null/nullable variants to catch regressions in the recursive converter
logic.
📒 Files selected for processing (4)
- src/providers/provider-factory.ts
- src/providers/vercel-ai-provider.ts
- tests/perplexity-provider.test.ts
- tests/vercel-ai-provider.test.ts
- Handle multi-type unions (e.g. ['string', 'number']) by building Zod union schemas
- Properly normalize type arrays by filtering 'null' and handling edge cases
- Support nullable types by tracking nullability separately from type array
- Add test coverage for union type arrays and nullable type handling
- Clarify test expectations for maxResults configuration behavior
- Add explanatory comments for schema validation edge cases
Actionable comments posted: 1
🧹 Nitpick comments (4)
tests/perplexity-provider.test.ts (1)
113-118: Optional DRY cleanup: deduplicate repeated mock source builders. The repeated `Array.from(...source objects...)` blocks can be centralized for easier maintenance.

♻️ Optional refactor

+const buildMockSources = (count: number) =>
+  Array.from({ length: count }, (_, i) => ({
+    title: `Result ${i}`,
+    text: `Snippet ${i}`,
+    url: `https://example.com/${i}`,
+    publishedDate: '',
+  }));

-const manyResults = Array.from({ length: 10 }, (_, i) => ({
-  title: `Result ${i}`,
-  text: `Snippet ${i}`,
-  url: `https://example.com/${i}`,
-  publishedDate: '',
-}));
+const manyResults = buildMockSources(10);

-const results = Array.from({ length: 3 }, (_, i) => ({
-  title: `Result ${i}`,
-  text: `Snippet ${i}`,
-  url: `https://example.com/${i}`,
-  publishedDate: '',
-}));
+const results = buildMockSources(3);

-const results = Array.from({ length: 10 }, (_, i) => ({
-  title: `Result ${i}`,
-  text: `Snippet ${i}`,
-  url: `https://example.com/${i}`,
-  publishedDate: '',
-}));
+const results = buildMockSources(10);

Also applies to: 228-233, 245-250
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/perplexity-provider.test.ts` around lines 113 - 118: The test repeats identical Array.from(...) mock builders (e.g., the manyResults constant) across multiple places; extract a small helper (like makeMockResults(count) or buildSearchResults(count, prefix)) or a shared constant (e.g., MANY_RESULTS) and replace the duplicated Array.from blocks in tests referencing manyResults to reuse that helper/constant, updating usages at the other occurrences (lines corresponding to the other repeated blocks) so the mock creation is centralized and DRY.

tests/vercel-ai-provider.test.ts (3)
185-194: Unnecessary TypeScript casts inside Vitest asymmetric matchers.
`expect.any(String) as string` (Line 187) and `as Record<string, unknown>` (Line 192) have no runtime effect on the assertions — Vitest matchers are dynamic. They add noise and may trigger a linter warning.

✂️ Proposed cleanup

 expect(MOCK_GENERATE_TEXT).toHaveBeenCalledWith(
   expect.objectContaining({
-    system: expect.any(String) as string,
+    system: expect.any(String),
     prompt: 'Input:\n\nTest content',
     temperature: 0.2,
-    experimental_output: expect.objectContaining({
-      _outputType: 'object',
-    }) as Record<string, unknown>,
+    experimental_output: expect.objectContaining({
+      _outputType: 'object',
+    }),
   })
 );

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 185 - 194, Remove the unnecessary TypeScript casts inside the Vitest asymmetric matchers in the test that asserts MOCK_GENERATE_TEXT calls: drop "as string" after expect.any(String) and remove "as Record<string, unknown>" after expect.objectContaining(...) so the matcher uses Vitest's dynamic matchers directly (locate the expectation that calls MOCK_GENERATE_TEXT with expect.objectContaining and update the system and experimental_output fields to rely on expect.any(String) and expect.objectContaining without TypeScript casts).
93-103: `maxTokens` is untested: both the config option and the conditional passthrough to `generateText`. The `'accepts all configuration options'` test doesn't include `maxTokens` in the config despite it being defined in `VercelAIConfig`. More importantly, none of the `Structured Response Handling` tests verify that when `maxTokens` is set it is conditionally passed through to `generateText` — a distinct code path from the `temperature` test already present.

🧪 Suggested tests to add under `Structured Response Handling`

+ it('includes maxTokens in API call when configured', async () => {
+   const config: VercelAIConfig = {
+     model: MOCK_MODEL,
+     maxTokens: 512,
+   };
+
+   const mockResult = { experimental_output: { result: 'ok' } };
+   MOCK_GENERATE_TEXT.mockResolvedValue(mockResult);
+
+   const provider = new VercelAIProvider(config);
+   const schema = {
+     name: 'test_schema',
+     schema: { properties: { result: { type: 'string' } }, type: 'object' },
+   };
+
+   await provider.runPromptStructured('content', 'prompt', schema);
+
+   expect(MOCK_GENERATE_TEXT).toHaveBeenCalledWith(
+     expect.objectContaining({ maxTokens: 512 })
+   );
+ });
+
+ it('omits maxTokens from API call when not configured', async () => {
+   const config: VercelAIConfig = { model: MOCK_MODEL };
+   const mockResult = { experimental_output: { result: 'ok' } };
+   MOCK_GENERATE_TEXT.mockResolvedValue(mockResult);
+
+   const provider = new VercelAIProvider(config);
+   const schema = {
+     name: 'test_schema',
+     schema: { properties: { result: { type: 'string' } }, type: 'object' },
+   };
+
+   await provider.runPromptStructured('content', 'prompt', schema);
+
+   expect(MOCK_GENERATE_TEXT).toHaveBeenCalledWith(
+     expect.not.objectContaining({ maxTokens: expect.anything() })
+   );
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 93 - 103, Add coverage for maxTokens: update the 'accepts all configuration options' test to include maxTokens in the VercelAIConfig passed to new VercelAIProvider (e.g., maxTokens: 256) and add a Structured Response Handling unit test that sets maxTokens on the provider config, invokes the code path that calls generateText, and asserts (via a spy/mock) that generateText is invoked with the expected maxTokens value in its options; reference the VercelAIConfig type, VercelAIProvider constructor, and the generateText function when locating where to add the config and the assertion.
222-261: Missing test for the `undefined`/`null` `experimental_output` error path. The implementation summary states the provider throws a descriptive error when `experimental_output` is missing or undefined. This is a distinct, reachable error path that has no test coverage here.

🧪 Suggested test to add under `Error Handling`

+ it('throws descriptive error when experimental_output is missing', async () => {
+   const config: VercelAIConfig = { model: MOCK_MODEL };
+   // generateText returns a result with no experimental_output
+   MOCK_GENERATE_TEXT.mockResolvedValue({ text: 'some text' });
+
+   const provider = new VercelAIProvider(config);
+   const schema = {
+     name: 'test_schema',
+     schema: { properties: { result: { type: 'string' } }, type: 'object' },
+   };
+
+   await expect(
+     provider.runPromptStructured('Test content', 'Test prompt', schema)
+   ).rejects.toThrow();
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/vercel-ai-provider.test.ts` around lines 222 - 261, Add a test under the "Error Handling" describe that covers the path where the Vercel AI response has no experimental_output (or it is null/undefined): mock MOCK_GENERATE_TEXT to resolve with a response object that omits experimental_output (and another case with experimental_output: null), instantiate VercelAIProvider with MOCK_MODEL, call provider.runPromptStructured('Test content', 'Test prompt', schema) and assert it rejects with the descriptive error the implementation throws for missing experimental_output (match the exact message the provider uses); reference the existing symbols MOCK_GENERATE_TEXT, VercelAIProvider, runPromptStructured, and experimental_output when locating where to add the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/vercel-ai-provider.test.ts`:
- Around line 8-21: The NoObjectGeneratedError mock inside ERROR_CLASSES must
match the real ai v4 SDK signature: change the constructor on class
NoObjectGeneratedError to accept a single options object (e.g., { message?:
string; text: string; response?: unknown; usage?: unknown, finishReason?: string
}) and set this.text, this.response, this.usage (and optional finishReason)
accordingly while still calling super(options.message ?? 'No object generated.')
and keeping static isInstance; then update any test instantiation that currently
calls new NoObjectGeneratedError(message, text) (referenced around the test
instantiation at line ~230) to pass the options object format so tests reflect
production usage.
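The options-object mock described above can be sketched as follows. Field names and the default message follow the review comment's description of the ai v4 signature; they are not verified against the SDK source, so treat this as a test-double sketch rather than the SDK's actual class.

```typescript
// Sketch of a NoObjectGeneratedError test double with an options-object
// constructor, mirroring the shape the review comment describes for ai v4.
interface NoObjectGeneratedOptions {
  message?: string;
  text: string;
  response?: unknown;
  usage?: unknown;
  finishReason?: string;
}

class NoObjectGeneratedError extends Error {
  readonly text: string;
  readonly response?: unknown;
  readonly usage?: unknown;
  readonly finishReason?: string;

  constructor(options: NoObjectGeneratedOptions) {
    super(options.message ?? "No object generated.");
    this.name = "NoObjectGeneratedError";
    this.text = options.text;
    this.response = options.response;
    this.usage = options.usage;
    this.finishReason = options.finishReason;
  }

  static isInstance(error: unknown): error is NoObjectGeneratedError {
    return error instanceof NoObjectGeneratedError;
  }
}

// Tests would then construct it with the options object:
// new NoObjectGeneratedError({ text: rawText, finishReason: 'length' })
```

This keeps the two-argument call sites (`new NoObjectGeneratedError(message, text)`) from silently diverging from production usage.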
---
Nitpick comments:
In `@tests/perplexity-provider.test.ts`:
- Around line 113-118: The test repeats identical Array.from(...) mock builders
(e.g., the manyResults constant) across multiple places; extract a small helper
(like makeMockResults(count) or buildSearchResults(count, prefix)) or a shared
constant (e.g., MANY_RESULTS) and replace the duplicated Array.from blocks in
tests referencing manyResults to reuse that helper/constant, updating usages at
the other occurrences (lines corresponding to the other repeated blocks) so the
mock creation is centralized and DRY.
In `@tests/vercel-ai-provider.test.ts`:
- Around line 185-194: Remove the unnecessary TypeScript casts inside the Vitest
asymmetric matchers in the test that asserts MOCK_GENERATE_TEXT calls: drop "as
string" after expect.any(String) and remove "as Record<string, unknown>" after
expect.objectContaining(...) so the matcher uses Vitest's dynamic matchers
directly (locate the expectation that calls MOCK_GENERATE_TEXT with
expect.objectContaining and update the system and experimental_output fields to
rely on expect.any(String) and expect.objectContaining without TypeScript
casts).
- Around line 93-103: Add coverage for maxTokens: update the 'accepts all
configuration options' test to include maxTokens in the VercelAIConfig passed to
new VercelAIProvider (e.g., maxTokens: 256) and add a Structured Response
Handling unit test that sets maxTokens on the provider config, invokes the code
path that calls generateText, and asserts (via a spy/mock) that generateText is
invoked with the expected maxTokens value in its options; reference the
VercelAIConfig type, VercelAIProvider constructor, and the generateText function
when locating where to add the config and the assertion.
- Around line 222-261: Add a test under the "Error Handling" describe that
covers the path where the Vercel AI response has no experimental_output (or it
is null/undefined): mock MOCK_GENERATE_TEXT to resolve with a response object
that omits experimental_output (and another case with experimental_output:
null), instantiate VercelAIProvider with MOCK_MODEL, call
provider.runPromptStructured('Test content', 'Test prompt', schema) and assert
it rejects with the descriptive error the implementation throws for missing
experimental_output (match the exact message the provider uses); reference the
existing symbols MOCK_GENERATE_TEXT, VercelAIProvider, runPromptStructured, and
experimental_output when locating where to add the test.
📒 Files selected for processing (3)
- src/providers/vercel-ai-provider.ts
- tests/perplexity-provider.test.ts
- tests/vercel-ai-provider.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/providers/vercel-ai-provider.ts
…er interface (#62)

* chore: migrate to Vercel AI SDK and update dependencies
  - Replace individual AI provider SDKs (@anthropic-ai/sdk, @google/generative-ai, @perplexity-ai/perplexity_ai, openai) with unified @AI-SDK packages
  - Add @ai-sdk/anthropic, @ai-sdk/google, @ai-sdk/openai, @ai-sdk/perplexity, and @ai-sdk/azure as dependencies
  - Add ai package (^4.0.0) as core dependency for unified AI SDK
  - Remove direct openai dependency in favor of @ai-sdk/openai
  - Update package-lock.json with new dependency tree and resolved versions
  - Bump version to 2.3.0

* feat(providers): migrate to Vercel AI SDK with unified provider interface
  - Replace individual provider implementations (OpenAI, Azure, Anthropic, Gemini) with unified VercelAIProvider using @AI-SDK packages
  - Migrate Perplexity search provider to use Vercel AI SDK's generateText with boundary validation for source data
  - Update provider factory to instantiate models through Vercel AI SDK factory functions instead of provider-specific classes
  - Add VercelAIConfig interface and remove provider-specific config types (AzureOpenAIConfig, AnthropicConfig, OpenAIConfig, GeminiConfig)
  - Export LLMResult, SearchProvider, PerplexitySearchProvider, TokenUsage utilities, and ProviderType from providers index
  - Remove api-client export from boundaries index as it's no longer needed
  - Simplify provider instantiation by consolidating configuration handling into VercelAIProvider
  - Add Zod schema for Perplexity source boundary validation to safely extract provider-specific fields

* refactor: remove legacy provider implementations and migrate to Vercel AI SDK
  - Remove custom provider implementations (Anthropic, OpenAI, Azure OpenAI, Gemini)
  - Delete API client boundary layer and response validation schemas
  - Remove provider-specific test files (anthropic-e2e, anthropic-provider, openai-provider)
  - Consolidate to unified Vercel AI SDK interface for all LLM providers
  - Simplify codebase by eliminating duplicate validation and schema logic

* test: migrate Perplexity provider tests to Vercel AI SDK
  - Update PerplexitySearchProvider tests to use Vercel AI SDK's generateText instead of native Perplexity SDK
  - Replace @perplexity-ai/perplexity_ai mocks with @ai-sdk/perplexity and ai package mocks
  - Add API key validation tests and environment variable handling
  - Expand test coverage for edge cases including empty queries, missing fields, and result limiting
  - Update provider-factory tests to reflect new SDK integration
  - Add comprehensive Vercel AI provider tests for unified interface compatibility
  - Improve test assertions and error handling validation

* fix(cli): resolve presets directory for both dev and built modes
  - Add resolvePresetsDir() function to handle dual path resolution
  - Check built mode path first (dist/ → ../presets)
  - Fall back to dev mode path if meta.json not found (src/cli/ → ../../presets)
  - Update registerMainCommand to use new resolver function
  - Fixes preset loading failures when running from different build contexts

* fix(cli): validate presets directory exists before returning path
  - Add existence check for meta.json in dev mode presets directory
  - Throw descriptive error if presets directory cannot be located
  - Improve error messaging to show both build and dev paths checked
  - Prevent silent failures when presets directory is misconfigured

* fix(providers): add debug logging for Perplexity source validation failures

* refactor(schemas): export default provider configurations
  - Export AZURE_OPENAI_DEFAULT_CONFIG as public constant
  - Export ANTHROPIC_DEFAULT_CONFIG as public constant
  - Export OPENAI_DEFAULT_CONFIG as public constant
  - Export GEMINI_DEFAULT_CONFIG as public constant
  - Fix indentation in GLOBAL_CONFIG_SCHEMA object definition
  - Enable reuse of default configurations across modules

* feat(providers): add maxTokens support and improve schema conversion
  - Add maxTokens configuration option to VercelAIConfig interface
  - Support maxTokens parameter for Anthropic provider when configured
  - Improve JSON Schema to Zod schema conversion with nullable type handling
  - Normalize type arrays and filter out null values for proper type detection
  - Replace nullable() with optional() for object properties to match Vercel AI SDK expectations
  - Add strict mode support for objects with additionalProperties: false
  - Add type casting for Azure OpenAI model and experimental_output to resolve type issues
  - Conditionally include temperature and maxTokens in generateText call only when defined

* test: improve mock setup and type safety in provider tests
  - Migrate mock declarations to vi.hoisted() for proper factory scope in Perplexity and Vercel AI provider tests
  - Consolidate default config imports from env-schemas in env-parser test
  - Rename MOCK_RESULTS to MOCK_SOURCES and EXPECTED_RESULTS for clarity on data transformation
  - Add non-null assertions (!) for array element access in edge case tests
  - Add type assertions for complex mock objects to satisfy TypeScript strict mode
  - Improve mock implementation formatting and add afterEach cleanup in debug tests
  - Update comments to clarify mock setup patterns and data shape expectations

* fix(providers): remove model defaults and improve enum schema handling
  - Remove hardcoded model defaults from provider factory (Azure, Anthropic, OpenAI, Google) to require explicit configuration
  - Add type-safe enum schema handling for mixed and non-string enum values using Zod union types
  - Fix Perplexity provider tests to use correct field names (text/publishedDate instead of snippet/date)
  - Improve test isolation by preserving and restoring process.env in Perplexity provider tests
  - Add RequestBuilder type import and remove unsafe type assertions in VercelAIProvider tests
  - Add explanatory comment for Azure provider type casting workaround

* fix(providers): improve schema conversion for union and nullable types
  - Handle multi-type unions (e.g. ['string', 'number']) by building Zod union schemas
  - Properly normalize type arrays by filtering 'null' and handling edge cases
  - Support nullable types by tracking nullability separately from type array
  - Add test coverage for union type arrays and nullable type handling
  - Clarify test expectations for maxResults configuration behavior
  - Add explanatory comments for schema validation edge cases
Replaces individual LLM provider implementations (OpenAI, Anthropic, Azure, Gemini) with a unified `VercelAIProvider` using the Vercel AI SDK.
Changes
- Replace provider-specific implementations with a single `VercelAIProvider` backed by model instances from `@ai-sdk/*` packages
- Swap direct provider SDKs (`@anthropic-ai/sdk`, `openai`, `@google/generative-ai`) for `@ai-sdk/anthropic`, `@ai-sdk/openai`, `@ai-sdk/google`, `@ai-sdk/azure`, and `@ai-sdk/perplexity`
- Remove the API client boundary layer and response validation schemas
- Route structured output through the SDK's `Output.object()`

Benefits
- One provider implementation to maintain, with unified token-usage tracking
Migration Notes
Users should see no functional changes. All existing environment variables and configuration remain supported.