
feat(providers): add Amazon Bedrock LLM provider support #63

Merged

oshorefueled merged 6 commits into main from ft/amazonbedrock on Feb 26, 2026

Conversation

ayo6706 (Collaborator) commented Feb 25, 2026

  • Add Amazon Bedrock as a new LLM provider option via @ai-sdk/amazon-bedrock
  • Support AWS credentials (access key/secret) or native credential providers
  • Upgrade all Vercel AI SDK packages to latest stable versions (ai: v4 → v6)

Configuration

Users can now configure Amazon Bedrock in ~/.vectorlint/config.toml:

```toml
LLM_PROVIDER = "amazon-bedrock"
# Credentials are optional when running in an AWS environment with a native credential provider
# AWS_ACCESS_KEY_ID = "your-aws-access-key-id"
# AWS_SECRET_ACCESS_KEY = "your-aws-secret-access-key"
AWS_REGION = "us-east-1"
BEDROCK_MODEL = "global.anthropic.claude-sonnet-4-5-20250929-v1:0"
BEDROCK_TEMPERATURE = "0.2"
```

Changes

- Add ProviderType.AmazonBedrock enum and factory logic
- Add BEDROCK_CONFIG_SCHEMA with environment validation
- Add @ai-sdk/amazon-bedrock package dependency
- Upgrade ai package from v4.0.0 to v6.0.99
- Upgrade AI SDK provider packages to latest stable (v1.x → v3.x)
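As a rough sketch of how the factory might resolve these config values before instantiating the provider (the interface names, `resolveBedrockSettings`, and the returned shape below are illustrative assumptions, not the PR's actual code; the real change wires `ProviderType.AmazonBedrock` through `createAmazonBedrock` from `@ai-sdk/amazon-bedrock`):

```typescript
// Illustrative: turn env-style config into Bedrock provider settings.
// Credentials are only attached when both halves are present, matching the
// "optional if running in AWS" behavior described above.

interface BedrockEnv {
  AWS_ACCESS_KEY_ID?: string;
  AWS_SECRET_ACCESS_KEY?: string;
  AWS_REGION: string;
  BEDROCK_MODEL: string;
  BEDROCK_TEMPERATURE?: string;
}

interface BedrockSettings {
  region: string;
  model: string;
  temperature: number;
  // Omitted entirely so the SDK can fall back to native credential providers.
  credentials?: { accessKeyId: string; secretAccessKey: string };
}

function resolveBedrockSettings(env: BedrockEnv): BedrockSettings {
  const settings: BedrockSettings = {
    region: env.AWS_REGION,
    model: env.BEDROCK_MODEL,
    temperature: Number(env.BEDROCK_TEMPERATURE ?? "0"),
  };
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) {
    settings.credentials = {
      accessKeyId: env.AWS_ACCESS_KEY_ID,
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
    };
  }
  return settings;
}
```

The actual factory would then pass these settings to `createAmazonBedrock` and select the model by ID.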

Testing

- Added tests for Amazon Bedrock provider creation
- Updated existing provider tests for SDK changes


## Summary by CodeRabbit

* **New Features**
* Added support for Amazon Bedrock as a new LLM provider option, configurable alongside existing providers.

* **Improvements**
* Upgraded AI SDK dependencies to newer major versions for improved compatibility and performance.
* Standardized structured-output handling and updated token usage fields (input/output tokens) for more accurate tracking.

* **Documentation**
* Added example Bedrock configuration to environment template and sample .env.


coderabbitai bot commented Feb 25, 2026

📝 Walkthrough

Adds Amazon Bedrock as a new LLM provider, upgrades AI SDK dependencies, updates structured-output token fields from promptTokens/completionTokens to inputTokens/outputTokens, extends environment/config schemas and templates for Bedrock credentials, and adds/updates tests for Bedrock and output-based responses.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Dependency Updates**<br>`package.json` | Upgraded AI SDK packages and added `@ai-sdk/amazon-bedrock`; bumped `ai` package to ^6.0.99. |
| **Provider Integration & Factory**<br>`src/providers/provider-factory.ts`, `src/providers/vercel-ai-provider.ts` | Added Amazon Bedrock provider type and instantiation path; adjusted vercel provider to use `result.output` and map token fields to `inputTokens`/`outputTokens`. |
| **Schemas & Config Templates**<br>`src/schemas/env-schemas.ts`, `src/config/global-config.ts`, `.env.example` | Added BEDROCK config schema, default config, and validation (require paired AWS credentials); added Bedrock example and env entries (model, region, temperature) to templates. |
| **Tests**<br>`tests/provider-factory.test.ts`, `tests/vercel-ai-provider.test.ts` | Added Bedrock mocks and provider tests; updated vercel-ai-provider tests to expect the output structure and `inputTokens`/`outputTokens`. |
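The token-field rename that runs through these changes (the AI SDK's `promptTokens`/`completionTokens` became `inputTokens`/`outputTokens` in the v4 → v6 upgrade) can be sketched as a small normalization helper. The shapes and the helper name below are illustrative assumptions, not the PR's actual code:

```typescript
// Illustrative: normalize usage objects across the AI SDK field rename.
interface LegacyUsage { promptTokens: number; completionTokens: number }
interface Usage { inputTokens: number; outputTokens: number }

function toUsage(u: LegacyUsage | Usage): Usage {
  // Already in the new shape: pass through unchanged.
  if ("inputTokens" in u) return u;
  // Old shape: map prompt -> input, completion -> output.
  return { inputTokens: u.promptTokens, outputTokens: u.completionTokens };
}
```

A shim like this would only matter during a transition; once everything is on v6, consuming `result.usage.inputTokens`/`outputTokens` directly is simpler.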

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Factory as ProviderFactory
    participant Bedrock as AmazonBedrockSDK
    participant App as App (parser/logger)
    Client->>Factory: request provider for LLM_PROVIDER=amazon-bedrock
    Factory->>Bedrock: instantiate with region/credentials/model/temperature
    Client->>Factory: send prompt / structured-output request
    Factory->>Bedrock: generateText(call) -> returns result.output + usage (inputTokens/outputTokens)
    Factory->>App: deliver result.output and mapped token usage
    App->>Client: parsed structured object or error
```

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 A curious rabbit hops to the cloud so wide,
Bedrock brought along—credentials at my side,
Inputs and outputs now neatly aligned,
Tests bound their paws, configs clearly signed,
Hooray—new provider hops in stride!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat(providers): add Amazon Bedrock LLM provider support' accurately and concisely describes the primary change. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, above the required 80.00% threshold. |




coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/provider-factory.test.ts (1)

157-172: Duplicate test cases — two tests with identical setup and assertion.

"throws error for unsupported provider type" (Line 158) and "throws descriptive error for invalid provider type" (Line 166) use the exact same envConfig and expect(...).toThrow(...) assertion. One of them should be removed or differentiated (e.g., to test null or undefined separately, which Line 174 already covers).

🧹 Proposed fix — remove the redundant test

```diff
-    it('throws descriptive error for invalid provider type', () => {
-      const envConfig = {
-        LLM_PROVIDER: 'unsupported-provider',
-      } as unknown as EnvConfig;
-
-      expect(() => createProvider(envConfig)).toThrow('Unsupported provider type: unsupported-provider');
-    });
-
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/provider-factory.test.ts` around lines 157 - 172, Two tests ("throws
error for unsupported provider type" and "throws descriptive error for invalid
provider type") duplicate the same setup and assertion; remove one of them or
change the second to a distinct scenario. Update the test suite around
createProvider to either delete the redundant "throws descriptive error for
invalid provider type" case or modify it to supply a different envConfig (e.g.,
LLM_PROVIDER: null/undefined or an empty string) and assert the appropriate
error message so each test verifies a unique failure path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/schemas/env-schemas.ts`:
- Around line 58-64: BEDROCK_CONFIG_SCHEMA currently allows AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY to be optional and empty strings which permits one key
without the other; update BEDROCK_CONFIG_SCHEMA to require non-empty strings
(use z.string().min(1).optional() or similar) for both AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY and add a .refine() on BEDROCK_CONFIG_SCHEMA to enforce
the paired-credential rule (either both present/non-empty or both
absent/undefined) so that createAmazonBedrock receives either a complete
credential pair or none.
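The paired-credential rule this comment asks for reduces to a simple predicate. The standalone function below is an illustrative restatement of the rule, not the PR's Zod code; in the actual schema it would live inside a `.refine()`/`.superRefine()`:

```typescript
// Illustrative: the "both or neither" credential rule as a plain predicate.
// Empty strings count as absent, closing the loophole the review describes.
function hasValidCredentialPair(
  accessKeyId?: string,
  secretAccessKey?: string,
): boolean {
  const hasKey = !!accessKeyId && accessKeyId.length > 0;
  const hasSecret = !!secretAccessKey && secretAccessKey.length > 0;
  // Valid only when both are present and non-empty, or both are absent.
  return hasKey === hasSecret;
}
```

Inside a Zod schema, the same check would become the refinement's return value, with a descriptive message attached when it fails.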


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 84fd292 and 86b493f.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (7)
  • package.json
  • src/config/global-config.ts
  • src/providers/provider-factory.ts
  • src/providers/vercel-ai-provider.ts
  • src/schemas/env-schemas.ts
  • tests/provider-factory.test.ts
  • tests/vercel-ai-provider.test.ts


coderabbitai bot left a comment


🧹 Nitpick comments (1)
src/schemas/env-schemas.ts (1)

79-90: Consider adding a path to the Zod issue for better error targeting.

Without a path, the validation error is reported at the root object level. Adding a path helps downstream error handling (e.g., field-level error display) pinpoint the problematic field.

💡 Suggested improvement

```diff
       ctx.addIssue({
         code: z.ZodIssueCode.custom,
         message: 'AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must both be provided or both be omitted',
+        path: ['AWS_ACCESS_KEY_ID'],
       });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/schemas/env-schemas.ts` around lines 79 - 90, The superRefine block for
ProviderType.AmazonBedrock currently adds a root-level issue via ctx.addIssue;
update that call to include a path so the error targets the relevant fields
(e.g., set path to ['AWS_ACCESS_KEY_ID','AWS_SECRET_ACCESS_KEY'] or to each
field as appropriate) when detecting the mismatched presence of
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .superRefine (referencing
data.LLM_PROVIDER, ProviderType.AmazonBedrock, ctx.addIssue).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86b493f and 819397a.

📒 Files selected for processing (2)
  • .env.example
  • src/schemas/env-schemas.ts
✅ Files skipped from review due to trivial changes (1)
  • .env.example

oshorefueled merged commit f4c55ab into main on Feb 26, 2026
3 checks passed
